News-Meta-EU-Child-Safety-Violation-2026

From AI Law Wiki

European Union regulators found that Meta Platforms violated EU law by failing to adequately prevent children from accessing and using its platforms, according to reports on April 28, 2026. The finding represents a significant enforcement action under the EU's Digital Services Act (DSA), which imposes strict obligations on very large online platforms to protect minors.[1]

The violation finding could lead to substantial fines under the DSA, which permits penalties of up to 6% of a company's global annual turnover. Meta had previously faced scrutiny over its age verification practices, and the finding confirms that EU authorities deemed its existing measures legally insufficient.[1]

Broader Context

The decision is part of a broader EU regulatory crackdown on platform responsibilities toward minors. Alongside the DSA, the EU AI Act's provisions on high-risk AI systems that interact with children also impose obligations on platforms deploying AI-powered recommendation and content moderation systems that affect minors.[1]

Significance

The ruling adds to Meta's growing regulatory headwinds in Europe, following fines under the GDPR and ongoing investigations under the Digital Markets Act. It also signals that the DSA's child safety provisions will be enforced aggressively, which may accelerate platforms' compliance investments in age assurance technologies.

References