Meta Deploys AI to Identify and Remove Under‑13 Users from Facebook, Instagram

Key Points
- Meta introduces AI that scans text and visual content to detect users under 13.
- Visual analysis looks for height, bone structure, and other age‑related cues, not facial recognition.
- Suspected under‑age accounts are deactivated; users must verify age to reactivate.
- Teen‑account system for 13‑ to 15‑year‑olds launches on Instagram in Brazil and 27 EU countries.
- Facebook will adopt teen‑account features in the US first, then EU and UK.
- WhatsApp now offers parent‑managed accounts for children under 13.
- EU regulators have opened a Digital Services Act investigation into Meta’s child‑safety measures.

Meta announced new AI-driven tools designed to spot and delete accounts belonging to children under 13 on Facebook and Instagram. The technology scans text for clues such as grade level or birthday mentions and analyzes photos and videos for visual cues like height and bone structure. When a potential under‑age user is flagged, the account is deactivated pending age verification. The rollout begins in select markets and will expand globally, while the company also introduces automatic teen‑account placement for 13‑ to 15‑year‑olds and parent‑managed WhatsApp accounts. Regulators in the EU have opened an investigation into Meta’s compliance with the Digital Services Act.
Meta disclosed a suite of artificial‑intelligence tools aimed at keeping children under the age of 13 off its flagship platforms, Facebook and Instagram. The company’s blog post details how the new system blends textual analysis with visual scanning to flag under‑age accounts more reliably than before.
On the textual side, the AI looks for contextual hints in user‑generated content. Mentions of a school grade, birthday celebrations, or other age‑related language in profiles, posts, and captions trigger a closer review. Simultaneously, a visual‑analysis engine examines photos and videos for physical indicators such as height and bone structure. Meta stresses that the process is not facial recognition; the algorithm estimates a general age range without identifying a specific individual.
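Meta has not published how its classifier works; as a rough illustration of the kind of textual-cue scan described above, the sketch below flags an account for review when enough distinct age-related signals appear in its posts. The patterns, labels, and two-cue threshold are invented for this example, not drawn from Meta's system.

```python
import re

# Hypothetical cue patterns -- illustrative only, not Meta's actual signals.
AGE_CUE_PATTERNS = [
    (re.compile(r"\b([1-9]|1[0-2])(st|nd|rd|th) grade\b", re.I), "school_grade"),
    (re.compile(r"\bturn(ed|ing)? (1[0-2]|[6-9])\b", re.I), "birthday_mention"),
]

def find_age_cues(text: str) -> list[str]:
    """Return labels of any under-13 textual cues found in `text`."""
    return [label for pattern, label in AGE_CUE_PATTERNS if pattern.search(text)]

def needs_review(posts: list[str], min_cues: int = 2) -> bool:
    """Flag an account for closer review once enough distinct cues appear."""
    cues = {cue for post in posts for cue in find_age_cues(post)}
    return len(cues) >= min_cues
```

In practice a production system would weigh many weak signals probabilistically rather than matching fixed keywords, but the shape is the same: individual textual hints accumulate until the account crosses a review threshold.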
When the system suspects a user is under 13, the account is automatically deactivated. The user must then provide proof of age—such as a government‑issued ID—to regain access. If verification does not occur, Meta deletes the account entirely. This dual‑step approach aims to reduce the number of under‑age accounts that slip through manual checks.
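The deactivate-then-verify flow amounts to a small state machine. The sketch below models it with invented state names and transition rules; it reflects the workflow as the article describes it, not any actual Meta API.

```python
from enum import Enum, auto

class AccountState(Enum):
    ACTIVE = auto()
    DEACTIVATED = auto()  # flagged as possibly under 13, awaiting proof of age
    DELETED = auto()

def on_flagged(state: AccountState) -> AccountState:
    """An active account suspected of being under-age is deactivated."""
    return AccountState.DEACTIVATED if state is AccountState.ACTIVE else state

def on_verification(state: AccountState, id_confirms_13_plus: bool) -> AccountState:
    """Verified IDs restore access; failed or absent verification ends in deletion."""
    if state is not AccountState.DEACTIVATED:
        return state
    return AccountState.ACTIVE if id_confirms_13_plus else AccountState.DELETED
```

The key property of this design is that there is no path from DEACTIVATED back to ACTIVE except through verification, which is what makes the check hard to bypass.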
The visual‑analysis feature is currently live in a handful of countries, with Meta saying it will broaden the rollout as the technology matures. In parallel, the company is extending its AI‑driven age detection to the 13‑ to 15‑year‑old bracket. Detected teens will be shifted into dedicated teen accounts that include parental controls and additional safety features. The pilot for this teen‑account system launches on Instagram in Brazil and across 27 European Union member states.
Facebook will receive the teen‑account upgrade next, starting in the United States before expanding to the EU and the United Kingdom in the coming months. WhatsApp, meanwhile, has introduced parent‑managed accounts that let children under 13 use the messaging app under adult supervision.
Meta’s moves come amid mounting regulatory pressure. The European Commission recently released preliminary findings from an investigation into Facebook and Instagram, suggesting the platforms may be violating the Digital Services Act by failing to adequately prevent under‑age participation. Meta now has an opportunity to review the Commission’s findings and implement corrective measures.
Industry observers note that Meta’s reliance on AI marks a shift from purely manual moderation toward automated, scalable solutions. By combining textual cues with visual age estimation, the company hopes to close gaps that previously allowed under‑age users to slip through verification processes. Critics, however, caution that algorithmic judgments can produce false positives, potentially deactivating legitimate accounts.
Regardless of the debate, the rollout signals Meta’s intent to align its platforms with global child‑protection standards while navigating a complex regulatory landscape. The next few months will reveal how effectively the AI tools perform at scale and whether they satisfy the demands of regulators and privacy advocates alike.