Meta will use AI to analyze height and bone structure to identify if users are underage
Meta has started using artificial intelligence to analyze users’ height and bone structure to determine whether they are underage. The system is currently active in select countries, and Meta plans to expand it further. This visual-analysis approach is part of Meta’s effort to keep younger users safer by improving age verification on platforms like Facebook and Instagram.
This is significant because age verification has been a major challenge for social media platforms. Many underage users can create accounts with falsified birth dates, putting them at risk and exposing platforms to regulatory scrutiny. By using AI to assess physical features associated with age, Meta hopes to catch users who might be slipping through the cracks. This could improve online safety and compliance with laws designed to protect minors.
The move comes amid growing concerns over child safety online and mounting pressure on tech companies to better police their user bases. Traditional age checks rely on self-reported information, which is easy to manipulate. Leveraging AI for visual cues is a newer approach that tries to provide an additional layer of verification. Such technology analyzes factors like facial structure and body proportions, which can correlate with age, although deploying these systems accurately and fairly remains complex and sensitive.
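The layered-verification idea described above can be sketched as a toy decision rule. Everything here is hypothetical, not Meta's actual system: the `AgeSignal` fields, the minimum age, and the thresholds are invented for illustration. The key design point is that a visual estimate contradicting the self-reported age triggers a secondary check (such as an ID upload) rather than an automatic block, and low-confidence estimates never trigger automated action on their own.

```python
from dataclasses import dataclass

MIN_AGE = 13             # hypothetical platform minimum age
CONFIDENCE_FLOOR = 0.80  # below this, the visual estimate alone is not trusted

@dataclass
class AgeSignal:
    claimed_age: int      # age implied by the self-reported birth date
    estimated_age: float  # model's visual age estimate
    confidence: float     # model's confidence in that estimate

def verification_decision(signal: AgeSignal) -> str:
    """Combine the self-reported age with the visual estimate.

    Returns one of: "allow", "secondary_check", "restrict".
    """
    # Self-report already under the minimum: restrict outright.
    if signal.claimed_age < MIN_AGE:
        return "restrict"
    # Low-confidence estimates escalate at most to a secondary check.
    if signal.confidence < CONFIDENCE_FLOOR:
        return "allow" if signal.estimated_age >= MIN_AGE else "secondary_check"
    # Confident estimate contradicting the claimed age: ask for another
    # proof instead of blocking, to limit harm from false positives.
    if signal.estimated_age < MIN_AGE:
        return "secondary_check"
    return "allow"

print(verification_decision(AgeSignal(18, 12.4, 0.91)))  # → secondary_check
print(verification_decision(AgeSignal(18, 21.0, 0.95)))  # → allow
```

The escalation structure, not the specific thresholds, is the point: each tier trades automation against the cost of wrongly blocking a legitimate user.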
This development also highlights the expanding role of AI in content moderation and user management. As AI models improve at interpreting images, platforms can automate more functions that used to require manual review. However, this raises questions about accuracy, bias, and privacy. Estimating age from photos is not foolproof and could result in false positives or negatives, potentially blocking legitimate users or letting some underage users pass. Meta’s rollout will likely prompt ongoing discussion about the ethical use of AI on social platforms.
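The false-positive/false-negative concern above can be made concrete with a toy error-rate calculation. The counts below are invented for illustration, not real figures from any deployed system; they simply show how the two error rates are measured, and why lowering the flagging threshold to catch more underage users necessarily also flags more adults.

```python
# Toy confusion-matrix arithmetic for an age classifier (counts invented).
# "Positive" here means "flagged as underage".
true_positives  = 80    # underage users correctly flagged
false_negatives = 20    # underage users who slip through
false_positives = 50    # adults wrongly flagged
true_negatives  = 950   # adults correctly passed

# False negative rate: share of underage users the system misses.
fnr = false_negatives / (true_positives + false_negatives)
# False positive rate: share of legitimate adults wrongly flagged.
fpr = false_positives / (false_positives + true_negatives)

print(f"FNR = {fnr:.2f}, FPR = {fpr:.2f}")  # → FNR = 0.20, FPR = 0.05
```

With these made-up numbers, one in five underage users is missed while one in twenty adults is wrongly flagged; tightening one rate loosens the other, which is exactly the tradeoff regulators and users will scrutinize.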
Watching how Meta scales this technology will be important. Success could encourage more platforms to adopt AI-based age verification, shaping internet safety standards. Users and regulators will be paying attention to how transparent Meta is about the system’s limits and safeguards. The next steps may involve integrating AI outputs with other verification methods or enhancing user controls. This signals a growing trend of AI not just identifying harmful content but actively managing user identities.
— AI Quick Briefs Editorial Desk