Key takeaways
Instagram has rolled out artificial intelligence technology to detect teens using adult accounts in Australia, marking a significant escalation in efforts to protect young users ahead of the country's landmark social media age restrictions.
The AI-powered age verification system, which launched Monday (September 22), represents Meta's proactive response to Australia's upcoming ban on social media access for children under 16, scheduled to take effect December 10, 2025.
AI technology targets age misrepresentation
Mia Garlick, Meta's regional policy director, said the new feature is intended to "ensure teens have safer, age-appropriate experiences on Instagram". The technology has already been deployed in overseas markets, including the United States.
"Understanding age online is a complex, industry-wide challenge, especially if people misrepresent how old they are," Garlick explained. "We've spent many years and invested heavily to refine our AI technology to identify, in a privacy-preserving way, whether someone is under or over 18."
The system analyzes behavioral patterns, content interactions, and account creation data to determine whether users are actually teenagers, regardless of the birthdate they provided during registration.
When the AI identifies a suspected teen account, it automatically converts it to a Teen Account with enhanced safety features.
Teen Accounts include default private settings, messaging restrictions limited to followers and existing connections, filtered sensitive content, and time management controls.
Users flagged as under-18 receive additional protections that blur nudity, restrict live streaming capabilities, and limit exposure to content depicting violence or promoting cosmetic procedures.
Australia's groundbreaking legislation drives change
The deployment comes as Australia prepares to implement the world's first comprehensive social media ban for users under 16.
Prime Minister Anthony Albanese announced the legislation in November 2024, stating: "I know there are many mums and dads who have been pushing for change in this area... There's no going back to a world without technology, and the internet has given all of us access to a world of knowledge and culture that can be such a force for good. But too often, social media isn't social at all."
The Australian law places enforcement responsibility squarely on social media companies, which face penalties of up to $49.5 million for failing to prevent, detect, deactivate, or remove underage accounts.
Under the legislation, "age-restricted social media platforms will have to take reasonable steps to prevent Australians under 16 years old from having accounts on their platforms", according to Australia's eSafety Commissioner.
Julie Inman Grant, Australia's eSafety Commissioner, told NPR: "When this law takes effect, on Dec. 10, 2025, there's not going to be some switch that's flipped off. Every user under 16 will not automatically have their apps disappear. The first thing we've tasked social media companies with doing is identifying who all the under 16-year-old users are on their platforms."
Industry response and implementation challenges
Meta has argued that age verification should occur at the app store level rather than on individual platforms.
The company contends that requiring parents to verify children's ages during app downloads would be more effective and privacy-preserving than platform-level verification systems.
However, both Apple and Google have resisted taking on age verification responsibilities, citing data security and privacy concerns.
This has left social media companies to develop their own technological solutions.
Meta reports that overseas, nine in 10 accounts have remained in their AI-assigned category, suggesting the technology's classifications are generally accepted by users.
The company provides options for users to change their account settings if they believe they've been incorrectly classified as teenagers.
Global impact and future implications
Since introducing Teen Accounts in September 2024, Meta has enrolled at least 54 million teens worldwide in the protective accounts on Instagram. The company plans to expand Teen Accounts to other Meta platforms, including Messenger and Facebook, in 2025.
The Australian legislation has drawn international attention as governments worldwide grapple with protecting children online while balancing free speech and privacy concerns. Similar proposals are under consideration in the United States, United Kingdom, and other jurisdictions.
Research conducted by Australia's eSafety Commissioner found that 84% of 8- to 12-year-olds are already on social media. Among those children, 80% said a parent or other adult knew they had created an account, and in 90% of cases it was parents who helped set the account up.