SAN FRANCISCO: In a bold move to strengthen online safety for children, YouTube has launched an advanced artificial intelligence (AI) system designed to detect underage users and prevent them from accessing adult or age-inappropriate content regardless of the age listed on their profiles.
The new AI-driven system, currently rolling out across the United States, is part of YouTube’s broader efforts to build a safer digital ecosystem, especially as more children are accessing content through smartphones, tablets, and smart TVs. The technology goes beyond basic age verification by analyzing patterns in user behavior, such as watch history, interaction habits, and other account signals, to estimate a user’s likely age.
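YouTube has not published the model's internals, but the general idea it describes, inferring a likely age from behavioral signals rather than a stated birth date, can be illustrated with a minimal sketch. The signal names, weights, and threshold below are hypothetical assumptions for illustration, not YouTube's actual system.

```python
# Hypothetical sketch of behavioral age estimation (not YouTube's actual model).
# Assumes per-account signals such as watched-content mix and session timing.
from dataclasses import dataclass


@dataclass
class AccountSignals:
    stated_age: int                # age implied by the profile's birth date
    kids_content_ratio: float      # share of watch time on child-oriented videos
    daytime_ratio: float           # share of sessions during daytime/after-school hours
    avg_session_minutes: float     # average viewing session length


def estimate_minor_probability(s: AccountSignals) -> float:
    """Combine behavioral signals into a rough 'likely a minor' score in [0, 1].

    A real system would use a trained classifier over far richer signals;
    these weights are illustrative only.
    """
    score = 0.0
    score += 0.6 * s.kids_content_ratio                      # heavy child-content viewing
    score += 0.2 * s.daytime_ratio                           # mostly daytime use
    score += 0.2 * min(s.avg_session_minutes / 60.0, 1.0)    # long, unbroken sessions
    return min(score, 1.0)


def should_flag(s: AccountSignals, threshold: float = 0.7) -> bool:
    """Flag accounts whose behavior suggests a minor despite an adult birth date."""
    return s.stated_age >= 18 and estimate_minor_probability(s) >= threshold
```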
Unlike older methods that simply ask for a birth date, this system proactively flags accounts that appear to belong to minors, even if they have falsely claimed to be older. Once an account is flagged, YouTube may restrict content access and prompt the user to verify their age through official means such as a government-issued ID, a credit card, or a real-time selfie.
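The flag-then-verify flow described above can be sketched as a simple decision path. The verification methods are the ones named in this article, but the state and function names are assumptions made for illustration.

```python
# Hypothetical sketch of the flag -> restrict -> verify flow described above.
from enum import Enum, auto


class AccessState(Enum):
    FULL = auto()        # normal, unrestricted access
    RESTRICTED = auto()  # age-inappropriate content blocked pending verification


# Verification options named in the article: official ID, credit card, or selfie.
VERIFICATION_METHODS = {"government_id", "credit_card", "selfie"}


def apply_flag(flagged_as_minor: bool) -> AccessState:
    """Restrict content access once the AI flags the account as likely underage."""
    return AccessState.RESTRICTED if flagged_as_minor else AccessState.FULL


def verify_age(state: AccessState, method: str, verified_adult: bool) -> AccessState:
    """Lift restrictions only after a successful check via an accepted method."""
    if state is AccessState.RESTRICTED and method in VERIFICATION_METHODS and verified_adult:
        return AccessState.FULL
    return state
```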
“YouTube is committed to making the platform safer for kids and teens, and we’re using AI to do that more effectively,” said James Beser, Director of Product Management for Youth and Family at YouTube. “This new system allows us to intervene when someone’s behavior suggests they’re underage, even if their account says otherwise.”
The AI model has already undergone testing in several countries, where it successfully identified underage users and adjusted content access accordingly. Initial results showed a significant reduction in the number of minors exposed to adult content, prompting YouTube to begin wider deployment.
This rollout comes at a time when global concern over online child safety is rapidly escalating.
Governments around the world are tightening digital protection laws for minors. In Australia, new legislation set to take effect from December 10 will completely ban access to platforms like YouTube, Instagram, and TikTok for children under 16. Australian regulators have cited disturbing figures showing a sharp increase in children being exposed to explicit or harmful online material.
YouTube, owned by Google, has long maintained that it is a video-sharing platform rather than a social media site. However, the growing volume of content and the rise in child viewers have led to mounting pressure for stricter safety protocols. The platform already offers YouTube Kids, a separate app tailored for young users, but millions of minors continue to access the main platform, often bypassing existing age filters.
The introduction of AI-based age detection is seen as a necessary evolution in online content moderation. By combining machine learning, behavioral analysis, and multi-step age verification, YouTube hopes to close the gaps that have allowed children to access unsafe or mature content for years.