On August 13, 2025, YouTube introduced a new artificial intelligence system in the United States to estimate whether viewers are adults or minors by analyzing the types of videos they watch. The system was tested on a limited group of users and could expand nationwide if results matched those recorded in other countries. The AI worked only for logged-in accounts and checked ages regardless of the birth date listed during registration. YouTube, owned by Google for almost twenty years, applied its existing safeguards once the system determined a viewer was under 18. These safeguards included reminders to take breaks from screen time, privacy alerts, and blocked recommendations for certain videos. Personalized advertisements were also disabled for accounts identified as belonging to minors, reducing exposure to intrusive marketing and inappropriate content.

If a viewer was classified incorrectly, age could be verified with a government-issued ID, a credit card, or a selfie. Experts said the aim was to strengthen safety measures while respecting user privacy, though some digital rights groups raised concerns about risks to personal data and freedom of expression. Despite the debate, supporters viewed the proactive use of AI in age checks as a notable shift in online safety, offering a more reliable and streamlined way to protect younger audiences while balancing security with privacy in the digital environment.