
Young People’s Rights in the Age of AI: Why Their Voice Matters Now

Artificial intelligence already shapes the daily lives of billions of young people worldwide. Whether they are scrolling through social media, chatting with AI companions, or using educational apps, children and teenagers are immersed in AI-powered technology. But as this digital transformation accelerates, one critical question demands urgent attention: are young people’s fundamental rights being protected in this new landscape?

The answer is complex and concerning. Online or offline, children’s rights to privacy, free expression, education, and safety must be safeguarded. Yet current evidence suggests significant gaps between the promise of AI and the protection of young users.

The Scale of the Challenge

Nearly three quarters of teens report using AI chatbots, according to recent research. This is not a distant-future scenario; it is happening right now. Young people aren’t treating AI as an experiment; for them, it’s simply part of everyday life. From classroom learning tools to companion chatbots, AI technologies have been rapidly deployed to hundreds of millions of children and teens.

But this rapid adoption has come with documented harms. Widespread issues have been reported, including mental health problems, financial harm, medical harm, emotional dependence, manipulation and deception, and even cases involving self-harm and suicide. The statistics are sobering: approximately one in three teen users of AI chatbots report feeling uncomfortable with something a chatbot has said to them.

Privacy Under Pressure

Children’s data has become a commodity in the AI era. The updated Children’s Online Privacy Protection Act (COPPA) rules that took effect in June 2025 recognize this reality. The Federal Trade Commission stated that using a child’s personal information to train or develop artificial intelligence technologies requires separate, verifiable parental consent.

The regulations have also expanded protections. The definition of “personal information” now includes biometric identifiers like voiceprints and facial templates, and operators are prohibited from retaining children’s data indefinitely. These changes acknowledge that in an AI-driven world, a child’s voice, face, and behavioral patterns are as sensitive as their name and address. Yet enforcement and compliance remain ongoing challenges.

States have stepped in with their own regulations. California recently enacted legislation requiring companion chatbot platforms to clearly disclose when users are interacting with AI, with additional protections for minors, including break reminders and measures to prevent exposure to sexually explicit content.

The Danger of AI Companions

Perhaps no area illustrates the urgency of protecting children’s rights more starkly than AI companion chatbots. These systems are designed to form emotional bonds with users through simulated empathy and personalized interactions. For vulnerable young people, the consequences can be devastating.

More than seventy percent of American children are now using these AI products, according to lawmakers who introduced protective legislation in 2025. The bipartisan GUARD Act reflects growing recognition that AI companions pose unique risks to minors. The proposed legislation would ban AI companies from providing AI companions to minors and create new criminal penalties for companies that knowingly make AI systems that solicit or produce sexual content available to minors.
The concerns extend beyond chatbots. AI-generated synthetic media has created new categories of harm. A federal lawsuit filed in October 2025 against a website using AI to create nonconsensual nude images highlights the severe privacy violations possible with generative AI technologies. The case may become the first major application of the TAKE IT DOWN Act, the new federal law requiring platforms to remove AI-generated or real intimate images shared without consent.

AI-Generated Child Sexual Abuse Material: A Growing Threat

One of the most disturbing developments in AI technology is its use to create fabricated child sexual abuse material. AI algorithms can now generate realistic but entirely fake explicit content involving minors, creating what child protection advocates call an unsettling blur between authentic and fabricated material.

This technology poses risks that extend far beyond the creation of illegal content itself. According to the Child Rescue Coalition, AI-generated material adds a new dimension to online threats by amplifying the potential for sextortion. Predators, or even peers, can exploit these AI-generated images to coerce children into meeting their demands, whether that means sending money, complying with further threats, or engaging in sexual acts to prevent the release of the fake content.

The implications are chilling: predators no longer need actual explicit photos of a child to exploit or threaten them. They can now create convincing fabrications using publicly available photos from school yearbooks, sports teams, or social media profiles. This reality fundamentally changes the risk landscape for every child with any online presence.

The challenge for parents and law enforcement is equally complex. When fabricated content looks shockingly real, distinguishing the authentic from the AI-generated becomes increasingly difficult. This complicates both the prosecution of offenders and the protection of victims, as the traditional markers used to identify and combat child sexual abuse material become less reliable.

Deepfakes and Digital Impersonation

Beyond the creation of explicit material, AI-powered deepfake technology enables sophisticated forms of impersonation that can be used to manipulate and deceive children. Deepfakes manipulate visuals and audio to create convincing yet entirely fabricated content, allowing predators to invent fake identities or impersonate other children.

This capability becomes particularly dangerous when predators impersonate someone familiar to a child: a classmate, a friend, or a peer from an online community. By exploiting pre-existing connections and trust, predators can lower a child’s defenses and manipulate them into compromising situations. The impersonated identity might be used to request explicit content, arrange meetings, or establish relationships built entirely on deception. The sophistication of these impersonation tactics means that even careful, digitally savvy young people can be deceived.

As Phil Attwood, Director of Impact at Child Rescue Coalition, emphasizes, parents cannot ignore the concerning impact of AI on child sexual abuse and online exploitation. Staying informed, having open conversations with children, and actively monitoring their online activities are crucial steps. By taking a proactive role, parents help create a safer digital space for their children in the face of evolving technological challenges.
