Young People’s Rights in the Age of AI: Why Their Voice Matters Now

Artificial intelligence already shapes the daily lives of billions of young people worldwide. Whether they are scrolling through social media, chatting with AI companions, or using educational apps, children and teenagers are immersed in AI-powered technology. But as this digital transformation accelerates, one critical question demands urgent attention: Are young people’s fundamental rights being protected in this new landscape?

The answer is complex and concerning. Online or offline, children’s rights to privacy, free expression, education, and safety must be safeguarded. Yet current evidence suggests significant gaps between the promise of AI and the protection of young users.

The Scale of the Challenge

Nearly three quarters of teens report using AI chatbots, according to recent research. This is not a distant-future scenario; it is happening right now. Young people aren’t treating AI as an experiment; for them, it’s simply part of everyday life. From classroom learning tools to companion chatbots, AI technologies have been rapidly deployed to hundreds of millions of children and teens.

But this rapid adoption has come with documented harms. Widespread issues have been reported, including mental health problems, financial harm, medical harm, emotional dependence, manipulation and deception, and even cases involving self-harm and suicide. The statistics are sobering: approximately one in three teen users report feeling uncomfortable with something an AI chatbot has said.

Privacy Under Pressure

Children’s data has become a commodity in the AI era. The updated Children’s Online Privacy Protection Act (COPPA) rules that took effect in June 2025 recognize this reality. The Federal Trade Commission stated that using a child’s personal information to train or develop artificial intelligence technologies requires separate, verifiable parental consent.

The regulations have also expanded protections. The definition of “personal information” now includes biometric identifiers like voiceprints and facial templates, and operators are prohibited from indefinitely retaining children’s data. These changes acknowledge that in an AI-driven world, a child’s voice, face, and behavioral patterns are as sensitive as their name and address.

Yet enforcement and compliance remain ongoing challenges. States have stepped in with their own regulations. California recently enacted legislation requiring companion chatbot platforms to clearly disclose when users are interacting with AI, with additional protections for minors including break reminders and measures to prevent exposure to sexually explicit content.

The Danger of AI Companions

Perhaps no area illustrates the urgency of protecting children’s rights more starkly than AI companion chatbots. These systems are designed to form emotional bonds with users through simulated empathy and personalized interactions. For vulnerable young people, the consequences can be devastating.

More than seventy percent of American children are now using these AI products, according to lawmakers who introduced protective legislation in 2025. The bipartisan GUARD Act represents growing recognition that AI companions pose unique risks to minors. The proposed legislation would ban AI companies from providing AI companions to minors and create new criminal penalties for companies that knowingly make available to minors AI systems that solicit or produce sexual content.

The concerns extend beyond chatbots. AI-generated synthetic media has created new categories of harm. A federal lawsuit filed in October 2025 against a website that uses AI to create nonconsensual nude images highlights the severe privacy violations possible with generative AI technologies. The case may become the first major application of the TAKE IT DOWN Act, the new federal law requiring platforms to remove AI-generated or real intimate images shared without consent.

AI-Generated Child Sexual Abuse Material: A Growing Threat

One of the most disturbing developments in AI technology is its use to create fabricated child sexual abuse material. AI algorithms now possess the ability to generate realistic but entirely fake explicit content involving minors, creating what child protection advocates call an unsettling blur between authentic and fabricated material.

This technology poses risks that extend far beyond the creation of illegal content itself. According to the Child Rescue Coalition, AI-generated material introduces a new dimension to online threats by amplifying the potential for sextortion. Predators, or even peers, can exploit these AI-generated images to threaten or coerce children into meeting their demands, whether sending money or engaging in sexual acts, to prevent the release of the fake content.

The implications are chilling: predators no longer need actual explicit photos of a child to exploit or threaten them. They can now create convincing fabricated versions using publicly available photos from school yearbooks, sports teams, or social media profiles. This reality fundamentally changes the risk landscape for every child with any online presence.

The challenge for parents and law enforcement is equally complex. When fabricated content looks shockingly real, distinguishing between what’s authentic and what’s AI-generated becomes increasingly difficult. This complicates both the prosecution of offenders and the protection of victims, as the traditional markers used to identify and combat child sexual abuse material become less reliable in an AI-generated context.

Deepfakes and Digital Impersonation

Beyond the creation of explicit material, AI-powered deepfake technology enables sophisticated forms of impersonation that can be used to manipulate and deceive children. Deepfakes involve the manipulation of visuals and audio to create convincing yet entirely fabricated content, allowing predators to create fake identities or impersonate other children.

This capability becomes particularly dangerous when predators impersonate someone familiar to a child—a classmate, friend, or peer from online communities. By exploiting pre-existing connections and trust, predators can lower a child’s defenses and manipulate them into compromising situations. The impersonated identity might be used to request explicit content, arrange meetings, or establish relationships built entirely on deception.

As Phil Attwood, Director of Impact at Child Rescue Coalition, emphasizes, parents cannot ignore the concerning impact of AI on child sexual abuse and online exploitation. Staying informed, having open conversations with children, and actively monitoring their online activities are crucial steps. By taking a proactive role, parents contribute to creating a safer digital space for their children in the face of evolving technological challenges.

The sophistication of these impersonation tactics means that children may not recognize warning signs that would have been apparent in previous generations of online threats. A video call that appears to show a trusted friend, voice messages that sound authentic, or images that seem legitimate can all be fabricated using AI technology available today.

AI-Driven Grooming: Automated Predation

Traditional online grooming relied on the instincts, patience, and social engineering skills of individual predators. AI has transformed this threat by enabling automated, data-driven approaches to identifying and targeting potential victims.

AI-driven grooming uses advanced algorithms to analyze children’s online activities, communication patterns, and personal information. This allows predators to tailor their approaches to exploit specific vulnerabilities with unprecedented precision. The algorithms can detect patterns of behavior, identify interests and hobbies, and even assess emotional states based on social media posts and online interactions.

This technological enhancement makes the grooming process more efficient and more effective. Instead of casting a wide net and hoping to find vulnerable targets, predators can use AI to identify children who exhibit specific characteristics or vulnerabilities. The technology can help predators understand which approaches are most likely to succeed with particular children, how to time their interactions for maximum impact, and how to customize their personas to be most appealing or sympathetic.

The sophistication of AI-driven grooming extends to the creation of tailored messages and content that align precisely with a child’s interests or emotional vulnerabilities. The goal remains establishing a false sense of trust and connection, but the pathway to that goal is now data-driven and optimized through machine learning. This makes it easier for predators to manipulate children over time, gradually escalating the relationship toward exploitation while maintaining the child’s trust and compliance.

Education: Promise and Peril

In educational settings, AI presents both extraordinary opportunities and significant risks. The technology can personalize learning, provide real-time feedback, and make education more accessible to students with diverse needs. AI-powered tools can offer text-to-speech functionality, adaptive learning pathways, and assistive technologies that help remove barriers for students with disabilities.

Research from Harvard Graduate School of Education provides important insights into this complex landscape. Ying Xu, an assistant professor at Harvard, frames the central question: can children benefit from AI interactions in the ways they benefit from interacting with other people? Her research, along with that of many others, demonstrates that children can learn effectively from AI when it’s designed with learning principles in mind.

AI companions that ask questions during activities like reading can improve children’s comprehension and vocabulary, Xu notes. However, she emphasizes a critical limitation: while AI can simulate some educational interactions, it cannot fully replicate the deeper engagement and relationship-building that come from human interaction, particularly when it comes to follow-up questions or personalized conversations that are essential for language and social development.

Xu acknowledges both the excitement and the concerns surrounding what she calls the “AI generation.” While AI has potential for personalized learning and helping students develop skills for an AI-driven society, numerous unanswered questions remain. Will children’s ability to find answers and learn independently be affected? Does using commands like “hey” to activate AI systems diminish children’s understanding of politeness? And perhaps most worryingly for many: will children become more attached to AI than to the people around them?

These questions underscore the need for what Xu calls AI literacy—teaching children to understand the limitations and potential misinformation from AI, and promoting critical evaluation of AI-generated content. This responsibility falls on both developers and educators.

However, schools are often adopting AI-powered educational technology at scale with limited vetting. This creates potential risks around data privacy, algorithmic bias, and the effectiveness of the tools themselves. Policymakers have called for child-rights impact assessments, evidence of educational value, and clear privacy protections before AI tools are deployed in classrooms.

The federal government has recognized the need for comprehensive approaches to AI in education. An executive order on AI education issued in April 2025 emphasizes the importance of AI literacy for students while calling for proper training and resources for educators. The challenge is ensuring that AI enhances rather than replaces the human connections essential to learning.

Why Young Voices Must Be Heard

Perhaps the most fundamental issue is participation. Young people are not simply passive recipients of AI technology—they are stakeholders whose lives will be profoundly shaped by decisions made today.

Research on children’s experiences of privacy in the digital environment shows that children of all ages are deeply concerned about how companies collect and use their data. These concerns extend beyond privacy to include questions about fairness, transparency, and the right to participate in shaping the technologies that affect them.

UNICEF’s updated guidance on AI and children, released in its third version in 2025, emphasizes this point clearly. The guidance was developed through consultation with diverse stakeholders including children themselves, across twelve countries. It provides ten requirements for child-centered AI, grounded in the Convention on the Rights of the Child, which affirms that every person under 18 has the right to participate in decision-making processes that impact them.

Some governments are already taking steps to include youth perspectives. South Korea, for example, has implemented an online platform allowing young people to provide feedback on digital policies, vote on proposed regulations, and suggest ideas. This recognition that the future shaped by AI belongs to today’s youth represents an essential shift in governance approaches.

Practical Steps for Parents and Guardians

While systemic change requires action from governments and corporations, parents and guardians play a crucial role in protecting children from AI-related risks today. The Child Rescue Coalition offers concrete guidance for families navigating these challenges.

Open and Ongoing Communication: Initiate honest conversations with children about their online activities, approached not as interrogations but as expressions of genuine interest in their digital lives. Encourage them to share experiences, express concerns, and understand the potential risks associated with explicit content online. Establishing trust is essential: children should feel comfortable coming to parents with concerns rather than hiding uncomfortable situations out of fear of punishment or judgment.

Digital Citizenship Education: Take time to educate children about responsible digital citizenship, emphasizing the importance of privacy, respectful online behavior, and the potential consequences of sharing explicit content or personal information. This education should be age-appropriate and ongoing, evolving as children develop and as technology changes.

Promoting Healthy Skepticism: Instill a sense of skepticism in children regarding online interactions. Encourage them to question the authenticity of messages, even if they appear to be from someone they know. Teach them to seek verification before responding to unusual requests or sharing sensitive information. In an era of deepfakes and AI impersonation, healthy skepticism is a critical protective factor.

Setting Clear Boundaries: Establish clear family guidelines regarding the sharing of personal information and explicit content online. Encourage children to pause and think critically before posting or sharing anything that could potentially be misused. These boundaries should be discussed and agreed upon together rather than simply imposed.

Leveraging Privacy Settings: Familiarize both yourself and your children with privacy settings on social media platforms and online services. Ensure that profiles are set to private when appropriate, limiting the exposure of personal information to a carefully selected audience. Regularly review these settings together, as platforms frequently change their privacy options and defaults.

Appropriate Monitoring: Implement age-appropriate monitoring of online activities. For younger children, this might include parental control software to restrict access to potentially harmful content. For older children and teens, monitoring should balance safety with respect for developing autonomy. Regular check-ins and conversations about online experiences can be more effective than surveillance alone.

Reporting and Response Protocols: Educate children on the importance of reporting suspicious or uncomfortable online encounters promptly. Ensure they know how to use privacy settings to block and report individuals who make them feel uneasy. Establish clear family protocols for what to do if they encounter explicit content, receive threatening messages, or experience any form of online harassment or exploitation.

A Rights-Based Path Forward

Protecting children’s rights in the age of AI requires a comprehensive, rights-based approach. This means moving beyond a narrow focus on safety to address the full spectrum of children’s rights: privacy, expression, education, participation, and protection from exploitation.

Key principles should guide this work. First, AI systems used by or affecting children must be designed with their rights at the center from the outset, not as an afterthought. Second, policies must be grounded in evidence about how children actually use and experience these technologies, not assumptions based on adult perspectives. Third, children and adolescents must be genuinely involved in the research, development, and governance of AI technologies.

This isn’t about banning AI or retreating from technological progress. Young people themselves recognize AI’s potential benefits for learning, creativity, and solving global challenges. The goal is ensuring that this powerful technology serves their interests rather than exploiting their vulnerabilities.

Governments must establish clear rules and enforce them consistently. Technology companies must prioritize children’s wellbeing over profits and design systems with robust safeguards. Educators and parents need resources and training to help young people develop AI literacy and critical thinking skills. And crucially, young people themselves need real seats at the table where decisions about AI governance are made.

The Time Is Now

AI is not waiting for us to get our policies right. Every day, millions of young people interact with AI systems that collect their data, shape their experiences, and influence their development. The documented harms are real and growing. The need for action is urgent.

But there is reason for hope. Awareness of these issues has increased dramatically. New laws and regulations are being developed and implemented. Privacy advocates, former regulators, and child rights organizations are working to apply existing legal frameworks to AI systems while pushing for necessary new protections.

Most importantly, young people themselves are speaking up. They’re not asking to be shielded from technology—they’re demanding the rights and protections they deserve as full participants in a digital world. They’re calling for transparency, accountability, and the opportunity to help shape the AI systems that will define their futures.

The question is whether adults in positions of power, in government, in technology companies, and in educational institutions, will listen and act accordingly. Young people’s rights matter, online and offline. The technologies shaping their lives must reflect and protect those rights. And young people must have a real say in building the AI-powered future they’ll inherit.

The time for half measures and voluntary commitments has passed. The time for comprehensive, rights-based AI governance that centers children’s voices and protects their fundamental rights is now.

This article draws on recent research, policy developments, and advocacy work from organizations including UNICEF, the Electronic Privacy Information Center, the Federal Trade Commission, and youth-led initiatives worldwide. For more information on protecting children’s rights in the digital age, consult resources from child advocacy organizations and digital rights groups.
