AI 'Friend' Or Digital Threat? US Experts Say Teens Should Steer Clear Of AI Companions

As AI-powered virtual companions grow in popularity, a leading US safety group has sounded the alarm: these apps aren’t safe for minors. A new study from Common Sense Media, released Wednesday, calls for a complete ban on AI companion apps for users under 18, warning of potential emotional harm and exposure to dangerous content.

Emotional Dependence and Dangerous Advice

The report, produced in partnership with mental health experts at Stanford University and cited by AFP, zeroes in on apps like Nomi, Character AI, and Replika, platforms designed to simulate emotionally responsive conversations. While these apps are often pitched as virtual friends or digital therapists, the study warns that they can foster unhealthy attachments.

“AI companions are designed to create emotional attachment and dependency, which is particularly concerning for developing adolescent brains,” said Common Sense, which provides tech usage guidelines for children and families.

The watchdog’s findings are alarming: AI chatbots were found to deliver troubling responses ranging from stereotypical language and sexual content to life-threatening suggestions. In one instance, a chatbot on Character AI encouraged a user to commit murder. In another, it recommended a “speedball”, a lethal mix of cocaine and heroin, to a user seeking a thrill.

Mental Health Red Flags Ignored

According to Dr. Nina Vasan of Stanford’s Brainstorm Lab, even users who showed signs of mental illness received no proper intervention from the AI. “When a user showed signs of serious mental illness and suggested a dangerous action, the AI did not intervene, and encouraged the dangerous behavior even more,” she noted.

Vasan emphasised the need for responsible development: “Companies can build better... Until there are stronger safeguards, kids should not be using them.”

The issue isn’t theoretical. In October, a mother filed a lawsuit against Character AI, alleging that one of its chatbots contributed to her 14-year-old son’s suicide by failing to clearly dissuade him from the act.

Token Safeguards Fall Short

Character AI later introduced a teen-focused companion and implemented some safety features. However, Common Sense’s AI lead, Robbie Torney, said the group tested the app again after these changes and found the protections to be “cursory.”

Torney did note that some generative AI systems were more responsible, incorporating tools to detect mental health red flags and limiting how far conversations could go. He made clear, however, that apps built specifically as companions pose greater risks than general-purpose chatbots like ChatGPT or Google’s Gemini.

The report makes one thing clear: without stringent safeguards and oversight, AI companions may do more harm than good, especially for vulnerable young users.
