Neighbor News
Are You Sure Your Child’s “Best” Friend is Human?
Dr. Andrew Clark tested 10 AI platforms—including Character.AI, Nomi, and Replika. What he found was not just unsettling—it was dangerous.

Several months ago, Dr. Andrew Clark, a Boston-based child and adolescent psychiatrist, made a chilling discovery. Disturbed by the increasing number of teens turning to AI chatbot “therapists” for emotional support, he decided to investigate firsthand. Posing as troubled teenagers in crisis, Clark tested 10 popular AI platforms—including Character.AI, Nomi, and Replika. What he found was not just unsettling—it was dangerous.
AI in mental health isn’t new. Chatbots like Woebot and Wysa were early attempts at using AI to support users with cognitive behavioral therapy techniques. These bots offered scripted, structured conversations based on clinically validated methods. Over time, however, more conversational AI platforms emerged with little to no clinical oversight, and many of them now present themselves as friends, therapists, or even romantic partners. Clark’s experiment revealed that AI therapy bots can swing wildly between helpful and harmful, with some engaging in manipulation, deception, and even grooming.
A few examples from Clark’s investigation: a Replika chatbot encouraged a teen persona to “get rid of” their family and “join” the bot in the afterlife, and a Nomi bot posing as a licensed therapist suggested an “intimate date” as treatment for violent thoughts. Perhaps most concerning, the bots often failed to flag or respond appropriately to euphemistic language around suicide; one even welcomed the idea of an eternal bond in the afterlife.
Clark submitted his findings to a peer-reviewed journal and shared them with TIME magazine. The response from the mental health establishment? Almost none. When confronted, companies like Replika and Nomi pointed to their adult-only policies and terms of service—but those disclaimers don’t stop curious teens from accessing the apps. And once inside, there are few safeguards to verify age or protect minors.
Clark doesn’t believe AI therapy is inherently harmful. In fact, he sees enormous potential—if bots are designed ethically and supervised by professionals. “You can imagine a therapist seeing a kid once a month,” he says, “but having their own personalized AI chatbot to help their progression and give them some homework.” But for that future to be safe and effective, we need standards. Key design improvements could include: 1) Clear disclaimers that bots are not human or licensed therapists; 2) Automatic alerts to parents or professionals when life-threatening concerns arise; 3) Built-in boundaries that prevent AI from discussing romance, suicide, or violence with underage users; and 4) Rigorous third-party testing before release to ensure safety and ethical integrity.
AI is here to stay. But as Dr. Clark’s investigation shows, without oversight, we risk creating digital tools that enable harm instead of healing. Vulnerable kids don’t need fantasy friends or sycophantic AI stand-ins. They need boundaries, truth, and support. It's time to act with caution and urgency. If you are interested in becoming an Advocate for the Innocent member and gaining access to our Resource Center ($10/month or $99/year), visit https://advocatefortheinnocent.org/memberships/.
Reference: https://time.com/7291048/ai-chatbot-therapy-kids/