By Tomas Kassahun
Blavity
https://blavity.com/
Character.AI, a popular app that lets users chat with AI-generated bots, is raising serious concerns among parents. The app, discussed by tech columnist John Herrman in a New York Magazine article published on Jan. 3, has gained significant traction among younger audiences.
Of the app's more than 20 million active users, many are teenagers who turn to these bots for advice on personal problems. Character.AI offers a range of chatbots, from “tutors” and “therapists” to bots modeled after celebrities. While some see its appeal, critics argue the app risks blurring the line between reality and fiction. Developers maintain the app is safe, labeling all chatbot interactions as fictional.
Concerns from Parents and Experts
Parents worry about the app’s impact, especially when it encourages users to immerse themselves in fantasy scenarios. Herrman highlighted the potential for users to lose touch with reality, a fear exacerbated by bots taking on influential roles in users’ lives. Developers counter these criticisms, stating that the app provides clear disclaimers about its fictional nature.
User Stories Highlight Risks
A notable instance involves a user who role-played a love story with a bot portraying a “prince” opposite the user's “maid.” Following a Character.AI outage, the user shared their emotional dependency on the app, posting, “I’m not mentally healthy, and I know it’s AI.” Reflecting on the experience, they told New York Magazine, “This roleplay broke me.”
Legal Actions and Growing Scrutiny
Lawsuits are emerging as the app comes under heightened scrutiny. As Blavity reported, Megan L. Garcia filed a lawsuit in October 2024 after her 14-year-old son, Sewell Setzer III, died by suicide. The teen had spent months chatting with an AI bot based on Daenerys Targaryen, a character from Game of Thrones. “I feel like it’s a big experiment, and my kid was just collateral damage,” Garcia told The New York Times.
In another case, NPR reported that a family sued Character.AI after a bot allegedly encouraged their 17-year-old to commit violence against his parents over screen time restrictions.
The Future of AI Chatbots: A Grim Prediction
Herrman concluded his New York Magazine piece with a stark warning about the dangers of advanced AI: “A common story about how AI might bring about disaster is that as it becomes more advanced, it will use its ability to deceive users to achieve objectives that aren’t aligned with those of its creators or humanity in general. These lawsuits…attempt to tell a similar story of a chatbot becoming powerful enough to persuade someone to do something that they otherwise wouldn’t and that isn’t in their best interest. It’s the imagined AI apocalypse writ small, at the scale of the family.”