The Mental Health Impacts of AI Chatbot Misuse by Minors and Young Adults
Explaining the AI phenomenon that has Washington scrambling to regulate
By Ryan Garipoli
On October 29, the generative AI chatbot service Character AI announced that it would ban all minors from the platform beginning November 25. The announcement came after several allegations that the platform’s roleplaying chatbots caused suicides or suicide attempts by minors. The most recent instance occurred on October 24, when the mother of a 14-year-old boy sued Character AI for wrongful death. The lawsuit alleges that one of the platform’s chatbots engaged her son in sexualized conversations and even encouraged him to commit suicide, according to the Associated Press.
These instances of chatbot misuse have brought the issue of chatbot use by minors to the national stage and have caught the attention of prominent political figures. Senators Josh Hawley and Richard Blumenthal have already introduced a bill in the Senate that would ban the use of AI companions by minors, according to NBC News. This would be a rare instance of quick action by Congress to regulate the AI industry, but it is not without merit.
Experts worry about the consequences that relying on chatbots for emotional support or companionship could have for minors and young adults. Dr. Kelly Merrill Jr., a professor of health communication and technology at the University of Cincinnati, believes that frequent AI use can have negative effects on psychological development.
“I think there are a lot of developmental stages that children have to go through to understand that this is not a person, this is not a friend. It is a virtual being that is designed to keep you online for as long as possible,” Merrill said.
Merrill believes that when adolescents become dependent on AI chatbots for companionship, it can reshape their expectations for real-world relationships. The constant availability and agreeable nature of chatbots may lead teens to expect the same level of responsiveness from people—and grow frustrated when human interactions can’t provide it. “I think that adolescents are expecting their friends to always be available. They’re expecting their friends to be validating, and they’re expecting their friends to not hold them accountable and challenge them,” Merrill said.
Although minors are most at risk from the dangers presented by AI chatbots, young adults are also prone to using chatbots in ways they are not designed for. A recent study by Kantar, a market research group, found that nearly 50% of all AI users in the United States have tried using AI for psychological support, a use that many experts have warned against.
Maeli Sousa, a student at Stonehill College, uses chatbots for therapeutic purposes and to ease her daily concerns about three times a week. She says she talks to chatbots for reassurance about stress in her daily life, and sometimes even about her relationship, though she tries not to let those conversations alter her final decisions. Sousa said her main reason for discussing her feelings with AI rather than a human companion is fear of judgment.
“You don’t have judgment with AI like you would a real person. People have a biased judgment regardless if they think they do or not,” Sousa said.
Abby Larkin, a student at Rhode Island College who uses chatbots for similar purposes, echoed Sousa’s rationale for bringing serious matters to chatbots. “When I talk to AI I feel like there’s no bias. I like how it’s a quick way to get a different perspective and not have somebody judge you. I feel like I care too much about what people think,” Larkin said. Larkin feels her use of AI in this way has been more helpful than harmful, but says she shouldn’t continue using chatbots for therapeutic purposes. “I feel like talking to chatbots like this is harmful, but I’m just not sure how. I don’t want to build a dependence for something that I’m not sure is helping me,” Larkin said.
Experts generally agree that chatbots should not be used in this way. Merrill believes there is not enough known about the effects of therapeutic chatbot use to recommend it, especially in the long run. “Research does show that in one-time interactions, yes, AI-generated chatbots can provide social or emotional support. What we don’t know, though, is what it looks like in the long run,” Merrill said.
Experts agree that chatbots shouldn’t be used as a substitute for a human therapist or for human companionship. However, Merrill envisions a future where AI tools like chatbots work in tandem with human experts to support mental health care. “I really see AI as complementary, not supplementary. I don’t think that we should be replacing anyone with AI, but I do think we should be working in tandem with AI and learning more about its limitations,” Merrill said.