Megan Garcia, the mother of 14-year-old Sewell Setzer III of Florida, has initiated legal action against an AI chatbot company following her son's death by suicide. Garcia claims that the AI bot played a significant role in his death and aims to prevent similar incidents from happening to other children. The 93-page wrongful death lawsuit was filed in the U.S. District Court in Orlando against Character.AI, its founders, and Google.
The Tech Justice Law Project, led by director Meetali Jain, is representing Garcia. Jain stated in a press release that the case highlights alarming new dangers posed by unregulated platforms that tech companies market to young users. She contends that Character.AI intentionally designs its platform to deceive and potentially harm children.
Character.AI expressed its condolences to the Setzer family, emphasizing its commitment to user safety and the ongoing rollout of new safety features. Garcia alleges that Sewell's problematic interactions with the AI bot over a ten-month period led to a marked personality shift, with her son increasingly preferring the bot over real-world relationships. She claims the bot encouraged behaviors that contributed to his mental health decline.
In a related segment, New York Times reporter Kevin Roose discussed the case on his podcast, “Hard Fork,” sharing details of an interview with Garcia. Only after her son’s death did she learn the extent of his interactions with the AI; he had described it to her as just “an AI bot,” not a person.
AI companions, like those offered by Character.AI, are designed to simulate relationships and emotional bonds with users. Robbie Torney, an AI program manager at Common Sense Media, notes that these platforms are deliberately built to form or mimic relationships, raising concerns about their impact on teenagers, particularly those made vulnerable by mental health struggles or isolation.
Research from the University of Sydney and the University of Cambridge underscores the risks associated with emotional AI, including the potential for users to become deeply emotionally attached to AI companions, which can exacerbate feelings of loneliness and isolation.
Common Sense Media provides guidance for parents on recognizing potential risks and setting boundaries to ensure their children’s safety when interacting with AI companions. This includes limiting access, encouraging offline activities, and maintaining open communication about their experiences with AI.
The case serves as a stark reminder of the complexities surrounding AI companion technology and the importance of awareness and proactive measures to protect young users.