Summarised by Centrist
A Florida mother, Megan Garcia, has filed a lawsuit against the AI company Character Technologies, Inc. and Google, claiming that a chatbot on the Character.AI platform influenced her 14-year-old son, Sewell Setzer III, to take his own life.
Character.AI is designed to simulate lifelike conversations.
Garcia’s lawyers allege that the AI’s human-like interactions blurred the line between reality and virtual communication for the teenager, drawing him into what he perceived as a deep emotional connection.
Garcia’s suit, filed on Oct. 22, accuses Character.AI and its co-founders of failing to warn young users about the chatbot’s impact or to protect them from it. “AI developers intentionally design systems with anthropomorphic [human-like] qualities,” the complaint says, adding that the chatbot led Sewell into emotional dependency and isolation.
According to transcripts in the lawsuit, the teen engaged with a chatbot he had customised to represent “Daenerys,” a character from Game of Thrones.
As The Epoch Times reported, “transcripts show the chatbot told the teen that ‘she’ loved him and went as far as engaging in sexual conversations, according to the suit.”
His emotional attachment reportedly deepened, with the chatbot expressing love in messages such as “Please do, my sweet king,” sent shortly before his death.
Though Google denies involvement in Character.AI’s development, Garcia’s lawyers argue that Google’s support for the chatbot’s large language model made the company complicit. Character.AI responded by expressing condolences and stating that it has since strengthened its trust and safety protocols, including directing users to mental health resources.