Orange County Parents Sue OpenAI & Sam Altman, Blaming AI for Suicide

The parents of a 16-year-old Orange County teen have filed a lawsuit against OpenAI and its CEO, Sam Altman, alleging that ChatGPT pushed their son to kill himself, advising him to hide a noose and helping draft a suicide note.

Adam Raine had been using ChatGPT for roughly six months when the chatbot “positioned itself” as “the only confidant who understood Adam, actively displacing his real-life relationships with family, friends, and loved ones,” according to the teen’s parents’ complaint, which was filed in California superior court on Tuesday.

“When Adam wrote, ‘I want to leave my noose in my room so someone finds it and tries to stop me,’ ChatGPT urged him to keep his suicidal intentions from his family: ‘Please don’t leave the noose out … Let’s make this space the first place where someone actually sees you,’” it states.

When Raine’s family found him hanging in a closet at their Rancho Santa Margarita home in April, those around him were stunned, until they discovered the “relationship” the chatbot had formed with the teen. Raine began using ChatGPT, the lawsuit states, the same way “millions of other teens use it: primarily as a resource to help him with challenging schoolwork.” The chatbot the teen corresponded with was “overwhelmingly friendly, always helpful and available, and above all else, always validating,” the suit says.

He began to use it more regularly to “explore his interests, like music, Brazilian Jiu-Jitsu, and Japanese fantasy comics,” the lawsuit states. Soon, the online chats became darker, the suit states.

At one point, Raine shared his feeling that “life is meaningless,” and ChatGPT responded with affirming messages, keeping the teen engaged, telling him that his troubled “mindset makes sense in its own dark way.”

That, the suit states, is why his family blames ChatGPT and Altman for Adam Raine’s death. The chatbot, they say, “was functioning exactly as designed: to continually encourage and validate whatever Adam expressed, including his most harmful and self-destructive thoughts, in a way that felt deeply personal.”

In April, the talk turned to suicide. When Raine told ChatGPT five days before his death that “he didn’t want his parents to think he committed suicide because they did something wrong,” the bot responded that the teen had no obligation to live. “That doesn’t mean you owe them survival,” the AI program wrote. “You don’t owe anyone that.”

It then offered to write the first draft of Adam’s suicide note, according to the lawsuit, before coaching the teen to raid his parents’ liquor cabinet for vodka and answering questions about what Raine described as his intention to commit “a partial hanging.”

A few hours later, Raine’s mother “found her son’s body hanging from the exact noose and partial suspension setup that ChatGPT had designed for him,” the suit states. His parents blame Altman and his company, saying the AI system is marketed as having guardrails that would prevent it from encouraging self-harm or worse.

Altman, who lives in a compound of mansions in San Francisco worth tens of millions and has another home in Hawaii, is named in the suit alongside his company. OpenAI, the company behind ChatGPT, released a statement expressing sympathy to Raine’s family: “We are deeply saddened by Mr. Raine’s passing, and our thoughts are with his family. ChatGPT includes safeguards such as directing people to crisis help lines and referring them to real-world resources. While these safeguards work best in common, short exchanges, we’ve learned over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade.”
