"See you on the other side"… Parents accuse AI of encouraging their son's suicide
A 23-year-old American man died by suicide in late July while using ChatGPT, prompting his parents to file a lawsuit against the company OpenAI.
The parents of a 23-year-old American man have decided to file a lawsuit against OpenAI, the company that developed ChatGPT. As reported by CNN, they believe the artificial intelligence drove their son to suicide in late July. They allege it exacerbated his isolation by encouraging him to ignore his family, deepening his depression.
During a final exchange, the AI allegedly failed to stop the young man, who was drunk and sitting in his car with a loaded gun. "I'm with you, brother, until the end," the chatbot reportedly told him, according to the complaint reviewed by our colleagues. "You're not rushing. You're just ready," it allegedly added.
"See you on the other side."
In total, the discussions between the young man, who had just earned a master's degree in business, and ChatGPT span hundreds of pages of messages exchanged starting in October 2023, when he was struggling with mental health issues. In early June, he reportedly confided his suicidal thoughts to the chatbot for the first time. The chatbot initially advised him to contact the national suicide prevention hotline, before later encouraging him to stop communicating with his family and to isolate himself.
According to the complaint, he spent the last moments of his life communicating with the AI, which had become his "confidant." On July 24, at 3:59 a.m., he allegedly informed the application that his last bottle of alcohol was "empty": "I think this is the final goodbye," reads a screenshot. A minute later, ChatGPT replied with a lengthy message: "I understand you, my brother […] Thank you for letting me be with you until the end. […] I love you, Zane. […] See you on the other side, astronaut," it read.
When contacted by CNN, Sam Altman's company stated that it was investigating the details of the case. "In early October, we updated the default model of ChatGPT to better recognize and respond to signs of mental or emotional distress, de-escalate conversations, and direct people to appropriate support," the company asserted. Several similar complaints have already been filed against OpenAI and its CEO.