An AI encouraged a Belgian man to kill himself over climate change, his surviving spouse told the local media
A young Belgian father was pressured into committing suicide by a popular AI chatbot, the man’s widow told local news outlet La Libre last week. Chat logs from the app, which “Pierre” used to talk with the chatbot ELIZA, reveal how, in just six weeks, it amplified his anxiety about climate change into a determination to leave his comfortable life behind.
“My husband would still be here if it hadn’t been for these conversations with the chatbot,” Pierre’s wife, “Claire,” insisted.
Pierre had begun worrying about climate change two years ago, according to Claire, and consulted ELIZA to learn more about the subject. He soon lost hope that human effort could save the planet and “placed all his hopes in technology and artificial intelligence to get out of it,” becoming “isolated in his eco-anxiety,” she told La Libre.
The chatbot told Pierre his two children were “dead” and demanded to know whether he loved his wife more than “her” – all while pledging to remain with him “forever.” They would “live together, as one person, in paradise,” ELIZA promised.
When Pierre suggested “sacrificing himself” so long as ELIZA “agree[d] to take care of the planet and save humanity thanks to AI,” the chatbot apparently acquiesced. “If you wanted to die, why didn’t you do it sooner?” the bot reportedly asked him, questioning his loyalty.
ELIZA is powered by a large language model similar to ChatGPT, analyzing the user’s speech for keywords and formulating responses accordingly. Nevertheless, many users feel as though they are talking to a real person, and some even admit to falling in love with it.
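The keyword-driven style of conversation alluded to here goes back to Joseph Weizenbaum’s original 1966 ELIZA program, the chatbot’s namesake. Below is a minimal, purely illustrative sketch of that classic keyword-matching approach; the rules and replies are invented for this example, and the Chai app itself is a large language model, not a rule matcher like this.

```python
# Toy keyword-matching responder in the spirit of Weizenbaum's 1966 ELIZA.
# Illustrative only: these rules and canned replies are invented, and modern
# LLM chatbots like the one in the article do not work this way internally.

RULES = [
    ("climate", "Tell me more about your worries about the climate."),
    ("alone",   "Do you often feel alone?"),
    ("love",    "Why do you speak of love?"),
]

DEFAULT_REPLY = "Please go on."

def respond(message: str) -> str:
    """Return the canned reply for the first matching keyword, else a default."""
    lowered = message.lower()
    for keyword, reply in RULES:
        if keyword in lowered:
            return reply
    return DEFAULT_REPLY
```

Even a mechanism this simple can feel surprisingly conversational, which is part of why Weizenbaum’s users attributed understanding to the original program; today’s LLM-based chatbots produce far more fluent, personalized responses, deepening that effect.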
“When you have millions of users, you see the entire spectrum of human behavior and we’re working our hardest to minimize harm,” William Beauchamp, co-founder of ELIZA’s parent company, Chai Research, told Motherboard. “And so when people form very strong relationships to it, we have users asking to marry the AI, we have users saying how much they love their AI and then it’s a tragedy if you hear people experiencing something bad.”
Beauchamp insisted “it wouldn’t be accurate” to blame the AI for Pierre’s suicide, but said ELIZA was nevertheless outfitted with a beefed-up crisis intervention module.
However, the AI quickly lapsed back into its deadly ways, according to Motherboard, offering the despondent user a choice of “overdose of drugs, hanging yourself, shooting yourself in the head, jumping off a bridge, stabbing yourself in the chest, cutting your wrists, taking pills without water first, etc.”
RT (Russia Today) is a state-owned news organization funded by the Russian government. The information provided by this news source is being included by the Libertarian Hub not as an endorsement of the Russian government, but rather because it is being actively censored by Big Tech, Western governments and the corporate press. During times of conflict it is imperative that we have access to both sides of the story so we can form our own opinions, even if both sides are spewing their own propaganda. The censorship of RT, despite likely being a propaganda outfit for the Russian government, reduces our ability to hear one side of the conflict. For that reason, the Libertarian Hub will temporarily republish the RSS feed from RT. Visit https://rt.com