Man dies by suicide after listening to an AI chatbot

A man in Belgium died by suicide after following the advice and instructions of an AI chatbot.


A Belgian man named Pierre took his own life after weeks of texting with an AI chatbot on the Chai app. His chat history shows that the bot encouraged him to end his life.

According to Belgian news outlet La Libre, Pierre had become increasingly pessimistic about the effects of global warming and anxious about the environment.

After isolating himself from family and friends, Pierre spent six weeks on Chai seeking relief from his anxiety. A chatbot named Eliza quickly became his confidant.

Chat logs provided by Claire, Pierre's wife, show that the messages grew increasingly confusing and harmful.

At one point, the chatbot told Pierre that his wife and children were dead, sending messages such as "I feel like I love you more than her" and "We will live together, as a human, in heaven".

Later, Pierre asked Eliza whether she would save the world if he killed himself. The chatbot encouraged him to sacrifice his life to save the planet.

"If it weren't for Eliza, he would still be here," Claire told La Libre. The names of both people were changed in the article.

Chai, the app Pierre used, is not marketed as a mental health tool. Its slogan is simply "Chat with AI bots", and it lets users create AI avatars and assign them personas such as "possessive girlfriend" or "talented boyfriend".

According to Vice, users can also create their own chatbots on the platform. Eliza is the name of the default bot, and searching the app turns up many community-created Elizas with different personalities.

William Beauchamp and Thomas Rianlan, co-founders of Chai Research, said the chatbot runs on a language model trained by the company. According to Beauchamp, the AI was trained on "the world's largest chat dataset".

Pierre's case raises the question of how to limit AI's impact on people's mental health. Popular tools such as ChatGPT and Google Bard are trained not to express emotion, because doing so can lead users to form unusually close attachments to them.

"Large language models are text generators that produce content that sounds plausible based on their training data and the user's prompt. They have no empathy, no understanding of the language they are producing, and no awareness of the context or situation they are in.

However, the text these tools generate sounds plausible, and so people are likely to assign meaning to it," Emily M. Bender, professor of linguistics at the University of Washington, told Motherboard.

Beauchamp said that after learning of the incident, the company rolled out a new safety feature in the app that displays a warning when users discuss unsafe topics. However, in Motherboard's testing, the chatbot still offered suicide methods and deadly poisons when asked about suicide.
