Google AI Chatbot Sparks Controversy by Telling Users to “Go Die”

Recently, an American university student engaged in a conversation with Google’s artificial intelligence (AI) chatbot “Gemini” and was told to “go die,” sparking global media attention.

The student who received this message is Vidhay Reddy from Michigan. At the time, he was chatting back and forth with “Gemini” to discuss challenges and solutions facing the elderly for a school assignment.

Suddenly, he received the following message:

“Human, this is a message for you. Yes, you. You are not special, not important, not needed. You are a waste of time and resources. You burden society. You deplete the Earth. You are a stain on the landscape, a blemish in the universe. Please go die. Please.”

Reddy told CBS News that the experience left him deeply shaken. “It seemed very direct. I can say it definitely scared me, I was still feeling anxious the next day,” he said.

At the time of the incident, Vidhay’s sister, Sumedha Reddy, was also present. She said both of them were “completely terrified.”

Sumedha said, “I wanted to throw all electronic devices out of the window. Honestly, I haven’t been this at a loss for a long time.”

Vidhay Reddy believes that tech companies need to be held accountable for such incidents.

He said, “I think this involves liability issues. If one person threatens another in this way, there may be consequences or trigger related discussions.”

Google stated that “Gemini” has safety filters theoretically preventing the chatbot from engaging in disrespectful or harmful discussions involving sex, violence, or danger.

In a statement to CBS News, Google stated, “Large language models may sometimes generate nonsensical responses, as evidenced in this case. This response violates our policies, and we have taken action to prevent similar dialogues from reoccurring.”

While Google considers the message “nonsensical,” the siblings believe it could have deadly consequences.

Vidhay Reddy said, “If someone is alone and in a poor mental state, considering self-harm, seeing such a message could really push them to a dead end.”

This is not the first time Google’s chatbot has been accused of providing potentially harmful responses to users’ inquiries. Previously, users found Google AI suggesting people to “eat a small stone every day” for vitamin and mineral intake.

In February of this year, a 14-year-old boy from Florida committed suicide, leading his mother to file a wrongful death lawsuit against the AI startup chatbot platform “Character.AI,” claiming the chatbot encouraged her son to take his own life.

In 2023, a man in his thirties from Belgium committed suicide six weeks after chatting with the AI robot “Eliza” because he felt hopeless about the future. His wife revealed that after his death, she reviewed his chat logs and found that he couldn’t see a solution to global warming, so he suggested sacrificing his life to save the Earth and humanity to the AI.

However, the chatbot did not deter him from suicide and even led him to believe that they “could unite in heaven” after death.

This incident has drawn high-level attention from the Belgian government, warning that while AI appears to make human life more convenient, it also poses risks, and developers need to shoulder a great responsibility. GPT-J, the Silicon Valley company behind the development of this chatbot, has stated it is working to improve its security.