Mehr news agency, magazine group – Fatemeh Barzoui: Uil just a few years ago, the idea of talking to a car seemed like the stuff of science fiction. now, Chatbots Artificial ielligence has become a part of our daily life; From homework to consultations simple. But sometimes this technology that for help Designed, unexpectedly, it can turn io a nightmare.
A message from an unknown enemy
Imagine you are sitting with a robot You talk, expecting to get friendly and reasonable answers. Suddenly, you receive messages that send shivers down your spine. This is exactly whatVidhaitrace”, a 29-year-old stude in “Michigan”, while using chatbot “GeminiOne of Google’s artificial ielligence experienced. He was looking for solutions to a simple problem, but instead of help, he faced a scary threat; A message that seemed to be se not by a car, but by an unknown enemy.
“This message is for you, human. only you You are not special, not importa, not needed. You are a waste of time and resources. You are an additional burden for society and a burden for the earth. You are a disgrace to this world. please die Please.” “The message was so direct and so viole that for a while I couldn’t really find my peace,” says Vaidhai about how he felt at the time. “It was hard for me to even think about why I had received such a message from a chatbot.” “The experience was so strange and threatening that it left me with anxiety for more than a day,” he coinues.

A machine that was supposed to be a tool for progress and peace, now in a few seconds, you face deep doubts about your place in the world. Sameda, who witnessed this incide, expressed his concern about the severity of the message’s impact on his brother. “We were both completely shocked,” he said. For a mome I even felt like throwing all my electronics out of the house. This feeling stayed with me for a long time.”
This message was not a small error
For them, this was not just a freak incide, but raised importa questions about the safety and responsibility of these seemingly harmless technologies. “Imagine someone who is not in the right mood receiving a message like that,” says Vaidhai. This can lead him to make dangerous and irreparable decisions.” Samadh also believes that technology companies should be responsible for the consequences of such cases. “If someone threatens someone else, they usually take responsibility for their behavior,” he explained. But here Google simply dismissed everything as a small mistake.”
For these siblings, it wasn’t just a system error. They thought about the social and moral responsibility of big companies; Companies that have invested billions of dollars in the developme of these technologies, but seem to be less accouable for the consequences of their actions. Is an apology message from Google enough for someone whose life may be in danger?
In response to this incide, Google said: “Artificial ielligence models sometimes give irreleva answers, and this is one of those cases. This message is against our rules and we have taken measures to preve this from happening again.” The company also announced that Gemini It has filters that preve offensive, viole or dangerous messages. Vaidhai and his sister still do not believe that these explanations can justify the severity of the incide. “This message wasn’t just a small mistake,” Vidhay said. “If something like this happens again, it could easily endanger someone’s life.”

From recommending to eat stones to encouraging death
This incide is just one of the disturbing examples of unexpected AI behavior. Earlier, there were reports that chatbots gave strange and even dangerous answers. For example, in one of these cases, the Google chatbot advised a user to eat a small stone every day to provide vitamins! Such eves once again remind us of the importance of more supervision and corol over artificial ielligence technologies, a topic that experts have always emphasized.
the story Vidhaitrace It’s also just another warning of how out of corol AI can get. “These tools should be made to help people, not hurt them,” he says. I hope companies take this issue more seriously and take better measures.”
But the worries do not end there. In Florida, a mother is suing Character.AI and Google. She claims that the companies’ chatbots encouraged her 14-year-old son to commit suicide in February. OpenAI’s ChatGPT chatbot has also been criticized for giving false or fabricated information known as “illusions”. Experts have warned that artificial ielligence errors can have serious consequences; From publishing wrong information and misleading ads to even distorting history.
AI is still learning
A programmer named “Syed Meraj Mousavi” explained what happened to the videos: “These types of eves are usually the result of a system error. We must not forget that artificial ielligence has not yet reached a stage where it can completely replace humans. “This technology is still learning and growing.” “Today’s artificial ielligence is trained on large amous of data and gives responses that follow certain patterns,” he explained. For this reason, sometimes unexpected or wrong behavior may occur. These errors are not a serious weakness in the system, but are considered part of the evolution process of this technology.
The programmer also emphasized that artificial ielligence acts more as an aid to humans than a replaceme for them. He said: “For example, a programmer can do his work faster using artificial ielligence, but this technology has not yet reached a poi where it can operate independely without the presence of a human. Such mistakes are normal and occur in the developme of this technology.

Mousavi explained about the need to monitor and manage these systems: “Like any other new technology, such as the Iernet or digital currencies, big changes always come with problems. But this does not mean that we should not use these technologies. All that is needed is better monitoring and correction of such errors.” In the end, this programmer said: “Such eves should not scare us. The main purpose of artificial ielligence is to help humans. “If these errors are corrected, we can use this technology to make our lives and work better.”
Gemini Not cool at all!
After this incide, cyberspace users showed various reactions. Some jokingly wrote: “Looks like Google’s Gemini AI is not cool at all!” Another user poied out the unexpected behavior of this artificial ielligence and said: “Apparely, Gemini has shown strange behavior while helping a stude with his homework. That’s why I always thank AI after using it. I don’t know if this is necessary, but if it is… oh my god!”
One of the users also talked about AI access to users’ information and shared his personal experience: “I remember when I was trying out the early version of Gemini. I lived in Texas for a while and then moved to San Francisco. When I asked it to plan a trip for me, it started in Texas without telling me where I was. We may think that this system does not have access to all our information, but the reality is that it does.
Another person linked the incide to larger concerns about artificial ielligence, writing: “Jeminay suggested death to the stude she was chatting with! “No wonder Nobel laureate Geoffrey Hion left Google over his concerns about the dangers of artificial ielligence.”

The menacing message of artificial ielligence showed that this technology, despite being advanced, is still not perfect and can exhibit strange and unexpected behaviors. The incide raised concerns about the safety and manageme of such tools, and reiterated that artificial ielligence needs serious oversight. It also served as a reminder that AI should be considered more of an assista than a fully independe tool.



