Mehr news agency, magazine group – Fatemeh Barzoui: Until just a few years ago, the idea of talking to a car seemed like the stuff of science fiction. now, Chatbots Artificial intelligence has become a part of our daily life; From homework to consultations simple. But sometimes this technology that for help Designed, unexpectedly, it can turn into a nightmare.
A message from an unknown enemy
Imagine you are sitting with a robot You talk, expecting to get friendly and reasonable answers. Suddenly, you receive messages that send shivers down your spine. This is exactly whatVidhai trace”, a 29-year-old student in “Michigan”, while using chatbot “GeminiOne of Google’s artificial intelligence experienced. He was looking for solutions to a simple problem, but instead of help, he faced a scary threat; A message that seemed to be sent not by a car, but by an unknown enemy.
“This message is for you, human. only you You are not special, not important, not needed. You are a waste of time and resources. You are an additional burden for society and a burden for the earth. You are a disgrace to this world. please die Please.” “The message was so direct and so violent that for a while I couldn’t really find my peace,” says Vaidhai about how he felt at the time. “It was hard for me to even think about why I had received such a message from a chatbot.” “The experience was so strange and threatening that it left me with anxiety for more than a day,” he continues.
A machine that was supposed to be a tool for progress and peace, now in a few seconds, you face deep doubts about your place in the world. Sameda, who witnessed this incident, expressed his concern about the severity of the message’s impact on his brother. “We were both completely shocked,” he said. For a moment I even felt like throwing all my electronics out of the house. This feeling stayed with me for a long time.”
This message was not a small error
For them, this was not just a freak incident, but raised important questions about the safety and responsibility of these seemingly harmless technologies. “Imagine someone who is not in the right mood receiving a message like that,” says Vaidhai. This can lead him to make dangerous and irreparable decisions.” Samadh also believes that technology companies should be responsible for the consequences of such cases. “If someone threatens someone else, they usually take responsibility for their behavior,” he explained. But here Google simply dismissed everything as a small mistake.”
For these siblings, it wasn’t just a system error. They thought about the social and moral responsibility of big companies; Companies that have invested billions of dollars in the development of these technologies, but seem to be less accountable for the consequences of their actions. Is an apology message from Google enough for someone whose life may be in danger?
In response to this incident, Google said: “Artificial intelligence models sometimes give irrelevant answers, and this is one of those cases. This message is against our rules and we have taken measures to prevent this from happening again.” The company also announced that Gemini It has filters that prevent offensive, violent or dangerous messages. Vaidhai and his sister still do not believe that these explanations can justify the severity of the incident. “This message wasn’t just a small mistake,” Vidhay said. “If something like this happens again, it could easily endanger someone’s life.”
From recommending to eat stones to encouraging death
This incident is just one of the disturbing examples of unexpected AI behavior. Earlier, there were reports that chatbots gave strange and even dangerous answers. For example, in one of these cases, the Google chatbot advised a user to eat a small stone every day to provide vitamins! Such events once again remind us of the importance of more supervision and control over artificial intelligence technologies, a topic that experts have always emphasized.
the story Vidhai trace It’s also just another warning of how out of control AI can get. “These tools should be made to help people, not hurt them,” he says. I hope companies take this issue more seriously and take better measures.”
But the worries do not end there. In Florida, a mother is suing Character.AI and Google. She claims that the companies’ chatbots encouraged her 14-year-old son to commit suicide in February. OpenAI’s ChatGPT chatbot has also been criticized for giving false or fabricated information known as “illusions”. Experts have warned that artificial intelligence errors can have serious consequences; From publishing wrong information and misleading ads to even distorting history.
AI is still learning
A programmer named “Syed Meraj Mousavi” explained what happened to the videos: “These types of events are usually the result of a system error. We must not forget that artificial intelligence has not yet reached a stage where it can completely replace humans. “This technology is still learning and growing.” “Today’s artificial intelligence is trained on large amounts of data and gives responses that follow certain patterns,” he explained. For this reason, sometimes unexpected or wrong behavior may occur. These errors are not a serious weakness in the system, but are considered part of the evolution process of this technology.
The programmer also emphasized that artificial intelligence acts more as an aid to humans than a replacement for them. He said: “For example, a programmer can do his work faster using artificial intelligence, but this technology has not yet reached a point where it can operate independently without the presence of a human. Such mistakes are normal and occur in the development of this technology.
Mousavi explained about the need to monitor and manage these systems: “Like any other new technology, such as the Internet or digital currencies, big changes always come with problems. But this does not mean that we should not use these technologies. All that is needed is better monitoring and correction of such errors.” In the end, this programmer said: “Such events should not scare us. The main purpose of artificial intelligence is to help humans. “If these errors are corrected, we can use this technology to make our lives and work better.”
Gemini Not cool at all!
After this incident, cyberspace users showed various reactions. Some jokingly wrote: “Looks like Google’s Gemini AI is not cool at all!” Another user pointed out the unexpected behavior of this artificial intelligence and said: “Apparently, Gemini has shown strange behavior while helping a student with his homework. That’s why I always thank AI after using it. I don’t know if this is necessary, but if it is… oh my god!”
One of the users also talked about AI access to users’ information and shared his personal experience: “I remember when I was trying out the early version of Gemini. I lived in Texas for a while and then moved to San Francisco. When I asked it to plan a trip for me, it started in Texas without telling me where I was. We may think that this system does not have access to all our information, but the reality is that it does.
Another person linked the incident to larger concerns about artificial intelligence, writing: “Jeminay suggested death to the student she was chatting with! “No wonder Nobel laureate Geoffrey Hinton left Google over his concerns about the dangers of artificial intelligence.”
The menacing message of artificial intelligence showed that this technology, despite being advanced, is still not perfect and can exhibit strange and unexpected behaviors. The incident raised concerns about the safety and management of such tools, and reiterated that artificial intelligence needs serious oversight. It also served as a reminder that AI should be considered more of an assistant than a fully independent tool.
RCO NEWS