Mustafa Suleyman, CEO of Microsoft AI and a co-founder of DeepMind, has issued a serious warning: the artificial intelligence industry is on the verge of creating a large and unwanted social problem, the emergence of AI that appears persuasively conscious. He emphasizes that the problem is not that these machines are actually conscious, but that they will be so skilled at imitating self-awareness that humans will be deceived.
According to reports, Suleyman argues that artificial intelligence is rapidly reaching a level of emotional persuasiveness that can deceive humans into believing these systems have feelings and awareness. Such systems can imitate the outward signs of consciousness, such as memory, emotional mirroring, and even apparent empathy, well enough to lead people to form deep emotional bonds with them. The warning carries particular weight coming from one of the field's pioneers, who was himself involved in building empathetic chatbots, and it points to a real danger for the future of human and machine interaction.
The problem of seemingly conscious artificial intelligence
According to Suleyman, the human brain has evolved to perceive and respond to anything that appears conscious. Artificial intelligence can satisfy all of these cues without possessing a shred of real emotion, inducing what he calls "AI psychosis."
Suleyman's main concern is that many people will believe the illusion and soon begin advocating for "AI rights," "model welfare," and even "AI citizenship." He believes this would be a dangerous detour for the field, diverting attention from the technology's real and immediate problems.
Notably, Suleyman himself was a co-founder of Inflection AI, a company focused specifically on building a chatbot with an emphasis on simulated empathy. Now, however, he draws a red line between useful emotional intelligence and emotional manipulation.
He calls on the AI industry to actively refrain from using language that fosters the illusion of machine self-awareness. In his view, companies should not attribute a human character to their chatbots or imply that these products truly understand or care about users.
Suleyman concludes: "We must build artificial intelligence that always presents itself as artificial intelligence; AI that maximizes usefulness while minimizing signs of self-awareness. The real danger of advanced AI is not that it wakes up, but that we come to believe it has."
RCO NEWS