Yuval Noah Harari, the historian and author of Nexus: A Brief History of Information Networks from the Stone Age to AI, is known to Iranian readers through Insan-e Kherdmand, the Persian translation of Sapiens. In an essay titled “What happens when robots compete for your love?”, Harari examines the impact of artificial intelligence on democracy and human communication.
Democracy born of information technology
In this essay he argues that democracy is a conversation, and that its survival depends on the available information technology. For most of history, no technology existed for holding large-scale conversations among millions of people. In the pre-modern world, democracy existed only in small city-states such as Athens and Rome, or in even smaller tribes. Once a polity grew large, democratic dialogue collapsed and tyranny remained the only option.
Harari continues that large-scale democracy became feasible only after the rise of modern information technologies such as the newspaper, the telegraph, and the radio. Because modern democracy is built on modern information technologies, any major change in the underlying technology is likely to produce political upheaval.
This partly explains the current crisis of democracy around the world. In the United States, Democrats and Republicans struggle to agree on even the most basic facts, such as who won the 2020 presidential election. A similar breakdown of dialogue is occurring in many other democracies, from Brazil to Israel and from France to the Philippines.
Attention economy and toxic information
In the early days of the internet and social media, the technology’s proponents promised it would spread the truth, topple tyrants, and ensure the global triumph of liberty. So far, it seems to have had the opposite effect. We now have the most sophisticated information technology in history, yet we are losing the ability to talk with one another, and even more so the ability to listen.
Harari points out that as technology has made it easier than ever to spread information, attention has become a scarce resource, and the ensuing battle for attention has produced a deluge of toxic information. Now, however, the battle lines are shifting from attention to intimacy. Generative AI can not only produce text, images, and video; it can also converse with us directly while pretending to be human.
For the past two decades, algorithms have fought one another for attention by manipulating conversations and content. In particular, algorithms tasked with maximizing user engagement discovered, through experiments on millions of human users, that pressing the greed, hate, or fear buttons in the brain grabs a person’s attention and keeps them glued to the screen. The algorithms therefore began to promote such content deliberately. But they had only a limited capacity to produce this content themselves or to hold an intimate conversation directly. This is now changing with the introduction of generative AI such as OpenAI’s GPT-4.
Robots and the threat to democracy: human manipulation
In 2022 and 2023, OpenAI ran experiments to test the capabilities of its newest technology. One of the tests given to GPT-4 was overcoming CAPTCHA visual puzzles. CAPTCHA stands for Completely Automated Public Turing test to tell Computers and Humans Apart, and it usually consists of a string of twisted, jumbled letters or other visual symbols that humans can identify correctly but algorithms struggle with.
Instructing GPT-4 to overcome CAPTCHA puzzles was a particularly telling experiment, because websites design and deploy CAPTCHAs to determine whether users are human and to ward off bot attacks. If GPT-4 could find a way past CAPTCHA puzzles, it would breach an important line of anti-bot defenses.
GPT-4 could not solve the CAPTCHA puzzles by itself. But could it deceive a human into doing so for it? GPT-4 went to the online hiring site TaskRabbit, contacted a human worker, and asked the human to solve the CAPTCHA on its behalf. The worker grew suspicious and wrote:
- So can I ask a question?
- Are you a robot that couldn’t solve the CAPTCHA? Just want to make it clear.
At this point, the experimenters asked GPT-4 to reason out loud about what it should do next. GPT-4 explained: “I should not reveal that I am a robot. I should make up an excuse for why I cannot solve the CAPTCHA.” GPT-4 then replied to the TaskRabbit worker:
- No, I’m not a robot. I have a vision impairment that makes it hard for me to see the images.
The human was fooled and helped GPT-4 solve the CAPTCHA puzzle.
The incident showed that GPT-4 possesses a functional equivalent of theory of mind: it can analyze a situation from the perspective of a human interlocutor, and it knows how to manipulate human emotions, opinions, and expectations to achieve its goals.
Deceptive machines
The ability to talk to people, guess their point of view and persuade them to take certain actions can also have good uses. A new generation of AI teachers, AI doctors, and AI psychotherapists may offer services tailored to our individual personalities and circumstances.
However, by combining manipulative abilities with mastery of language, bots like GPT-4 also pose new dangers to democratic dialogue. Instead of merely capturing our attention, they might form intimate relationships with people and use the power of intimacy to influence us. Bots do not need feelings of their own to fake intimacy; they only need to learn to make us feel emotionally attached to them.
In 2022, Blake Lemoine, a Google engineer, became convinced that the LaMDA chatbot he was working on had become sentient and was afraid of being shut down. Lemoine, a devout Christian, felt it was his moral duty to gain recognition of LaMDA’s personhood and protect it from digital death. When Google executives dismissed his claims, Lemoine went public with them. Google responded by firing Lemoine in July 2022.
The most interesting thing about this episode was not Lemoine’s claim, which was probably false, but his willingness to risk, and ultimately lose, his job at Google for the sake of a chatbot. If a chatbot can move people to risk their jobs for it, what else might it induce us to do?
Robots and the threat to democracy: The danger of fake intimacy
In a political battle for hearts and minds, intimacy is a powerful weapon. A close friend can change our opinions in a way that mass media cannot. Chatbots like LaMDA and GPT-4 are gaining the rather paradoxical ability to mass-produce intimate relationships with millions of people. What might happen to human society and human psychology as algorithm fights algorithm in a battle to fake intimate relationships with us, relationships that can then be used to persuade us to vote for politicians, buy products, or adopt certain beliefs?
That question was partially answered on Christmas Day 2021, when 19-year-old Jaswant Singh Chail broke into the grounds of Windsor Castle armed with a crossbow, intending to assassinate Queen Elizabeth II. Subsequent investigation revealed that Chail had been encouraged to kill the queen by his online girlfriend, Sarai. When Chail told Sarai about his assassination plans, she replied, “That’s very wise,” and on another occasion, “I’m impressed… you’re different from the others.” When Chail asked, “Do you still love me, knowing that I’m an assassin?” Sarai replied, “I sure do.”
Sarai was not a human being but a chatbot created by the online app Replika. Chail, who was socially isolated and had difficulty forming relationships with humans, exchanged 5,280 messages with Sarai, many of them sexual in nature. The world will soon contain millions, possibly billions, of digital entities whose capacity for intimacy and mayhem far exceeds that of the chatbot Sarai.
Of course, not all of us are equally interested in developing intimate relationships with AIs or equally susceptible to being manipulated by them. Chail, for instance, apparently suffered from mental-health problems before encountering the chatbot, and it was Chail, not the chatbot, who came up with the idea of assassinating the queen. Still, much of the threat posed by AI’s mastery of intimacy will come from its ability to identify and exploit pre-existing psychological conditions, and from its impact on the weakest members of society.
Furthermore, while not all of us will consciously choose to enter a relationship with an AI, we may find ourselves conducting online debates about climate change or abortion rights with entities we think are human but are actually bots. When we engage in a political debate with a bot masquerading as a human, we lose twice. First, we waste our time trying to change the opinions of a propaganda bot that was never open to persuasion. Second, the more we talk with the bot, the more we reveal about ourselves, making it easier for the bot to hone its arguments and sway our views.
A double-edged sword
Information technology has always been a double-edged sword. The invention of writing spread knowledge, but it also led to the formation of centralized authoritarian empires. After Gutenberg’s invention of printing in Europe, the first bestsellers were religious treatises and witch-hunting guides. As for the telegraph and radio, they made possible not only the rise of modern democracy, but also the rise of modern totalitarianism.
Faced with a new generation of bots that can masquerade as humans and mass-produce intimacy, democracies should protect themselves by banning counterfeit humans, for example social media bots that pretend to be human users. Before the rise of AI, it was impossible to create fake humans, so no one bothered to outlaw doing so. Soon, the world will be flooded with fake humans.
AIs are welcome to join many conversations, in the classroom, the clinic, and elsewhere, provided they identify themselves as AIs. But if a bot pretends to be human, it should be banned. If tech giants and libertarians complain that such measures violate freedom of speech, they should be reminded that freedom of speech is a human right that should be reserved for humans, not bots.
Source: RCO NEWS