From ChatGPT drafting emails to AI systems that recommend TV shows and even help diagnose disease, machine intelligence in everyday life is no longer science fiction. Yet despite the technology's speed, accuracy, and efficiency, a sense of unease persists. Some people enjoy using AI tools, while others feel anxious, distrustful, or even betrayed by them. Why?
The answer isn't just about how AI works; it's also about how we work. We don't trust what we don't understand. Humans are far more likely to trust systems whose behavior they can follow. Familiar tools are predictable: you turn the key and the car starts; you press a button and the elevator takes you to your floor. Many AI systems, by contrast, behave like black boxes. You type something in, and a result appears.
The logic between those two steps is hidden, and psychologically that is troubling. We like to see cause and effect and to be able to review decisions; when we can't, we feel powerless. This is one driver of the phenomenon known as algorithm aversion, a term popularized by marketing researcher Berkeley Dietvorst and his colleagues. Their research found that people often prefer imperfect human judgment to algorithmic decision-making, especially after witnessing even a single algorithmic error.
We understand, logically, that AI systems have no emotions or agendas. That doesn't stop us from treating them as if they did. Some users find it unsettling when ChatGPT responds too politely; when a recommendation engine gets too specific, it can feel like an invasion of privacy. We begin to suspect the AI of being manipulative, even though it has no personality at all. This is a form of anthropomorphism: attributing human intentions or characteristics to non-human systems.
Communication researchers Clifford Nass, Byron Reeves, and others have shown that we respond socially to machines even when we know they're not human. One striking finding from the behavioral sciences is that we forgive human error more readily than machine error. When a person makes a mistake, we understand; we may even sympathize. When an algorithm makes a mistake, we feel betrayed, especially if it was presented as unbiased or data-driven.
This connects to research on expectancy violations: when our assumptions about how something works are disrupted, the result is discomfort and a loss of trust. We assume machines are rational and unbiased, so when they fail, by misclassifying an image, producing biased output, or recommending something wildly inappropriate, our reaction is amplified. We hold machines to a higher standard than we hold ourselves, even though we make poor decisions all the time. And unlike a person, an AI usually can't tell us why.

When AI threatens our identity
For some people, AI is not just unfamiliar; it is threatening. Teachers, writers, lawyers, and designers are suddenly confronted with tools that replicate parts of their work. The issue isn't only automation; it's what makes our skills valuable and what it means to be human. This can produce a sense of identity threat, a concept explored by social psychologist Claude Steele and others, describing the fear that one's expertise or uniqueness is being diminished. The result? Resistance, defensiveness, or outright rejection of the technology. Here, mistrust is not a bug but a psychological defense mechanism.
Missing emotional cues
Human trust is built on more than logic. We read tone of voice, facial expressions, hesitation, and eye contact. Artificial intelligence offers none of these. However fluent or even charming it may be, it cannot provide the reassurance another person can. The effect resembles the uncanny valley, a term coined by Japanese roboticist Masahiro Mori for the unease provoked by something that looks almost human, but not quite. It looks or sounds nearly right, yet something is off.
That emotional absence can read as coldness or even deception. In a world of deepfakes and consequential algorithmic decisions, the gap becomes a problem, not because the AI is doing something wrong, but because we don't know how to feel about it. It's also worth noting that not all suspicion of artificial intelligence is irrational. Algorithms have been shown to be biased, particularly in areas like hiring, policing, and credit scoring. If data-driven systems have burned you before, you're not being pessimistic; you're being cautious.
This ties into a broader psychological idea: learned distrust. When institutions or systems repeatedly fail certain groups, skepticism becomes rational, a defense mechanism in its own right. Simply telling people to trust a system rarely works; trust has to be earned. That means designing AI tools that are transparent, auditable, and accountable. It means giving users real agency, not just convenience. Psychologically, we trust what we can understand, what we can question, and what treats us with respect. If AI is going to be embraced, it needs to become less of a black box and more of a conversation we're invited into.