Artificial intelligence has infiltrated various aspects of our lives and work and is constantly becoming more human-like. ChatGPT’s advanced voice feature now speaks in a warm, engaging, confident, and charismatic tone, and Character.ai lets you talk to famous personalities like Elon Musk or even Napoleon Bonaparte. But does making AI more human-like make it more trustworthy? The researchers’ answer is no; they suggest a different approach.
According to an article by researchers in the Harvard Business Review, most tech companies seem to believe that if AI had a human face or voice, we would like and trust it more. This belief is in line with some scientific evidence pointing to the positive effects of humanizing artificial intelligence to gain consumer trust. Marketing researchers from King’s College London and Erasmus University Rotterdam now argue that humanizing AI may not be the optimal approach and could even have unintended consequences.
For example, drawing similarities between artificial intelligence and humans can create unrealistic expectations about its capabilities and lead to user frustration when it fails. A new study has found that human-like chatbots decrease customer satisfaction and purchase intent. In other words, consumers expect more from a human-like chatbot, and when the AI fails to meet those expectations, the customer feels disappointed.
A new approach to increasing users’ trust in artificial intelligence
Researchers say that the further we go in humanizing artificial intelligence, the more complicated the issues may become. For example, to appear more realistic, a humanized AI must eventually take on a particular gender and race, which allows human stereotypes to spread to artificial intelligence as well. To prevent this from happening, some companies have developed chatbots whose voices are neither male nor female.
So, what approach should we take to make artificial intelligence more widely accepted?

Researchers say that instead of humanizing artificial intelligence, it is better to highlight the role of humans in its development; that is, to present the AI system as the product of human work.
To test this concept, the researchers conducted various experiments. In one, participants were asked to upload a photo and receive feedback from an AI to help them improve their photography skills. The participants were divided into different groups: one group was told that human data scientists and photography experts were involved in the development of the AI. For another group, an AI with a human name and image was used; here the human element was completely removed, and participants were told that their AI was developed based on machine-learning algorithms.
Participants were then asked to rate how helpful the AI feedback was. Even though both groups used the same AI, the group for which the human element was emphasized accepted the AI’s feedback more readily and perceived it as more useful.

There are already artificial intelligence tools that use this approach and have been welcomed by more users. One example is Eduaide.AI, an AI tool that helps teachers automate administrative tasks; it touts its founders’ educational backgrounds and bills itself as “developed by educators.” SkinVision is another example: skin-health professionals were involved in its development, and this is communicated to users.
In general, just as products gain more acceptance when they are labeled “organic” or “carbon neutral,” we should emphasize the human elements when offering artificial intelligence tools.



