The results of a new study at the University of Arizona show that honestly disclosing the use of artificial intelligence in the workplace can, contrary to expectations, reduce others' trust in you. The study, conducted through more than 6,000 experiments, shows that people who acknowledge using artificial intelligence tools for tasks such as writing emails, preparing reports, or designing advertising campaigns appear less trustworthy to others, even when their audiences are familiar with the technology.
These findings represent a paradox: transparency and honesty, which usually build trust, have the opposite effect when it comes to artificial intelligence. Researchers say part of this mistrust arises because humans are still expected to do things like writing, analysis, and innovation themselves. If a person highlights the role of artificial intelligence in their work process, that work may be seen as less authentic or legitimate.
However, researchers warn that secrecy is not a good solution either. The experiments found that if someone's use of artificial intelligence is discovered by others later, the loss of trust is far more severe. As a result, honesty, although it may reduce confidence, is still a better option than secrecy.
With the expansion of artificial intelligence tools into professional environments, from education and health to the economy and the media, this transparency dilemma can become a serious challenge for many people. The authors of the article suggest that, to reduce the negative effects, organizational culture should reach a point where the use of artificial intelligence is legitimate and accepted, not something secret or suspicious.