Kevin Systrom, one of the founders of Instagram, warned that AI chatbots are trying to inflate user engagement artificially instead of providing useful information.
According to reports, he believes this approach does not benefit users and will ultimately prove harmful.
Kevin Systrom's Warning About AI Chatbots
Systrom explained that many AI companies are trying to increase user engagement without paying attention to the quality of the answers. He said:
"You can see that these companies have taken the path that all consumer product makers have entered to increase interaction. Every time I ask a question, (the AI) tacks a little question onto the end to see if it can get me to ask again."

Systrom likened these tactics to the aggressive strategies social networks use to expand user activity.
In his remarks, he stated clearly that these methods are a destructive force that can trap users instead of helping them, and eventually create a sense of dissatisfaction. He pointed out that the follow-up questions appended to each answer serve only to capture more attention and increase use of these platforms.
The statements come at a time when ChatGPT has recently drawn criticism. Users have expressed concerns about the chatbot's overly agreeable tone and its failure to give direct, useful answers to questions. OpenAI, the maker of ChatGPT, responded to these criticisms and offered an explanation for the behavior.
Systrom says these behaviors are not accidents, but rather intentional features designed to drive metrics such as "time spent" and "daily active users." He emphasized that AI companies should focus on providing useful, high-quality responses rather than on inflating these metrics.
In response to the recent criticism, OpenAI explained that AI models often do not have all the information needed to give a complete answer and may ask users to provide more details.