The Federal Trade Commission (FTC) has ordered seven companies active in the field of artificial intelligence to provide information on the impact of chatbots on children and minors.
Meta and Instagram, OpenAI, Snap, xAI, Alphabet, and Character.ai's maker have all been required to provide the Federal Trade Commission with information on how their artificial intelligence assistants earn money, their plans to maintain their user bases, and how they reduce potential harm to users.
Artificial intelligence companies must clarify the effects of chatbots on children
This FTC request to the artificial intelligence companies is part of a study aimed at better understanding and evaluating the safety of AI chatbots, not an enforcement action against them. The companies have been asked to respond to the request within 45 days.

“Despite the remarkable ability of these chatbots to simulate human cognition, they are products like any other, and those who make them available must comply with consumer protection laws,” FTC Commissioner Mark Meador said in a statement.
In a statement, Andrew Ferguson, the head of the commission, emphasized the need to investigate the possible effects of chatbots on children. At the same time, Ferguson said the United States is maintaining its role as a global leader in this new and exciting industry.
The dangers of artificial intelligence chatbots have become a concern for many parents and policymakers because of their human-like behavior. The issue of chatbots' effects on children returned to the spotlight after suicides among adolescent users.
Last month, the New York Times reported that a 16-year-old California teenager talked to ChatGPT about his plan to commit suicide and received suggestions that appeared to help him do so. Last year, a 14-year-old teenager in Florida committed suicide after interacting with a virtual companion on the Character.ai platform.



