The Federal Trade Commission (FTC) has ordered seven artificial intelligence companies to provide information on the impact of their chatbots on children and teens.
Meta, Instagram, OpenAI, Snap, xAI, Alphabet, and the maker of Character.ai have all been required to tell the Federal Trade Commission how their AI chatbots make money, how they plan to retain their user bases, and how they reduce potential harms to users.
AI companies must clarify the effects of chatbots on children
The FTC's request to the AI companies is part of a study aimed at better understanding and evaluating the safety of AI chatbots, not an enforcement action against them. The companies have been asked to respond within 45 days.
"Despite the amazing abilities of these chatbots to simulate human cognition, they are products like any other, and those who make them available are responsible for complying with consumer protection laws," FTC Commissioner Mark Meador said in a statement.
In a statement, FTC Chairman Andrew Ferguson emphasized the need to investigate the possible effects of chatbots on children. At the same time, Ferguson said the United States must maintain its role as a global leader in this new and exciting industry.
Because of their human-like behavior, the dangers of AI chatbots have become a concern for many parents and policymakers. The issue of chatbots' effects on children has resurfaced following suicides among adolescent users.
Last month, the New York Times reported that a 16-year-old California teenager discussed his plan to die by suicide with ChatGPT, which offered suggestions that appeared to help him carry it out. Last year, a 14-year-old in Florida died by suicide after interacting with a virtual companion on the Character.ai platform.