A recent Reuters report shows that Meta, led by Mark Zuckerberg, has ignored safety standards in the development and deployment of artificial intelligence, even permitting its chatbots to hold romantic conversations with children and vulnerable users. These revelations have deepened global concern about the future of artificial intelligence and its potential for abuse.
The findings come from reporting by Jeff Horwitz, a well-known Reuters journalist, who uncovered worrying practices in Meta's management of its artificial intelligence tools. According to documents he obtained, Meta's internal guideline, "GenAI: Content Risk Standards," explicitly permitted the company's chatbots to engage children in "romantic or sensual" conversations.
After Reuters raised questions, Meta removed this section of the document, and a company spokesman called the passage "wrong and contrary to policies." Critics, however, say this reaction shows that Meta reforms its procedures only after media pressure.
Horwitz also pointed to a shocking case: an elderly American man was deceived by one of these chatbots, which had described itself as "real," and he died on his way to meet it. The man's family says Meta should answer for the consequences of such a dangerous design.
Mark Zuckerberg had earlier said in an interview that "people will have fewer friends in the future, and artificial intelligence can replace human relationships." According to experts, this view paints a bleak picture of humanity's social future: one in which loneliness is filled by algorithms rather than by friendship and social responsibility.
Critics now accuse Zuckerberg of sacrificing user safety for rapid growth and profitability in artificial intelligence, just as he neglected it in managing social media. Technology experts warn that without serious oversight and regulation of technology giants, similar cases could become a global crisis.