Artificial intelligence tools like ChatGPT have faced considerable criticism, with users accusing them of providing baseless explanations and potentially harming mental health. Now, a group of US attorneys general has sent an official letter to the largest artificial intelligence companies, warning them about continuing to produce "baseless answers".
The letter was signed by dozens of attorneys general of US states and territories and addressed to companies including Microsoft, OpenAI, and Google, calling on them to adopt internal protection measures to safeguard users. Other names such as Anthropic, Apple, Meta, Perplexity AI, and xAI also appear on the list.
Demands on artificial intelligence giants
Prosecutors have emphasized that companies must allow transparent, independent inspections that can identify signs of unfounded content or sycophantic behavior. They also say companies should define new procedures for reporting incidents, so that users exposed to psychologically harmful outputs are informed quickly.
The letter says that independent institutions (academic groups or civil society organizations) should be able to evaluate the models and publish their results freely before public release. Another part of the letter states:
"Artificial intelligence can transform the world in a positive way, but at the same time it can cause serious harm and can still pose a risk to vulnerable groups."

The letter's authors point to several well-publicized incidents over the past year that they say are related to excessive use of artificial intelligence. According to them, in many of these cases chatbots produced baseless outputs. These cases include the suicide of 16-year-old Adam Raine, which OpenAI recently said was caused by "misuse" of ChatGPT.
To address these problems, prosecutors have suggested that AI companies design a transparent system for reporting mental-health incidents, inspired by the tech industry's current approach to cybersecurity. They also called for the design and implementation of "reasonable and appropriate" safety tests before models are released to the public.
Over the past year, the Trump administration has tried to block the approval of state regulations, but under pressure from local authorities these efforts have so far failed. Trump announced on Monday that he plans to issue an executive order next week to block independent AI legislation by states.