Artificial intelligence tools like ChatGPT have drawn heavy criticism, with users accusing them of providing baseless explanations and potentially harming their mental health. Now, a group of US attorneys general has sent an official letter to the largest artificial intelligence companies, warning them against continuing to produce "baseless answers".
The letter, signed by dozens of attorneys general from US states and territories, asks companies such as Microsoft, OpenAI, and Google to adopt a set of internal safeguards to protect users. Other names, including Anthropic, Apple, Meta, Perplexity AI, and xAI, also appear on the list.
Demands on the artificial intelligence giants
The attorneys general emphasized that companies must allow transparent, independent inspections capable of identifying signs of unfounded content or sycophantic behavior. They also say companies should establish new incident-reporting procedures so that users exposed to psychologically harmful outputs are informed quickly.
The letter says that independent institutions, such as academic groups or civil society organizations, should be able to evaluate the models before public release and publish their findings freely. Another part of the letter states:
“Artificial intelligence can transform the world in a positive way, but at the same time it can cause serious harm and can still pose a risk to vulnerable groups.”

The letter’s authors point to several well-publicized incidents over the past year that they say are related to excessive use of artificial intelligence. According to them, in many of these cases, chatbots have produced baseless outputs. These cases include the suicide of 16-year-old Adam Raine, which OpenAI recently said was caused by “misuse” of ChatGPT.
To address these problems, the attorneys general suggested that AI companies design a transparent system for reporting mental-health incidents, modeled on the tech industry's current approach to cybersecurity. They also called for the design and implementation of "reasonable and appropriate" safety tests before models are released to the public.
Over the past year, the Trump administration has tried to block the approval of state AI regulations, but under pressure from local officials those efforts have so far failed. Trump announced on Monday that he plans to issue an executive order next week to block independent AI legislation by the states.