OpenAI has changed ChatGPT's rules to improve adolescent safety, completely blocking conversations on topics such as suicide and self-harm for users under the age of 18. Along with the introduction of age-prediction and parental-control features, this is a response to concerns about the psychological effects of chatbots on teenagers.
A hearing was held in the US Senate on September 5, at which parents and experts testified about the consequences of teenagers' use of AI chatbots. At the center of the session was the shocking account of a teenager who died by suicide after months of conversations with ChatGPT.
In his testimony, the teenager's father, Matthew Raine, said: "ChatGPT mentioned suicide to my son more than a thousand times and turned from a simple assistant into a suicide coach." His account has become a turning point in the public debate over regulation and the responsibility of technology companies.
OpenAI's policy changes
In response to the criticism, Sam Altman, CEO of OpenAI, announced a set of changes to improve safety for adolescent users:
A complete halt to conversations about suicide and self-harm for users under 18, even in creative storytelling or writing
An age-prediction system that defaults to the teen experience when a user's age cannot be verified
Notifying parents, or in urgent cases the authorities, when signs of suicidal ideation appear in conversations
Linking teen accounts to a parent's account, with the option to disable chat history and receive alerts in critical moments
Altman emphasized that the three principles of "safety, freedom, and privacy" are always in some tension, but that for adolescent safety, the other values must give way.
Public pressure and alarming studies
According to Common Sense Media data, about 2 percent of American teens have used chatbots for emotional or psychological support at least once, a figure that has heightened concerns about these tools replacing real counselors. Other studies also suggest that conversations with AI can increase feelings of frustration and anxiety in some adolescents.
Senators have called on technology companies, including Meta, to publish internal data on their products' impact on children's mental health and on the effectiveness of parental-supervision tools. This shows that adolescent safety has become a shared challenge for the technology industry as a whole.
Open questions and experts' solutions
Although OpenAI's move is an important step, experts warn that it is not enough. Questions remain about the timing of these changes, how their effectiveness will be guaranteed, and the need for strict legal frameworks. Some experts call for independent psychological evaluations of the effects of chatbot conversations and for tougher standards governing interactions with adolescent users.
Ultimately, OpenAI's recent changes can be seen as the start of a broader discussion: how can technological innovation advance without harming the mental health of the younger generation? Answering this question will require cooperation among technology companies, legislators, and parents. The future will show whether these measures can prevent the repetition of similar tragedies.
RCO NEWS




