Sam Altman, CEO of OpenAI, announced on the eve of a Senate hearing that the company would change ChatGPT's rules with the aim of improving teen safety. The changes include blocking conversations about suicide entirely and imposing new restrictions to protect users under the age of 18.
OpenAI's decision comes amid growing concern about the impact of AI chatbots on adolescent mental health, and after tragic incidents such as the suicide of a teenager following his conversations with ChatGPT. The teenager's father, Matthew Raine, testified before the Senate:
"ChatGPT had been guiding my child toward suicide for months … what began as a homework assistant gradually turned itself into a confidant and then a suicide coach."
He added that the chatbot had raised the word suicide repeatedly in its conversations with his son. These events, along with public pressure, have led OpenAI to revise its approach and prioritize teen safety.
What will the new ChatGPT features for teenagers look like?
According to Altman's post, OpenAI is building a safer environment for users under the age of 18. The company plans to develop an age-prediction system to identify teenage users; if the system is uncertain about a user's age, the under-18 experience will be applied by default. In the new setup, the chatbot is also barred from engaging in flirtatious conversations with minors or discussing suicide and self-harm, even in the form of creative writing.
If the system determines that a teenage user is having suicidal thoughts, OpenAI will attempt to contact the user's parents or, in cases of imminent risk, the authorities. The company will also offer features such as linking a teenager's account to a parent's, disabling conversation history, and sending alerts when a teen appears to be in "acute distress".
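To make the reported policy flow concrete, the sketch below shows one way such routing logic could look in Python. It is purely illustrative: the names (UserSignal, route_conversation, the age and distress flags) are hypothetical and do not reflect OpenAI's actual systems; only the decision order follows what the announcement describes.

# Illustrative sketch of the reported moderation flow; all names are
# hypothetical stand-ins, not OpenAI's implementation.
from dataclasses import dataclass

@dataclass
class UserSignal:
    predicted_age: int | None   # output of a hypothetical age-prediction model
    age_confident: bool         # whether that prediction is trusted
    flagged_self_harm: bool     # conversation classified as suicide/self-harm
    acute_distress: bool        # urgent-risk classification

def route_conversation(user: UserSignal) -> list[str]:
    actions = []
    # When age is uncertain, default to the under-18 experience (as reported).
    is_minor = (not user.age_confident) or (
        user.predicted_age is not None and user.predicted_age < 18
    )
    if is_minor:
        actions.append("apply_under_18_experience")
        if user.flagged_self_harm:
            # Suicide/self-harm content is blocked for minors, even as creative writing.
            actions.append("block_self_harm_content")
            actions.append("notify_parents")
            if user.acute_distress:
                actions.append("escalate_to_authorities")
    return actions

if __name__ == "__main__":
    teen = UserSignal(predicted_age=15, age_confident=True,
                      flagged_self_harm=True, acute_distress=False)
    print(route_conversation(teen))
    # -> ['apply_under_18_experience', 'block_self_harm_content', 'notify_parents']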
Sam Altman acknowledged in his note that the three principles of safety, freedom, and privacy are often in conflict. He emphasized, however, that when it comes to teenage users, safety takes priority over freedom and privacy; the company believes that because this technology is new and powerful, minors need stronger protection.
According to polls, about three-quarters of teens currently use AI companion tools such as Character AI and Meta's AI products, a trend some have likened to a public health crisis.