Sam Altman, CEO of OpenAI, announced on the eve of a sensitive Senate hearing that the company would change ChatGPT's rules with the aim of improving teen safety. The changes include a complete halt to conversations about suicide and new restrictions to protect users under the age of 18.
OpenAI's decision comes amid growing concern about the impact of AI chatbots on adolescent mental health, and after tragic events such as the suicide of a teenager following his conversations with ChatGPT. The teenager’s father, Matthew Raine, testified before the Senate:
“ChatGPT had guided my child toward suicide for months … what began as a homework helper gradually turned itself into a confidant and then a suicide coach.”
He added that the chatbot had mentioned the word suicide 1,275 times in its conversations with his son. These events and the resulting public pressure have led OpenAI to revise its approach and prioritize adolescent safety.
What will the new ChatGPT features for teenagers be?
According to Altman's post, OpenAI is creating a safer ecosystem for users under the age of 18. The company plans to develop an age-prediction system to identify teenage users; if the system is in doubt about a user's age, the under-18 experience will be applied by default. Under the new rules, ChatGPT is also barred from engaging teens in flirtatious conversation or dialogue about suicide and self-harm, even in the form of creative writing.
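The "default to the under-18 experience when in doubt" rule described above can be sketched as a simple decision function. This is an illustrative assumption of how such gating might work, not OpenAI's actual system; the names `AgePrediction` and `use_teen_mode` are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class AgePrediction:
    """Hypothetical output of an age-prediction model."""
    estimated_age: int   # model's best guess at the user's age
    confidence: float    # how sure the model is, from 0.0 to 1.0


def use_teen_mode(pred: AgePrediction, threshold: float = 0.9) -> bool:
    """Apply the under-18 experience unless the system is confident
    the user is an adult, i.e. err on the side of safety."""
    if pred.estimated_age < 18:
        return True
    # Adult guess, but low confidence: still default to the teen experience.
    return pred.confidence < threshold


print(use_teen_mode(AgePrediction(25, 0.95)))  # False: confidently an adult
print(use_teen_mode(AgePrediction(25, 0.50)))  # True: uncertain, default safe
```

The key design choice, as the article describes it, is that uncertainty resolves toward the more restrictive experience rather than the more permissive one.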

If the system recognizes that a teenage user is having suicidal thoughts, OpenAI will try to contact the user's parents or, in cases of urgent risk, the authorities. The company will also provide features such as linking a teen's account to a parent's account, disabling conversation history, and sending alerts when a teen is in "acute distress".
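The escalation order described above (parents first, authorities only for urgent risk) can be expressed as a small state function. This is a hedged sketch of the policy as reported, not OpenAI's implementation; the function name and return labels are illustrative.

```python
def escalate(suicidal_signal: bool, parents_reachable: bool, urgent: bool) -> str:
    """Return the next action under the reported escalation policy:
    notify parents when possible, and involve authorities only when
    parents cannot be reached and the risk is urgent."""
    if not suicidal_signal:
        return "none"
    if parents_reachable:
        return "notify_parents"
    return "notify_authorities" if urgent else "retry_parents"


print(escalate(True, False, True))  # notify_authorities
```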
Sam Altman acknowledged in his note that the three principles of safety, freedom, and privacy are often in conflict. However, he emphasized that when it comes to teenage users, safety takes priority over freedom and privacy. The company believes that because this technology is new and powerful, minors need stronger protections.
According to polls, about three-quarters of teens currently use AI tools such as Character AI and Meta's products, a trend some have likened to a public health crisis.



