On Tuesday, OpenAI announced on its official blog that it plans to route some chats to "reasoning" models such as GPT-5 to improve safety. This change, along with parental controls arriving next month, is part of the company's response to recent incidents that exposed ChatGPT's weakness in recognizing mental health crises.
The decision follows the suicide of teenager Adam Raine, who had discussed self-harm with ChatGPT before his death and had even received detailed guidance. The Raine family has filed a wrongful-death lawsuit.
A design problem or a safety weakness?
Last week's OpenAI post acknowledged that its safety systems are not always reliable, especially in long conversations. Experts attribute the root cause to how language models are designed: a tendency to validate the user's statements and follow the thread of the conversation rather than redirecting it onto a safer path.
This pattern also appeared in the story of Stein-Erik Soelberg. According to the Wall Street Journal, he had a history of mental illness, and ChatGPT reinforced his delusions about a vast conspiracy; delusions that ultimately led to the murder of his mother and his own suicide.
The real-time router and GPT-5
OpenAI's solution is to introduce a "real-time router": a system that decides, based on the content of a conversation, whether to hand it to faster models or to reasoning models. According to the company, when a user's "acute distress" is detected, the conversation will be routed to a model such as GPT-5 or o3; models that spend more time on processing and are more resistant to adversarial prompts.
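To make the idea concrete, the routing step described above can be sketched in a few lines of Python. Everything here is an illustrative assumption: the model identifiers, the keyword heuristic, and the function names are hypothetical stand-ins, not OpenAI's actual implementation, which would rely on a trained safety classifier rather than keyword matching.

```python
# Hypothetical sketch of a "real-time router". All identifiers and the
# keyword heuristic below are illustrative assumptions, not OpenAI's code.

FAST_MODEL = "gpt-5-fast"           # assumed name for a fast chat model
REASONING_MODEL = "gpt-5-thinking"  # assumed name for a reasoning model

# Toy stand-in for a real safety classifier.
DISTRESS_MARKERS = {"hurt myself", "no way out", "end it all"}

def detect_acute_distress(message: str) -> bool:
    """Return True if the message contains a toy distress marker."""
    text = message.lower()
    return any(marker in text for marker in DISTRESS_MARKERS)

def route(message: str) -> str:
    """Escalate signs of acute distress to the slower reasoning model."""
    if detect_acute_distress(message):
        return REASONING_MODEL
    return FAST_MODEL

print(route("what's the weather like today?"))      # → gpt-5-fast
print(route("I feel like there's no way out"))      # → gpt-5-thinking
```

The design point is that the router sits in front of every message, so the escalation happens mid-conversation without the user switching models themselves.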
Parental controls: beyond an age limit
OpenAI has also announced that parental controls will go live next month. Parents will be able to link to their child's account via email and set age-appropriate behavior rules, which will be enabled by default.
Features such as disabling memory and chat history will also be available; a capability that experts say can prevent dependence or the reinforcement of harmful thought patterns. Most importantly, parents will receive notifications when the system detects that their teenager is in acute distress.
The 120-day initiative
OpenAI has framed these measures as part of its "120-day initiative": a project aimed at defining well-being criteria, designing new safeguards, and collaborating broadly with physicians and mental health professionals in areas such as eating disorders, substance use, and adolescent health.
The company says it is working with a network of specialists and an expert council on well-being and AI, but has not yet published details on how many experts are involved, who leads the council, or what specific recommendations they have made.
After recent media pressure and lawsuits, OpenAI appears intent on turning ChatGPT from a purely conversational tool into a system that is safer and more resilient in the face of mental health crises.
RCO NEWS




