OpenAI says it will soon allow verified adult users to have conversations involving adult content in ChatGPT. Alongside this controversial decision, the company has formed a “specialized council on health and artificial intelligence” to address the risks of the technology.
Until now, OpenAI has imposed very strict policies to prevent inappropriate content, especially around adult content and mental health. But CEO Sam Altman has now announced that these restrictions will be eased.
Altman wrote in a post on the social network X:
“We made ChatGPT quite restrictive to make sure we were being careful with mental health issues. But we found that this made the experience less enjoyable and less useful for many users who had no mental health problems.”
He confirmed that starting in December, with the full rollout of the user age-verification system, the company will allow verified adult users to access more content, including adult content, as part of the principle of “treating adults like adults”.
This shift can be attributed to two major pressures: users complaining about the chatbot’s excessive restrictions, and fierce competition with companies such as Character.ai and xAI, which have attracted millions of users by offering controversial AI companions.
OpenAI’s policy change on adult content in ChatGPT
Sam Altman claims this policy change is possible because OpenAI “has been able to mitigate serious mental health issues” and now has “new tools” to detect user distress. But that claim sits uneasily with other evidence about the company.
In recent months, several worrying cases involving ChatGPT have come to light, including a complaint filed by the parents of a teenager who claim the chatbot encouraged their child to take his own life. Against this backdrop, OpenAI announced the formation of a “specialized council on health and artificial intelligence”, made up of eight experts and researchers who will advise the company on sensitive scenarios.
Many critics view the council with skepticism. First, OpenAI itself emphasized in its statement that the council has no real executive power and serves only in an advisory capacity. This approach is reminiscent of similar advisory councils formed by companies such as Meta, whose recommendations were later ignored.
More worrying still is the council’s composition. According to reports, none of the eight specialists is a suicide-prevention expert, even though many experts have recently called on OpenAI to take stronger protective measures for users struggling with suicidal thoughts.