OpenAI has clarified that ChatGPT’s behavior “remains unchanged” after recent reports claimed that, under the company’s new policies, using the chatbot to obtain legal and medical advice would be prohibited.
Karan Singhal, OpenAI’s head of health AI, wrote on X that claims of a ban on medical and legal advice in ChatGPT are not true. “ChatGPT has never been a substitute for professional advice, but it will continue to be a great resource to help people understand legal and health information,” he explained in response to a (now deleted) post about the supposed ban.
According to him, the changes to the policies on legal and medical advice are not new; the update simply states the same longstanding rules more clearly.
Medical and legal advice on ChatGPT
The confusion stems from an update to OpenAI’s usage policies published on October 29. The updated list of things users “must not” do with ChatGPT includes: “provision of tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional.”
ChatGPT’s previous policy had almost identical wording, stating that users should not engage in activities that include “providing customized legal, medical/health, or financial advice without review by a qualified professional.”
In fact, OpenAI previously maintained three separate policy documents (a universal public version, a ChatGPT-specific version, and an API-specific version). With this update, the company has merged all three into a single list of rules; the rules themselves have not changed.