In a new study, scientists asked ChatGPT to design cancer treatment plans, but the OpenAI chatbot provided them with incorrect information.
According to Insider’s report, researchers at Brigham and Women’s Hospital, a teaching hospital of Harvard Medical School, have determined that the cancer treatment plans created by the revolutionary chatbot ChatGPT are full of errors. According to the researchers, almost a third of the answers given by the OpenAI model when designing treatment programs for different types of cancer contained incorrect information.
Another problem with ChatGPT: mixing correct and incorrect information
In addition, it is noted that ChatGPT uses a mixture of correct and incorrect information in its responses, which makes it difficult to determine their accuracy.
According to the scientists, out of a total of 104 questions, approximately 98 percent of the answers provided by ChatGPT included at least one treatment recommendation that conformed to the guidelines of the National Comprehensive Cancer Network. Daniel Bitterman, one of the authors of the study, says:
“Chatbots often speak in a very confident way that makes sense, and this can be a potentially dangerous mix of incorrect and correct information.”
ChatGPT was launched in November 2022, and since then it has been very well received, with capabilities that have both surprised and worried observers. The success of this chatbot has led many technology companies, including Google, Microsoft, Meta and even Apple, to focus their programs on artificial intelligence to some extent, and many models have since been launched for users.
Recently, South Korea also unveiled its own competitor to ChatGPT, called CLOVA X. Xiaomi is another big company recently rumored to be joining the list of competitors to ChatGPT and Google’s Bard. Although this technology has surprised Bill Gates, some of its problems, such as providing false information and even the huge cost of its development, have caused concerns.