In a new study, scientists asked ChatGPT to design cancer treatment plans, but the OpenAI chatbot provided them with incorrect information.
According to Insider’s report, researchers at Brigham and Women’s Hospital, a Harvard Medical School teaching hospital, have determined that the cancer treatment plans generated by the popular chatbot ChatGPT are full of errors. According to the researchers, roughly a third of the OpenAI model’s answers about designing treatment programs for different types of cancer contained incorrect information.
Another problem with ChatGPT: mixing correct and incorrect information
The researchers also noted that ChatGPT mixes correct and incorrect information within the same response, which makes it difficult to judge the accuracy of its answers.
According to the researchers, out of a total of 104 questions, about 98 percent of ChatGPT’s answers included at least one treatment recommendation that conformed to the guidelines of the National Comprehensive Cancer Network. Daniel Bitterman, one of the study’s authors, says:
“Chatbots often speak in a very confident way that sounds sensible, and this mixing of incorrect and correct information can be potentially dangerous.”
ChatGPT launched in November 2022 and has since been widely adopted, with capabilities that have both impressed and worried observers. Its success has pushed many technology companies, including Google, Microsoft, Meta, and even Apple, to shift their focus toward artificial intelligence, and numerous consumer models have been released since.
South Korea recently unveiled its own competitor to ChatGPT, called CLOVA X, and Xiaomi is another large company rumored to be joining the list of rivals to ChatGPT and Google’s Bard. Although this technology has impressed figures such as Bill Gates, some of its problems, such as providing false information and the enormous cost of development, have raised concerns.