If you are one of the millions of people who turn to ChatGPT or other chatbots to diagnose an illness, it may be better to hold back. A new study by researchers at the University of Oxford suggests that relying on artificial intelligence for medical decision-making can have dangerous consequences.
According to the BBC report, the Oxford researchers found that artificial intelligence models in the field of medicine give inconsistent and sometimes incorrect answers. Dr. Rebecca Payne, the study's lead physician, warns that asking chatbots about symptoms can be dangerous.
In the study, which was conducted with 1,300 people, participants were presented with different scenarios (such as a severe headache or persistent fatigue after childbirth). The results showed that people who used artificial intelligence did not make better decisions than those who used traditional methods, such as Google searches.
The main problem is that the AI may give three different diagnoses, leaving the user to guess which one is correct. Dr. Adam Mahdi explains: “People give information to the chatbot gradually and don’t say everything; this is exactly where it breaks down.”
The dangers of medical advice with artificial intelligence
In this study, the researchers observed a two-way communication failure between the user and the artificial intelligence. Users didn’t know what details they needed to provide for the AI to give accurate advice. At the same time, the AI’s answers depended heavily on how the question was worded. “This analysis showed that interacting with humans is a major challenge even for advanced AI models,” the researchers said.

Unlike standardized medical exams, on which artificial intelligence earns a passing grade, these systems fail in the real world when interacting with non-experts. Relying on them to decide whether to go to a general practitioner or the emergency room is therefore a high risk.
Dr. Amber W. Childs of Yale University points out that because chatbots are trained on current medical data, they repeat the same biases and prejudices that have existed in medicine for decades. “A chatbot is only as good as the experienced clinicians who generated its data, which isn’t perfect,” she says.
However, some experts believe that the specialized medical models recently released by companies such as OpenAI and Anthropic may produce different results. The goal, they argue, should be to improve the technology with clear regulatory rules and medical guardrails that ensure patient safety.
The findings of this research have been published in the journal Nature Medicine.