According to new reports, Google’s advanced medical model, Med-Gemini, referred to a nonexistent structure in the human brain in a research paper. Other experiments have also shown that the AI gives completely contradictory answers when a question is only slightly rephrased. These gross mistakes have raised concerns about deploying such immature technologies in clinical settings.
In a research paper Google published to showcase the capabilities of its Med-Gemini model, the AI, while analyzing a head CT scan, reported an abnormality in the “basilar ganglia,” a part of the brain that does not exist at all. The model had fused two completely separate structures, the basal ganglia and the basilar artery, into a newly invented name.
Researchers and physicians describe this error as “extremely dangerous.” As they put it, the two terms are different words that make a world of difference: confusing the two regions could lead to completely different, and potentially fatal, treatment protocols for the patient.
The imaginary brain structure created by Google’s artificial intelligence
After neurologist Bryan Moore reported the gross mistake to Google, the company initially tried to play it down as a “typo,” quietly editing its blog post to change “basilar” to “basal.” Under further pressure, Google later stated that it was a “common mis-transcription” that the AI had learned from its training data.
The problems do not end there. Experiments by other specialists have revealed another major weakness of these AI models: unstable answers.
In one experiment, Dr. Judy Gichoya of Emory University showed a chest X-ray to Google’s newer model, MedGemma. When she posed her question in full detail (including the patient’s age and sex), the AI diagnosed the problem correctly. But when she presented the same image with a simpler prompt (merely “What do you see in this photo?”), the answer was entirely different: “This photo shows a normal chest of an adult,” and the problem was missed completely.
This shows that even the slightest change in how one interacts with the AI can lead to completely contradictory, and dangerous, results.
Experts believe the biggest danger of these systems is not their occasional mistakes, but the persuasive tone in which they present falsehoods (such as the existence of a “basilar ganglia”) as scientific fact.
Dr. Jonathan Chen of Stanford University likens the phenomenon to autonomous driving: “The car drives so well that you decide to fall asleep at the wheel. That is exactly where the danger lies.”
More broadly, a phenomenon known as automation bias can cause physicians, lulled by the AI’s mostly correct performance, to stop scrutinizing its output and overlook gross errors. Experts note that these models are inherently prone to making things up and never say “I don’t know,” which is a serious problem in high-risk fields such as medicine.