According to new reports, Google’s advanced medical model, Med-Gemini, cited a nonexistent structure in the human brain in a research article. In addition, other experiments have shown that this artificial intelligence gives completely contradictory answers when the question is changed only slightly. These gross mistakes have raised concerns about deploying such immature technologies in clinical environments.
In a research article that Google itself published to showcase the capabilities of the Med-Gemini model, the AI, while analyzing a head CT scan, reported an abnormality in the “basilar ganglia”, a part of the brain that does not exist at all. The model had fused two completely separate and distinct structures, the basal ganglia and the basilar artery, into an invented name.
Researchers and doctors describe this error as “extremely dangerous.” As they put it, the two terms are a world apart: confusing these two areas can lead to completely different, and potentially deadly, treatment protocols for the patient.
The imaginary brain structure created by Google’s artificial intelligence
After neurologist Bryan Moore reported this gross mistake to Google, the company initially tried to pass the matter off as a simple mix-up, quietly editing its blog post to change “basilar” to “basal.” Only after further pressure did Google state, in a new explanation, that it was a “common transcription error” that the AI had learned from its training data.

The problems do not end there. Experiments by other specialists have revealed another major weakness in these AI models: instability in their answers.
In one experiment, Dr. Judy Gichoya of Emory University showed a chest X-ray to Google’s newer model, MedGemma. When she asked her question in full detail (including the patient’s age and sex), the AI correctly diagnosed the problem. But when she showed the same image with a simpler prompt (merely “What do you see in this photo?”), the answer was quite different: “This photo shows a normal chest of an adult,” and the problem was completely missed.
This shows that even the slightest change in how one interacts with the AI can lead to completely contradictory, and dangerous, results.
Experts believe that the biggest danger of these systems is not their occasional mistakes, but the persuasive tone in which they present fabrications (such as the existence of the “basilar ganglia”) as scientific fact.
Dr. Jonathan Chen of Stanford University likens this phenomenon to self-driving cars: “The car has driven so well that you decide to fall asleep behind the wheel. That is exactly where the danger lies.”
In general, a phenomenon called automation bias can cause physicians to stop scrutinizing the AI’s output and overlook gross errors, precisely because its performance is correct most of the time. Experts say these models are inherently inclined to make things up and never say “I don’t know,” which is a huge problem in high-risk fields such as medicine.



