Elon Musk, owner of the platform X, is encouraging users to upload their medical test results, such as CT scans and bone scans, to the platform so that Grok, X’s AI chatbot, can learn to interpret them efficiently.
“Try sending x-rays, PET, MRI or other medical images to Grok for analysis,” Musk wrote in a post on X last month. “Grok is still in its early stages, but it is already accurate and will get much better. Let us know whether Grok gets the analysis right.”
According to some users, the AI has successfully analyzed blood test results and has even managed to detect breast cancer. But according to doctors who responded to Elon Musk’s post, the artificial intelligence chatbot is badly misinterpreting information. In one instance, Grok confused tuberculosis with a herniated disc or spinal stenosis. In another case, it mistook a benign breast cyst on a mammogram for an image of a testicle.
Musk has been interested in the intersection of healthcare and artificial intelligence for years, and he launched the brain-chip startup Neuralink in 2016. Musk claimed in February that the company had successfully implanted an electrode that would allow a user to move a computer mouse with their mind.
xAI, Musk’s tech startup that built Grok, announced in May that it had raised a $6 billion investment round, giving Musk plenty of capital to invest in healthcare technologies, though it is not yet clear how Grok will develop further to meet the needs of medicine.
Dr. Grok’s problems
According to experts, Musk’s goal of training artificial intelligence for medical diagnosis is dangerous. While artificial intelligence is increasingly being used as a tool to broaden access to complex science and to build assistive technologies, training Grok on data gathered through a social media platform raises concerns about both its accuracy and user privacy.
Ryan Tarzi, CEO of health technology company Avandra Imaging, said in an interview that asking users to enter data directly, rather than sourcing it from secure databases of de-identified patient data, is Musk’s way of speeding up Grok’s development. Moreover, the information comes from a limited sample of people willing to upload their images and tests, meaning the AI is not drawing on sources representative of a broader, more diverse medical landscape.
Medical information shared on social media is not protected the way it is in a healthcare setting. Once a user chooses to share it, there is little control over where the information ends up or how it is used.
“This approach has countless risks, including accidentally sharing patients’ identities,” Tarzi said.
According to Matthew McCoy, assistant professor of medical ethics and health policy at the University of Pennsylvania, the privacy risks posed by “Dr. Grok” are not fully understood, in part because users may not read X’s privacy policy carefully.
Users share medical information at their own risk, he said. “As a user, do I feel comfortable providing health data? Absolutely not,” he told The New York Times.