Elon Musk, owner of the platform X, is encouraging users to upload their medical test results, such as CT scans and bone scans, to the platform so that Grok, X’s AI chatbot, can learn to interpret them accurately.
“Try sending x-rays, PET, MRI or other medical images to Grok for analysis,” Musk wrote in a post on X last month. “Grok is still early stage, but it is already accurate and will get much better. Let us know whether Grok is getting it right.”
According to some users, the AI has successfully analyzed blood test results and even detected breast cancer. But according to doctors who replied to Musk’s post, the chatbot also badly misinterprets information. In one instance, Grok confused tuberculosis with a herniated disc or spinal stenosis. In another, it mistook a benign breast cyst on a mammogram for an image of a testicle.
Musk has been interested in the intersection of healthcare and artificial intelligence for years; in 2016 he co-founded the brain-chip startup Neuralink. Musk claimed in February that the company had successfully implanted a device that allowed a user to move a computer mouse with their mind.
xAI, the Musk startup behind Grok, announced in May that it had raised a $6 billion funding round, giving Musk ample capital to invest in healthcare technologies, though it remains unclear how Grok will develop further to meet the needs of medicine.
Dr. Grok’s problems
According to experts, Musk’s goal of training an artificial intelligence to make medical diagnoses is also dangerous. While AI is increasingly used as a tool to widen access to complex science and to build assistive technologies, training Grok on data gathered through a social media platform raises concerns about both its accuracy and users’ privacy.
Ryan Tarzi, CEO of the health technology company Avandra Imaging, said in an interview that asking users to submit data directly, rather than sourcing it from secure databases of de-identified patient records, is Musk’s way of speeding up Grok’s development. The information also comes from the limited sample of people willing to upload their images and test results, meaning the AI is not drawing on data representative of a broader, more diverse medical landscape.
Medical information shared on social media is not subject to the restrictions that protect health data in clinical settings. Once a user chooses to share it, there is far less control over where that information ends up or how it is used.
“This approach has countless risks, including accidentally sharing patients’ identities,” Tarzi said.
According to Matthew McCoy, an assistant professor of medical ethics and health policy at the University of Pennsylvania, the privacy risks of a “Dr. Grok” are not fully understood, in part because users may not have read X’s privacy policy closely.
Users share medical information at their own risk, he said. “As a user, do I feel comfortable providing health data? Absolutely not,” he told The New York Times.