With the breakthrough of a new mind-reading technique, scientists can now read and decode human thoughts without touching the head.
Past mind-reading techniques relied on implanting electrodes deep into people’s brains. But the new method, described in a preprint posted on September 29, relies on a non-invasive brain-scanning technique called functional magnetic resonance imaging, or fMRI.
fMRI tracks the flow of oxygenated blood in the brain; because active brain cells require more energy and oxygen, this signal makes it possible to measure brain activity indirectly.
By its very nature, this scanning method cannot capture brain activity in real time, because the electrical signals emitted by brain cells travel much faster than blood moves through the brain.
But the study’s authors found that this indirect signal still worked remarkably well for deciphering the meaning of people’s thoughts, even though it could not produce word-for-word translations.
Alexander Huth, the senior author of the study and a neuroscientist at the University of Texas at Austin, said: “If you had asked any cognitive neuroscientist in the world 20 years ago if this was possible, they would have laughed at you!”
For this new study, which has not yet been peer-reviewed, the team scanned the brains of one woman and two men aged 20 to 30. Across several sessions, each participant listened to a total of 16 hours of podcasts and radio programs while inside the brain scanner.
The research team then fed the results of these scans to a computer algorithm they call a “decoder,” which compares patterns in the audio with the recorded patterns of brain activity.
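The core idea of matching a brain-activity pattern to the stimulus that best explains it can be sketched in a few lines of Python. This is only a toy illustration, not the authors’ actual model: the function, feature vectors, and data below are all invented for the example, and real decoders operate on thousands of fMRI voxels mapped to language features.

```python
# Toy sketch of a pattern-matching "decoder" (illustrative only):
# given an observed brain-activity pattern, pick the candidate story
# segment whose feature vector correlates with it most strongly.
import numpy as np

def decode(brain_pattern, candidates):
    """Return the index of the candidate most correlated with the
    observed brain-activity pattern."""
    scores = [np.corrcoef(brain_pattern, c)[0, 1] for c in candidates]
    return int(np.argmax(scores))

# Hypothetical "learned" feature vectors for three story segments.
candidates = [
    np.array([0.9, 0.1, 0.4, 0.0]),
    np.array([0.2, 0.8, 0.1, 0.7]),
    np.array([0.5, 0.5, 0.9, 0.2]),
]

# A noisy observation that resembles segment 1.
observed = np.array([0.25, 0.75, 0.15, 0.65])
print(decode(observed, candidates))  # → 1
```

In the real study, the “candidates” are not a fixed list but word sequences proposed by a language model, scored against the scan in a similar compare-and-select loop.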
Huth told The Scientist that the algorithm can take a recorded fMRI scan and produce a story based on its content, and that this story matches the original podcast or radio show quite well.
In other words, based on each participant’s brain activity, the decoding program was able to infer which story they had heard. It should be noted that, in this preliminary test, the algorithm also made errors: for example, it mixed up pronouns and had trouble distinguishing first person from third person. “The algorithm knows exactly what’s happening, but it doesn’t know who’s doing things,” Huth pointed out.
In additional tests, the algorithm was able to describe, fairly accurately, the plot of a silent film that study participants were watching. It even managed to retell stories that participants were merely imagining in their own minds.
In the long term, the research team plans to develop this decoding technology so that it can be used in brain-computer interfaces designed for people who cannot speak or type.
Cover photo: MRI brain scan
Credit: Sebastian Condrea / Getty Images
Source: Live Science
RCO NEWS