Mind-reading: a superhero power potentially unlocked by neuroscience

by Stephanie Jue


We live in a world where mind-reading exists only in fiction, but the prospect now seems more feasible than ever, at least if we take mind-reading to mean understanding the human brain at a neural or otherwise quantifiable level. Indeed, studies have shown that it is possible to roughly predict, or 'mind-read', what people are going to say before they even say it.

New technology and research have linked brain activity to speech. A recent study by researchers at UCSF attempted to decode speech by monitoring auditory and sensorimotor cortical areas of the brain. The researchers found correlations between brain signals and the words participants spoke and heard, and incorporated those patterns into an algorithm that could then predict other spoken and heard words; similar brain patterns corresponded to similar-sounding words, and word predictions were based on this model. The study offers a new, promising approach to assistive speech and hearing devices.

To mimic real-time conversation, the researchers recorded high-density electrical signals from participants' brains during both perception and production of speech. Neural signals, in other words, were used to predict when participants were listening or speaking and which words they were hearing or saying. The resulting brain activity patterns fed into a word-predicting model, with specific brain patterns associated with specific portions of words. This approach, which evokes speech perception and production through natural conversation, could support future real-world applications such as speech assistance for people who cannot communicate verbally.
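The idea of associating brain patterns with portions of words can be illustrated with a toy sketch. Everything below is hypothetical for illustration, not the study's actual model: we assume some upstream classifier has already converted neural signals into per-time-step phoneme probabilities, and we score a tiny made-up vocabulary against them.

```python
import math

# Hypothetical vocabulary, each word spelled as a phoneme sequence.
VOCAB = {
    "hot":  ["HH", "AA", "T"],
    "dark": ["D", "AA", "R", "K"],
    "fine": ["F", "AY", "N"],
}

def word_log_likelihoods(phoneme_probs):
    """Score each word against a sequence of per-step phoneme
    probability distributions (one dict per time step)."""
    scores = {}
    for word, phones in VOCAB.items():
        # Compare only as many steps as observed so far, so partial
        # words can be scored mid-utterance.
        n = min(len(phones), len(phoneme_probs))
        scores[word] = sum(
            math.log(phoneme_probs[t].get(phones[t], 1e-6))
            for t in range(n)
        )
    return scores

# Two observed time steps: the signal suggests "HH" then "AA".
observed = [
    {"HH": 0.7, "D": 0.2, "F": 0.1},
    {"AA": 0.8, "AY": 0.2},
]
scores = word_log_likelihoods(observed)
best = max(scores, key=scores.get)
print(best)  # "hot" scores highest after only two phonemes
```

Because partial phoneme sequences can be scored, a word can be ranked as most likely before it has been fully spoken, which is the intuition behind the real-time predictions described below.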

To model normal conversation as closely as possible, the experiments were run in a question-and-answer format, with participants answering verbally posed questions aloud. For instance, a computer might ask "How is the room right now?" and participants could respond with words such as "hot", "dark", or "fine". The resulting brain activity was recorded and added to the data in real time, so each subsequent question-and-answer exchange already incorporated prior predictions about what had been heard and spoken.

This running context made each new prediction more likely to be correct. Even as a participant formulated a verbal response, the model, drawing on the portions of the word already spoken, predicted in real time, with increasing accuracy, what the remainder of the word would be. These qualities of dynamic updating and growing contextualization carry over to potential real-world applications: speech-aiding technology could quickly "learn" an individual's neural patterns from previous trials, becoming progressively better at anticipating what that person is hearing or about to say.
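One way to picture this contextual updating is as a simple Bayesian reweighting, where the decoded question acts as a prior over likely answers. The priors and probability values below are invented for illustration; the study's actual models are far richer.

```python
# Hypothetical priors: which answers tend to follow which question.
CONTEXT_PRIOR = {
    "how_is_the_room": {"hot": 0.4, "dark": 0.4, "fine": 0.2},
    "how_are_you":     {"hot": 0.05, "dark": 0.05, "fine": 0.9},
}

def posterior(likelihoods, question):
    """Combine neural-signal likelihoods with a question-context prior
    and normalize into a probability distribution over answers."""
    prior = CONTEXT_PRIOR[question]
    unnorm = {w: likelihoods[w] * prior[w] for w in likelihoods}
    total = sum(unnorm.values())
    return {w: p / total for w, p in unnorm.items()}

# The raw neural evidence is ambiguous between "hot" and "fine"...
likelihoods = {"hot": 0.45, "dark": 0.10, "fine": 0.45}

# ...but knowing which question was heard shifts the prediction.
print(posterior(likelihoods, "how_is_the_room"))  # favors "hot"
print(posterior(likelihoods, "how_are_you"))      # favors "fine"
```

The same ambiguous neural evidence yields different best guesses depending on the decoded question, which is the essence of why contextualization improves accuracy.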

The study's data indicated that predictions of both speech perception and speech production performed significantly better than chance: the models were able to identify the questions heard and the answers spoken from neural signals. In the future, gathering more training data beforehand should improve the models' predictions and their ability to distinguish similar-sounding words.

Overall, these findings are promising not only for future decoding of speech from neural activity, but also for real-world applications such as neuroprosthetics and assistive speech and hearing technology, thanks to the approach's unique combination of real-time updates and contextualization.

An optimized version of the model could provide a more generalizable and user-friendly way to decode speech, potentially bringing assistive devices into the daily lives of people with speech or hearing impairments, although the ethical implications of monitoring brain signals must be thoroughly examined before any such deployment. In terms of sheer feasibility, however, this study suggests that functional, implementable prediction of words processed by the brain, in other words mind-reading, could become a reality in the near future.



Moses, D. A., Leonard, M. K., Makin, J. G. et al. Real-time decoding of question-and-answer speech dialogue using human cortical activity. Nature Communications 10, 3096 (2019). doi:10.1038/s41467-019-10994-4
