These Researchers Want to Read Your Mind

New UMD study seeks to decipher ‘imagined speech’ in the brain.

“Tell me what you’re really thinking” rarely gets a straightforward response, even from the most candid individual. But for those trying to reach someone who is unable to communicate, even a hint would be welcome as they wonder whether the person is in pain or needs something.

A new study into how we “imagine” speech by researchers at the University of Maryland may one day help reveal a person’s inner monologue by transforming the jagged blips of neuroactivity on a brain scan into words.

“This capability could be transformative for people who are fully active in the mind, but unable to communicate because of a physical impairment,” said electrical and computer engineering Professor Shihab Shamma, who is leading the research effort. “This is really like a window to the mind.”

His research team was one of six to receive funding through the Defense University Research Instrumentation Program (DURIP), which supports the instrumentation necessary for cutting-edge research. The nearly $300,000, one-year project, which began last week, will fund the equipment for intensive electroencephalography (EEG) recordings to decode “imagined speech”—the unspoken words and phrases we intentionally form in our minds.

For decades, Shamma, who has affiliations with the Institute for Systems Research and the Brain and Behavior Institute, has been a leader in the field of computational neuroscience with a focus on understanding the process of sound recognition and processing in the auditory systems of the brain. Models that he created are now the standard in the literature for how sound is processed.

The new research stems from studies by Shamma and his colleagues on what happens in the brain when people imagine music, like recalling a favorite song or an orchestral piece heard at a recent concert. The team found that the signals generated when “imagining” a piece of music closely resembled those generated when a person actively listened to it, and were detailed enough to be used to identify the imagined notes.

“Then we discovered that, similar to music, if you imagine speech, you also get something that you could potentially decode and interpret,” said Shamma.

The new study will tether subjects to dozens of sensors as they listen to spoken material, such as an audiobook, while Shamma’s students record their neural activity. The exercise will generate three datasets: the speech being played, the text of that speech and the EEG recordings of the subjects. All of this will be shared with research collaborator and computer science Professor Ramani Duraiswami and another team of students, who will process it with large language models and previously trained speech models to essentially “match” activity in the EEG recordings with the speech the subjects heard.
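
For readers curious how EEG activity might be “matched” to speech, the following is a minimal, self-contained sketch in Python. It is not the UMD team’s actual pipeline: it uses synthetic data and a simple ridge-regression decoder in place of the pretrained speech and language models described above, and it simply identifies which held-out speech segment most likely produced each EEG epoch.

```python
# Illustrative sketch only: synthetic data and hypothetical dimensions stand in
# for real EEG recordings and pretrained speech/text embeddings.
import numpy as np

rng = np.random.default_rng(0)

n_segments = 200   # spoken segments (e.g., audiobook sentences); hypothetical count
eeg_dim = 64       # e.g., one summary feature per EEG channel for each epoch
speech_dim = 16    # dimensionality of a speech/text embedding per segment

# Hypothetical "ground truth": each segment has a speech embedding, and the EEG
# response to hearing it is a noisy linear transform of that embedding.
speech_emb = rng.normal(size=(n_segments, speech_dim))
true_map = rng.normal(size=(speech_dim, eeg_dim))
eeg_epochs = speech_emb @ true_map + 0.5 * rng.normal(size=(n_segments, eeg_dim))

# Split segments into training and held-out sets.
train, test = np.arange(0, 160), np.arange(160, 200)

# Ridge regression from EEG features to speech embeddings (closed form).
lam = 1.0
X, Y = eeg_epochs[train], speech_emb[train]
W = np.linalg.solve(X.T @ X + lam * np.eye(eeg_dim), X.T @ Y)

# Decode each held-out epoch, then "match" it to the candidate segment whose
# embedding is most similar to the decoded one.
pred = eeg_epochs[test] @ W

def cosine_matrix(a, b):
    """Cosine similarity between every row of a and every row of b."""
    a_norm = a / np.linalg.norm(a, axis=1, keepdims=True)
    b_norm = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a_norm @ b_norm.T

sims = cosine_matrix(pred, speech_emb[test])   # decoded vs. candidate embeddings
matches = sims.argmax(axis=1)
accuracy = (matches == np.arange(len(test))).mean()
print(f"Held-out segment identification accuracy: {accuracy:.2f}")
```

In the actual study, the decoded representations would presumably come from models trained on the aligned audio, text and EEG datasets rather than a linear map on synthetic data, but the identification-by-similarity step illustrates the basic idea of matching brain activity to the speech that evoked it.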
