Neuroscientists at the University of California, Berkeley, have observed brain re-tuning in action, recording directly from the surface of a person's brain as the words of a previously unintelligible sentence suddenly pop out once the listener is told what the garbled speech means.
Working with epilepsy patients who had pieces of their skull removed and electrodes placed on the brain surface to track seizures, a procedure known as electrocorticography, UC Berkeley graduate student Chris Holdgraf gave seven subjects an auditory test. He played a highly garbled sentence, which almost no one initially understood, then played a normal, easy-to-understand version of the same sentence, and immediately repeated the garbled version.
The result: almost everyone understood the sentence the second time around, even though they initially found it unintelligible.
When the garbled sentence was first played, activity in the auditory cortex, as measured by the 468 electrodes on the brain surface, was weak. The brain could hear the sound but couldn't do much with it, said Holdgraf, first author of the study published this week in the journal Nature Communications.
When the clear sentence was played, the electrodes recorded a pattern of neural activity consistent with the brain tuning into language. When the garbled sentence was played a second time, the electrodes recorded nearly the same language-appropriate neural activity, as if the underlying neurons had re-tuned to pick out words or parts of words.
“They respond as if they were hearing unfiltered normal speech,” Holdgraf said. “It changes the pattern of activity in the brain such that there is information there that wasn’t there before. That information is this unfiltered speech.”
“It is unbelievable how fast and plastic the brain is,” co-author Robert Knight, a UC Berkeley professor of psychology and Helen Wills Neuroscience Institute researcher, said in a news release. “In seconds or less, the electrical activity in the brain changes its response properties to pull out linguistic information. Behaviorally, this is a classic phenomenon, but this is the first time we have any evidence on how it actually works in humans.”
The observations confirm speculation that neurons in the auditory cortex that pick out aspects of sound associated with language, namely the components of pitch, amplitude and timing that distinguish words or smaller sound bits called phonemes, continually tune themselves to pull meaning out of a noisy environment. “We believe that this tuning shift is what helps you ‘hear’ the speech in that noisy signal,” Holdgraf said. “The speech sounds actually pop out from the signal.”
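Tuning of this kind is typically modeled as a linear receptive field: a set of weights mapping acoustic features (frequency bands over time) to an electrode's response. As a loose illustration only, with toy synthetic data and hypothetical "pre-priming" and "post-priming" tuning profiles rather than anything from the study itself, a receptive field can be estimated by ridge regression and the two estimates compared to quantify a tuning shift:

```python
import numpy as np

# Toy sketch, NOT the study's actual analysis pipeline: simulate one
# "electrode" whose response is a weighted sum of spectrogram-like
# features, then recover the weights (its tuning) by ridge regression.
rng = np.random.default_rng(0)

n_time, n_freq = 2000, 16                 # time samples x frequency bands
X = rng.normal(size=(n_time, n_freq))     # stand-in acoustic features

# Hypothetical tuning profiles: different frequency bands weighted
# before vs. after priming with the clear sentence.
w_pre = np.zeros(n_freq);  w_pre[10:13] = 1.0
w_post = np.zeros(n_freq); w_post[3:6] = 1.0

def fit_tuning(X, y, alpha=1.0):
    """Ridge-regression estimate of a linear receptive field."""
    n = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n), X.T @ y)

# Simulated noisy responses, then recovered tuning for each condition.
y_pre = X @ w_pre + 0.1 * rng.normal(size=n_time)
y_post = X @ w_post + 0.1 * rng.normal(size=n_time)
w_hat_pre = fit_tuning(X, y_pre)
w_hat_post = fit_tuning(X, y_post)

# A tuning "shift" shows up as low correlation between the two
# estimated receptive fields.
shift = float(np.corrcoef(w_hat_pre, w_hat_post)[0, 1])
print(round(shift, 2))
```

In this toy setup the two recovered weight vectors correlate weakly because the simulated tuning moved to different frequency bands; the published work fit richer spectrotemporal models to high-frequency ECoG activity, but the before-versus-after comparison follows the same logic.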
Such pop-outs happen all the time: when you learn to pick out the words of a foreign language, when you latch onto a friend’s conversation in a noisy bar, or, visually, when someone points out a number hidden in what seems like a jumbled mass of colored dots and you somehow cannot un-see it.
“Normal language activates tuning properties that are related to extraction of meaning and phonemes in the language,” Knight explained. “Here, after you primed the brain with the unscrambled sentence, the tuning to the scrambled speech looked like the tuning to language, which allows the brain to extract meaning out of noise.”
The findings are expected to help the researchers develop a device, implanted in the brain, that would interpret people’s imagined speech and allow patients who cannot speak, such as those paralyzed by Lou Gehrig’s disease (amyotrophic lateral sclerosis), to communicate.