A researcher connects electrodes resting on a woman's brain to the computer that translates her attempted speech into the spoken words and facial movements of an avatar. /UCSF
A severely paralyzed woman who suffered a brainstem stroke has been able to "speak" through a digital avatar by using brain-computer technology, which can translate her brain signals into speech and facial expressions, according to the University of California, San Francisco (UCSF).
Researchers implanted a paper-thin rectangle of 253 electrodes onto the surface of the woman's brain, over areas critical for speech, and a cable plugged into a port fixed to her head connected the electrodes to a bank of computers.
The team trained the system's artificial intelligence (AI) algorithms to recognize her brain signals for speech.
Training involved her repeating different phrases from a 1,024-word conversational vocabulary over and over, until the computer recognized the brain activity patterns associated with the basic sounds of speech.
Rather than training the AI to recognize whole words, the researchers created a system that decodes words from smaller components called phonemes. With this approach, the computer only needed to learn 39 phonemes to decipher any English word, which enhanced the system's accuracy and efficiency, according to the UCSF.
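The efficiency gain of phoneme-level decoding can be illustrated with a minimal sketch. This is not the UCSF system; the tiny pronunciation dictionary and the greedy matching function below are hypothetical, but they show the core idea: once the decoder can recognize a small, fixed inventory of phonemes, any word in the vocabulary can be assembled from them.

```python
# Illustrative sketch (not the study's decoder): assembling words from a
# fixed phoneme inventory via a hypothetical pronunciation dictionary.
# Phoneme labels follow the common ARPAbet-style convention.
PRONUNCIATIONS = {
    ("HH", "EH", "L", "OW"): "hello",
    ("W", "ER", "L", "D"): "world",
}

def decode(phoneme_stream):
    """Greedily match a stream of decoded phonemes against known words."""
    words, buffer = [], []
    for phoneme in phoneme_stream:
        buffer.append(phoneme)
        word = PRONUNCIATIONS.get(tuple(buffer))
        if word:  # a complete pronunciation matched; emit the word
            words.append(word)
            buffer = []
    return words

print(decode(["HH", "EH", "L", "OW", "W", "ER", "L", "D"]))
# → ['hello', 'world']
```

Growing the vocabulary only requires new dictionary entries, not new phoneme classes, which is why a 39-phoneme inventory scales to arbitrary English words.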
The team also devised an algorithm for synthesizing speech, which they personalized to sound like her voice by using a recording of her wedding speech.
The team also animated the woman's avatar, using software that simulates face muscle movements.
The software was customized to mesh with the signals sent from the woman's brain as she tried to speak, converting them into lifelike movements on the avatar's face.
The UCSF said that the patient's 18-year-old daughter knows her mom's "voice" as a computerized voice with a British accent.
A screenshot of the study published in the journal Nature on August 23, 2023.
The findings were published in the journal Nature on August 23.
The team said it will create a wireless version that would not require the woman to be physically connected to the brain-computer interface.
"Giving people like Ann the ability to freely control their own computers and phones with this technology would have profound effects on their independence and social interactions," said David Moses, an adjunct professor in neurological surgery at UCSF and co-first author of the study.
"It's amazing I have lived this long; this study has allowed me to really live while I'm still alive," the 47-year-old woman, who lost her ability to speak 18 years ago, wrote in answer to a question.
The device produced 78 words per minute with a median word-error rate of 25.5 percent, according to a Nature news article, which added that another brain-computer interface, described in a study published in the journal the same day, could decode speech at 62 words per minute.
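For readers unfamiliar with the metric, word-error rate is the word-level edit distance between what was decoded and what was intended, divided by the length of the intended sentence. A minimal sketch of the standard computation (not code from either study):

```python
def word_error_rate(reference, hypothesis):
    """Word error rate: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = minimum edits to turn ref[:i] into hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # deleting i reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # inserting j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + substitution)
    return dp[-1][-1] / len(ref)

print(word_error_rate("it is a test", "it was a test"))
# → 0.25
```

A 25.5 percent rate therefore means roughly one word in four was substituted, dropped, or wrongly inserted relative to the intended sentence.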
"Natural conversation happens at around 160 words per minute, but the new technologies are both faster than any previous attempts," said the journal.