A woman can no longer speak after a stroke. Thanks to AI, she can now hear her own voice again and talk to other people.
People can lose the ability to speak for various reasons, for example due to neurodegenerative diseases such as amyotrophic lateral sclerosis (ALS), or a stroke that damages or paralyzes certain areas of the brain.
Researchers have now used AI to help a paralyzed patient speak again. In the researchers' eyes this is a significant advance; for the patient, it is an emotional journey, as she can finally hear her own voice again.
A stroke prevents speaking
The patient suffered a stroke just over ten years ago and has been unable to speak since. For many in this situation, no longer being able to talk to family, friends, and acquaintances is an emotional burden.
At the University of California, San Francisco (UCSF), a team led by neurosurgeon Edward Chang used a technique called electrocorticography (ECoG), which records signals from the surface of the brain without electrodes having to be inserted into the brain tissue.
The team extracted the electrical brain signals that control movements of the jaw, mouth, and tongue, the signals normally responsible for speaking. Chang's team then developed an AI system that translates these signals into text quickly and accurately, achieving a rate of 78 words per minute with a vocabulary of 1,000 words and a word error rate of 25.5%.
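To put that 25.5% figure in context: word error rate (WER) is commonly computed as the word-level edit distance (insertions, deletions, substitutions) between the decoded sentence and the intended sentence, divided by the length of the intended sentence. The following is a minimal illustrative sketch of that metric, not the researchers' actual code:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits needed to turn the first i reference words
    # into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution / match
    return dp[len(ref)][len(hyp)] / len(ref)

# One wrong word in a four-word sentence gives a WER of 25%
print(word_error_rate("i want to speak", "i want to sleep"))  # 0.25
```

So a 25.5% word error rate means that, roughly, one in four decoded words differs from what the patient intended to say.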
In the next step, the text was converted back into speech. For this, they used an old wedding video of the patient to mimic her voice.
"Using a clip from her wedding video, we were able to decipher these sounds into a voice that sounded just like the patient's before the stroke," said Sean Metzger, a member of Edward Chang's team.
Thus, the text that the paralyzed patient wanted to speak was reproduced in her own voice.
However, AI-generated voices also bring problems: similar technology has been used to fake kidnappings and extort ransom money.
Thanks to a digital avatar, the patient can also move her face again
What happened next? The team also wanted to give the patient back the ability to communicate through facial movements. To do this, they created a "personalized avatar": a digital face that moves in response to the same brain signals the patient would have used to move her own face before her stroke.
The patient herself explained: "First of all, the simple fact of hearing a voice that is similar to your own is very emotional. The ability to speak out loud is very important."