Redazione RHC : 16 August 2025 06:33
Eighteen years after a brainstem stroke left Anne Johnson almost completely paralyzed, she is now using a brain-computer interface (BCI) that converts her brain signals directly into speech. In 2005, at age 30, the Saskatchewan teacher and athletic trainer suffered the stroke, which left her with locked-in syndrome: she remains conscious but unable to speak or move.
For years, she communicated through an eye-tracking system at a rate of about 14 words per minute, far slower than the roughly 160 words per minute of natural speech. In 2022, she became the third participant in a clinical trial run by the University of California, Berkeley, and UCSF that aims to restore speech to people with severe paralysis.
The neuroprosthetic records electrical activity in the cortical area responsible for articulation, bypassing the damaged signal-transmission pathways. An implant carrying an array of intracranial electrodes is placed on the surface of this area. When the patient attempts to pronounce words, the sensors record characteristic patterns of activity and transmit them to a computer. Machine learning algorithms then convert the streams of neural signals into text, synthesized speech, or the facial expressions of a digital avatar, including basic expressions such as a smile or a frown.
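As a rough illustration of that pipeline, the sketch below windows a simulated multichannel cortical signal, reduces each window to per-channel power features, and maps the features to phoneme labels with a linear decoder. The channel count, window length, phoneme set, and random weights are all assumptions made for illustration; the actual system relies on trained deep-learning models operating on real electrode recordings.

```python
# Toy illustration (not the UCSF/Berkeley system): window a multichannel
# cortical signal, extract simple power features, decode each window.
import numpy as np

RNG = np.random.default_rng(0)
N_CHANNELS = 253          # illustrative electrode count for the implant grid
SAMPLE_RATE = 200         # Hz, assumed feature rate
WINDOW = 40               # samples per decoding window (200 ms)
PHONEMES = ["AH", "B", "K", "S", "T", "<sil>"]   # toy output alphabet

def extract_features(signal_window: np.ndarray) -> np.ndarray:
    """Collapse a (channels, samples) window into one feature per channel
    (mean power, a stand-in for high-gamma band power)."""
    return (signal_window ** 2).mean(axis=1)

def decode_window(features: np.ndarray, weights: np.ndarray) -> str:
    """Map channel features to the most likely phoneme via a linear layer
    and a softmax. A trained model would replace the random weights."""
    logits = weights @ features
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return PHONEMES[int(probs.argmax())]

# Simulated neural activity and untrained decoder weights, illustration only.
weights = RNG.normal(size=(len(PHONEMES), N_CHANNELS))
stream = RNG.normal(size=(N_CHANNELS, SAMPLE_RATE * 2))  # 2 s of fake signal

decoded = [
    decode_window(extract_features(stream[:, t:t + WINDOW]), weights)
    for t in range(0, stream.shape[1] - WINDOW + 1, WINDOW)
]
print(decoded)  # one phoneme guess per 200 ms window
```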
Initially, the system used sentence-level decoding that produced output only after the entire sentence had been completed, resulting in a delay of about eight seconds. In March 2025, Nature Neuroscience reported the move to a streaming architecture: the conversion now happens in near real time, with a lag of about one second.
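The difference between the two decoding modes can be sketched in a few lines: a sentence-level decoder buffers the whole utterance before producing anything, while a streaming decoder emits a partial result for each incoming chunk. The chunk sizes, function names, and placeholder decoder below are illustrative assumptions, not the published architecture.

```python
# Sketch of sentence-level vs. streaming decoding latency (all values assumed).
import time
from typing import Iterator, List

def neural_chunks(n_chunks: int = 10, chunk_seconds: float = 0.08) -> Iterator[str]:
    """Stand-in for the incoming neural feature stream (one chunk ~80 ms)."""
    for i in range(n_chunks):
        time.sleep(chunk_seconds)        # simulate signal acquisition time
        yield f"chunk-{i}"

def decode_chunk(chunk: str) -> str:
    """Placeholder for the trained model's incremental text output."""
    return f"word{chunk.split('-')[1]}"

def sentence_level(chunks: Iterator[str]) -> List[str]:
    """Buffer the full utterance, then decode: output arrives only at the end."""
    return [decode_chunk(c) for c in list(chunks)]

def streaming(chunks: Iterator[str]) -> Iterator[str]:
    """Decode each chunk as it arrives: output lags the signal by ~one chunk."""
    for c in chunks:
        yield decode_chunk(c)

start = time.monotonic()
for word in streaming(neural_chunks()):
    print(f"{time.monotonic() - start:4.2f}s  {word}")   # partial words appear early

start = time.monotonic()
result = sentence_level(neural_chunks())                  # nothing until the end
print(result, f"after {time.monotonic() - start:4.2f}s")
```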
For maximum personalization, the developers recreated the timbre and intonation of her voice from a recording of Johnson's 2004 wedding speech, and they also made it possible to choose the avatar's appearance so that it is recognizable and conveys familiar visual cues.
The project leaders see the ultimate goal as plug-and-play technology that would make the prototypes standard clinical tools. They are Gopala Anumanchipalli, associate professor of electrical and computer engineering at the University of California, Berkeley; Edward Chang, a neurosurgeon at UCSF; and Kylo Littlejohn, a Berkeley graduate student.
Short-term goals include developing fully wireless implants that eliminate the need for a physical connection to a computer, and creating more realistic avatars for natural communication. Further ahead, the team plans to move to digital "lookalikes" that reproduce not only a voice but also a familiar communication style and nonverbal cues.
This technology matters most to a small but vulnerable group of people who have lost speech to stroke, ALS, or trauma. The developers emphasize a key ethical safeguard: decoding is activated only when a person consciously attempts to pronounce words. This keeps the user in control of communication and minimizes the risk of intruding on private thoughts.
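That safeguard can be pictured as a speech-attempt detector sitting in front of the text decoder: only windows classified as deliberate attempts are ever decoded. The detector, threshold, and feature handling below are assumptions made for illustration, not the trial's actual design.

```python
# Hedged sketch of intent-gated decoding: the decoder runs only when a
# separate detector judges that the user is deliberately attempting to speak.
from typing import Optional
import numpy as np

RNG = np.random.default_rng(1)
ATTEMPT_THRESHOLD = 0.8   # assumed confidence cutoff for "speech attempt"

def attempt_probability(features: np.ndarray) -> float:
    """Placeholder speech-attempt detector: probability that this window
    reflects a deliberate attempt to articulate."""
    return float(1.0 / (1.0 + np.exp(-features.mean())))

def decode_text(features: np.ndarray) -> str:
    """Placeholder for the full text decoder (called only when gated open)."""
    return "<decoded text>"

def gated_decode(features: np.ndarray) -> Optional[str]:
    """Decode only windows classified as intentional speech, so the system
    never transcribes activity the user did not try to voice."""
    if attempt_probability(features) >= ATTEMPT_THRESHOLD:
        return decode_text(features)
    return None            # stay silent: no attempt detected

for _ in range(3):
    window = RNG.normal(loc=RNG.choice([-2.0, 2.0]), size=64)
    print(gated_decode(window))
```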
For Anne, participating in the program has been a milestone: she is considering working as a counselor at a rehabilitation center and holding conversations with clients through the neuroprosthetic. With latency now at about a second and artificial intelligence models improving rapidly, the research teams are cautiously optimistic that voice restoration systems could be ready for wider use in the near future. In essence, this is a technology that gives a voice back to those who have lost it and makes dialogue sound natural again, not through slow "eye typing" but in a familiar, live form.