Scientists at the University of California, San Francisco, created a computer program that turns brain signals into a synthetic voice with the help of a brain-computer interface (BCI). The system decodes a person’s speech intentions by mapping brain activity onto a detailed simulation of the vocal tract. By encoding the movements of the lips, tongue, jaw, larynx, and other parts of the speech apparatus, it lets the computer translate that data into spoken words. This two-step method proved more reliable than mapping brain waves directly to predicted speech sounds. The study was published in the journal Nature on April 24, 2019.
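The two-stage pipeline described above, neural activity decoded into articulator movements and those movements turned into sound, can be sketched in Python. Everything here is illustrative: the feature dimensions, the simple linear maps (the actual study used recurrent neural networks), and the variable names are assumptions for the sketch, not the authors’ implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 256 neural channels, 33 articulatory
# features (lip/tongue/jaw/larynx kinematics), 32 acoustic features.
N_NEURAL, N_ARTIC, N_ACOUSTIC = 256, 33, 32

# Placeholder linear "decoders" standing in for the study's
# trained neural networks (random illustrative weights only).
W_artic = rng.standard_normal((N_ARTIC, N_NEURAL)) * 0.01
W_acoustic = rng.standard_normal((N_ACOUSTIC, N_ARTIC)) * 0.1

def decode_articulation(neural):
    """Stage 1: brain activity -> simulated vocal-tract movements."""
    return W_artic @ neural

def synthesize_acoustics(kinematics):
    """Stage 2: vocal-tract movements -> acoustic speech features."""
    return W_acoustic @ kinematics

# Push one time step of simulated neural activity through the pipeline.
neural_frame = rng.standard_normal(N_NEURAL)
kinematics = decode_articulation(neural_frame)
acoustics = synthesize_acoustics(kinematics)
print(kinematics.shape, acoustics.shape)  # (33,) (32,)
```

The design point the sketch captures is the intermediate articulatory representation: rather than predicting sounds directly from brain signals, the decoder first recovers how the speech organs would move, which the study found made the final synthesis more reliable.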

Edward Chang, one of the project’s co-authors, said in a press briefing, “It’s been a longstanding goal of our lab to create technologies to restore communication for patients with severe speech disability.” The study worked with five volunteers whose brains were already being monitored for epileptic seizures. Scientists placed stamp-size arrays of electrodes on the surfaces of the volunteers’ brains and recorded activity in the speech-producing regions as the volunteers read several hundred sentences aloud. The computer models then translated the recorded data into speech. Though the study’s results were promising, years of further work will be needed before this technology is available for patients’ use.