Scientists from the University of California, San Francisco have demonstrated a way to use artificial intelligence to turn brain signals into spoken words. The research could one day make it possible for people who cannot speak or otherwise communicate to talk with those around them.
The work began with researchers studying five volunteers with severe epilepsy. These volunteers had electrodes temporarily placed on the surface of their brains in order to locate the part of the brain responsible for triggering seizures. As part of this work, the team was also able to study how the brain behaves when someone talks. This included analyzing the brain signals that translate into vocal tract movements, involving the jaw, larynx, lips, and tongue. An artificial neural network was then used to decode these intended movements, which in turn were used to generate intelligible synthesized speech.
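The two-stage idea described above can be sketched in code. This is a toy illustration only, not the researchers' actual model: the dimensions are invented, and random linear maps stand in for the trained neural networks that performed each decoding stage in the real system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (illustrative assumptions, not taken from the study).
N_NEURAL = 256    # electrode channels recording brain activity
N_ARTIC = 33      # articulatory features (jaw, larynx, lips, tongue)
N_ACOUSTIC = 32   # acoustic features fed to a speech synthesizer

# Stand-ins for the two trained decoder stages. Here they are just
# random linear maps; the real system used trained neural networks.
stage1 = rng.standard_normal((N_NEURAL, N_ARTIC))
stage2 = rng.standard_normal((N_ARTIC, N_ACOUSTIC))

def decode(neural_signals: np.ndarray) -> np.ndarray:
    """Map neural activity (time x channels) to acoustic features."""
    articulatory = neural_signals @ stage1  # stage 1: brain -> vocal tract
    acoustics = articulatory @ stage2       # stage 2: vocal tract -> sound
    return acoustics

# One second of fake neural data sampled at 100 Hz.
signals = rng.standard_normal((100, N_NEURAL))
features = decode(signals)
print(features.shape)  # (100, 32)
```

The point of the intermediate articulatory step is that mapping brain activity to physical vocal tract movements, and then movements to sound, proved more tractable than decoding audio from the brain directly.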
While the work is still at a relatively early stage, the researchers hope it will open up some exciting opportunities. The next step will involve clinical trials to test the technology in patients who are physically unable to speak (which was not the case in this demonstration). It will also be necessary to develop an FDA-approved device with the high channel count (256 channels in this latest study) needed to capture the required level of brain activity.
This is not the first time Digital Trends has covered impressive brain-computer interfaces. In 2017, researchers at Carnegie Mellon University developed a technology that used machine learning algorithms to read complex thoughts from brain scans, including interpreting complete sentences in some cases.
A similar project, conducted by researchers in Japan, was able to analyze fMRI brain scans and generate a written description of what a person was seeing, such as "a dog sitting on the floor in front of an open door" or "a group of people standing on the beach." As this technology matures, more and more examples of similar work are likely to emerge.
A paper describing the UC San Francisco team's recent work, titled "Speech synthesis from neural decoding of spoken sentences," was recently published.