Wednesday, November 20, 2019

Scientists train AI to turn brain signals into speech




The researchers worked with patients with epilepsy undergoing brain surgery.

Science Photo Library

Neuro-engineers have created a breakthrough device that uses machine-learning neural networks to read brain activity and translate it into speech.

An article published Tuesday in the journal Scientific Reports explains how a team at Columbia University's Zuckerman Mind Brain Behavior Institute used deep-learning algorithms and the same type of technology that powers devices like Apple's Siri and the Amazon Echo to turn brain activity into "accurate and intelligible reconstructed speech." The research was publicized earlier this month, but the journal article goes into much greater depth.

The human-computer framework could ultimately give patients who have lost the ability to speak a way to use their thoughts to communicate verbally through a synthesized robotic voice.

"We have shown that, with the right technology, these people's thoughts could be decoded and understood by any listener," Nima Mesgarani, the project's principal investigator, said in a statement.

When we speak, our brains light up, sending electrical signals zipping around the old thought box. If scientists can decode those signals and understand how they relate to forming or hearing words, then we're one step closer to translating them into speech. With enough understanding, and enough processing power, that could yield a device that directly translates thinking into speaking.

And that's what the team managed to do, creating a "vocoder" that uses algorithms and neural networks to turn brain signals into speech.

To do this, the research team enlisted the help of five patients with epilepsy who were already undergoing brain surgery. They attached electrodes to various exposed surfaces of the brain, then had the patients listen to 40 seconds' worth of spoken sentences, randomly repeated six times. Listening to the stories helped train the vocoders.

Then the patients listened to speakers counting from zero to nine while their brain signals were fed back into the vocoder. The vocoder algorithm, known as WORLD, then spat out its own sounds, which were cleaned up by a neural network, eventually resulting in robotic-sounding speech that mimicked the counting. You can hear how it sounds here. It's not perfect, but it's certainly understandable.
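The decoding step described above, mapping recorded brain activity to speech features that a vocoder can then synthesize, can be sketched in simplified form. The sketch below is illustrative only: it substitutes simulated random data for the study's electrode recordings and a ridge-regression decoder for the team's deep neural networks, and every array shape and variable name is an assumption, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: 128 electrode channels, 32 spectrogram bins,
# 600 time frames of training data. The real study recorded from
# electrodes on the brain's surface; this data is simulated.
n_frames, n_channels, n_bins = 600, 128, 32

# Simulate a linear relationship between neural activity and the
# spectrogram of heard speech, plus measurement noise.
true_map = rng.normal(size=(n_channels, n_bins))
neural = rng.normal(size=(n_frames, n_channels))
spectrogram = neural @ true_map + 0.1 * rng.normal(size=(n_frames, n_bins))

# "Training": fit a regularized linear decoder from brain activity to
# spectrogram frames (the study used deep networks; ridge regression
# keeps this sketch short).
lam = 1.0
W = np.linalg.solve(neural.T @ neural + lam * np.eye(n_channels),
                    neural.T @ spectrogram)

# "Testing": decode unseen brain activity into spectrogram frames,
# which a vocoder such as WORLD could then render as audible speech.
test_neural = rng.normal(size=(50, n_channels))
decoded = test_neural @ W
target = test_neural @ true_map
corr = np.corrcoef(decoded.ravel(), target.ravel())[0, 1]
print(f"decoded-vs-target correlation: {corr:.2f}")
```

On this toy data the decoded frames correlate strongly with the target, which is the essence of the approach: learn the mapping on speech the patient heard, then apply it to new brain signals.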

"We found that people could understand and repeat the sounds about 75 percent of the time, which is well above and beyond any previous attempts," Mesgarani said.

The researchers concluded that the accuracy of the reconstruction relied on how many electrodes were placed on the patient's brain and how long the vocoder was trained. As expected, adding electrodes and lengthening the training allowed the vocoder to gather more data, resulting in better reconstructions.

Looking ahead, the team wants to test what signals are emitted when someone merely imagines speaking, as opposed to listening to speech. They also hope to test a more complex set of words and sentences. Improving the algorithms with more data could ultimately lead to a brain implant that bypasses speech entirely, turning a person's thoughts into words.

It would be a monumental step forward for many.

"It will give anyone who has lost the ability to speak, whether through an injury or illness, a renewed chance to connect with the world around them," Mesgarani said.

