Researchers develop technology that converts brain signals into speech
US neuroengineering specialists have created a system that translates thoughts into intelligible, recognizable speech. By monitoring brain activity, the technology can reconstruct words more precisely than ever before.
The research used artificial intelligence and speech synthesizers. This work could lead to new ways of communicating with computers directly from the brain. It could also help people who are unable to speak, such as stroke survivors or those suffering from amyotrophic lateral sclerosis. In this way, patients could regain the ability to communicate with the outside world.
– Our voice helps us communicate with our friends, family and the world around us, which is why its loss due to injury or disease is so catastrophic – said Nima Mesgarani, an author of the study from the Zuckerman Institute at Columbia University in New York City. – Thanks to this research, we have a potential way to restore that power. We have shown that, with the right technology, the thoughts of these people can be decoded and understood by any listener – he added.
Decades of research have shown that when people speak – or even imagine speaking – distinctive patterns of activity appear in their brains. A clear, recognizable pattern of signals also emerges when people listen to someone speaking, or imagine that they are listening. For years, specialists have been recording these emerging patterns and attempting to decode them, and now they appear to have succeeded.
The findings were published in "Scientific Reports".
However, developing this technology has proven to be a difficult task. Early attempts to decode brain signals focused on simple computer models that analyze spectrograms, which are visual representations of sound frequencies. But this approach produced nothing that even remotely resembled intelligible speech. Mesgarani's team therefore used a vocoder (voice encoder) – a sound-synthesis device that, using artificial intelligence algorithms, can synthesize human speech. But before the device could produce intelligible sounds, its algorithms had to be trained on recordings of people talking to each other.
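To make the spectrogram idea concrete, here is a minimal sketch of what such a representation is: a signal is cut into short overlapping frames and each frame is converted into a set of frequency magnitudes. This is an illustrative toy, not the models used in the study; the frame length, hop size and test tone are arbitrary choices.

```python
import numpy as np

def spectrogram(signal, frame_len=256, hop=128):
    """Magnitude spectrogram: the time-frequency picture that early
    decoding attempts tried to predict from brain activity."""
    window = np.hanning(frame_len)
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len] * window
        # Keep only the non-negative frequency bins of the FFT.
        frames.append(np.abs(np.fft.rfft(frame)))
    return np.array(frames)  # shape: (num_frames, frame_len // 2 + 1)

# Example: one second of a 440 Hz tone sampled at 8 kHz.
fs = 8000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 440 * t)
spec = spectrogram(tone)
# The energy concentrates in the bin nearest 440 Hz (bin ~ 440 * 256 / 8000).
```

Each row of `spec` describes the sound at one moment in time, which is why early systems tried to predict such rows directly from neural recordings.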
Ashesh Dinesh Mehta, a neurosurgeon at the Northwell Health Physician Partners Neuroscience Institute and a co-author of the publication, helped Mesgarani's team teach the algorithms to interpret brain activity.
– We used the same technology that Amazon Echo and Apple Siri use to give verbal answers to our questions – said Mesgarani. – Working with Mehta, we asked patients with epilepsy, who were undergoing brain surgery, to listen to sentences spoken by different people. During this task, we measured their patterns of brain activity. These patterns were then used to train artificial intelligence algorithms to recognize them – he added.
The researchers then asked the same patients to listen to new recordings of speakers reciting the digits from 0 to 9. At the same time, they recorded the brain signals, which were then processed by the vocoder. The sound produced by the device was analyzed and cleaned up by neural networks – a type of artificial intelligence that mimics the structure of biological neurons in the brain. The end result was a voice, resembling a robot from old movies, reciting a sequence of numbers.
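The core of this pipeline is a learned mapping from recorded neural activity to the parameters that drive speech synthesis. The sketch below shrinks that idea to a toy least-squares fit on simulated data; the actual study used deep neural networks and a trained vocoder, and every dimension, variable and data source here is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: 64 simulated electrode channels mapped to
# 32 synthesizer (e.g. spectral) features, 500 training samples.
n_samples, n_electrodes, n_audio_feats = 500, 64, 32

# Hidden ground-truth mapping used only to generate toy data.
true_map = rng.normal(size=(n_electrodes, n_audio_feats))
neural = rng.normal(size=(n_samples, n_electrodes))          # "recorded" activity
audio = neural @ true_map + 0.01 * rng.normal(size=(n_samples, n_audio_feats))

# The "training on recordings" phase in miniature: fit a linear map
# from neural features to audio features by least squares.
learned_map, *_ = np.linalg.lstsq(neural, audio, rcond=None)

# Decoding: turn a new neural pattern into synthesizer features,
# which a vocoder would then render as sound.
test_neural = rng.normal(size=(1, n_electrodes))
decoded = test_neural @ learned_map
```

A linear map stands in here for the deep networks in the study purely to keep the example self-contained; the train-then-decode structure is the same.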
Mesgarani and his team played the recordings to different people to see whether they were understandable. – We found that people could understand and repeat the sounds in 75 percent of cases, which far exceeds any previous attempts – admitted Mesgarani. The improvement in intelligibility was especially visible when the new recordings were compared with the earlier spectrogram-based attempts.
Mesgarani and his team will now test more complex words and whole sentences. The scientists hope that their system could in the future take the form of an implant similar to those already used by some people suffering from epilepsy. Epileptic patients with a high frequency of seizures can receive a vagus nerve stimulation implant; for people deprived of the ability to speak, the implant would instead translate thoughts into words.
– In this scenario, if the user thinks "I need a glass of water", our system could take the brain signals and convert them into synthesized speech – explained Mesgarani. – Such a system would transform the lives of anyone who has lost the ability to speak – he added.
Experts working on the technology say that the thoughts appearing in our heads need not remain hidden at all – they can, in one way or another, be translated into spoken language. This is certainly a tremendous achievement, but it carries with it some rather disturbing possibilities. It is not difficult to imagine situations in which we would rather keep our thoughts to ourselves.