Groundbreaking research success: Speaking by imagining
Computer scientists from the Cognitive Systems Lab at the University of Bremen, working in an international project, have succeeded in realizing a so-called speech neuroprosthetic: imagined speech is made acoustically audible in real time, without perceptible latency. The advance can help people who have lost the ability to speak due to neurological diseases and cannot communicate with the outside world without external help.
Great research successes require international collaboration: For several years, the Cognitive Systems Lab (CSL) at the University of Bremen, the Department of Neurosurgery at Maastricht University in the Netherlands, and the ASPEN Lab at Virginia Commonwealth University (USA) have been working on a speech neuroprosthetic. The goal: To translate speech-related neural processes in the brain directly into audible speech.
This goal has now been achieved: “We have managed to make our test subjects hear themselves speak, even though they only imagine speaking,” says Professor Tanja Schultz, head of the CSL. “Neural signals from volunteers who imagine speaking are directly translated into audible output by our speech neuroprosthetic – in real time, with no perceptible latency!” The research result has now been published in the Nature Portfolio journal Communications Biology.
The innovative speech neuroprosthetic is based on a closed-loop system that combines technologies from modern speech synthesis with brain-computer interfaces. This system was developed by Miguel Angrick at the CSL. As input, it receives the neural signals of users who imagine speaking. Using machine learning, it translates them into speech almost immediately and outputs audible feedback to its users. “This closes the loop for them from imagining speaking to hearing their speech,” says Angrick.
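The idea of such a closed loop can be sketched in a few lines of code. The sketch below is purely illustrative and is not the authors' system: all channel counts, frame sizes, and the linear model are assumptions chosen for simplicity, standing in for the machine-learning component that maps neural features to audio frames as they stream in.

```python
import numpy as np

# Hypothetical sketch of a closed-loop decode step. None of these names or
# numbers come from the published system; they only illustrate mapping
# neural features to audio-spectrum frames with a learned linear model.

rng = np.random.default_rng(0)

N_CHANNELS = 64   # intracranial electrode channels (assumed)
N_MEL = 20        # spectral features per audio frame (assumed)

# Training: pair neural feature windows with the speech spectra recorded
# while the participant read aloud, then fit a least-squares mapping.
X_train = rng.normal(size=(500, N_CHANNELS))              # neural features
Y_train = X_train @ rng.normal(size=(N_CHANNELS, N_MEL))  # synthetic targets
W, *_ = np.linalg.lstsq(X_train, Y_train, rcond=None)

def decode_frame(neural_window: np.ndarray) -> np.ndarray:
    """Map one window of neural features to one audio-spectrum frame."""
    return neural_window @ W

# Closed loop: each incoming window is decoded immediately, so the audio
# feedback lags the neural activity by only one short frame.
stream = rng.normal(size=(100, N_CHANNELS))
decoded = np.vstack([decode_frame(w) for w in stream])
print(decoded.shape)  # (100, 20): one spectral frame per neural window
```

In a real system the decoded spectral frames would be passed to a vocoder and played back to the user with minimal buffering, which is what closes the loop from imagining speech to hearing it.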
Study with volunteer epilepsy patient
The work published in Communications Biology is based on a study with a volunteer epilepsy patient who was implanted with depth electrodes for medical examinations and was in hospital for clinical monitoring. In the first step, the patient read texts aloud, from which the closed-loop system learned the correspondence between speech and neural activity by means of machine learning. “In the second step, this learning process was repeated with whispered and imagined speech,” explains Miguel Angrick. “In the process, the closed-loop system produced synthesised speech. Although the system had learned the correspondences exclusively on audible speech, audible output is also produced with whispered and imagined speech.” This suggests that the underlying speech processes in the brain for audibly produced speech share, to some extent, a common neural substrate with those for whispered and imagined speech.
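The transfer described above can be illustrated with a toy experiment. The sketch below is not the authors' analysis: all data is synthetic, and the shared latent component is merely an assumption standing in for the common neural substrate. It shows why a decoder fitted only on one condition ("audible speech") can still produce meaningful output for another condition that shares the same underlying structure.

```python
import numpy as np

# Illustrative sketch of the two-step protocol, not the study's code:
# fit a decoder on the "audible speech" condition only, then apply the
# frozen decoder to signals from the "imagined speech" condition.

rng = np.random.default_rng(1)
n, ch, k = 400, 32, 8  # samples, channels, shared latent dimensions (assumed)

mix = rng.normal(size=(k, ch))     # latent -> electrode projection
readout = rng.normal(size=(k, 1))  # latent -> speech feature

# Condition 1: audible speech (used for training the decoder).
latent_a = rng.normal(size=(n, k))
X_audible = latent_a @ mix + 0.1 * rng.normal(size=(n, ch))
y_audible = latent_a @ readout

# Condition 2: imagined speech shares the same latent structure.
latent_i = rng.normal(size=(n, k))
X_imagined = latent_i @ mix + 0.1 * rng.normal(size=(n, ch))
y_imagined = latent_i @ readout

# Fit on audible speech only...
W, *_ = np.linalg.lstsq(X_audible, y_audible, rcond=None)

# ...then decode imagined speech with the frozen model.
pred = X_imagined @ W
r = np.corrcoef(pred.ravel(), y_imagined.ravel())[0, 1]
print(f"correlation on imagined condition: {r:.2f}")  # high despite no imagined training data
```

Because the two conditions share the latent structure, the decoder generalizes; if they shared nothing, the correlation would collapse toward zero, which is the intuition behind the common-substrate interpretation.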
Important role of the Bremen Cognitive Systems Lab
“Speech neuroprosthetics focuses on providing a natural communication channel for people who are unable to speak due to physical or neurological impairments,” says Professor Tanja Schultz, explaining the background for the intensive research activities in this field, in which the Cognitive Systems Lab at the University of Bremen plays a world-renowned role. “Real-time synthesis of acoustic speech directly from measured neural activity could enable natural conversations and significantly improve the quality of life of people whose communication capabilities are severely limited.”
The groundbreaking result stems from a long-term cooperation jointly funded by the German Federal Ministry of Education and Research (BMBF) and the U.S. National Science Foundation (NSF) within the research program “Collaborative Research in Computational Neurosciences”. This collaboration with Professor Dean Krusienski (ASPEN Lab, Virginia Commonwealth University) was established jointly with former CSL staff member Dr. Christian Herff as part of the successful RESPONSE (REvealing SPONtaneous Speech processes in Electrocorticography) project. It is currently being continued with CSL staff member Miguel Angrick in the ADSPEED (ADaptive Low-Latency SPEEch Decoding and synthesis using intracranial signals) project. Dr. Christian Herff is now an assistant professor at Maastricht University.
Link to original publication: https://www.nature.com/articles/s42003-021-02578-0
Further information:
www.uni-bremen.de/en/csl
www.uni-bremen.de/en
Questions will be answered by:
Prof. Dr.-Ing. Tanja Schultz
University of Bremen / Cognitive Systems Lab
Department of Mathematics / Computer Science
Phone: +49 421 218-64270
E-Mail: tanja.schultz@uni-bremen.de