Northwell Health - Feinstein Institute for Medical Research

Volume 1, 2018

Discovery will bring cognitive hearing aids closer to reality

Picture yourself in a crowded restaurant with your closest friends, catching up and sharing the latest news. For most people, this sounds like an idyllic evening, but for those with a hearing impairment this setting can be stressful.

Current hearing aid technology can only suppress background noise, not select a voice out of the crowd, making it difficult to hear a particular speaker. Associate Professor Ashesh Mehta, MD, PhD, identified the neural signals that help us home in on certain speakers so that this brain function could be translated into device technology. Cognitive hearing aids incorporate technology that not only picks up and amplifies sound, but also identifies what the patient is trying to listen to and amplifies that particular sound.

To develop this technology, lead researcher Nima Mesgarani and his team at Columbia Engineering need to understand how the brain processes what we hear and how it tells the ears to focus on different sounds. This is where Dr. Mehta's expertise comes into play. He is part of the Laboratory for Human Brain Mapping at the Feinstein Institute, which uses the latest methods for measuring brain structure and function to advance our understanding of how the human brain works. For this study, Dr. Mehta and his team performed brain recordings while patients were undergoing epilepsy surgery to identify the portion of the brain that determines who is speaking and then map those neural signals.

"In our neural recordings, you could see the shift in focus from one person to another during the course of a conversation, and this could be identified solely on the basis of the brain recordings within seconds of the attention shift," Dr. Mehta said. "The Columbia Engineering team was able to use these recordings to develop their technology and prove that it mirrored the functionality found in the human brain."

The Columbia Engineering team developed auditory attention decoding (AAD) methods, which receive a single audio channel containing multiple speakers heard by a listener, along with the listener's neural signals. The system then automatically separates the individual speakers from the mix, determines which speaker is being listened to, and amplifies the attended speaker's voice to assist the listener, all within seconds.

"This study shows we have the technological capability to automatically and rapidly separate an attended speaker from a mixture of multiple sources," said Nima Mesgarani, associate professor of electrical engineering at Columbia University. "We hope to continue developing this technology and work with a developer of hearing devices to take it into clinical trial." The discovery was recently published in the Journal of Neural Engineering.
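As a rough illustration of the pipeline described above, the sketch below shows how separated audio sources and a listener's neural recordings could be combined to pick and boost the attended speaker. It is a minimal Python sketch, not the study's method: the function names, the uniform "decoder," and the synthetic toy data are all assumptions introduced here for illustration, standing in for the trained separation and neural-decoding models the researchers describe.

```python
import numpy as np

def envelope(audio, frame=160):
    """Crude amplitude envelope: rectify, then average over short frames."""
    n = (len(audio) // frame) * frame
    return np.abs(audio[:n]).reshape(-1, frame).mean(axis=1)

def decode_attention(sources, neural, decoder):
    """Pick the separated source whose envelope best matches the envelope
    reconstructed from the listener's neural signals by a linear decoder."""
    predicted = neural @ decoder  # reconstructed speech envelope, one value per frame
    scores = []
    for s in sources:
        env = envelope(s)
        m = min(len(env), len(predicted))
        scores.append(np.corrcoef(env[:m], predicted[:m])[0, 1])
    return int(np.argmax(scores)), scores

def remix(sources, attended_idx, gain_db=9.0):
    """Boost the attended source relative to the rest of the mixture."""
    gain = 10.0 ** (gain_db / 20.0)
    out = sum(gain * s if i == attended_idx else s for i, s in enumerate(sources))
    return out / np.max(np.abs(out))

# Toy demonstration with synthetic stand-ins for separated audio and neural data.
rng = np.random.default_rng(0)
fs, seconds = 16000, 4
t = np.linspace(0.0, seconds, fs * seconds, endpoint=False)
# Two "talkers": noise carriers with different amplitude-modulation rates.
sources = [rng.standard_normal(fs * seconds) * np.sin(2 * np.pi * rate * t) ** 2
           for rate in (1.0, 3.0)]
# Fake 8-channel neural recording that tracks talker 0's envelope, plus noise.
env0 = envelope(sources[0])
neural = np.outer(env0, np.ones(8)) + 0.5 * rng.standard_normal((len(env0), 8))
decoder = np.full(8, 1.0 / 8)  # stand-in for a decoder trained on the listener

idx, scores = decode_attention(sources, neural, decoder)
enhanced = remix(sources, idx)
print("attended source:", idx, "scores:", np.round(scores, 3))
```

In the actual system, both the speaker-separation step and the neural decoder would be learned from data; the stand-ins above only illustrate the flow from mixed audio and brain recordings to an output in which the attended voice is amplified.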
