{"id":1966,"date":"2022-03-25T21:04:46","date_gmt":"2022-03-26T01:04:46","guid":{"rendered":"https:\/\/chelmsfordhearinggroup.com\/?p=1966"},"modified":"2022-03-25T21:04:48","modified_gmt":"2022-03-26T01:04:48","slug":"what-could-the-future-of-hearing-aids-hold","status":"publish","type":"post","link":"https:\/\/massachusettshearinggroup.com\/what-could-the-future-of-hearing-aids-hold\/","title":{"rendered":"What Could the Future of Hearing Aids Hold?"},"content":{"rendered":"\n

Our brains perform exceptional tasks that we don\u2019t even think twice about, like picking out individual voices in a busy restaurant such as Moonstones<\/a>. This is a task that even today\u2019s most advanced hearing aids<\/a> struggle with. The hearing aids of tomorrow, however, show real promise.<\/p>\n\n\n\n

Researchers at Columbia University<\/a> in New York City are developing new artificial intelligence (AI) technology that can better amplify the correct speaker in group settings.<\/p>\n\n\n\n

The Challenge Hearing Aids Face<\/h2>\n\n\n\n
\"Otoscope<\/figure><\/div>\n\n\n\n

Today\u2019s hearing aids are good at selecting a single speaker\u2019s voice and amplifying it while suppressing background noises like city traffic and clanking dishes. But these devices have a hard time boosting one speaker\u2019s voice over other voices, especially when the target voice is not directly in front of the wearer. Instead, they tend to amplify all speakers at once, a shortcoming known as the cocktail party problem. The cocktail party problem severely hinders a hearing aid wearer\u2019s ability to participate in conversations in group settings.<\/p>\n\n\n\n

The Solution for Future Hearing Aids<\/h2>\n\n\n\n

Rather than designing yet another hearing aid that uses external microphones to identify the correct speaker, the new technology in development by the Columbia research team actually monitors the wearer\u2019s brain waves. This way, the device can boost the voice the wearer wants to focus on.<\/p>\n\n\n\n

This technology is highly complex: it pairs speech-separation algorithms with neural networks, which are mathematical models that imitate the brain\u2019s natural computational abilities.<\/p>\n\n\n\n

The system works by first separating the voices of individual speakers from a group, then comparing them to the brain waves of the person listening. The speaker whose voice pattern most closely matches the listener\u2019s brain waves is amplified.<\/p>\n\n\n\n

\u201cBy creating a device that harnesses the power of the brain itself, we hope our work will lead to technological improvements that enable the hundreds of millions of hearing-impaired people worldwide to communicate just as easily as their friends and family do,\u201d explained senior study author Nima Mesgarani, Ph.D<\/a>.<\/p>\n\n\n\n

The next steps for the team are to transform the prototype into a noninvasive device that can be worn externally and to refine the algorithm so it can function in a broader range of environments.<\/p>\n\n\n\n

To learn more about the current and future benefits of hearing aids<\/a> or to schedule an appointment with a hearing aid expert, call Massachusetts Hearing Group<\/span> today.<\/p>\n\n\n\n