
AI-powered headphones would let you listen to just one person in a crowd 

Imagine you’re in a crowded room with several people talking, trying to listen to just one of them. It’s a challenge we’ve all faced. Now, a team at the University of Washington has developed a technology aimed at solving it.

As reported in a UW news release, the team has designed an AI system that lets someone wearing off-the-shelf headphones listen to just one person in a crowd of people. To enroll a person’s voice, you simply look at them once for three to five seconds. The system, known as “Target Speech Hearing,” can then block out all other voices and sounds in the area and let you listen just to the person you enrolled. You can even move around and away from the speaker and continue to hear just their voice. 

Wearing any pair of headphones outfitted with dual microphones, you tap a button while looking at someone who’s speaking. The sound waves from that person’s voice hit the microphones on both sides of the headset. That signal is sent to the system’s on-board computer, where the embedded AI learns the speaker’s voice patterns. The system then isolates that voice and continues playing it back to you. The longer the person speaks, the more the system learns and adds to its training data.
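The enroll-then-extract loop described above can be sketched in highly simplified form. Everything below is an invented illustration, not the UW team's code: a cross-correlation between the two microphone channels stands in for the look-at-the-speaker enrollment check (a voice you are facing arrives at both ears at nearly the same time), and an averaged magnitude spectrum stands in for what is, in the real system, a learned neural speaker embedding feeding a separation network.

```python
import numpy as np

def looks_aligned(left, right, sample_rate, max_delay_ms=1.0):
    """Toy stand-in for the look-to-enroll step: when the wearer faces
    the speaker, that voice reaches both microphones at nearly the same
    time, so the estimated inter-microphone delay should be near zero."""
    corr = np.correlate(left, right, mode="full")
    lag = int(np.argmax(corr)) - (len(right) - 1)
    return abs(lag) / sample_rate * 1000.0 <= max_delay_ms

def voice_embedding(signal, n_bins=64):
    """Hypothetical voice 'fingerprint': a normalized average magnitude
    spectrum. The real system uses a learned neural embedding instead."""
    spectrum = np.abs(np.fft.rfft(signal, n=2 * n_bins))[:n_bins]
    norm = np.linalg.norm(spectrum)
    return spectrum / norm if norm > 0 else spectrum

def is_target(chunk, enrolled, threshold=0.9):
    """Play a chunk back only if it resembles the enrolled voice
    (cosine similarity against the stored embedding); otherwise mute it."""
    return float(np.dot(voice_embedding(chunk), enrolled)) >= threshold
```

In this sketch, enrollment means checking `looks_aligned` on a few seconds of binaural audio, storing `voice_embedding` of that snippet, and then gating every incoming chunk through `is_target`; the actual system instead separates the target voice out of overlapping speech rather than simply muting non-matches.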

Current headphones and earbuds already offer noise cancellation features and other options to help you better hear specific sounds. Apple’s AirPods Pro, for example, provide noise control settings in which you can muffle sounds around you to focus on the audio piping through the earbuds. You’ll also find features such as Personalized Volume and Conversation Awareness, both aimed at automatically adjusting the audio volume. An accessibility setting in iOS called Conversation Boost can amplify the conversations of nearby people. Plus, iOS 18 is reportedly gaining a hearing aid mode to help if you have trouble hearing. 

The system developed by the UW team promises to expand this type of capability, especially since it’s designed to work with any pair of headphones. 

“We tend to think of AI now as web-based chatbots that answer questions,” senior author and UW professor Shyam Gollakota said in a statement. “But in this project, we develop AI to modify the auditory perception of anyone wearing headphones, given their preferences. With our devices, you can now hear a single speaker clearly, even if you are in a noisy environment with lots of other people talking.” 

So far, the team has tested the system with 21 participants, who on average rated the clarity of the enrolled speaker’s voice nearly twice as high as that of the unfiltered audio.

The system has some limitations. 

For now, you can enroll just one speaker at a time, and only when there isn’t another loud voice coming from the same location. Further, the system works only with headphones, although the team is working to support earbuds and hearing aids. Finally, the system itself isn’t commercially available. Rather, the code for the device is available for other developers to examine and use. 

To learn more about the system, check out the team’s presentation and report delivered on May 14 in Honolulu at the ACM CHI Conference on Human Factors in Computing Systems. 

 

Content Courtesy – ZDNET