Overview
- Columbia University researchers reported Monday in Nature Neuroscience that a real-time system decoded which speaker a person attended to and made that voice easier to hear.
- Epilepsy patients with brain electrodes already in place listened to two overlapping voices as machine-learning software read their brain activity and turned up the chosen voice (a simplified sketch of this approach follows this list).
- The approach improved speech intelligibility, reduced listening effort, and volunteers preferred it to the unprocessed audio.
- The team demonstrated fast, stable performance in laboratory tests conducted at clinical sites at Hofstra Northwell, the Feinstein Institutes, NYU, and UCSF.
- The prototype still depends on implanted sensors; the authors say wearable, minimally invasive versions must prove they can work in messy, real-world noise before they can help the people who struggle most in social settings.
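
The bullets above describe what is generally called auditory attention decoding: software reconstructs a signature of the attended speech from brain activity, compares it against each candidate voice, and boosts the best match. The sketch below is a minimal illustration of that general idea, not the team's actual system, whose internals the summary does not specify; the linear decoder, envelope correlation, function names, and the 12 dB boost are all assumptions made for illustration.

```python
import numpy as np

def decode_attention(neural, decoder_weights, env_a, env_b):
    """Reconstruct an estimate of the attended speech envelope from
    brain activity and report which voice it matches best.

    neural          : (time, channels) array of recorded neural activity
    decoder_weights : (channels,) linear decoder, assumed pretrained on
                      trials where the attended speaker was known
    env_a, env_b    : (time,) amplitude envelopes of the two separated voices
    """
    reconstructed = neural @ decoder_weights                  # (time,)
    corr_a = np.corrcoef(reconstructed, env_a)[0, 1]
    corr_b = np.corrcoef(reconstructed, env_b)[0, 1]
    return "A" if corr_a >= corr_b else "B"

def remix(audio_a, audio_b, attended, boost_db=12.0):
    """Turn up the attended voice relative to the competing one."""
    gain = 10.0 ** (boost_db / 20.0)                          # dB -> linear gain
    if attended == "A":
        return gain * audio_a + audio_b
    return audio_a + gain * audio_b

# Toy demonstration with synthetic data: the simulated neural signal is
# built to track speaker A's envelope, so the decoder should pick "A".
rng = np.random.default_rng(0)
t, ch = 2000, 64
env_a, env_b = rng.random(t), rng.random(t)
w = rng.standard_normal(ch)
neural = np.outer(env_a, w) + 0.5 * rng.standard_normal((t, ch))
attended = decode_attention(neural, w, env_a, env_b)
mixed = remix(env_a, env_b, attended)
print(attended)  # expected: "A"
```

A real system would run this loop continuously on short windows of audio and neural data, which is what makes the "real-time" claim in the first bullet demanding: the separation, decoding, and remixing all have to keep up with live speech.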