Next-Generation Hearing Healthcare Technologies

🧠 Auditory Neuroscience
We work at the intersection of neuroscience and deep learning to better understand human auditory processing. By aligning state-of-the-art auditory models with neurophysiological data, we uncover shared principles of auditory processing and build biologically inspired models. These models not only capture key functions, such as identifying what, where, and who is speaking, but also replicate complex mechanisms such as auditory attention switching and acoustic scene analysis.
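One common way to quantify how well a model's representations align with neurophysiological data is representational similarity analysis (RSA): build a representational dissimilarity matrix (RDM) over a shared stimulus set for both the model and the recordings, then correlate the two. The sketch below is a toy illustration on synthetic data, not our actual pipeline; the helpers `rdm` and `alignment_score` and the array shapes are illustrative assumptions.

```python
import numpy as np

def rdm(responses):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between the responses to each pair of stimuli (rows = stimuli)."""
    return 1.0 - np.corrcoef(responses)

def alignment_score(model_acts, neural_acts):
    """Correlate the upper triangles of the model and neural RDMs.
    Higher values mean the two systems treat the stimuli more similarly."""
    iu = np.triu_indices(model_acts.shape[0], k=1)
    v_model = rdm(model_acts)[iu]
    v_neural = rdm(neural_acts)[iu]
    return float(np.corrcoef(v_model, v_neural)[0, 1])

# Toy data: 20 stimuli, a model layer with 128 units, 32 recorded channels.
# The neural responses are a noisy subset of the model responses, so the
# alignment score should be clearly positive.
rng = np.random.default_rng(0)
stimuli = rng.standard_normal((20, 128))
neural = stimuli[:, :32] + 0.5 * rng.standard_normal((20, 32))
print(round(alignment_score(stimuli, neural), 3))
```

In practice the neural side would be trial-averaged responses (e.g., spike counts or evoked potentials) to the same stimuli, and rank correlation is often preferred over Pearson for comparing RDMs.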
🦻 Brain-Inspired Hearing Assistive Technology
We aim to develop AI-based hearing assistive technologies that enhance communication for hearing-impaired listeners in challenging acoustic environments. To this end, we integrate techniques from auditory perception, digital signal processing, and modern deep learning. Our core innovation lies in analyzing brain signals to decode a listener's auditory attention in real time, enabling us to extract the sound source of interest from competing ones. This technology has the potential to transform hearing aids by allowing hearing-impaired listeners to seamlessly switch between different sound sources.
Here, we present a real-time system that integrates EEG-based auditory attention decoding with speech separation. The system leverages brain signals to determine which speaker a listener is attending to in a multi-speaker environment and then extracts the corresponding speech stream. It demonstrates the potential of brain-computer interfaces for complex auditory scene understanding.
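A widely used way to implement the attention-decoding step is linear stimulus reconstruction (a "backward" model): a decoder maps EEG to an estimate of the attended speech envelope, and the separated stream whose envelope correlates best with that reconstruction is selected. The sketch below is a minimal toy version on synthetic data; the least-squares training shortcut, variable names, and signal shapes are illustrative assumptions, not the system described above.

```python
import numpy as np

def decode_attention(eeg, candidate_envelopes, decoder):
    """Reconstruct the attended speech envelope from EEG with a linear
    backward decoder, then pick the separated stream whose envelope
    correlates best with the reconstruction."""
    recon = eeg @ decoder  # (time,) reconstructed envelope
    corrs = [float(np.corrcoef(recon, env)[0, 1]) for env in candidate_envelopes]
    return int(np.argmax(corrs)), corrs

# Toy demo: 64-channel EEG driven mostly by speaker 0's envelope.
rng = np.random.default_rng(1)
T, C = 1000, 64
env0, env1 = rng.random(T), rng.random(T)       # envelopes of two separated streams
mixing = rng.standard_normal(C)                  # how env0 projects onto channels
eeg = np.outer(env0, mixing) + 0.3 * rng.standard_normal((T, C))

# Train the decoder by least squares (in practice: on separate labeled
# trials, with time-lagged EEG features and regularization).
decoder = np.linalg.lstsq(eeg, env0, rcond=None)[0]

attended, corrs = decode_attention(eeg, [env0, env1], decoder)
print(attended)  # 0: the envelope driving the EEG is selected
```

Real systems additionally work on short sliding windows to track attention switches with low latency, which is where the trade-off between decoding accuracy and switching speed arises.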
🎧 Facilities and Equipment
Through collaborations with academic and industry partners in hearing aids and low-power chip design, we have established a comprehensive research platform. Combined with our professional audio testing equipment and facilities, this platform enables realistic evaluation and rapid prototyping of smart hearing assistive devices.
📝 Representative Publications
Toward Ultralow-Power Neuromorphic Speech Enhancement With Spiking-FullSubNet
X. Hao, C. Ma, Q. Yang, J. Wu and K. C. Tan
IEEE Trans. on Neural Networks and Learning Systems, doi: 10.1109/TNNLS.2025.3566021 | Speech Enhancement; Best Paper Award at the 2024 IEEE Conference on Artificial Intelligence
KoopSTD: Reliable Similarity Analysis between Dynamical Systems via Approximating Koopman Spectrum with Timescale Decoupling
S. Zhang, Z. Ye, Y. Yan, Z. Song, Y. Wu, and J. Wu
ICML'25, July, Vancouver, Canada | Auditory Neuroscience
EEG-Based Auditory Attention Detection With Spiking Graph Convolutional Network
S. Cai, R. Zhang, M. Zhang, J. Wu and H. Li
IEEE Trans. on Cognitive and Developmental Systems, vol. 16, no. 5, pp. 1698-1706, Oct. 2024 | Auditory Attention Decoding
Target Speaker Verification With Selective Auditory Attention for Single and Multi-Talker Speech
C. Xu, W. Rao, J. Wu and H. Li
IEEE/ACM Trans. on Audio, Speech, and Language Processing, vol. 29, pp. 2696-2709, 2021 | Selective Auditory Attention