
New Grant Awarded to C-CEBH Mentors Hoover, Espy-Wilson, and Gordon-Salant: AI-Based Speech Enhancement for Hearing Aids


We are pleased to announce that the National Institutes of Health has awarded funding (FAIN: R41DC023169) for the project “AI-Based Speech Enhancement for Hearing Aids” (September 1, 2025 – August 31, 2026).

This collaborative effort between OmniSpeech and the University of Maryland Department of Hearing and Speech Sciences will develop a deep learning algorithm designed to improve speech intelligibility for people with hearing loss in everyday noisy environments. Unlike traditional noise-reduction methods, this AI-based approach restores key auditory cues without suppressing important environmental sounds such as sirens and crying babies. Its small footprint and low latency make it a candidate for embedding directly in hearing aids.
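
The announcement does not describe the algorithm itself, but the general idea of attenuating noise while keeping background sounds audible can be illustrated with a classic spectral-gain enhancer that applies a gain floor, so residual environmental sound is reduced rather than erased. This is a hypothetical sketch for students, not the project's method; the function name, parameters, and oracle noise estimate are all illustrative assumptions:

```python
import numpy as np

def enhance(noisy, noise_est, frame=256, hop=128, floor=0.1):
    """Per-frame Wiener-style spectral gain with a gain floor.

    Illustrative only -- not the grant project's algorithm. The gain
    floor keeps residual background sound audible (e.g., a siren)
    instead of zeroing it out, mirroring the stated design goal.
    `noise_est` is an oracle noise sample used to estimate noise power.
    """
    win = np.hanning(frame)
    # Noise power spectrum estimated from one windowed frame of noise.
    noise_psd = np.abs(np.fft.rfft(noise_est[:frame] * win)) ** 2
    out = np.zeros(len(noisy))
    norm = np.zeros(len(noisy))
    n_frames = 1 + (len(noisy) - frame) // hop
    for i in range(n_frames):
        s = i * hop
        seg = noisy[s:s + frame] * win
        spec = np.fft.rfft(seg)
        psd = np.abs(spec) ** 2
        # Wiener-style gain, clamped below by the floor so background
        # sounds are attenuated but never fully removed.
        gain = np.maximum(1.0 - noise_psd / np.maximum(psd, 1e-12), floor)
        out[s:s + frame] += np.fft.irfft(gain * spec) * win
        norm[s:s + frame] += win ** 2
    return out / np.maximum(norm, 1e-8)  # overlap-add normalization
```

A deep-learning enhancer replaces the fixed gain rule with a learned, time-varying mask, but the floor-versus-clarity trade-off sketched here is the same one the two algorithm versions in this project explore.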

The research team will leverage the Zilany model of the impaired auditory system, with guidance from Dr. Laurel Carney (University of Rochester), to refine the technology for listeners with hearing impairment. Two algorithm versions will be tested: one prioritizing preservation of environmental sounds, and another maximizing speech clarity in challenging environments.

Principal Investigators:

  • Dr. Craig Birkhimer (OmniSpeech)

  • Dr. Eric Hoover (University of Maryland)

Contributing Investigators: Dr. Carol Espy-Wilson, Dr. Sandra Gordon-Salant, Dr. Ed Smith, and Dr. Laurel Carney.

This Phase I project represents an important step toward translating cutting-edge AI into practical solutions for hearing healthcare.

What This Means for Students

This project offers a unique opportunity to see how advanced AI methods can be applied to real-world clinical challenges. Students in hearing and speech sciences, engineering, and neuroscience will be able to follow how laboratory research translates into technologies that directly improve communication for people with hearing loss. The collaboration also highlights the value of interdisciplinary teamwork—spanning speech science, biomedical modeling, and deep learning.
