It is estimated that more than 46 million people in the US, of all ages, races, and genders, suffer from some form of disordered communication. Of these, approximately 28 million have some degree of hearing loss. By far the most frequent complaint made by people with hearing loss is difficulty or inability to understand speech in noisy environments. The most effective way to address this problem is to watch the talker's face while he or she speaks. The process of deriving information from the movement of the lips, jaw, and other facial gestures during speech production is known as speechreading. When speechreading and hearing are combined, the result is an extremely robust speech signal that is highly resistant to noise and hearing loss.

The mission of the Auditory-Visual Speech Recognition Laboratory is to identify the perceptual processes involved in auditory-visual speech perception, to determine the abilities of individual patients to carry out these processes successfully, and to design intervention strategies, incorporating modern signal processing technologies and training techniques, to remedy any deficiencies that may be found.

Laboratory studies typically involve the presentation of auditory, visual, and auditory-visual speech samples to subjects. The subject's task is to identify each speech sample by activating designated areas on a touch-screen terminal, writing with paper and pencil, or repeating back what they thought was said. Because speech recognition involves several levels of processing, from peripheral extraction of primary auditory and visual cues to the use of linguistic knowledge and language experience to categorize sounds and images into words and phrases, the speech samples cover a broad range, from nonsense syllables to connected speech (e.g., sentences and paragraphs).
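As a rough illustration of the trial structure described above, the Python sketch below presents each speech sample in one of the three modalities (auditory-only, visual-only, auditory-visual) and records the subject's identification response. All function names, the stimulus set, and the response handling are hypothetical placeholders, not the laboratory's actual software.

```python
import random

# Modality conditions described above: auditory-only, visual-only, and combined.
MODALITIES = ["auditory", "visual", "auditory-visual"]

def present_stimulus(sample, modality):
    """Placeholder for playback hardware: audio only, video only, or both."""
    print(f"Presenting {sample['text']!r} ({sample['kind']}) as {modality}")

def collect_response():
    """Placeholder for touch-screen, written, or spoken response capture."""
    return input("Subject's identification: ").strip().lower()

def run_session(samples, trials_per_sample=1):
    results = []
    # Cross every sample with every modality, then randomize trial order.
    trials = [(s, m) for s in samples for m in MODALITIES] * trials_per_sample
    random.shuffle(trials)
    for sample, modality in trials:
        present_stimulus(sample, modality)
        response = collect_response()
        results.append({
            "sample": sample["text"],
            "kind": sample["kind"],
            "modality": modality,
            "response": response,
            "correct": response == sample["text"].lower(),
        })
    return results

# Speech samples span nonsense syllables through connected speech.
samples = [
    {"text": "aba", "kind": "nonsense syllable"},
    {"text": "the boat drifted down the river", "kind": "sentence"},
]

if __name__ == "__main__":
    for r in run_session(samples):
        print(r)
```

Randomizing modality and sample order within a session, as sketched here, is one common way to keep practice and fatigue effects from being confounded with condition; the laboratory's actual designs may differ.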

Contact: Grant
Phone:
Email: grant [at] tidalwave.net