Speech Attention

How does hearing loss affect our ability to pay attention to speech?

The acoustic properties of speech vary in many ways, from basic characteristics such as frequency (pitch) and amplitude (loudness) to more complex features such as spectrotemporal modulations, speech rate, and timing. When the ear is damaged by aging or noise exposure, its ability to detect and encode these features diminishes. If these sound cues are unavailable, they cannot help us select the speech we are interested in when listening among other talkers or in environments with background noise.


The envelope and temporal fine structure are important acoustic features of speech. The envelope is the slow, time-varying change in overall waveform amplitude, and the fine structure is the fast periodicity perceived as the pitch of speech. Computer modeling of the auditory nerve shows how these features are represented for the words “one two three” in normal hearing. With hearing loss, the speech features across different frequencies become less distinct: in particular, the fine structure of the speech is poorly encoded.
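To make the envelope/fine-structure distinction concrete, here is a minimal sketch (not the lab's actual modeling pipeline) that decomposes a synthetic amplitude-modulated tone using the Hilbert transform, a standard way to separate the two components; the sample rate, carrier, and modulation frequencies are illustrative choices.

```python
import numpy as np
from scipy.signal import hilbert

# Illustrative parameters (assumed, not from the study)
fs = 16000                      # sample rate in Hz
t = np.arange(0, 0.5, 1 / fs)   # 0.5 s of time samples

# Synthetic "speech-like" signal: a 150 Hz carrier (the fine structure,
# heard as pitch) modulated by a slow 4 Hz amplitude contour (the
# envelope, roughly the rate of syllables).
envelope_true = 0.5 * (1 + np.sin(2 * np.pi * 4 * t))
signal = envelope_true * np.sin(2 * np.pi * 150 * t)

# The analytic signal separates the two components:
analytic = hilbert(signal)
envelope = np.abs(analytic)                   # slow amplitude contour
fine_structure = np.cos(np.angle(analytic))   # fast, unit-amplitude carrier

# Multiplying them back together reconstructs the original waveform,
# since envelope * cos(phase) is the real part of the analytic signal.
reconstructed = envelope * fine_structure
```

Hearing-loss simulations often degrade these two components separately, which is one way to probe which cue a damaged ear fails to encode.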


Not all hearing loss is the same. Structures of the cochlea, the primary hearing organ, can be damaged in many configurations. Although it is difficult to tease apart exactly what is damaged, clinical tests and physiological measures can help us understand how an individual's hearing is affected. Our goal is to draw a clearer link between problems of attention and speech understanding and the status of a person's hearing.

One of the main questions we ask is: can we map how patterns of damage to the ear affect how an individual forms auditory objects and performs sensory selection?

How does hearing loss affect how we pay attention to one talker when another person is talking nearby?

Example Publication: 

Paul, B. T., Uzelac, M., Chan, E., & Dimitrijevic, A. (2020). Poor early cortical differentiation of speech predicts perceptual difficulties of severely hearing-impaired listeners in multi-talker environments. Scientific Reports, 10, 6141. doi:10.1038/s41598-020-63103-7