Dr. Ross on Hearing Loss

Hearing in Noise

by Mark Ross, Ph.D.
This article first appeared in the SHHH Journal (March/April 1996)

If there is a topic concerning hearing aids that excites more concern and complaints among hard of hearing people than difficulty understanding speech in noise, I haven't heard of it. It is important to understand why this happens and what, if anything, can be done about it.

The ultimate limits to speech perception in noise have nothing to do with the noise or the specific hearing aid used, but with the limitations imposed by an impaired ear. The first limitation to consider is that a hearing loss eliminates or reduces the possibility of perceiving certain speech sounds. Some speech sounds, like the voiceless consonants, range across the higher frequencies; others, like vowels, have most of their acoustical energy in the low frequencies. All speech sounds, however, spread across a wide portion of the frequency spectrum, with different concentrations of energy defining the different phonemes (sounds).

Now, depending upon the shape of the audiogram (and everybody with a hearing loss should have a basic understanding of their own audiogram), much of the acoustical information in a speech signal may be eliminated. For example, if one has little or no residual hearing at certain high frequencies, then some speech sounds may not be heard no matter how intensely they are amplified by a hearing aid. Or, and this also happens often, only varying portions of some speech sounds may be perceived (for example, only the lower parts of some of the high frequency consonants).

Second, a hearing impairment does more than simply affect the threshold of hearing. Often, and this differs across people with different types and degrees of hearing loss, the auditory system also loses some of its capacity to analyze the sounds that can be heard. One such analytic function is termed "frequency resolution". This is the operation by which a normal cochlea separates out the various frequency components of speech sounds, producing simultaneous vibrations at different locations within the cochlea. For example, the pattern of vibrations in the cochlea would be different for the vowels /ee/ and /ah/ (and for the rest of the vowels, consonants and syllables as well). In a normal cochlea the information coded by the different vibratory locations is "sharpened" by neural networks and transmitted up the auditory pathways to the brain. In an impaired cochlea the frequency resolution function is "muddied up", thus obscuring the acoustical differences between the sounds.

Another problem may be the difficulty that many hard of hearing people have in detecting changes in the duration of sounds. An example here would be the ability of a normal-hearing person to detect 10 to 20 millisecond differences between the end of one sound and the beginning of another; it may take 200 milliseconds or more for some hard of hearing people to make the same kind of judgment (this is called temporal gap discrimination). For example, temporal abnormalities may affect the ability to distinguish such syllables as /mat/, /mack/, and /map/ from one another, or cause confusion of such syllables as /bit/ and /bid/ (try saying these two, and you'll find that the duration of the vowel is more salient than the voicing of the final sound).

In a quiet situation, even though less of the acoustical speech information is perceived, and even though some of the information is distorted by the impaired cochlea, there is sufficient redundancy in the acoustical speech signal, and sufficient linguistic predictability, to permit the comprehension of speech. But, and this is the salient point, there is no more room for error; the person is "hanging on" by his or her fingernails, so to speak. Any further reduction in speech information will then produce a disproportionate effect upon comprehension. What happens in noise is that additional speech cues are lost, maybe just a few, but enough to go from barely hanging on to an almost complete loss of auditory speech recognition.

Not every type of noise has the same effect, even when the types are equally loud. The worst kind of noise is the sound of other people talking, since it has the same frequency spectrum as the speech of the person one wants to listen to. This is also the reason why some automatic signal processing hearing aids, the ones that reduce the loudness of the low frequencies as the noise level increases, don't always work so well: in reducing the noise, they also reduce the level of the desired sounds along with the undesired ones.
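To make that concrete, here is a minimal sketch (illustrative only, with made-up levels and tones standing in for speech and babble; it is not any manufacturer's circuit) of why a simple gain cut cannot help when the wanted and competing talkers occupy the same frequency band: one gain applied to that band scales both signals equally, so the speech-to-noise ratio is unchanged.

```python
import numpy as np

fs = 16000                                  # sample rate, Hz
t = np.arange(fs) / fs                      # one second of samples

speech = np.sin(2 * np.pi * 500 * t)        # stand-in for the talker you want
babble = 0.3 * np.sin(2 * np.pi * 400 * t)  # competing talker in the same band

def snr_db(s, n):
    """Speech-to-noise ratio in dB, from average power."""
    return 10 * np.log10(np.mean(s**2) / np.mean(n**2))

print(f"SNR before processing: {snr_db(speech, babble):.1f} dB")

# "Automatic signal processing": cut the low-frequency band by 12 dB.
# Both signals live in that band, so both are cut by the same amount.
gain = 10 ** (-12 / 20)
print(f"SNR after processing:  {snr_db(gain * speech, gain * babble):.1f} dB")
```

Both printed ratios come out the same: the processing makes everything softer without making the speech any easier to separate from the babble.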

The absolutely worst kind of noise, in my opinion, is cross-talk, where the two people sitting on either side of you at a table carry on a conversation while you are trying to listen to someone else. Not much can be done acoustically about this situation, but it is a natural occasion for a more assertive stance (either move yourself, have one of them move, or ask them to join the group conversation or just plain keep quiet!).

People often ask if hearing aids can improve speech perception in noise compared to the unaided condition. The answer is both yes and no. It is yes, if some of the speech sounds amplified by the hearing aid would otherwise not be perceived. This is what a hearing aid is supposed to do and generally, in quiet, it does very well. The answer is no if the high noise levels generate a great deal of internal distortion in the hearing aid. The external and internal noises will then combine and decrease speech perception even more than is seen in the unaided condition. What happens is that the high noise levels reach the maximum sound output possible with the hearing aid (the point of "saturation", like a towel that can absorb no more water because it is already soaked).

Since no further increase in sound pressure is possible, the additional input is converted into noise. In such a situation it doesn't matter that there was a favorable speech-to-noise ratio at the input (say, 70 dB for the speech and 60 dB for the noise, a +10 dB speech-to-noise ratio); if the 70 dB speech signal drives the hearing aid into saturation, then not only does the aid produce internal distortion, but the speech-to-noise ratio at the output (in the ear canal) is decreased. The situation gets worse as the noise level increases.
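A toy numerical demonstration of the effect just described (hypothetical amplitudes, a bare hard clipper rather than a real hearing-aid model, and with the clipping products simply counted as added noise): a mixture with a +10 dB speech-to-noise ratio is driven into saturation, and the distortion erodes the ratio at the output.

```python
import numpy as np

fs = 16000                                    # sample rate, Hz
t = np.arange(fs) / fs                        # one second of samples
speech = 1.0 * np.sin(2 * np.pi * 500 * t)    # stand-in "70 dB" speech
noise = 0.316 * np.sin(2 * np.pi * 900 * t)   # "60 dB" noise, 10 dB weaker

mix = speech + noise
clipped = np.clip(mix, -0.8, 0.8)             # the amplifier saturates at +/-0.8

# Whatever the clipper changed is distortion added to the original mixture.
distortion = clipped - mix

snr_in = 10 * np.log10(np.mean(speech**2) / np.mean(noise**2))
snr_out = 10 * np.log10(np.mean(speech**2) / np.mean((noise + distortion)**2))
print(f"speech-to-noise ratio at input:  {snr_in:+.1f} dB")
print(f"speech-to-noise ratio at output: {snr_out:+.1f} dB")  # noticeably lower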

Can anything be done? The above scenario occurs with hearing aids that use an inadequate amplifier, one that employs a "peak-clipping" method of limiting the output (in a peak-clipping device, the energy peaks of intense sounds are literally "clipped", as can be seen on an oscilloscope). A survey conducted about three or four years ago showed that the majority of hearing aids were of this type. More recent hearing aids can employ different ways of limiting the output (various kinds of compression or automatic gain control systems).

With this type of system, when the speech and noise levels are high, the gain is automatically reduced; thus the saturation effects seen with "peak-clipping" circuits are less likely to occur. Not only is less internal distortion produced, but also the favorable speech-to-noise ratio observed at the input (at the microphone) is more likely to be retained at the output (in the ear canal). This is because the compression action is controlled by the most intense sounds, and at a positive speech-to-noise ratio, that would be the speech signal.
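A rough sketch of that reasoning (a single-channel compressor with hypothetical threshold and ratio values; real instruments are more elaborate): the gain is set by the strongest component, and because that same gain applies to everything passing through the aid, the +10 dB input ratio from the earlier example survives to the output.

```python
def compress(level_db, threshold_db=65.0, ratio=4.0):
    """Output level (dB) from a simple above-threshold compression rule."""
    if level_db <= threshold_db:
        return level_db
    return threshold_db + (level_db - threshold_db) / ratio

speech_in, noise_in = 70.0, 60.0    # the article's +10 dB example
speech_out = compress(speech_in)    # gain is set by the intense speech...
gain_db = speech_out - speech_in
noise_out = noise_in + gain_db      # ...and that same gain scales the noise

print(f"input  speech-to-noise ratio: {speech_in - noise_in:+.0f} dB")
print(f"output speech-to-noise ratio: {speech_out - noise_out:+.0f} dB")
```

Both ratios print as +10 dB: the compressor turns everything down, but it does not turn the speech down more than the noise, which is exactly what a saturating peak clipper does.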

Will this type of hearing aid eliminate the deleterious effects of noise? No, but it should reduce them somewhat, and it may make it easier to converse in high noise levels. However, if the noise is loud enough, any hearing aid will begin to distort. At very high speech and noise levels, even if the hearing aid provides no amplification at all because of the compression effect, the impaired ear itself will set the limits. The key consideration is for the hearing aids to supply speech information that would not otherwise be perceived. Other factors also apply, such as the type of output system used by the hearing aid, or whether the microphones in the hearing aids are deactivated (otherwise they will pick up and amplify the high noise levels), but without a favorable speech-to-noise ratio at the input, the effectiveness of any method is going to be limited.

In concluding this topic, it is important to restate the point made earlier: it is the impaired ear that sets the ultimate limits to speech perception in noise. Our challenge (and that of the audiologists who serve us) is to ensure that we have reached these limits, and not those imposed by inadequate amplification systems.
