Dr. Ross on Hearing Loss

Improving Hearing Aid Design and Performance

by Mark Ross, Ph.D.
This article first appeared in Hearing Loss (Jul/Aug 2004).

In the January and February issues of The Hearing Review, Dr. Mead Killion discusses how the prevalence of certain myths has been discouraging improvements in hearing aid design. In this article, I'd like to discuss these myths and some research results that bear on them, while adding a few personal observations as well.

He begins his January article by saying that, as he approaches his 40th year in the hearing aid industry, it seems like a good time for him to reflect on how we got to where we are today. This is a worthy objective, and since I can spot Dr. Killion a few years in the profession, I'm not only going to comment on his "myths," but try to put his observations into a historical perspective.

The most important myth he cites concerns hearing aid fidelity, a belief that "fidelity doesn't matter to those with hearing loss because they can't hear the difference anyway." A related myth holds that fidelity has to be sacrificed to a certain extent in order to obtain maximum intelligibility through a hearing aid, i.e. the belief that fidelity and intelligibility may be conflicting goals.

If we go back to the time I first entered the profession (1957), it was understood that hearing aids were "low-fidelity" devices. Nobody called them that, of course, but all one had to do was look at their electroacoustic performance and this fact would be quite obvious. Even at the time (which may appear to be in the dim, dark ages to some), audiophiles were demanding and receiving high fidelity audio amplification through phonographs and other devices. They wanted, and received, a smooth amplification pattern (the "frequency response") from about 50 Hz to around 16 kHz, with total distortion of less than 1 or 2%. This is still the goal of audiophiles, with the difference being that this response is now quite common and can be obtained in many relatively inexpensive audio devices.

The frequency response of hearing aids in the 1950's, however, was usually limited to about 300 Hz to 3 kHz, with a response curve that looked like a profile of the Rocky Mountains - it was that jagged and peaked. (As evidenced by the first published book on hearing aids in 1947, the situation was even worse ten years earlier.) Total distortion was considered acceptable by the hearing aid industry if it did not exceed 10%. After various techniques were used to eliminate the most egregious peaks (like inserting lamb's wool in the nub of the button receiver), these aids could be made more acceptable, at least compared to the unaided condition. But their benefit was mainly limited to comprehending speech in quiet situations, or at times when the hearing aid could be physically located close to a talker's mouth. Understanding speech in noise was difficult, if not impossible (still a major issue, and a theme that is intertwined with Killion's myths).

It was undoubtedly the poor fidelity of hearing aids at the time that was responsible for another common myth: the belief that hearing aids could not help anyone with a sensorineural hearing loss. Initially, this belief may not have been a myth at all, but had a solid foundation in fact. When the distorted amplification product of these early hearing aids interacted with the distortion inherent in sensorineural hearing losses, speech comprehension was often poorer than that obtainable without a hearing aid. People could, indeed, hear better in noise without a hearing aid than with one. The problem is that this belief held on long after hearing aids had improved sufficiently that most people with sensorineural hearing loss could receive significant benefit from them (but not enough, which is the theme of this paper). This myth was quite common among many physicians until well into the 1970's; indeed, speaking personally, some of my earliest professional articles dealt with this topic.

But how did the original hearing aid designers arrive at the limited frequency range they designed into the early generations of hearing aids? As near as I can determine, they used as their design goal the same range as that found in telephones at the time, i.e. around 300 Hz to 3 kHz. No doubt influencing this decision was the cost factor: the more "hi-fi" the hearing aid, the more difficult it would be to design quality miniature components and the more expensive the instrument. Since people could understand speech on the telephone with a frequency range of 300 Hz to 3 kHz, it was felt that hearing-impaired people would be able to understand speech through hearing aids that embodied the same frequency range. At least that appears to have been the reasoning. Of course, the fact that a phone is placed against the ear while hearing aids are worn at some distance from the sound source, and are thus more susceptible to poor environmental acoustics, was simply ignored.

Evidently, the fact that young children would also be using hearing aids did not influence these early design decisions. Even though adults with a normal history of language development can understand speech through a relatively narrow bandwidth (actually, even narrower than 300 Hz to 3 kHz), this is not necessarily true for young children who are in the process of learning an auditory-based language. Because of their knowledge of the language, adults are able to predict and fill in missing acoustic elements (often, quite unconsciously). This is not possible for children who are still learning language; they need all the acoustic information they can get. There is a big difference, in other words, between developing an initial auditory-verbal language and recognizing one already mastered. And, for those children with residual hearing through the high frequencies, a hearing aid with a frequency range from 300 Hz to 3 kHz was simply inadequate.

Nevertheless, over the years, hearing aids did gradually improve in quality and complexity compared to these early days. Better fidelity is one of the improvements. Now, instead of a frequency range of 300 Hz to 3 kHz, we have hearing aids that can significantly amplify speech signals up to and beyond 6 kHz. But, according to Dr. Killion, this still provides insufficient fidelity (not "hi-fi" enough) and he conducted several studies in order to evaluate directly the validity of the myths.

In his first study, he compared an experimental hearing aid with six modern digital hearing aids. The bandwidth of the experimental hearing aid, at 16 kHz, far exceeded that of the commercially available digital aids. In addition to the increased bandwidth, the experimental aid could tolerate inputs exceeding 110 dB Sound Pressure Level (SPL) without producing measurable distortion. In brief, the experimental aid was a high-fidelity instrument comparable to a high-quality audio reproduction system.

Using live (and loud) music as the input, recordings were made through the hearing aids while they were placed on a head manikin. This manikin (the KEMAR) is commonly employed in acoustic research. Instead of an eardrum, it uses a microphone that connects to various acoustic analyzers and recording devices. The control condition was a recording made directly through the manikin's open "ears" (no hearing aids and thus no processing distortion of any kind). The subjects (16 with moderate sloping losses and 11 with moderate flat hearing losses) compared the quality of the sound reproduced through each of the hearing aids, including the experimental hearing aid, to the open ear condition. Thus, the aids were not compared directly to each other; rather, each aid was compared to the condition that would produce the highest fidelity ratings, the open ear condition. In addition to the hearing-impaired subjects, some 60 normally hearing audiologists also listened through the hearing aids and compared their quality to the control condition.

The results showed that the hearing-impaired subjects gave fidelity ratings almost identical to those of the normally hearing controls. All subjects, both hearing-impaired and normally hearing, rated the experimental hearing aid as sounding closest to the fidelity of the control condition. The ratings of the six digital hearing aids all fell below the experimental aid, with several rather far below. When asked to put a dollar value on the ratings, the hearing-impaired subjects judged that the extra quality of sound reproduction offered by the experimental hearing aid would have been worth approximately $1000 more than the best digital hearing aid. Consequently, the notion that people with hearing loss are not sensitive to, and cannot appreciate, "high-fidelity" sound was found to be just that - a myth. They enjoyed "high-fidelity" sound as much as the people with normal hearing did.

In another study, the fidelity of the various aids was compared with respect to the speech-to-noise ratio (SNR) at which 50% of sentences could be identified. In this type of test, lower numbers are actually more advantageous, since they indicate that a person can understand speech in the presence of higher levels of noise (a lower speech-to-noise ratio). The results of this study showed a direct relationship between the judged fidelity ratings and the SNR scores, with higher fidelity associated with lower SNRs (remember, this is good). According to this study, therefore, the myth that fidelity and intelligibility are somehow in conflict was not supported.
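To make this metric concrete, here is a minimal sketch in Python of how such a score (call it the SNR-50) is read. The two listener values are invented for illustration; they are not data from the study:

```python
def snr_db(speech_level_db, noise_level_db):
    """Speech-to-noise ratio in dB: speech level minus noise level."""
    return speech_level_db - noise_level_db

# A listener with an SNR-50 of +2 dB needs speech 2 dB above the noise
# to identify half the sentences; one with an SNR-50 of -3 dB still
# scores 50% with speech 3 dB *below* the noise. The lower (more
# negative) the SNR-50, the better the performance in noise.
for label, snr50 in [("poorer listener", 2.0), ("better listener", -3.0)]:
    print(f"{label}: 50% sentence identification at SNR = {snr50:+.0f} dB")
```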

One difference between the two studies above should be noted. In the first one, the input sound signal was loud instrumental music. In the second one, it was speech signals. In personal correspondence with me, Dr. Killion points out that the SNR scores in the second study actually clustered pretty closely for all the hearing aids with the highest fidelity, both the commercial and experimental aids. What we can conclude is: (1) that higher-fidelity signals may be required when listening to and appreciating acoustically complex musical signals, and (2) that, perhaps, the same high fidelity is not as necessary for understanding speech as it is for appreciating music. But it doesn't hurt, either.

In a follow-up article in The Hearing Review in February 2004, Dr. Killion challenges another hearing aid myth, this one in regards to hearing in noise and directional microphones. He begins by pointing out that directional microphones "provide the only verified method of improving the ability of hearing aid users to understand speech in noise. Noise-reduction circuits do not." (Note: this does not apply when a microphone is placed close to a talker's lips as in a personal FM system.) He then states that only 20% to 30% of all hearing aids include this feature and wonders why.

Part of the reason, he suggests, is that the vast majority of dispensing professionals do not actually measure their clients' speech perception ability in noise through the use of appropriate, standardized tests. Without this information, they cannot have an informed, quantitative understanding of their clients' difficulty in comprehending speech in noisy situations. This difficulty would not be apparent when dispensers counsel their clients in the quiet of their offices. Obviously, therefore, dispensing professionals have an obligation to include measures of SNR during the hearing aid selection process. In my opinion, consumers should ask for this test if it is not automatically provided. It offers direct information about how well a person can understand speech in noise. Several such tests are now available: the Hearing in Noise Test (HINT) and the Speech in Noise (SIN) test (but please don't go into your hearing aid dispenser's office and ask for "sin"!).

But even when directional microphone hearing aids are recommended, they may be of little practical assistance. Beyond the fact that hearing aid users must understand the basic social dynamics of using directional microphones - i.e., they need to ensure that the desired signals are to their front and unwanted signals (noise) to the rear or sides - the directional performance of the microphones themselves may be inadequate. That is, they simply may not be providing sufficient directional benefit.

This benefit is quantified by the Directivity Index (DI), the metric most often used to describe the performance of directional microphone hearing aids. Killion points out that there is a vast difference between a measurable DI and one that is actually noticeable. Under carefully controlled test conditions, sentence identification scores increase by about 10% for every 1 dB improvement in the DI, so even a DI of 2 dB or less will have a measurable effect.

In a real-life situation, however, with unpredictable and sometimes large changes in the nature, level, and location of the noise, and with speech signals often varying from moment to moment, a DI of 2 dB or less is simply not noticeable. It is easy to attribute momentary improvements or decrements in comprehension to changes in the environmental situation, like somebody talking louder or softer or the noise suddenly waxing or waning. It is only when the DI reaches about 4 dB that the contribution of directional microphone hearing aids is both measurable and noticeable in spite of the changing acoustic circumstances.
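As a rough illustration of that rule of thumb, here is a short Python sketch. The ~10% change in score per 1 dB is Killion's figure as cited above, and the 4 dB noticeability threshold comes from the preceding paragraphs; the straight-line extrapolation is my own simplification (real scores, of course, top out at 100%):

```python
def predicted_score_gain(di_db, percent_per_db=10.0):
    """Rough predicted gain (percentage points) in sentence
    identification scores from a directional advantage of di_db,
    using the ~10%-per-dB figure cited in the article."""
    return di_db * percent_per_db

for di in (1, 2, 4, 6):
    # Per the article: a DI of ~2 dB or less is measurable under
    # controlled conditions but not noticeable in real life; ~4 dB
    # or more is both measurable and noticeable.
    status = "measurable and noticeable" if di >= 4 else "measurable only"
    print(f"DI = {di} dB -> ~{predicted_score_gain(di):.0f}-point gain ({status})")
```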

Confounding this concept is the fact that the DI is lower when actually measured on a human being (or, in this case, a simulated human being, the KEMAR manikin) than when the aid is simply mounted on a stand in a test chamber. Thus the DI specifications given by a hearing aid manufacturer may not accurately predict real-life performance. A good rule of thumb for hearing aid users would be to obtain directional microphone hearing aids that provide the highest possible DI. There are hearing aids with KEMAR-measured DIs of 5 and 6 dB. This degree of directionality would be not only measurable but also noticeable. It's not enough, therefore, for consumers to simply request hearing aids with directional microphones; rather, they should be asking for those with the best directional performance. It can make a big difference.

According to Killion, the extent of this difference could actually permit some hearing aid users to hear better than people with normal hearing in certain noisy situations. For many audiologists, including me, this is a heretical concept. We know of the mountains of evidence pointing to the greater relative difficulty that hearing-impaired people have understanding speech in the presence of noise. His point is that the signal processing of directional microphone technology can more than compensate for the signal distortions caused by impaired hearing. However, he made this statement on a theoretical basis only, after taking into consideration a person's SNR loss and the DI of some directional microphone hearing aids.

In March 2004, three respected audiological researchers (Bentler, Palmer, and Dittberner) published an article in the Journal of the American Academy of Audiology that did examine whether some hearing-impaired people using directional microphone hearing aids could understand speech in noise as well as a group of normally hearing college students. The 46 hearing-impaired subjects had mild-to-moderate sensorineural hearing losses, with an average age of 62. As did Killion, these researchers used a test that provided an SNR. (To review: an SNR is the intensity level of speech relative to noise at which a subject can achieve a 50% sentence identification score. The lower the better, with negative numbers being best; each 1 dB change in the SNR can increase or decrease sentence intelligibility scores by about 10%.)

The 48 normally hearing subjects were directly tested in a sound-treated room. Their performance was the benchmark to which the scores of the hearing-impaired group were compared. The examiners tested the hearing-impaired subjects under a number of conditions. These included a hearing aid set in the omnidirectional mode, and two- and three-microphone directional hearing aids in both fixed and adaptive directional modes. In an adaptive mode, the hearing aid "tracks" the dominant noise source and changes its directional characteristics accordingly. All tests were conducted using both stationary and moving noise delivered from loudspeakers behind and to the sides of the subjects.
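To show what "tracking" the noise source can mean in practice, here is a minimal narrowband sketch in Python of a two-microphone differential array that steers its null toward the noise. The microphone spacing, probe frequency, and brute-force search are illustrative assumptions on my part, not details of the aids used in the study:

```python
import numpy as np

C = 343.0    # speed of sound, m/s
D = 0.012    # microphone spacing, m (typical hearing-aid scale; assumed)
F = 1000.0   # single probe frequency, Hz (real aids adapt broadband)
OMEGA = 2 * np.pi * F

def gain(theta_deg, tau):
    """Magnitude response of a first-order differential pair (front mic
    minus delayed rear mic) to a plane wave from theta (0 deg = front)."""
    travel = (D / C) * np.cos(np.radians(theta_deg))  # inter-mic delay
    return abs(1 - np.exp(-1j * OMEGA * (travel + tau)))

def adapt_null(noise_theta_deg):
    """Pick the internal delay that minimizes the response in the noise
    direction -- the 'tracking' behavior the article describes."""
    taus = np.linspace(0.0, D / C, 500)
    return taus[int(np.argmin([gain(noise_theta_deg, t) for t in taus]))]

for noise_dir in (180, 135):   # the noise source moves around the wearer
    tau = adapt_null(noise_dir)
    print(f"noise at {noise_dir} deg: front gain {gain(0, tau):.2f}, "
          f"noise-direction gain {gain(noise_dir, tau):.4f}")
```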

There are several points that should be attended to in evaluating this study:

First, do directional microphones work as well as reputed?

Second, is there a difference between two- and three-microphone designs?

Third, is there a difference in efficacy between aids with fixed and those with adaptive directional microphone characteristics?

Fourth, are these differences affected by the nature of the competing noise source (stationary or moving)?

And fifth, do directional microphones permit hearing-impaired people to comprehend speech in noise as well as normally hearing college students?

Lots of permutations here, but we'll just cover the major findings.

The very first general conclusion we can come to regarding this study is that directional microphones do work, and can work very well. They provide better understanding of speech in noise with both two and three microphones, in either the adaptive or fixed directional mode, and with both stationary and moving noise sources. In all of these conditions, the SNR obtained is 3 to 4 dB lower than that obtained with omnidirectional microphone hearing aids - which, by the roughly 10%-per-dB relationship noted above, can translate into a 30% to 40% improvement in sentence understanding.

In respect to the third point, the scores were slightly better with the adaptive directional microphones compared to the aids set in the fixed directional mode. This, of course, is only relevant in a moving noise situation. These differences, however, are slight and may not even be noticeable (though they are measurable!).

In terms of the last point mentioned above, the directional microphones do work well enough in stationary noise conditions that the SNR scores of the hearing-impaired group were very similar to those obtained by the normally hearing subjects, using either the two- or three-microphone system. In a moving background noise situation, which is a more challenging listening condition, only the results obtained with the three-microphone adaptive system were statistically indistinguishable from those achieved by the normally hearing college students.

Still, these are pretty impressive results. Does this mean that all people with hearing loss need to do is use a hearing aid with good directional microphones and they will hear as well as people with normal hearing? Not hardly. Let's consider the subjects used in the three studies reviewed above. All of them had mild to moderate hearing losses, with measurable thresholds extending to the limits of the audiogram. These people had what Dr. Bentler called "plain vanilla" hearing losses. We do not know how much we can generalize these findings beyond the specific types of subjects and noise conditions of the three studies.

It is true that most people with hearing loss fall into the mild to moderate category. For these people, I think that "high-fidelity" hearing aids would be of undeniable and immediate benefit. If technology can provide these people with hearing aids that sound like an upscale audio system, why not? Just because their hearing loss is rated as "mild to moderate" does not mean that it does not produce problems for them. Of course it does. These people could also undoubtedly realize great benefits from directional microphone hearing aids that incorporate real-life DIs in excess of 6 dB. This is not an unreasonable figure.

However, there is one major qualification here. Directional microphones, of any type, do not work as well in highly reverberant conditions as they do in noise alone. In the Bentler, Palmer & Dittberner study, the competing noise did not include reverberation. Since many real-life conversations take place in reverberant places, it is unlikely that people with hearing loss will be able to understand speech in such settings as well as those with normal hearing. The normal ear feeding the normal brain is still a lot more sophisticated than even the most advanced technology. And it is the brain, more than the ears, that enables people with normal hearing to focus in on the direct sound in a reverberant location while suppressing its reflections.

How about people with severe hearing losses, or those with little or no residual hearing in the higher frequencies? How would the results of this research relate to them? We don't really know, but one conclusion is immediately apparent. It is unlikely that these people can appreciate a high-fidelity signal when they have only "low-fidelity" hearing. For example, if a person has little or no residual hearing past 3 or 4 kHz, can he or she appreciate an amplified signal that extends to 16 kHz? Probably not, in my judgment. We also do not know the degree to which people with more severe hearing losses can benefit from directional microphone hearing aids. The ear operates differently with hearing losses up to about 60 or 70 dB than it does with hearing losses greater than that. I do believe that some benefit is possible with this population - I've experienced it myself - but the degree of benefit may be less than that realized by people with less severe hearing losses.

Whatever one's hearing loss, what this research suggests is that better hearing in noise is attainable through the use of high fidelity amplification coupled to excellent directional microphones. At the most pessimistic, it can only help and won't hurt.
