Frequency-Lowering Hearing Aids: Increasing the Audibility of High-Frequency Speech Sounds

The most common type of hearing impairment is a high-frequency loss (i.e., perception of the higher frequencies is poorer than that of the lower ones). People with this type of problem often complain that they can hear speech but not understand it, particularly in noise. While this difficulty occurs for a number of reasons, a primary one is the fact that such individuals cannot perceive many of the high-frequency voiceless consonants, such as the /t/, /k/, /f/, /th/, /sh/, and /s/ sounds. Yet in order to fully, or easily, comprehend speech it is crucial that these sounds be heard. In fact, it has been known for some time within the field of audiology that speech comprehension depends more upon hearing the higher, as opposed to the lower, frequencies in the speech spectrum.

In addition to their importance for speech perception, some of these high frequency consonants convey important grammatical information. For example, consider the /s/ sound in signaling plurals (book, books), contractions (it is, it’s), possession (Jake’s book) and third person singular (Ben walks home while his sister takes the bus). In each of these examples, important semantic as well as grammatical information is being transmitted by the /s/ phoneme. This has particular significance for hard of hearing children, who are in the process of developing speech and language via hearing. Because such children cannot hear the high frequencies very well, their speech, language and academic skills are often deficient.

Given the importance of the /s/ phoneme, it is ironic that it is precisely this sound which contains the highest-frequency acoustic elements of any sound in the English language, and is thus the most challenging for the average hearing-impaired listener. An analysis of the acoustic spectrum of /s/ shows that most of its significant energy lies well above 4000 Hz, ranging from about 4500 Hz to more than 8000 Hz. This means that most people with a high-frequency hearing loss must depend upon the lower-frequency elements of this and other high-frequency voiceless consonants in order to perceive them at all. Hard of hearing adults are able to unconsciously call upon their normal linguistic development to fill in the acoustic gaps when the actual cues are missing or minimal, albeit imperfectly and with considerable effort. The situation is much more difficult for hard of hearing children, who lack this normal background.

Audiologists are well aware of the importance of the high frequencies in general, and the /s/ phoneme in particular. When fitting a hearing aid, they do try to ensure that the high frequencies are as audible as possible, but they are limited by the extent of the high-frequency hearing loss and the upper frequency range of most hearing aids. Generally, the greater the degree of high-frequency hearing loss, the more difficult it is to fit a hearing aid properly. For some people it may be impossible to provide the necessary degree of high-frequency amplification without incurring acoustic squeal (even with a feedback-suppression feature in the hearing aid). Complicating the situation is the possibility that cochlear dead regions may exist at the frequencies where thresholds are in excess of about 70 dB. That is, the measured hearing thresholds may reflect the responses of a lower portion of the basilar membrane (the inner-ear structure supporting the hair cells) and not the specific frequency being tested. Because of the possibility of distortion, delivering amplified sounds to such a region may actually be detrimental to comprehension (or at best ineffective).

The combination of all these factors (a high-frequency hearing loss, the acoustic spectrum of the voiceless consonants, in particular the /s/, the difficulty in providing sufficient amplification at the higher frequencies, the possibility of cochlear dead regions, and the upper frequency limits of hearing aids) led to the concept of hearing aids that would shift the high frequencies of speech to the lower ones. The reasoning was that if the speech energy in the high frequencies could somehow be shifted to the lower frequencies, where the hearing thresholds were better, then this high-frequency information would at least be audible, though considerably modified and sounding somewhat “unnatural.” The challenge was, and is, to reach this goal without simultaneously obscuring or unduly degrading the acoustic information being delivered to the lower frequencies. Currently, there appear to be at least three different techniques incorporated in commercially available hearing aids designed to do this (there may be others, but I’ve seen no published reports on them).

In 1998, the AVR Sonovation Company introduced the ImpaCt BTE hearing aid (following an earlier body-aid version). Although one doesn’t hear much from this company lately, for a number of years it was the only one that offered this concept to consumers. The company still exists and markets several aids that include what they term “Dynamic Speech Recoding” or Frequency Compression. When a voiceless sound is detected (a predominance of energy in the higher frequencies), for that moment in time the entire spectrum is compressed and thus, essentially, shifted to the lower frequencies. All energy peaks within the signal are shifted proportionately (for example, with a frequency compression ratio of 2, sounds at 6000 Hz are shifted to 3000 Hz, while 3000 Hz sounds are moved to 1500 Hz, and so on). The system works extremely rapidly, and the lower frequencies are not supposed to be affected. Essentially, what the system does is match the bandwidth of the incoming speech spectrum to the damaged ear’s more limited, but still usable, range of hearing. The degree of frequency compression and the cross-over frequency are adjustable, depending upon the configuration of the hearing loss.
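
To make the arithmetic concrete, here is a minimal sketch (in Python) of proportional frequency compression of this general kind. It illustrates only the mapping itself, not AVR’s proprietary algorithm or its voiceless-sound detector; the function name and the ratio of 2 are simply taken from the example above.

def compress_frequency(freq_hz, ratio=2.0):
    """Map an input frequency to its proportionally compressed output frequency."""
    return freq_hz / ratio

# With a compression ratio of 2, the peaks mentioned in the text shift as follows:
for peak_hz in (6000, 3000):
    print(peak_hz, "Hz ->", compress_frequency(peak_hz), "Hz")
# 6000 Hz -> 3000.0 Hz
# 3000 Hz -> 1500.0 Hz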

As with any new or different hearing aid feature, the final test is whether it actually improves speech perception. There have been a number of published studies investigating the efficacy of this feature, with the latest appearing just a year ago. On average, these studies have reported generally favorable results. However, the findings in all of them display large individual differences; about half the subjects showed clear improvement with this feature, while the other half obtained similar scores in the treated and untreated conditions. For example, in the last such study to be reported, two of the six subjects showed significant improvement in their speech perception scores while using frequency compression, with three others showing minimal improvements in the noise condition.

Several years ago, Widex introduced what they term the Audibility Extender (AE) feature in their Inteo hearing aid. Essentially, the Audibility Extender transposes unaidable high-frequency sounds to usable low-frequency regions. In the first step of the process, the hearing aid selects a “start” frequency. This is the frequency point at which the AE program determines (based on the person’s stored thresholds) that aidable hearing ends and unaidable hearing begins. For example, 2000 Hz could be the start frequency for someone whose thresholds drop off sharply at this frequency and whose hearing, therefore, is not usable above this point. The program then identifies a peak frequency within the non-aidable octave above the start frequency (in this case, from 2000 Hz to 4000 Hz), then shifts and filters it, along with the sounds surrounding it, to fit in the octave below the start frequency (i.e., from 1000 Hz to 2000 Hz). It is important to properly identify the start frequency, a point the company stresses in its publications. If it is set too low, then frequencies at which hearing is still usable will not be aided normally; if set too high, then potentially important information will not be transposed. The program allows for wide individual variations (in start frequency, number of octaves transposed, etc.).
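For readers who think in code, the sketch below illustrates the kind of processing being described. It is a deliberately simplified illustration, not Widex’s actual AE algorithm (which is proprietary); the function, its peak search, and the width of the transposed band are my own assumptions.

import numpy as np

def transpose_one_octave(signal, fs, start_hz):
    """Toy frequency-domain transposition: find the strongest component in the
    unaidable octave above start_hz and move it, plus a band of surrounding
    frequencies, down one octave onto the existing low-frequency content."""
    n = len(signal)
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)

    # Source region: the octave above the start frequency (e.g., 2000-4000 Hz).
    src = np.where((freqs >= start_hz) & (freqs < 2 * start_hz))[0]
    if src.size == 0:
        return signal

    # Identify the dominant peak within that octave, then take the bins
    # around it ("the sounds surrounding it").
    peak = src[np.argmax(np.abs(spectrum[src]))]
    band = src[np.abs(src - peak) <= src.size // 4]

    # Halving a bin index halves its frequency, i.e., shifts it down one octave
    # (2000-4000 Hz lands in 1000-2000 Hz), where it is added to whatever
    # low-frequency energy is already present.
    out = spectrum.copy()
    np.add.at(out, band // 2, spectrum[band])

    return np.fft.irfft(out, n=n)

Note that in this sketch the transposed band is simply added to whatever already occupies the octave below the start frequency, which is precisely why the overlap and potential masking discussed next can arise.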

Essentially, then, the transposed high frequencies are laid over, and may co-exist with, the existing sounds in the frequency region one octave below the selected start frequency. On the surface, this appears to increase the likelihood of signal distortion and confusion. However, Dr. Francis Kuk of Widex, who has written extensively on the AE, states that while hearing aid users may experience some initial “masking/confusion,” within two weeks to two months this confusion apparently diminishes and performance begins to improve. At this time, most evaluations of the efficacy of the AE have been undertaken by Widex personnel, who report generally favorable results, particularly with consonant recognition and after an adaptation period.

The latest entry into the frequency-lowering realm is the SoundRecover (SR) feature offered in Phonak’s Naida hearing aid. This aid appears to combine aspects of the two previous devices in that it both compresses high-frequency signals and shifts them to a lower-frequency region. SR compresses speech signals above some pre-selected cut-off frequency and shifts this high-frequency sound into a frequency region in which there is usable residual hearing. For example, in a case reported by the University of Western Ontario, the cut-off frequency was 2900 Hz and the compression ratio was 4:1. What this means is that the frequency range above this point (extending to the upper limit of the hearing-aid response) would be compressed by a factor of four and relocated to the region just above 2900 Hz, where there was still usable residual hearing. The idea is to ensure that the important information contained in the very high frequencies is available to the hearing aid user. The selected cut-off frequency and compression ratio both depend upon the user’s hearing loss and may be modified to reflect a person’s listening experiences. Frequencies lower than 2900 Hz (in this example) would be amplified as they normally would be.
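One plausible way to express this kind of mapping in Python is the piecewise function below. It is a hedged sketch only: Phonak has not, to my knowledge, published SR’s exact warping formula, so the function and its linear compression above the cut-off are my own assumptions, using the 2900 Hz cut-off and 4:1 ratio from the example.

def lowered_frequency(freq_hz, cutoff_hz=2900.0, ratio=4.0):
    """Frequencies at or below the cut-off pass through unchanged; frequencies
    above it are squeezed into a narrower region just above the cut-off."""
    if freq_hz <= cutoff_hz:
        return freq_hz
    return cutoff_hz + (freq_hz - cutoff_hz) / ratio

# With these settings, energy between the cut-off and 8000 Hz is squeezed
# into roughly 2900-4200 Hz, while frequencies below the cut-off are untouched:
for f in (2000, 4000, 6000, 8000):
    print(f, "Hz ->", round(lowered_frequency(f)), "Hz")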

The concept of the SR has been investigated in several studies, with the most recent efforts conducted at the University of Western Ontario. Researchers looked at the results obtained for both children and adults with varying degrees of hearing loss, with and without the SR enabled. The results showed that, on average, the feature improved the recognition of high-frequency consonants and plural words without adversely affecting vowel recognition. The benefit was generally greater for individuals with the more severe hearing losses, as well as for children. As appears to be the rule in such research, a great deal of individual variability was observed. What I found particularly interesting in one of the studies were figures that displayed, via real-ear tests, the improved audibility of the /s/ sound with the SR feature enabled compared to when it was turned off. Without the feature, the acoustic spectrum of /s/ clearly fell below the listener’s hearing thresholds at the high frequencies. With the SR turned on, it could be visually observed that the energy in the /s/ sound was clearly audible, albeit at a lower frequency than it would be normally. I find this kind of demonstration particularly compelling. We know that people vary in their ability to utilize these modified high-frequency consonants, but this procedure demonstrates that, at the least, they can be heard.

Some observations can be made that apply to all three methods of frequency lowering. With each one of them, improved detection of high-frequency sounds is observed. This is a natural consequence of a technology that detects high-frequency sounds and lowers them (via compression, shifting, or transposition) to a lower frequency region. The more important question, however, concerns how well the processed speech is understood and accepted. The auditory sensations produced by all three systems are initially rather strange; the cochlea is not “tuned” to hear high-frequency sounds delivered to lower points along its length. A period of adaptation is therefore recommended, regardless of which technique is used. One cannot listen through one of these systems and expect the resulting auditory sensations to be “normal.” But it does seem that some adaptation is possible with each of them. Of course, a large degree of individual variation can be expected. For reasons not fully understood, some people seem to benefit more than others. Children, perhaps because of their much greater neural plasticity, seem to benefit more than adults.

At this time, then, we have three methods of improving the audibility of high-frequency sounds. What we don’t have, but should, is a comparison of all three methods tested on the same group of hearing-aid users. It seems pretty straightforward to me. However, I doubt that the manufacturers of the three different systems would undertake such a project; they’re not about to conduct a study that may prove their product inferior to the other two. Instead, this project should be undertaken by someone, or some group, in the audiological community. It may be that all three methods are fairly equal, but in any case it is information that would be helpful to hard of hearing people. Hopefully, soon?