Low Frequency Residual Hearing Revisited

This article first appeared in Volta Voices (March/April 2000)

We are living at a time characterized by major technological advances that impinge upon all aspects of our lives. As these developments concern children with hearing loss, we can note, for example, the introduction of cochlear implants, sophisticated digital signal processing hearing aids, and innovative classroom amplification systems. These developments offer choices and possibilities for children that we could barely dream of a generation ago. Without in any way disparaging these marvelous developments, which are truly commendable, we note that in our focus on the “new” it is easy to overlook the fact that information and concepts developed forty or more years ago can still have contemporary relevance. In this paper, we will focus on the potential value of the residual hearing of children who are considered audiometrically “deaf”. While we would encourage the parents of such children to consider a cochlear implant, there are many children who, for a variety of reasons, are not candidates, and others whose parents decide not to go that route. It is for these children that we are now revisiting the potential significance of low frequency residual hearing. But, first, a few words about hearing itself.

Although its absence clearly does not make anyone less a human being, still the sense of hearing is ordinarily an integral component of our biological birthright. While we are aware that audition provides the sensory basis for auditory-verbal language development — undoubtedly its most important function — it does more than this. Hearing provides a sensory channel to the brain that not only informs us about the world around us but also enriches our overall perception of the world. By being immersed in a three-dimensional acoustic sphere, we are able, consciously or unconsciously, to adapt to and feel part of our immediate surroundings. We are able to identify potentially dangerous or significant unseen events because we can selectively “tune in” to the sound waves that surround us. Unlike our eyes, our ears are always “open” to receive stimuli and thus we are able to continuously monitor our surroundings through this auditory connection. Just because someone has a profound hearing loss does not mean that he or she need be deprived of all the non-linguistic contributions that hearing can make. As long as a person possesses any residual hearing at all, some potentially significant sensory information from the auditory channel can still be obtained.

Over fifty years ago, Goodman (1949) pointed out that over 90% of the children in schools for the deaf possessed some residual hearing; that is, fewer than 10% were totally deaf. Twenty years later, Elliott (1967) corroborated these findings. Nothing, to our knowledge, has been published recently that disputes them. Visit any school for the deaf, go to any classroom, and if the audiograms are available, you will note that relatively few children are unable to perceive any sound stimuli at any loudness level. While many of the children in these classrooms possess a great deal of residual hearing, and thus are potentially capable of deriving primary benefit from appropriate auditory management, we shall consider here only children whose residual hearing is concentrated in the lower frequencies. With a few exceptions (whose accomplishments can provide important therapeutic lessons), these are not children for whom the auditory channel can serve as the primary mode for the development and use of auditory-verbal language. Rather, these children would be utilizing speechreading, cued speech, or a sign system for interpersonal communication. Nevertheless, even as they depend upon and require a visual system, we believe that they should not be denied the auditory sensations they are capable of receiving, for whatever benefits these perceptions can confer. To be explicit about this point: we are not debating here the merits of one communication mode over another. As long as there is some measurable residual hearing at all, we believe that its appropriate use offers potential benefits regardless of a person’s preferred communication system.

Low Frequency Speech Energy

Acoustically, the energy of speech sounds ranges across the entire frequency spectrum, from very low (about 100 to 125 Hz) to very high frequencies (above 10,000 or 12,000 Hz for some sounds). While the vowels generally are louder and lower in frequency (pitch) than the consonants, it is the weaker, high frequency consonants that are the most important elements for comprehending speech. When one’s residual hearing is concentrated in the low frequencies, it naturally limits how much of the acoustic energy of speech can be perceived. Such a person should be able to detect (not identify; there is a difference) all the vowels and some of the lower frequency portions of the consonants. However, this person may actually be able to identify, and then only imperfectly, just a few of the lower frequency vowels (e.g. /oo/, /ah/) and probably few if any consonants. Thus, it would be very difficult and unusual for someone with little or no residual hearing at 1000 Hz and above to develop and comprehend language through hearing alone. If this were all that low frequency residual hearing could offer, then the lack of urgency in ensuring its availability to a listener would be understandable. But, in fact, much more is possible.
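
To make this concrete, one can estimate what fraction of a recording’s long-term energy falls below a given cutoff. The following is a minimal Python sketch of that calculation, offered only as an illustration; it assumes a mono WAV file of running speech, and the filename speech.wav is a placeholder:

    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import welch

    # Load a mono recording of running speech (placeholder filename).
    fs, x = wavfile.read("speech.wav")
    x = x.astype(np.float64)

    # Long-term average power spectral density via Welch's method.
    f, pxx = welch(x, fs=fs, nperseg=4096)

    # Because the frequency bins are evenly spaced, the ratio of bin sums
    # equals the ratio of energies below and above the cutoff.
    cutoff = 1000.0
    fraction = pxx[f <= cutoff].sum() / pxx.sum()
    print(f"Energy below {cutoff:.0f} Hz: {100 * fraction:.1f}%")

For typical conversational speech, a calculation like this confirms the point above: most of the overall energy, though not most of the intelligibility, lies in the lower frequencies.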

There is a rhythmic aspect to conversational, continuous speech, a kind of melody grounded on the fundamental pitch of a person’s voice. It is characterized by vowel and vowel-like sounds, periodically interrupted by the articulatory constrictions that produce the higher frequency consonants. The resulting so-called “speech envelope” (sound waves whose loudness varies over time) conveys what are termed the prosodic components of the speech signal. This is basically a low frequency phenomenon, one that is perceptually available to anyone who possesses low frequency residual hearing.
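
For readers who wish to see the speech envelope directly, one common way to compute it is to rectify the waveform and smooth it with a low-pass filter. The sketch below is illustrative only; the filename and the 30 Hz smoothing cutoff are our own choices, not part of any standard:

    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import butter, filtfilt

    fs, x = wavfile.read("speech.wav")  # placeholder filename
    x = x.astype(np.float64)

    # Full-wave rectification followed by low-pass smoothing yields the
    # slowly varying amplitude envelope that carries the prosody.
    b, a = butter(4, 30.0, btype="low", fs=fs)
    envelope = filtfilt(b, a, np.abs(x))

The resulting envelope varies at roughly syllabic rates of a few cycles per second, which is one reason it remains accessible even to severely limited low frequency hearing.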

Prosodic elements include intonational (pitch) contours, syllabic and word stress (changes in intensity and duration), and the relative duration (timing) of sounds, words, and phrases. These rhythmic and temporal features of the speech wave convey the uniqueness of a person’s voice and contribute to the overall communication process. For example, raising or lowering one’s pitch lets us know whether an utterance is a statement or a question; word stress and prolongation can enrich and clarify the meaning of a statement (e.g. you’re going W H E R E?); and syllabic stress can change the meaning of a word (e.g. CONvict versus conVICT). None of these examples would be visible on the lips. They are generated at the vocal folds in the larynx and must be heard to be perceived. Low frequency perception can provide this information.

Low Frequencies as an Aid to Speechreading

Even though residual low frequency hearing may be insufficient to comprehend speech through audition alone, this does not mean that it cannot foster speech perception. The contribution that even a little residual hearing can make to combined audio-visual speech perception was demonstrated over fifty years ago at the Clarke School for the Deaf (Numbers & Hudgins, 1948). They, and many others since, showed that although profoundly hearing-impaired children could understand few or no words through audition alone, in combination with speechreading their recognition scores surpassed those obtained through vision alone. This phenomenon has been so frequently observed that it is no longer a matter of dispute. A typical example would be a profoundly hearing-impaired child who obtains very low auditory-alone scores, say about 4%, and visual-alone scores around 40%, but achieves combined audio-visual scores in excess of 60%. This bi-sensory improvement is observed even when auditory-alone speech perception scores are zero, as was shown in a study in which only the fundamental pitch and rhythm of the voice were extracted and transmitted to listeners (Boothroyd, Hnath-Chisolm, Hanin, & Kishon-Rabin, 1988).

There are several reasons for the contribution that audition is able to make in these instances. What apparently happens is that hearing-impaired people are able to utilize the overall envelope of the speech wave as a set of auditory vowel markers. That is, by hearing the onsets and offsets of the strong vowels, they are able not only to discern the rhythm (intonation, stress, timing) of speech but also to focus their attention on the intervening consonants. Additionally, vowel identification is fostered when audibility (without identification) is combined with the distinctive lip-rounding visible when some of the vowels are spoken (the difference between /oo/ and /ee/, for example).

Another, perhaps more salient, reason for the improvement in speech perception seen in the combined mode is the fact that vision and audition serve as complementary avenues for the perception of speech. Fortuitously, it turns out that many speech sounds that are the most difficult to identify through hearing alone are the easiest to see on the lips, and vice versa; that is, sounds that are the most difficult to distinguish through vision are the easiest to hear. For example, it is through hearing the onset of voicing and certain subtle time differences that listeners can distinguish between the p, b, and m sounds (which look alike on the lips), or identify which member of the following pairs was spoken: b or p, d or t, k or g, and f or v. Conversely, speechreading would permit a listener to differentiate among b, d, and g, sounds that are often confused with one another through hearing alone.

Low Frequencies for Monitoring Speech Production

The speech quality of people who are congenitally profoundly deaf, and who have been trained with a visually based approach (speechreading or signing), is often problematic in many obvious respects. Perceptually, even when it is intelligible to a normally hearing listener, the speech of many deaf students often sounds harsh, strained, breathy, and nasal, with obvious pitch irregularities and prolonged and indeterminate vowels. What listeners often respond to is not how accurately the speech is pronounced but its voice quality characteristics. When these are clearly abnormal, the person’s speech is judged unacceptable, often regardless of how intelligible it may be. As it happens, voice quality characteristics are precisely the dimensions of speech conveyed by the lower audible frequencies: timing, pitch, stress, melody patterns, and nasality. Not only is it possible, as reported above, for someone to hear these vocal elements in the speech of others; it is also possible for children to hear these qualities in their own speech. This is what educators and therapists mean when they state, as a therapy goal, the necessity for children to develop their auditory-vocal monitoring system.

Like the other points made in this paper, this is not a new concept. Dennis Fry wrote about it over 35 years ago (Whetnall & Fry, 1964). He pointed out how the normal motor activity of early babbling soon merges into a child’s control of speech production, based on the sounds the child hears when changing his or her articulatory movements. That is, in the usual developmental pattern, normally hearing children soon learn (albeit unconsciously) to associate changes in what they hear with what they do with their mouths and breathing muscles. This is precisely the developmental stage we try to exploit with early amplification, or to stimulate when hearing aids are fitted after infancy. Of course, hearing underlies the monitoring of all dimensions of speech production (including the consonants and consonant blends in conversational speech), not just the prosodic aspects. But when children possess only low frequency residual hearing, it is still possible for them to self-monitor and modify the dimensions of speech production that are largely responsible for the speech of deaf people so often being judged “deviant”.

Low Frequencies for Environmental Awareness and Alerting Signals

Whether we can hear them or not, we live immersed in a noisy world of sounds. Probably the only location in our industrial society in which we can experience total silence is a soundproof room. Everywhere else, sounds are being continuously produced. Each such sound is an event that signifies that some force has acted on some object to produce vibrations that travel through the air around us. In a sense, the actions and movements that create these sounds signal the turbulence and reality of much of our modern lives. When a person has a profound hearing loss, with residual hearing confined to the lower frequencies, much, perhaps most, of these sound events may be unavailable. But not all.

Many environmental sounds have a broad frequency spectrum; that is, they contain energy across a wide portion of the spectrum. With an appropriately adjusted hearing aid, the lower frequency components of these sound events can be detected, and then, with experience, identified. Many examples can be given; here are a few for which there is no effective assistive device that can substitute for hearing.

  • Traffic sounds (horns, sirens, etc.)
  • Voice alerts
  • Dogs barking or growling
  • A broken home appliance (e.g. clattering, screeching)
  • The wind in the trees, the roar of the surf, thunder in a storm

Is it absolutely essential that a person with a hearing loss be able to detect the presence of such sounds? Of course not. It would be an overstatement to assert that safety is always at issue when a person cannot hear the sounds of traffic or the growl of a menacing dog. In reality, the heightened visual alertness of profoundly deaf people ensures that dramatic consequences rarely occur; people do compensate. Is it useful and convenient, however, to be aware of these sounds? Can being able to detect and identify them foster a sense of security not otherwise possible? Here, we think, the answer has to be “yes”. To be aware of the sounds of the environment means that, figuratively, one has “eyes in the back of one’s head”. It does make it easier to live in a world where sound signifies potentially important events.

In a sense, the provision of low frequency residual hearing for environmental sound awareness is precisely parallel to the situation that existed when the first generation of cochlear implants was introduced. While these early implants did not permit the auditory-alone recognition of speech, they did permit the awareness and recognition of environmental sounds, as well as contributing to the speechreading process and the monitoring of speech production. What these single-channel implants showed is that exposure to even the most basic dimensions of sound can be extremely useful; the same holds for the person with only low frequency residual hearing.

Low Frequency Contribution for Recognizing Intended Emotions

The contribution of the lower audible frequencies to the recognition of the emotional intentions of a speaker was investigated 27 years ago (Ross, Duffy, Cooker, & Sargeant, 1973). In this study, six trained actors and actresses read from various scripts intended to convey different emotional states. Each different script, however, contained one identical passage. (“There is no other answer. You’ve asked me that question a thousand times and my reply has always been the same. It always will be the same”.) These sentences were excised from the tape and played to groups of normally hearing listeners, who were asked to identify which one of nine intended emotions (anger, indifference, grief, amusement, doubt, fear, love, contempt, astonishment) was meant to be conveyed by the full script. In spite of the fact that the listeners heard exactly the same lexical content (i.e. the same sentences) in all conditions, they were able to identify the intended emotion of the speaker at levels far above chance. Clearly, it was the non-linguistic aspects of the utterances that conveyed the emotional content.

What makes this study relevant in the present context are its next four listening conditions. The sentences were low-pass filtered so that all energy above 600 Hz, 450 Hz, 300 Hz, and 150 Hz, respectively, was eliminated. These conditions were meant to simulate various degrees of low frequency residual hearing in a person with a hearing loss. The listeners then had to listen to the tape and identify the intended emotion of the speaker under each of these four filtering conditions.
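
Readers with basic audio tools can approximate these listening conditions themselves. The sketch below low-pass filters a recording at each of the study’s cutoffs; it is our own reconstruction for illustration, not the original apparatus, and the eighth-order Butterworth filter and the filename are assumptions:

    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import butter, filtfilt

    fs, x = wavfile.read("speech.wav")  # placeholder filename
    x = x.astype(np.float64)

    # Simulate each filtering condition with a steep low-pass filter
    # (the original study's filter characteristics may have differed).
    for cutoff in (600, 450, 300, 150):
        b, a = butter(8, cutoff, btype="low", fs=fs)
        y = filtfilt(b, a, x)
        y = np.int16(y / np.max(np.abs(y)) * 32767)  # rescale to 16-bit
        wavfile.write(f"speech_lp{cutoff}.wav", fs, y)

Listening to the 150 Hz version makes the subjective description reported below vivid: the words vanish, but the pulsating rhythm of the voice remains.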

In the two “best” conditions (600 Hz and 450 Hz), the listeners could still identify all the intended emotions at far above chance levels, even though the words themselves were not intelligible. Except for one emotion (amusement), this continued to be true for the 300 Hz condition as well. Even in the most severe condition (all speech energy above 150 Hz eliminated), three of the emotions were correctly identified at beyond-chance levels. Subjectively, these stimuli were perceived as a pulsating, rhythmic, low frequency rumble. The scores for two of these three emotions (love and indifference) showed little or no degradation across all the listening conditions (which suggests that in conveying love, it is not so much what one says as how one says it!). In short, the results of this study indicate that the lower audible frequencies contain, and can convey, a great deal of information about the emotional state of a speaker.

Providing Low Frequency Amplification

The assumption up to this point has been that a person’s low frequency residual hearing can be made audible with personal amplification. This is now technically possible, but it was not always the case. For many years after personal hearing aids were first introduced, the lower frequency limit of hearing aids was almost never below 300 Hz and was usually somewhere around 400 or 500 Hz. We have often seen children whose residual hearing was concentrated below 750 Hz but whose hearing aids had a low frequency limit of 500 Hz or even higher. Such children were denied meaningful exposure to audition no matter how powerful their hearing aids were; the aids were amplifying acoustic energy at frequencies where the children had no residual hearing. For the average adult with a late-onset hearing loss, a lower frequency limit of 400 or 500 Hz can be desirable, depending upon the nature of the hearing loss. For children whose residual hearing is concentrated in the lower frequencies, however, it can be a problem. For them, the difference between a lower amplification limit of, for example, 150 Hz and one of 400 Hz can be significant.

Like all the other information presented in this paper, this is not new. It was pointed out 36 years ago by Dan Ling (1964) when he described the implications for profoundly hearing-impaired children of being able to hear below 300 Hz. The major reason relates to the fundamental frequency of people’s voices. For adult males, this is about 125 Hz; for adult females, about 250 Hz; and for children, about 400 or 450 Hz. The strongest acoustic energy in a person’s voice is usually at the fundamental frequency. Extending the low frequency response of hearing aids fitted to children with only low frequency residual hearing can therefore increase their perception of these signals, fostering their awareness of the prosodic and melodic dimensions of human voices. Additionally, the other potential contributions of low frequency residual hearing reviewed above cannot be realized if the hearing aids fitted to such children do not provide an appropriate low frequency response.
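
Ling’s point is easy to verify with a short voiced recording, since the fundamental frequency can be estimated from the waveform’s autocorrelation. The sketch below is our own illustration; the filename is a placeholder, and the 75-500 Hz search range was chosen simply to span adult male through child voices:

    import numpy as np
    from scipy.io import wavfile

    # A short sustained vowel works best (placeholder filename);
    # autocorrelation over a long file would be needlessly slow.
    fs, x = wavfile.read("vowel.wav")
    x = x.astype(np.float64)
    x -= x.mean()

    # The autocorrelation peaks at multiples of the pitch period.
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]

    # The strongest peak between 75 Hz and 500 Hz gives the pitch period.
    lo, hi = int(fs / 500), int(fs / 75)
    period = lo + np.argmax(ac[lo:hi])
    print(f"Estimated fundamental frequency: {fs / period:.0f} Hz")

If such an estimate for a child’s voice lands near 400 Hz while the hearing aid’s response begins at 500 Hz, the strongest component of that child’s own voice is simply not being amplified.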

To summarize, the potential advantages of extended low frequency amplification for children whose hearing is concentrated at these lower frequencies seem quite straightforward. If there is a downside, we are not aware of it. As we have already noted, most of these children are likely to be audiological candidates for a cochlear implant, a course of action that, however much we might personally endorse it, may not be appropriate for or selected by many children with this type of hearing loss. For them, we think “revisiting low frequency residual hearing” has much merit. The sounds of the world, to whatever degree they can be accessed, and no matter what a person’s primary communication mode may be, offer a connection to the world not possible through any other sensory avenue. That connection should not be artificially restricted by an inadequate amplification device or dismissed as irrelevant.

References

  • Boothroyd, A., Hnath-Chisolm, T., Hanin, L., & Kishon-Rabin, L. (1988). Voice fundamental frequency as an auditory supplement to the speechreading of sentences. Ear and Hearing, 9, 306-312.
  • Elliott, L. (1967). Descriptive analysis of audiometric and psychometric scores of students at a school for the deaf. Journal of Speech and Hearing Research, 10, 21-40.
  • Goodman, A. I. (1949). Residual hearing capacity of pupils in schools for the deaf. Journal of Laryngology and Otology, 63, 551-562.
  • Ling, D. (1964). Implications of hearing aid amplification below 300 CPS. Volta Review, 66, 723-729.
  • Numbers, M. E., & Hudgins, C. V. (1948). Speech perception in present day education for deaf children. Volta Review, 50, 449-456.
  • Ross, M., Duffy, R. J., Cooker, H. S., & Sargeant, R. L. (1973). Contribution of the lower audible frequencies to the recognition of emotions. American Annals of the Deaf, 118, 37-42.
  • Whetnall, E., & Fry, D. (1964). The Deaf Child. Springfield, IL: Charles C Thomas.