Dr. Ross on Hearing Loss
Maximizing Residual Hearing
by Mark Ross, Ph.D.
It is a demographic fact that most children with hearing losses, even those being educated in schools for the deaf, possess some degree of residual hearing. Rarely, however, does it appear that this residual hearing is being fully utilized. Our challenge, as educators and clinicians, is to ensure that it is being utilized, so that these children can use their remaining hearing for whatever benefits it can confer upon them.
Not all children, of course, can benefit equally; a number of factors, such as the nature and degree of the hearing loss, are beyond our control. Given, however, the ultimate limitations imposed by an impaired auditory system, this paper will briefly review those auditory management factors over which we can exercise some control and which are, in aggregate, the primary determiners of how well a child will use and benefit from residual hearing.
Early Detection and Auditory Sensory Deprivation
There are many reasons we want to detect the presence of a hearing loss as early in a child's life as possible (Ross, 1990). By starting our therapeutic efforts at this time, we have an opportunity to employ a developmental rather than a remedial approach to the fostering of speech and language, and the reinforcement of the emerging auditory-vocal monitoring system. While these are certainly cogent reasons for early detection and management, they are basically derivatives of the factor I would like to bring up now, and that is the potential impact of auditory sensory deprivation.
More than 25 years ago, research with animals began to convincingly demonstrate that depriving them of sound right after birth produced anatomical and other structural changes in the auditory centers and neurons. These changes were associated with abnormalities in the auditory skills necessary to resolve complex sound sequences, such as those found in a speech signal. The longer the animals were deprived of sound stimulation, the less the likelihood that these functional effects could be completely overcome with later sound exposures. While there was always reason to believe that human beings were not exempt from this phenomenon, recent research (Gelfand and Silman, 1993; Brown, 1994; Mikic and Slavnic, 1995) has demonstrated that it can indeed occur with children.
We don't know at what point deprivation effects become permanent (three months? six months? one year? two years? later?), and we don't know to what degree they can be partially or completely overcome with subsequent sound exposures. The presumption, however, should always be in a child's favor: the earlier a child is exposed to amplified speech sounds, the less likely it is that deprivation effects will have limited that child's auditory potential. Given the implications of this research, it is reasonable to assume that at least some of the past limits observed in children's ability to use residual hearing can be attributed to this factor alone.
The Amplified Speech Signal
Once children are fitted with amplification, it is necessary to ensure that the pattern of amplification is appropriate and that binaural fitting is the routine practice. Our goal is to select hearing aids that will provide them with the maximum amount of speech information consistent with their hearing loss. This would appear to be a self-evident goal; logically, if we intend for children to fully employ their residual hearing, the prerequisite condition must be that they first detect as much of the speech signal as possible. The traditional procedure of comparing unaided versus aided thresholds, while useful, is too prone to error and at best only a very indirect indication of the output characteristics of the hearing aid (Seewald, Hudson, Gagne & Cornelisse, 1989).
The most direct procedure for determining aided residual hearing is based on real-ear measures obtained with a probe-tube microphone. An example of such a procedure is the Desired Sensation Level (DSL) approach, in which one measures the actual output of the hearing aid in a child's ear canal in response to various intensity levels of speech-spectrum noise (Seewald, Zelisko, Ramji & Jamieson, 1994). The difference between these outputs and the child's unaided thresholds is a direct measure of the aided speech information. For example, if the output of a hearing aid to a 70 dB sound pressure level (SPL) speech-spectrum input is 110 dB SPL at a certain frequency, and the child's threshold is 90 dB SPL at the same frequency, then the aided sensation level is 20 dB. Note that I have used SPL in this example. One major advantage of the DSL procedure is that all measurements, such as a child's thresholds, the output of the hearing aid, and the speech-spectrum inputs, are plotted in terms of the SPL in the ear canal. This circumvents the complications wrought by different reference levels (Hearing Level, used to plot an audiogram, and Sound Pressure Level, used to depict the electroacoustic characteristics of hearing aids) and coupler sizes (a 6 cc earphone coupler, a 2 cc hearing-aid coupler, and real-ear responses). When this procedure is done properly, we can view the relationships among unaided thresholds, aided sensation levels for several speech input levels, tolerance limits, and the maximum output of the hearing aid. Necessary modifications can then be made to reach predetermined target outputs.
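The sensation-level arithmetic described above can be sketched in a few lines of code. This is an illustrative sketch only, not the DSL software itself; the frequencies and dB values are hypothetical, chosen so that the 1000 Hz entry reproduces the 20 dB example in the text.

```python
# Illustrative sketch of the aided-sensation-level calculation.
# All values are hypothetical and are expressed in dB SPL in the ear
# canal, so outputs and thresholds can be compared directly.

def aided_sensation_level(aided_output_spl, threshold_spl):
    """Aided sensation level = hearing-aid output minus unaided threshold."""
    return aided_output_spl - threshold_spl

# Hypothetical measurements at several audiometric frequencies (Hz):
aided_output = {500: 105, 1000: 110, 2000: 108, 4000: 100}  # output for a 70 dB SPL speech input
thresholds = {500: 85, 1000: 90, 2000: 95, 4000: 100}       # child's unaided thresholds

for freq in aided_output:
    sl = aided_sensation_level(aided_output[freq], thresholds[freq])
    print(f"{freq} Hz: aided sensation level = {sl} dB")
```

A sensation level of 0 dB (as at 4000 Hz here) would mean the amplified speech is just at threshold and contributing no usable information at that frequency, which is exactly the kind of shortfall the target-matching step is meant to catch.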
The Effect of Environmental Acoustics
Still, no matter how efficiently we conduct the electroacoustic selection process, high levels of noise and reverberation can render our efforts virtually useless. The best-selected hearing aids in the world are of little help when the noise level exceeds that of the speech signal. Furthermore, we know that acoustic conditions which have only a mild effect upon speech perception for normally hearing people can completely obliterate speech comprehension for people with hearing losses (Ross, 1992a).
I can personally attest to the difficulty I've often had in carrying on a conversation with normally hearing colleagues in noisy classrooms, particularly pre-school classes. Now if I, with an excellent command of the language, could not follow speech in that situation, how on earth can we expect children to develop their optimum auditory capabilities in the same kind of situation? The answer is, of course, that we cannot.
Speech buried in a background of noise cannot be used to learn to associate meaning with sound. During the language-learning process, the speech has to be as audible as possible. We need, in other words, to ensure a high speech-to-noise (S/N) ratio. This can be done in two ways, both of which are desirable: one is to reduce the level of the ambient sounds in any way we can, and the other is to employ a close-talking microphone to deliver the speech to the child.
Any close-talking microphone will do, but the most educationally flexible is a wireless FM microphone. While locating a microphone close to a talker's lips does not guarantee that a child will receive an optimal S/N ratio, the relatively intense speech signal impinging on the microphone is a necessary precondition (Ross, 1992b); if the input S/N ratio is not maximized, then it is impossible to achieve an optimum output S/N ratio (that existing in a child's ear canal). By also controlling the frequency response and acoustic output of the environmental microphone circuit (the one used by the child for self-monitoring and child-to-child communication), we can ensure both a high speech-to-noise ratio and the desired sensation level output.
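The advantage of a close-talking microphone can be made concrete with a little arithmetic. In a free field, sound level falls by roughly 6 dB for each doubling of distance from the talker; real, reverberant rooms attenuate less, so this sketch overstates the drop. Assuming hypothetical levels of 65 dB SPL for speech at one meter and a 60 dB SPL classroom noise floor, the sketch below compares the input S/N ratio for a microphone across the room with one on a boom near the lips.

```python
import math

# Illustrative sketch, assuming free-field (inverse-square) attenuation:
# level drops by 20 * log10(distance ratio) dB, about 6 dB per doubling.
# All levels below are hypothetical.

def level_at_distance(level_at_ref, ref_m, dist_m):
    """Speech level in dB SPL at dist_m, given the level at ref_m."""
    return level_at_ref - 20 * math.log10(dist_m / ref_m)

speech_at_1m = 65.0  # conversational speech, dB SPL at 1 m (assumed)
noise_floor = 60.0   # classroom noise, assumed uniform through the room

for mic_distance in (2.0, 0.15):  # across-the-room vs. FM boom microphone
    speech = level_at_distance(speech_at_1m, 1.0, mic_distance)
    print(f"mic at {mic_distance} m: speech {speech:.0f} dB SPL, "
          f"S/N {speech - noise_floor:+.0f} dB")
```

Under these assumed numbers, the distant microphone receives speech near or below the noise floor, while the boom microphone enjoys an input S/N advantage of some 20 dB, which is the "necessary precondition" the text refers to.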
In my judgment, FM systems, when used properly, are one of the most powerful educational tools we have for hearing-impaired children. They're the most effective way we have now to reduce the impact of poor acoustical conditions.
The Speech Inputs
Up to this point, the focus has primarily been on maximizing the detection level, the lowest one in the auditory hierarchy. This is a crucial first step. We can hardly expect children to achieve their performance potential in the higher levels of the auditory developmental hierarchy (discrimination, identification and comprehension) if we have not optimized the lowest level. The purpose of this entire exercise, however, is to go beyond providing the acoustic raw material: we want the children to use their hearing to recognize language and to control their vocal output while producing oral language.
There is time to present only a few general principles (Ross, Brackett and Maxon, 1991).
References
Brown, D.P. 1994. Speech recognition in recurrent otitis media: results in a set of identical twins. Journal of the American Academy of Audiology 5:1-6.
Gelfand, S. A. and Silman, S. 1993. Apparent auditory deprivation in children: implications of monaural versus binaural amplification. Journal of the American Academy of Audiology 44:313-318.
Mikic, B. and Slavnic, S. 1995. Influence of early auditory training on A.E.P. of severely hearing-impaired children. Paper delivered at the 18th International Congress on Education for the Deaf, Tel Aviv, Israel.
Ross, M. 1990. Implications of delay in detection and management of deafness. The Volta Review 92:69-79.
Ross, M. 1992a. Room acoustics and speech perception. In M. Ross (Ed.), FM Auditory Training Systems: Characteristics, Selection and Use. Baltimore, MD: York Press.
Ross, M. (Ed.) 1992b. FM Auditory Training Systems: Characteristics, Selection and Use. Baltimore, MD: York Press.
Ross, M., Brackett, D., and Maxon, A.B. 1991. Assessment and Management of Mainstreamed Hearing-Impaired Children: Principles and Practices. Austin, TX: Pro-Ed.
Seewald, R.C., Hudson, S.P., Gagne, J.P., and Cornelisse, L.E. 1989. Comparing two methods for estimating the sensation level of amplified speech. Paper read at the annual convention of the American Speech-Language-Hearing Association, November 1989, St. Louis.
Seewald, R.C., Zelisko, D.L., Ramji, K., and Jamieson, D.G. 1994. Computer assisted implementation of the desired sensation level approach: Version 3.1. University of Western Ontario, London, Canada.