Dr. Ross on Hearing Loss

Listening to Music Through Hearing Aids: The “Music” Program

This article first appeared in Hearing Loss (May/Jun 2009)

Recently I’ve noticed that the professional and trade journals are publishing more articles about listening to music through hearing aids and cochlear implants, rather than focusing on speech alone. Without minimizing the overarching importance of speech communication, this change apparently reflects a growing appreciation of the role music plays in many of our lives. Indeed, for some people, being able to hear music well may be as important as speech communication. The unique factors related to listening to music through a cochlear implant were discussed in an earlier article; in this one, I’d like to focus exclusively on hearing aid users (still the overwhelming majority of people who use some sort of hearing prosthesis). Although the specific adaptations and needs of musical performers will not be discussed in this article – their requirements deserve special attention and are a topic in their own right – any hearing aid feature applicable to non-musician listeners would also apply to them.

What comes up time and again is the fact that traditional hearing aids were designed with the goal of optimally responding to the acoustic characteristics of a speech signal, not music. There are important consequences emanating from this design requirement. The acoustic characteristics of music are quite different from those of speech, and a hearing aid that works well for speech perception may not be appropriate for listening to music. For example, the range between the softest sounds of speech (the voiceless /th/) and the loudest (the vowel /aw/) is about 30-35 dB, while even the loudest speech signal rarely exceeds 85-90 dB. The current generation of digital hearing aids is designed to process this range of speech inputs efficiently. In music, however, the range between the softest and loudest sounds is on the order of 100 dB, with the most intense elements (such as brass instruments) measuring as high as 120 dB. The implication of these acoustic differences is that while typical hearing aid users may be able to comprehend speech quite well if they can hear a 30-35 dB range of the signal across a wide band of frequencies, a much greater range is required when listening to music.

In order for someone to fully hear and appreciate all the components of a musical selection, the hearing aid must be designed to deal with a dynamic range of inputs on the order of 100 dB, from about 20 dB to 120 dB. Moreover, unlike hearing aids designed to maximize speech perception by emphasizing the higher frequencies, with music it is the lower frequencies that are the more important. Furthermore, the hearing aid must be able to amplify the lower frequencies without exceeding the capacity of the analog-to-digital (A/D) converter found in all digital hearing aids. (This is the circuit that converts acoustic inputs to a digital format.) These A/D converters were designed to process speech signals, and to do so without, or with minimal, distortion. While the first generations of digital hearing aids were not overly successful at this task, current models are able to manage the input range of speech very well. Many, however, are still not designed to deal with the range and intensity of inputs found in typical musical selections. When confronted with music, hearing aids with less than a 16-bit A/D converter may produce high levels of harmonic distortion that degrade the overall quality of the listening experience.
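
The link between a converter’s bit depth and the input range it can handle follows from a standard rule of thumb: each bit contributes roughly 6 dB (20·log10 2, or about 6.02 dB) of dynamic range, so an ideal 16-bit converter spans about 96 dB while a 12-bit design spans only about 72 dB. Here is a minimal back-of-the-envelope sketch in Python (the figures are theoretical ceilings; real converters deliver somewhat less):

    import math

    def adc_dynamic_range_db(bits: int) -> float:
        """Theoretical dynamic range of an ideal A/D converter:
        each bit adds 20*log10(2), about 6.02 dB."""
        return 20 * math.log10(2 ** bits)

    for bits in (12, 14, 16, 20):
        print(f"{bits}-bit converter: ~{adc_dynamic_range_db(bits):.0f} dB")

Run as written, this prints roughly 72, 84, 96, and 120 dB – which makes clear why a 16-bit converter sits right at the edge of the roughly 100 dB range of live music, and why anything less invites trouble.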

Recently, Dr. Marshall Chasin, of the Musicians’ Clinics of Canada, demonstrated major differences in harmonic distortion among five different hearing aids when they were driven with sound inputs at 90 and 100 dB. At the 100 dB input level (typical for music), the harmonic distortion of three of these five aids exceeded 50% (a horrendous figure!). Distortion levels were significantly lower at the 90 dB input level. According to Dr. Chasin, all five aids, including the three that distorted quite badly at the high input level, “did quite well” with regard to speech perception, for which typical input levels would be 90 dB or less. This is an important finding; it shows that a hearing aid’s performance on speech does not predict its ability to process typical musical selections. The converse, however, may well be true, at least for people with mild or moderate hearing losses: hearing aids that do best with music may also deliver the best quality speech signals. And hearing aids that do well with music are those with wide frequency ranges and the capacity to process high input levels without distortion. A “Hi-Fi” system, in other words – nothing new here!
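
For readers curious what a figure like “50% harmonic distortion” means: total harmonic distortion (THD) is conventionally the ratio of the energy an amplifier adds at multiples of a test tone’s frequency to the energy at the tone itself. The following toy Python sketch (my own illustration, not Dr. Chasin’s measurement setup) drives a deliberately overloaded stage – simulated here as simple hard clipping – with a 1,000 Hz tone and measures the result:

    import numpy as np

    fs = 48_000                      # sample rate; one full second of signal
    f0 = 1_000                       # test tone frequency in Hz
    t = np.arange(fs) / fs
    tone = np.sin(2 * np.pi * f0 * t)

    # Crude stand-in for an overdriven hearing aid stage: hard clipping.
    clipped = np.clip(tone, -0.5, 0.5)

    # With a one-second window, the FFT bins land exactly on whole Hz.
    spectrum = np.abs(np.fft.rfft(clipped))
    fundamental = spectrum[f0]
    harmonics = spectrum[[k * f0 for k in range(2, 6)]]
    thd = np.sqrt(np.sum(harmonics ** 2)) / fundamental
    print(f"THD of the clipped tone: {thd:.1%}")

Clipping the tone this severely yields a THD of roughly 23% – plainly audible as harshness – whereas a clean amplifier stays well under 1%.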

In the same issue of the Hearing Review (February 2009) in which the Chasin article appeared, Dr. Mead Killion examined the relationship between speech perception and the judged quality of music, for both normal hearing and hard of hearing people, as heard through hearing aids. Using a manikin of a human head, he recorded various musical selections and a speech perception test through seven different digital hearing aids, in an “open-ear” condition (no aid), and with the “Digi-K” (a hearing aid circuit that he developed). He then played these recordings back to both normal hearing and hard of hearing listeners and asked them to judge the fidelity of the musical selections on a scale of 0 to 100%. How good, in other words, did the music sound to them? The results showed that the fidelity ratings varied considerably across the seven aids, with the highest scores obtained in the “open-ear” condition and with the Digi-K. The important conclusion of this research is that both groups (normal hearing and hard of hearing) rated the fidelity of all the aids similarly; the hearing aids that sounded best and worst to the normal hearing listeners were rated the same way by the hard of hearing subjects. Evidently, the key factor was the quality of the reproduction through the hearing aid, not whether the listener was hearing-impaired or normally hearing.

In another component of the same study, Dr. Killion compared the speech perception scores in noise obtained by 26 hearing-impaired subjects with these seven digital aids to the fidelity ratings that normally hearing listeners gave the same aids. He found an orderly relationship between the two: the aids judged to reproduce music with the highest fidelity were also the ones with which the hearing aid users understood speech best.

This is a point worth repeating: Dr. Killion provides evidence that the hearing aids that best reproduce musical selections are also the ones with which the highest speech perception scores can be obtained. As indicated earlier, this requires a hearing aid that can respond to a large dynamic input range without distortion, as well as reproduce a wide acoustic frequency range (up to 16,000 Hz is often cited as the ideal, though this is hardly ever – if ever – realized in the real ear). We should keep in mind, however, that all the subjects in the studies above had mild or moderate hearing losses (people who also happen to make up the majority of current and potential hearing aid users). We don’t know how applicable these results would be for people with severe or profound hearing losses; their amplification needs may be considerably different from those of individuals with less severe losses.

The Music Program

Hearing aid manufacturers are well aware of the acoustical differences between speech and music, and of the different processing strategies that may be necessary for an aid to respond appropriately to each type of input. Some hearing aids with multiple memories devote one of them to a special “music” program. This program can be selected automatically by the hearing aid, based on the nature of the acoustical environment, or chosen deliberately by the user when listening to music. While the specific acoustical modifications will differ from manufacturer to manufacturer, the recent literature suggests a few that should definitely be considered (in addition to the capacity to process a wide dynamic input range efficiently).

In music, as already mentioned, the low frequencies take on a significance not found in speech. Consider, for example, that on an 88-key piano, 63 of the notes (72%) have fundamentals below 1000 Hz; among human voices, only a soprano reaches a fundamental pitch that high. For maximizing speech perception, on the other hand, it is the frequencies above 1000 Hz that are most important. On an audiogram, however, the lowest frequency measured is usually 250 Hz, while the frequency range amplified by hearing aids usually begins at about 300 or 400 Hz. In music one can find fundamental pitches extending down to 82 Hz (the low string of a guitar), and even lower for other instruments. Thus in ordinary circumstances there is a mismatch between the audiogram, the frequency range of hearing aids, and many of the important pitches in a musical selection. There are, for example, as many distinct notes on a piano between 100 and 200 Hz (12 of them) as there are between 1000 and 2000 Hz. If, therefore, we expect hearing aid users to fully enjoy music, they must be able to hear as much of the lower part of the frequency range as possible. Thus, when somebody switches to a “music” program, it seems obvious that this should include an extension of the hearing aid’s low frequency range. This is not to suggest that the higher frequencies can be ignored, since much of the energy of stringed instruments falls in the higher frequency region, as do the harmonics of the lower-pitched instruments; what it does suggest is that the lower frequencies require an explicit focus.
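
These note counts all follow from the equal-tempered scale, in which each key is a fixed frequency ratio (the twelfth root of two) above its neighbor. A small Python sketch confirms them, using the standard convention that key 49 of an 88-key piano is A4 at 440 Hz:

    def key_frequency(n: int) -> float:
        """Fundamental frequency of key n on an 88-key piano
        (equal temperament; key 49 is A4 = 440 Hz)."""
        return 440.0 * 2 ** ((n - 49) / 12)

    freqs = [key_frequency(n) for n in range(1, 89)]
    below_1k = sum(f < 1000 for f in freqs)
    print(f"Keys below 1000 Hz: {below_1k} of 88 ({below_1k / 88:.0%})")
    print(f"Keys from 100-200 Hz:   {sum(100 <= f < 200 for f in freqs)}")
    print(f"Keys from 1000-2000 Hz: {sum(1000 <= f < 2000 for f in freqs)}")

This prints 63 of 88 (72%) below 1000 Hz, and 12 keys in each of the two octaves – the same figures cited above.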

It has also been suggested that, in contrast to what may be best for speech perception, a single channel of amplification may be best when listening to music (at least for people with mild and moderate hearing losses). While it may be advantageous for speech perception to deliver two or more channels of amplification, each one adjusted somewhat separately, the situation is different with music. To fully appreciate a musical selection, it is necessary to preserve the original intensity balance between the lower and the higher frequencies; if a hearing aid amplifies the low or high frequencies too much or too little, this balance will be distorted. The music has to sound as it was performed, or as close to that as possible, and a single channel instrument helps preserve this balance. If a multi-channel hearing aid cannot be reduced to a single channel, then each channel should be adjusted alike so that the aid functions “as if” it were a single channel instrument.
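
To make the point concrete, here is a toy numerical sketch in Python (the compression settings are invented for illustration, not taken from any actual product). A simple compressor reduces gain above a threshold; when each band computes its own gain from its own settings, the relative levels of low and high frequencies change, but a single broadband gain leaves the balance intact:

    def gain_db(level_db: float, threshold_db: float = 50.0, ratio: float = 2.0) -> float:
        """Gain change applied by a simple compressor: above the
        threshold, output rises only 1/ratio dB per dB of input."""
        if level_db <= threshold_db:
            return 0.0
        return -(level_db - threshold_db) * (1 - 1 / ratio)

    low_db, high_db = 90.0, 78.0   # low notes 12 dB stronger than high harmonics

    # Single channel: one gain, derived from the overall level, for everything.
    g = gain_db(max(low_db, high_db))
    print(f"Single channel: balance stays {(low_db + g) - (high_db + g):.0f} dB")

    # Two channels fitted for speech, with heavier compression in the lows:
    # each band is squeezed against its own level, and the balance collapses.
    low_out = low_db + gain_db(low_db, ratio=3.0)
    high_out = high_db + gain_db(high_db, ratio=1.5)
    print(f"Two channels:   balance becomes {low_out - high_out:.1f} dB")

With these made-up numbers, the 12 dB low-to-high difference survives the single-channel path but comes out of the two-channel path at about −5 dB: the bass ends up quieter than the treble, the reverse of what the performer intended.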

Finally, it has been recommended that, if possible, both the feedback cancellation and noise reduction features be disabled when the aid is switched to a music program. Both of these features have proven very helpful for listening to speech: the first permits a higher degree of amplification before acoustic squealing occurs, and the second makes it easier to hear in noise by reducing the gain in some frequency bands. However, these features may not be desirable when listening to music, since they can modify the original acoustical input unpredictably. A feedback circuit could attempt to cancel desirable musical components (such as narrow-band harmonics) simply because they “sound like” feedback to the detector; moreover, with short-duration sounds, the hearing aid-created cancellation signal could itself become audible. The same logic applies to noise reduction systems: the program may classify some component of the input as “noise” and modify it in an unpredictable and, most likely, undesirable fashion.
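
Why a feedback canceller would attack music is easy to demonstrate. Real cancellers model the acoustic leak from the receiver back to the microphone, but at their core they exploit the fact that a feedback whistle is a steady, predictable tone – and a held musical note is exactly that. The toy Python sketch below (an adaptive-prediction caricature, not an actual hearing aid algorithm) subtracts a running prediction of the input from the input itself, and duly erases a sustained 1,000 Hz “note”:

    import numpy as np

    fs = 16_000
    t = np.arange(fs) / fs
    tone = np.sin(2 * np.pi * 1000 * t)   # a held note: steady and narrow-band

    # LMS adaptive predictor: estimate each sample from the recent past,
    # then subtract the estimate. A steady tone is perfectly predictable,
    # so the "canceller" converges on removing the music itself.
    taps, mu = 32, 0.005
    w = np.zeros(taps)
    out = np.zeros_like(tone)
    for n in range(taps, len(tone)):
        past = tone[n - taps:n][::-1]
        out[n] = tone[n] - w @ past
        w += mu * out[n] * past

    print(f"input power:  {np.mean(tone ** 2):.3f}")
    print(f"output power: {np.mean(out[-fs // 10:] ** 2):.2e}")   # last 100 ms

Once the filter adapts, the output power over the final 100 milliseconds is a tiny fraction of the input power – the note has been “cancelled” even though nothing was actually feeding back.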

Would someone be able to enjoy music more if a hearing aid included such a music program? Judging from the literature, people with mild and moderate hearing losses should be able to hear the difference. I don’t know how this would apply to people with severe and profound hearing losses, but as best as I can judge, they might also find such a program somewhat helpful. At least it is worth a try. Unfortunately, we do not have much research (other than the articles by Dr. Mead Killion) investigating the fidelity with which hearing aids reproduce music. Now that the literature has “discovered” the fact that hearing aid users enjoy listening to music, perhaps we’ll see more projects on this topic.

Additional Resources

Much more information about listening to music through hearing aids is available on the Internet. For performers, the Association of Adult Musicians with Hearing Loss (AAMHL.org) is an excellent resource. Hearing Education and Awareness for Rockers (hearnet.com) is aimed primarily at the younger set and includes suggestions on how to preserve one’s hearing while playing and listening to music. Finally, the Musicians’ Clinics of Canada (musiciansclinics.com) is chock-full of relevant information on this topic, much of which I depended upon while composing this article. I am particularly indebted to Dr. Marshall Chasin, its Director of Auditory Research, for his patient and informative responses to all of my queries.