Dr. Ross on Hearing Loss
Frequency Compression Hearing Aids
by Mark Ross, Ph.D.
All conventional hearing aids, analog or digital, amplify sounds for the purpose of making them audible for a person with a hearing loss. Of course, beyond this rather simplistic statement lies a great deal of sophisticated technology. With the newer generation of hearing aids, there is an almost infinite variety of ways that speech sounds can be processed. Hearing aids can be adjusted to separate the incoming signal into up to 20 bands, they can be set to provide different degrees of amplification for soft and loud sounds, and they can be designed to operate differentially in the presence of background noises. All very impressive. Nevertheless, in spite of these numerous options, the underlying goal is still to make speech sounds audible, within the constraints of a person's existing hearing thresholds and loudness tolerance levels. However, no matter how sophisticated the instrument, the usual type of hearing aid cannot accomplish this purpose for someone with little or no residual hearing. There are many people with hearing loss whose hearing thresholds at the higher frequencies preclude the perception of any useful amplified sound at these points. For them, audibility at these frequencies is not possible. In order for them to receive usable information about incoming high frequency speech sounds, a different approach is needed.
One way this can be accomplished is by employing a different concept in hearing amplification, one that processes and delivers high frequency speech sounds to the lower frequencies, where people are likely to have more residual hearing. There have been many attempts to do this, going back more than 30 years. One such attempt converted high frequency sounds to surrogate low frequency tones (thus an /s/ may have sounded like a dull whistle), while another simply transposed just the higher frequencies to lower ones, overlapping and obscuring them to some extent. These attempts were not found to be clinically useful.
About ten years ago, the AVR Sonovation Company introduced their own solution, in the form of a body-worn transposer hearing aid. While there were a number of clinical reports testifying to its potential value, this aid met with little general acceptance, possibly because of cosmetic reasons or because of various kinds of malfunctions. In its operation, however, this hearing aid was quite sophisticated. It differed from previous attempts in that it not only electronically shifted the higher frequencies to lower ones, but also compressed (squeezed) them in the frequency domain (frequency compression) while leaving, if desired, the lower frequencies untouched. Moreover, it operated in a way that kept the ratio between adjacent sounds intact. Thus, if an incoming speech sound, like the broad spectrum sound /s/, had some energy peaks at 3000, 4000, 6000, and 8000 Hz, and the frequency compressor used a factor of two, then all the peaks would be halved in frequency before being delivered to a listener. Thus, the energy peaks of the /s/ would now be at 1500, 2000, 3000 and 4000 Hz, possibly making at least some of the peaks (as well as other lower parts of the sound) audible for a person with a high frequency hearing loss.
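The arithmetic of this proportionate shifting can be sketched in a few lines of code. This is only an illustration of the principle described above, not the manufacturer's actual signal processing; the list of /s/ energy peaks is taken from the example in the text.

```python
def compress_peaks(peaks_hz, factor):
    """Divide every peak frequency by the compression factor.

    Because every peak is divided by the same number, the ratios
    between adjacent peaks are preserved (proportionate compression).
    """
    return [p / factor for p in peaks_hz]

# Energy peaks of an /s/ sound, as in the example above.
s_peaks = [3000, 4000, 6000, 8000]

# A compression factor of two halves every peak.
shifted = compress_peaks(s_peaks, 2.0)
print(shifted)  # [1500.0, 2000.0, 3000.0, 4000.0]

# The ratios between adjacent peaks are unchanged.
ratios_before = [b / a for a, b in zip(s_peaks, s_peaks[1:])]
ratios_after = [b / a for a, b in zip(shifted, shifted[1:])]
assert ratios_before == ratios_after
```

The assertion at the end makes the key property explicit: compression changes where the peaks sit, but not how they relate to one another.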
This kind of "proportionate" shifting is very important for speech perception, since we know that perception is less affected by acoustic variations in the speech signal when the ratio relationship between energy peaks is maintained. For example, consider the energy peaks (the so-called first and second formants) in the vowel /ee/. For men these would be at about 200 Hz and 2000 Hz, for women at 300 Hz and 3000 Hz, and for children at 400 Hz and 4000 Hz. Note that although the absolute acoustic locations of these energy peaks are different in all three instances, the relationship (10 to 1) is the same. Thus, the /ee/ is correctly perceived by listeners whether the talker is a man, woman, or child, even though the actual acoustic position of the vowel energy is different in all three instances. This is the underlying theory behind the proportionate frequency compression provided by the initial AVR Transposer hearing aid, one that continues in later versions.
But to digress a bit more, I used the /s/ sound in this example because it may be the single most important sound in the English language. At the same time, it is the most difficult one for many people with hearing losses to perceive. Its energy usually begins around 3000 Hz and may extend up to 8000 or 10,000 Hz, precisely the areas where the hearing thresholds are generally the poorest for people with hearing loss. Moreover, the /s/ sound is not only one of the most common sounds in our language, but it also conveys more grammatical information than any other sound. (Some examples: pluralization, book/ books; possession, Pat's book; contractions, it/it's, that/that's, let us/let's; second person singular, he walks home, etc.). Clearly, it would be highly desirable for people with high frequency hearing loss to have access to the information conveyed by this sound. This would be particularly desirable for hearing-impaired children who are in the process of developing their auditory-verbal language system.
Recently, the AVR Company has marketed newer versions (the "impaCt" line) of their transposer hearing aids, now termed "frequency compression" hearing aids that perform "dynamic speech recoding". Four versions are currently available, three behind-the-ear (BTE) and one in-the-ear (ITE). All operate as conventional programmable hearing aids, with the added feature of frequency compression. Two of the three BTE models also accept an FM boot and can receive signals from an FM microphone-transmitter (a rather neat added feature). They also contain a telecoil for telephone usage or for listening through a neckloop attached to an assistive listening device. In the most recent addition (the LogiCom-20 DSR BTE-FM), no external FM boot or antenna is necessary for FM access. It's all built in. As yet, no telecoil is included in this newest model (the company reports that it "just ran out of room"). Only the ITE version does not provide for FM reception; given sufficient space, however, this version can incorporate a telecoil.
The system works by analyzing incoming speech signals and determining whether the sounds are voiced or voiceless. If voiceless, which signifies a high-frequency consonant, the incoming sound is frequency compressed to the pre-set degree (anywhere from 1.5 to 5.0 in steps of 0.25). This action takes place extremely rapidly, in the order of two to four milliseconds. When the next sound comes along, usually a vowel in the normal syllabic sequence, the aid reverts to its normal amplification pattern. The voiced sounds are simply passed through and processed as determined during the initial programming. When the next voiceless sound is detected, the frequency compression circuit is again activated.
In effect, what is happening in this alternating process is that all the high frequency consonants are squeezed and shifted lower in frequency, leaving the vowels and lower frequency consonants untouched (it is possible, with a different program, to also compress the vowels). The fitting challenge is to move the high frequency consonants far enough down so that at least part of their energy will be audible, without so distorting the signal that speech perception is worsened rather than improved. This is a central focus when fitting one of these aids: determining the optimal amount of frequency compression for an individual. An additional circuit, termed the Dynamic Consonant Boost, can be set to provide up to a 16 dB increase in amplification for just those high frequency sounds which receive frequency compression. By increasing the intensity of the voiceless sounds, the likelihood is increased that at least some of the consonant energy will be audible. But keep in mind that these high frequency voiceless consonants are actually heard in the lower frequencies, where the person has more residual hearing.
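To make the 16 dB figure concrete, a decibel boost can be converted to a linear amplitude factor with the standard formula 10^(dB/20). The function below is an illustrative sketch of what a gain stage like the Dynamic Consonant Boost does to signal amplitude; it is not the manufacturer's implementation.

```python
def boost_amplitude(amplitude, boost_db):
    """Apply a gain specified in decibels to a linear amplitude value."""
    return amplitude * 10 ** (boost_db / 20.0)

# A 16 dB boost multiplies the amplitude by about 6.3.
print(round(boost_amplitude(1.0, 16), 2))  # 6.31
```

In other words, the maximum boost makes the compressed consonant energy roughly six times stronger in amplitude, improving the odds that it crosses the listener's threshold in the lower frequencies.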
Frequency compressed speech does not sound quite "normal"; it produces a different listening sensation than people are accustomed to. The speech signals can sound a bit "mushy" and unnatural. Successful users soon learn how to adapt and to incorporate the new auditory sensations into their perceptual repertoire. If we've learned anything from the great success that cochlear implants have had, it is that the human brain is capable of great flexibility. Provide the brain with consistent and unique auditory sensations produced by the frequency compression process, and the brain is ready to try to make sense of them. As with cochlear implants, the brain may be the greatest ally that the new user of a frequency compressor hearing aid has.
While there are a great number of anecdotal reports attesting to the effectiveness of frequency transposition, there is very little research evidence. In one carefully conducted study, published several years ago using the earlier model, the results showed that two of the four subjects clearly benefited from the device. Importantly, performance did not worsen for the two subjects who did not clearly benefit from the device. This is an observation that others have reported when the device was tried on new users. Not everybody benefits, but rarely does anybody get worse.
Other than this project, and some impressively documented case reports, the evidence attesting to the potential value of a frequency compression hearing aid is basically anecdotal. I know a number of respected audiologists who are using this device on their clients and who report definite improvement in sound awareness (with children) and speech perception (with adults). They also report an improvement in articulation of the high frequency sounds. I don't doubt these observations at all; they are detailed and precise and buttressed by clinical notes.
My discomfort is that the profession of audiology, one that prides itself on being a scientifically based discipline, is not providing us with objective evidence regarding the potential value, and limitations, of this new type of hearing aid. The fact that it uses a completely different method of processing speech than all other hearing aids on the market makes the necessity for such research all the more imperative. A good model to follow in this regard would be the way cochlear implants have been researched. There have been many published research studies in which implanted children and adults have been evaluated carefully over a long period of time. In my judgment, this device requires the same approach before it can be unambiguously adopted. Still, I would not hesitate to recommend a carefully supervised trial program with the aids for someone with little or no high frequency residual hearing.