Listening to Music Through a Cochlear Implant: Part 1

My first exposure to music through my cochlear implant (CI) occurred when I left the NYU Center, right after the implant was activated. It was a cold day in January, and I was lucky to find a cab right outside the Center to take me across town. The cabbie might have been the only one in New York whose radio was tuned to a classical music station. A familiar piano piece was playing; it sounded great, and I was thrilled. This, I felt, was another good omen for successful implant use (in addition to being able to somewhat understand the implant audiologist’s speech at that initial stage). But since at the time I was mainly focused on understanding the cabbie’s speech, I stored the music experience in the back of my mind. This is not to say that I considered the ability to listen to and enjoy music unimportant; it is, in fact, the second most frequently expressed desire among CI recipients. Much of our cultural and social life is bound up in exposure to music.

In the ensuing months I did occasionally listen to music, with rather mixed results, I’m afraid. Although I was able to recognize a number of melodies, after a while I essentially stopped listening. I think what happened is that my memory of how music had sounded pre-implant was just too vivid; I would play some piece that I recognized and had liked in the past, hear some flat notes or atonal passages, and simply quit listening. I had liked music too much, for too long, to have the patience to listen to it being mutilated, or so it sounded to me. So, while I had some “successes” with music recognition (in spite of a few discordant notes, I could still pick out a number of old favorites), I considered speech perception to be the primary challenge, and that is what I focused on.

Then it occurred to me that the music I listened to pre-implant, the sounds I had so much enjoyed over the years, was itself distorted or modified in some fashion. I’ve worn hearing aids for fifty-six years, and except for the last year or so, I’ve spent my life listening to music (and everything else!) through them. But, clearly, the acoustical elements that I perceived and those that a normal-hearing person would perceive could not be the same. The music I was hearing was being delivered to an impaired auditory system by two imperfect hearing aids (and all of them are imperfect to some degree). For example, hearing aids in the early years could not amplify high frequencies very well (3 or 4 kHz was the limit), and up to 10% distortion was considered acceptable (enough to give an audiophile apoplexy). Still, this did not prevent me from obtaining a great deal of pleasure from listening to music. The same would be true, to a greater or lesser degree, for every hearing aid user.

What must have been happening is that, over the years, the musical sounds I heard via my hearing aids became my norm. It was what I was used to; it had evolved into the standard against which I was now comparing the music I heard through the implant. And, at that point, the CI fell short. It therefore seemed apparent that a similar developmental process would have to take place with the implant if I were to fully enjoy music again. I needed to find out whether what I heard through the implant could also evolve into some sort of standard, one that would provide me with sufficient listening pleasure to make the effort worthwhile. To make this determination, I needed to engage in a personal “musical auditory training” program, one that required a significant time commitment over several months. I’ll report on my experiences and impressions in Part 2 of this article in the next issue.

Given that my interest in this topic is both personal and professional, my first steps were to examine the professional literature and the experiences of other implantees. I am far from the only implant user going through this experience, and CI manufacturers are well aware of the challenge they face in this respect. By design, CIs were engineered to improve speech perception, not music appreciation. There are significant acoustical differences between speech and music, and processing strategies that are appropriate for one modality may not necessarily work for the other. In fact, while implant users can obtain excellent speech perception scores, their recognition and enjoyment of music still leave much to be desired. In spite of large individual differences, implant users generally report difficulty recognizing and enjoying music. For some, particularly those for whom music had played an important role in their lives, this difficulty is distressing.

To better understand exactly where listening deficiencies occur, researchers have examined the various components of a musical signal: the beat, rhythm, pitch, timbre, and melody. “Beat” is a steady sound pulse, while “rhythm” is the grouping of beats into a succession of sound durations; it is the aspect of the signal that impels people to tap their toes and clap their hands. Research has shown that implant users can perceive the rhythmic patterns of music as well as normally hearing people. So it seems that, at a minimum, people using a cochlear implant can perceive, respond to, and enjoy the rhythmic qualities of a musical piece.

With regard to “pitch” and “pitch sequence” judgments, CI users do more poorly than normally hearing people. They also do more poorly in “timbre identification”; an example would be distinguishing between two musical instruments (such as a piccolo and a violin) playing the same note. However, timbre recognition and the judged pleasantness of the sounds can be improved somewhat with a systematic training program. In this type of program, listeners first see and listen to two different instruments, make the association between the visual image and the sound each emits, and then go on to identify, via hearing alone, the instrument that is playing. A secondary effect is that the “appraisal” (or enjoyment) of the sound may also improve as a consequence of the training.

“Melody” is where the various aspects of musical sensation come together. Melody itself is defined as any series of musical tones (pitches) that creates a sense of unity or an impression of organization. The perception of a melody is very subjective. Basically, it requires the ability to distinguish a sequence of pitches going up and down the musical scale. If this cannot be done, or is done poorly, then the listener cannot easily recognize or appreciate the melodic aspects of music. For me, however, this definition of melody is too analytical to be satisfying; it does not convey, for example, the feeling and pleasure that one can get from listening to music. I think someone would be hard-pressed to describe what a particular melody means to him or her, and why he or she is moved by or loves a particular piece of music. Part of it, of course, is a person’s background, and part of it is that elusive concept of “taste”; but, whatever it is, we strive to experience melodic sensations, and we make judgments about what we do and don’t like.

In reality, implantees depend upon multiple cues to recognize familiar melodies, employing not just pitch sequence judgments but the perception of rhythm, timbre, and lyrics as well. Even using all these available cues, however, implant users generally do not fare very well in recognizing previously familiar melodies. But, as we would expect from what happens with speech perception, there is large variation among individuals with respect to melody recognition. Furthermore, it seems that the people who do relatively well in speech perception are the same ones who do well in music perception, so improvements in one modality may be reflected in improvements in the other. Also, at least up to this point, there is no significant difference in music appreciation among the available devices or processing strategies; published reports indicate that the results with all processing strategies and cochlear implant models have been similar.

The challenge faced by all CI manufacturers is to enhance a person’s enjoyment of music via an implant without jeopardizing speech perception. Judging from the proceedings of a recent conference, it seems that the manufacturers are accepting this challenge. In October 2006, the inaugural workshop on music perception with cochlear implants took place at the University of Washington. Presenters came from Europe, Australia, and the United States, and the devices of all three manufacturers were represented. While there have been numerous publications in the professional literature regarding how the implant processes music, this was the first international conference specifically devoted to the topic.

As I examined the abstracts of the papers delivered at the conference, it seemed to me that, at this point, the major focus was on defining the perceptual capabilities and limitations of implant users. Other topics included bimodal stimulation (a hearing aid in one ear and a CI in the other), children’s experiences with CIs, the perception of intermediate pitches (between electrodes), the relative contributions of stimulation rate (temporal) and location (spatial) to pitch perception, and the development of a standardized clinical test of the underlying abilities that contribute to music perception. This focus on perceptual capabilities and limitations is a perfectly understandable and necessary approach; before any type of processing or structural change can be introduced into an implant, it is first necessary to understand the current situation.

Only one study actually examined the difference that a programming modification could produce. It examined how changes in the instantaneous input dynamic range (the range between threshold and comfort levels) affected the judged quality of five types of music. It turns out that, for everyday use, the same input dynamic range (40 dB) is generally best for both music and speech perception. More such studies, ones that modify some aspect of the programming, would be a welcome contribution. Inevitably, there will be a long gestation period between the findings of research studies and the consequent modifications in the structure or programming of a CI. Some miracle implant (or hearing aid) may be around the corner, but in the meantime we have to work with what we have.

For the last few months, I’ve been listening to various kinds of music for about forty minutes a day, almost every day. I’ll be continuing this schedule for the next few months, noting my experiences, observations, and any perceptual changes that occur. These will make up the content of Part 2 of this article, which will appear in the next issue. At this point, I can say that my music appreciation is better than I had originally feared it would be (i.e., very bad, based on what I had read), but not as good as my highest hopes would have it. That, I think, is an observation many CI recipients will share, and one that applies to speech perception as well.