Nov 2017 New York Times

For those who are deaf, music is not just about sound. At age 20, Rachel Kolb received cochlear implants that gave her partial hearing. She is a Rhodes scholar and a doctoral student at Emory University focusing on American literature, deaf and disability studies, and bioethics.

When I got a cochlear implant seven years ago, after being profoundly deaf for my entire life, hearing friends and acquaintances started asking me the same few questions: Had I heard music yet? Did I like it? What did it sound like? I was 20 years old then. Aside from the amplified noises I’d heard through my hearing aids, which sounded more like murmurs distorted by thick insulation swaddling, I had never heard music, not really. But that did not mean I wasn’t in some way musical. I played piano and guitar as a child, and I remember enjoying the feel of my hands picking out the piano keys in rhythm, as well as the rich vibrations of the guitar soundboard against my chest. I would tap out a beat to many other daily tasks, too. For several years, I became privately obsessed with marching in rhythm when walking around the block, counting out my steps like a metronome: One, two. One, two. Watching visual rhythms, from the flow of water to clapping hands and the rich expression of sign language, fascinated me. But in the hearing world, those experiences often didn’t count as music. And I gathered that my inability to hear music, at least in the view of the people I knew, seemed unthinkable.

“So you can’t hear the beautiful music right now?” I remember someone asking me when I was an undergraduate. We sat in a restaurant where, presumably, some ambient melody played in the background. When I said no, she replied, “Wow, that makes me feel sad.” Sad: this is how some hearing people reacted to my imagined lifetime without music. Did it mean that some part of my existence was unalterably sad, too? I resisted this response. My life was already beautiful and rich without music, just different. And even if listening to music did not yet feel like a core part of my identity, I could be curious.

Once I got the cochlear implant, a transmitter of rough-hewn sound that set my skull rattling and my nerves screeching, I found that music jolted my core in ways I could not explain. Deep percussion rhythms burrowed into my brain and pulsed outward. A violin’s melody pierced and vibrated in my chest, where it lingered long after the song had ended. Other tunes sounded overburdened, harsh and cacophonic, and I longed to shut them off and return to silence — as I still do. The new contrast I’d found, between the thrill of sound and the relief of silence, showed me something that I had perhaps known for my entire life, but had never been able to articulate. Music was not just about sound. It never had been. Music, to me, also was, and is, about the body, about what happens when what we call sound escapes its vacuum and creates ripples in the world.

The summer after I got my cochlear implant, I started to explore more of what music might mean to me. I picked out some notes on the piano again. I went to my first symphony concert. That overwhelming time, and all the new things I was hearing, gave me new license to go make music of my own. At the symphony, the cochlear implant whisked me into a flush of sound, but I was still enthralled by the visual — watching the physical artistry of the musicians with their instruments. Not long after, I discovered the art of music videos performed in American Sign Language. The work of talented deaf artists like Jason Listman and Rosa Lee Timm made some songs, which I’d previously listened to with mild interest, suddenly roar to life. I watched those songs in A.S.L., and that was when I truly felt them, in a way an auditory or written rendering could never provide.

Soon after, I tried dancing. It wasn’t that I hadn’t danced before — just that I’d felt embarrassed. There had been a time, once, when I’d found myself on the dance floor surrounded by hearing friends who belted out song lyrics I couldn’t understand. I’d fielded the usual questions from them about how much of it I could really hear, which made me ask myself why I was there. Wasn’t deaf dancing an oxymoron, after all? Now, as the deaf model Nyle DiMarco has clearly shown on “Dancing With the Stars,” the answer is no — but I freely confess that, in the days before his performances, I had to discover this for myself. Again, my cochlear implant gave me license to try. When a friend persuaded me to go dancing for the first time in years, I discovered that, even though I undeniably enjoyed listening to the music, my favourite songs were the ones that thrummed with a deep rhythm, that sent the bass vibrating through my body. I danced not only by what I heard, but also by what I felt. The physical motion of dancing, once I released myself to it, swirled through my core. Then, when my friend and I started signing along to the lyrics, the realisation hit me: this celebration of feeling, motion, sensation and language was what mattered when I experienced music.

Not only does music ingrain itself in our bodies in ways beyond simply the auditory, it also becomes more remarkable once it does. “Can you hear the music?” Even though I now can, I think this question misses the point. Music is also wonderfully and inescapably visual, physical, tactile — and, in these ways, it weaves its rhythms through our lives. I now think a far richer question might be: “What does music feel like to you?”

Nov 2017 The Tomahawk

Doctor Joe Ray has struggled with hearing issues for many years. In fact, he failed the hearing test three times when he joined the Air Force. He wasn’t sure what caused the hearing loss, but he did recall running a jackhammer one summer in his youth. “I didn’t realise I had a hearing problem until I started square dancing at 58,” Ray stated. “Now at 66, I noticed I had trouble understanding the calls. I would ask them to turn down the music and to repeat calls.” According to Ray, his hearing issues were one of the reasons he retired from dental practice at 61. “I didn’t want my hearing loss to impact my profession,” he stated.

Ray’s hearing tests showed he only had hearing of eight percent in his left ear and seven percent in his right ear. “Hearing becomes a problem after hearing loss gets to 60 percent,” Ray stated.

After consulting with his doctors, Ray decided to go the route of a cochlear implant. “You lose your natural hearing,” Ray stated. “It replaces your natural hearing. It takes about three months before you can notice a difference. I can hear better already. Basically, I had to learn how to listen again.”

Typical side effects from the procedure include some dizziness for up to two days, but patients generally feel fairly normal and recover quickly. He is glad he had the surgery. “I am hearing sound I haven’t heard before,” he said. “I had to quit doing things because I couldn’t hear.” Within three months, Ray expects to be hearing pretty well.
According to Ray, if you are considering surgery, read about cochlear implants and any side effects. “It’s a process to learn to hear again,” he stated. “The closest way to explain how I hear right now is to compare people’s voices to those of a tracheotomy patient.” The brain has to adjust until his hearing becomes the new normal. “It’s a whole lot better than where I was before the operation,” he said. “Do your due diligence,” Ray advised. “It can be a life-changing event.”


Nov 2017

The Hearing Journal, by Healy, Eric W

Anyone who has worked with individuals with hearing loss has heard something like this: “They work just fine when it's my wife and me in our kitchen, but if we go anywhere noisy, they just don't work well at all.” “They” in this case are typically hearing aids, although the same applies to cochlear implants.

This statement highlights the most common complaint of hearing-impaired (HI) individuals: poor speech recognition when background noise is present. Accordingly, effective noise reduction can be considered one of our most important goals. Despite its importance, noise reduction currently implemented into modern devices is notoriously ineffective (after all, if it were effective, the complaints would stop). In fact, the literature shows that noise reduction often produces a subjective preference, but all too often, no actual increase in intelligibility.

A goal that has been long sought involves a single-microphone technique to improve speech intelligibility in noise. Single-microphone approaches have distinct advantages over microphone-array techniques like beamforming. But historically, they just haven't worked. Single-microphone techniques have been able to remove noise from speech, but in doing so, they produced distortions. So one starts with speech that is not intelligible because it's noisy, and ends up with speech that is not intelligible because it's distorted.

Recent years have brought about advances, including a solution pursued by our group at The Ohio State University that involves a machine-learning algorithm to improve intelligibility of noisy speech for HI listeners. This work has two main components: The first involves time-frequency (T-F) masking, and the second involves machine learning. In T-F masking, the speech-plus-noise signal is divided by time and frequency into small units—think checkerboard squares in the spectrogram view. In its simplest form, the units dominated by speech are retained, and the units dominated by noise are simply discarded. By dominated, I mean having a favourable/unfavourable signal-to-noise ratio (SNR). When only the speech-dominated T-F units are presented to listeners, the speech can typically be understood perfectly.
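The simplest version of this T-F masking is often called the ideal binary mask. The sketch below is my own illustration in NumPy, not the authors' code: the 0 dB local criterion (`lc_db`) and the tiny 4×5 "spectrograms" are assumptions chosen purely for demonstration.

```python
import numpy as np

def ideal_binary_mask(speech_tf, noise_tf, lc_db=0.0):
    """Label each time-frequency unit 1 (speech-dominated) or 0
    (noise-dominated) by comparing its local SNR to a criterion in dB."""
    snr_db = 10.0 * np.log10(
        (np.abs(speech_tf) ** 2) / (np.abs(noise_tf) ** 2 + 1e-12)
    )
    return (snr_db > lc_db).astype(float)

def apply_mask(mixture_tf, mask):
    """Retain the speech-dominated units; discard the noise-dominated ones."""
    return mixture_tf * mask

# Toy example: 4 frequency channels x 5 time frames of magnitudes
# ("checkerboard squares" in the spectrogram view).
rng = np.random.default_rng(0)
speech = rng.uniform(0.5, 1.0, (4, 5))
noise = rng.uniform(0.0, 1.5, (4, 5))

mask = ideal_binary_mask(speech, noise)
denoised = apply_mask(speech + noise, mask)
```

In practice the clean speech and noise are of course unavailable at listening time; that is exactly the gap the machine-learning component described next is meant to fill, by estimating this mask from the mixture alone.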

The second component in our approach involves machine learning. This task of classifying T-F units into two piles is well-suited for machine learning. In the classic example, a machine is shown images of apples and oranges. During a training phase in which it learns, the machine is also told the answer—what each one is (apple or orange). This makes it “supervised learning.” After being shown many examples, the machine enters an operation phase. It's shown new images of apples and oranges, ones it hasn't seen before, and is not told the answers. But it can effectively classify them. In our use of machine learning, the machine (a Deep Neural Network or DNN) is provided with a sentence mixed with noise. During the training phase, it is also given the answer, which is whether each T-F unit is dominated by speech or noise. Once trained this way, with many examples of speech in noise, the DNN can classify on its own. It learns what speech-dominant units look like and what noise-dominant units look like. Once the DNN labels the speech-dominant units for us, we can simply present those, and only those, to listeners.
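The apples-and-oranges workflow, applied to T-F units, can be mimicked with any supervised classifier. The toy sketch below is my own illustration, not the authors' DNN: synthetic two-dimensional "features" and a logistic-regression classifier stand in for the real acoustic features and deep network. It shows the training phase, where the answers (labels) are supplied, and the operation phase, where new, unseen units are classified without answers.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_units(n):
    """Synthetic stand-in features for T-F units: speech-dominated units
    cluster around one point, noise-dominated units around another."""
    speech = rng.normal(loc=[2.0, 2.0], scale=1.0, size=(n, 2))
    noise = rng.normal(loc=[-2.0, -2.0], scale=1.0, size=(n, 2))
    X = np.vstack([speech, noise])
    y = np.concatenate([np.ones(n), np.zeros(n)])  # 1 = speech-dominated
    return X, y

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Training phase: the labels y are provided, making this supervised learning.
X_train, y_train = make_units(200)
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = sigmoid(X_train @ w + b)          # predicted P(speech-dominated)
    grad_w = X_train.T @ (p - y_train) / len(y_train)
    grad_b = np.mean(p - y_train)
    w -= 0.5 * grad_w                     # gradient-descent update
    b -= 0.5 * grad_b

# Operation phase: new units the classifier has not seen, no answers given.
X_test, y_test = make_units(100)
pred = (sigmoid(X_test @ w + b) > 0.5).astype(float)
accuracy = np.mean(pred == y_test)
```

The units labeled speech-dominant (`pred == 1`) are the ones that would be retained and presented to the listener; everything else is discarded, as in the masking step above.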

Does it work? Keep in mind that increases in intelligibility have historically been almost entirely elusive. Below are some of our data from typical hearing aid users. Vast improvements are clearly obtainable using our approach. In fact, many HI listeners improved intelligibility from very low scores to scores of roughly 80 percent. Furthermore, in some conditions, the HI listeners with access to our algorithm actually outperformed young normal-hearing listeners (without the algorithm, of course) on speech intelligibility in noise. This suggests that an older (~70’s) HI listener using our algorithm could potentially understand speech as well or better than their young (~20’s) normal-hearing conversation partner in a noisy setting.


Can it work for cochlear implants too? Most of our effort has been focused on the largest population in need—individuals with sensorineural hearing loss who wear hearing aids. But as previously noted, the speech-in-noise problem is similar for people with cochlear implants (CIs). If anything, they are even more hindered by noise than individuals who use hearing aids. Reasons our algorithm may also be well-suited for CI users include the following:

1. Since the speech-in-noise problem is even worse for these folks, they typically need a highly favourable SNR to understand speech. So even if an algorithm could only operate effectively at relatively high SNRs, it could produce improvements for CI users.

2. CI processing lends itself well to T-F unit processing because it already performs a somewhat similar decomposition of the signal.

3. CI processors are typically more powerful than hearing aid processors, potentially opening additional processing options.

But can it work in the real world? Two main considerations for implementing a machine-learning algorithm involve generalising to conditions not encountered during training and operating in real time. Our series of papers has focused on the first aspect, and strides have been made. We have demonstrated generalisation to untrained sentences, SNRs, segments of noise, and entirely novel noise types. Regarding real-time operation, the algorithm is essentially “feed-forward.” So although we have focused on effectiveness and haven't implemented real-time operation yet, there is no fundamental barrier.

Will it ever go behind the ear? The short answer is no, but that's the wrong question to ask. First, remember that most of the heavy lifting, the “computational load,” is encountered during training and prior to a person actually using it. We can use our most powerful computers to train the DNN, and we can train it for as long as we want—for a day, a few days, or a month—it makes little difference. What does matter is how efficiently it runs once it's trained—after all, that's what it will do once a person puts it on and goes out into the world. Fortunately, the computational load associated with operation is far smaller than that associated with training.

But even with this advantage—that the computational load is largely shifted to an earlier training stage—it's important to understand just how limited even the most powerful ear-worn devices are in terms of battery and processing power. Getting a trained DNN to operate on those platforms could be a challenge. We suggest thinking about the problem differently. Current smartphones possess massive battery and processing power that rival personal computers. They also possess the ability to transmit bi-directionally and wirelessly. We suggest a solution involving a smartphone-like device that can be placed in a pocket or handbag and can transmit wirelessly to earpieces. This solution provides convenient packaging, massive power, and earpieces potentially even smaller than those of current ear-worn devices, because they will simply need to include powered microphones and output transducers (speakers). Perhaps a machine can learn to solve the problem that has beset us for so long.

The Stroud Courier

Former Miss Pennsylvania 2016 Elena LaQuatra visited East Stroudsburg University to discuss how she managed to prosper in life despite losing her hearing and balance at the age of almost four due to bacterial meningitis. Her event was entitled “Overcoming Deafness to Compete for the Crown” and was sponsored by the ESU Sign Language Club. “Who here has faced adversity?” she began the event, and much of the large crowd raised their hands. “Truly, I like to think that disability and adversity are interchangeable.”

Ms. LaQuatra’s talk included home videos of herself before, during, and after her hearing loss, with explanations all the while of what was going on. February 5th, 1996 – the day she lost her hearing – was what LaQuatra called “the day that everything changed.” She described her struggles first to be diagnosed: her doctor misdiagnosed her several times with an ear infection before her pediatrician correctly identified meningitis. She then had to relearn how to talk, walk, and dance, and her initial cochlear implant surgery in her right ear was unsuccessful before her surgeon and her audiologist Pam – both of whom LaQuatra stated repeatedly “[saved] her life” – implanted one in her left ear.

LaQuatra also explained how she and her parents never let her “use her disability as a crutch or an excuse.” LaQuatra, born and raised in Pittsburgh, attended DePaul School of the Hearing, one of only 41 schools in the country for the hearing impaired, and the only one in the tristate area.

After completing classes there, she attended Hoover Elementary School in Mt. Lebanon and later Mt. Lebanon High School, starring in musicals, dance recitals, and plays the entire time. She also received her Bachelor of Arts degree in Broadcast Communications at Point Park University after winning a $20,000 scholarship. LaQuatra participated in pageants her entire childhood and teen years, winning Miss Pennsylvania’s Outstanding Teen in 2007, Miss Teen Pennsylvania in 2010, and finally, Miss Pennsylvania in 2016.

She got her first job as a Digital Video Reporter for Pittsburgh’s online entertainment channel before landing her current job, a General Assignment Reporter at JET 24 and FOX 66 in Erie.
LaQuatra repeated her life motto several times: “When correctly encountered, a disability becomes a stimulus that impels towards higher achievement.”
She maintained a cheerful and positive attitude throughout her talk, making jokes about herself and recalling anecdotes from her late teen and early adult years. She also allowed for a Q&A session.
I asked her, “If you could go back in time and give yourself your natural hearing back, would you?”

“No,” she answered, “I love being deaf… I’ve met so many people, had so many opportunities that I wouldn’t have had otherwise. I wouldn’t be where I am today if I weren’t [deaf]… I can even take my earpiece out at work if people annoy me!”

Elena LaQuatra remains an astonishing example of beating the odds and an inspiration to those like her.