Nov 2017 stuff.co.nz

Being close to her family is what matters most to Sarah Krammer. The sound of her daughter's voice over FaceTime and the crack of leather on willow at her son's cricket matches are what she desperately wanted to hear and couldn't, until now. The 51-year-old Lower Hutt nurse's hearing had been getting progressively worse since her 20s, but a new hearing device that links up to her smartphone will hopefully make her life just a little bit easier.

Cochlear implants have been available for more than 20 years. However, the latest development in the technology is the Nucleus 7 sound processor, which can stream sound directly from an Apple device such as an iPhone or iPad via a special app. The user's processor can connect directly with the device when taking calls or using services such as FaceTime or Skype. The Nucleus 7 can also be used to listen to music, watch videos and play games.

She is already noticing improvements in her hearing. "I had no idea my car beeped when it was in reverse." She has owned the car for five years. Southern Cochlear Implant Programme audiologist Hatten Howard said Krammer was one of the first patients in the programme to get the Nucleus 7. He said the main advantage of the new technology was the ability to adjust and monitor the implant and processor from a mobile device such as a phone. The mobile device replaced the separate remote control that users of previous versions of the Nucleus had to carry.

"It's a matter of convenience. A lot of people are put off by having to carry another gadget. It allows a listener to change settings in more challenging scenarios."
Although the implant and processor were still being fine-tuned to her needs, Krammer had been impressed by the improvement they had made. She had already tested them out by talking to her daughter, Rosina, who lives in Melbourne, on FaceTime. "It's amazing to be able to talk to her. It's been quite overwhelming." She had also been able to approach her work with much more confidence. "I was a good nurse before, but with this, I'll be invincible!"

 

Oct 2017 The Columbian

I played the saxophone in various dance bands while attending high school and college in Arizona. The best band I played with in Phoenix was Freddy Duarte’s 15-piece band that played only Mexican music, which at that time included a lot of mambos. Our piano player, when asked about his day job, always said he worked for the city. The truth was, he drove a garbage truck, and the guys in the band irreverently called him “G-Man.”

After I was married and had two young sons, they would laugh at their dad’s funny music when I played my jazz records. But when we attended a Newport Jazz Festival that included some of the best jazz musicians of that era, it was a life-changing event for Steve, our oldest son, who was inspired to pursue a musical career. He quickly learned to play the tenor saxophone I had bought when I was in college, and he put in many long hours of practice to develop his musical skills.
When Steve was in high school he, like his father, played in several dance bands. One of these bands played a lot of Mexican music. One evening, my wife and I went to hear that band play at a local club.

A very special moment occurred when we got up to dance to a Mexican polka, played by our son on the same horn I had used to play that very same polka a generation earlier. I call that a musical echo, with a 25-year delay.

Steve later majored in music at Arizona State University. My wife and I attended his senior recital, a requirement for all music majors. That evening, we were invited to hear one of Steve’s bands, playing for a traditional Mexican wedding. During one of the breaks, Steve introduced me to the bandleader, who knew I had once played with a Mexican band in Phoenix. He asked me who I had played with. When I said Freddy Duarte, he told me his father had also played with the Freddy Duarte band. We must have been in that band at different times, because his father and I had never met. Yet, two fathers, who played in the same band, had sons who — a generation later, with no planning or even awareness — also played in the same band. Life often takes us in interesting circles, which in this case I call another musical echo.

Since then, Steve’s musical career has included more than 20 years of performing and recording with Lyle Lovett’s Large Band, as well as touring with Mel Torme and Sammy Davis, Jr.
Unfortunately, as Steve’s musical career blossomed, an inherited genetic defect caused my hearing to deteriorate until I had too little residual hearing for a hearing aid to amplify. So, eight years ago, I got a cochlear implant, which bypasses normal ear functions and directly stimulates the auditory nerve. The minor implant surgery was an outpatient procedure. After two weeks to allow for healing, my new implant was activated. I’ll never forget, as I was leaving the hospital, hearing birds chirping for the first time in more than 20 years. So grateful. I try to see the glass half full, rather than half empty. So, in spite of my hearing loss, I have much to be grateful for.

I’m grateful that I had 40 years of normal hearing before my hearing loss began. I’m grateful that I chose chemistry instead of music for my career; unless your name is Beethoven, there’s not much demand for deaf musicians. I’m especially grateful that my inherited hearing loss affects only males, and is passed on only by females, which means my two sons, both professional musicians, will be spared. And finally, I’m grateful for the availability of high-tech cochlear implants. Although what I hear is distorted, and is nothing like normal hearing, it’s much better than the alternative of silence.

 

Nov 2017 New York Times

Producer Maureen Towey and visual effects supervisor Orin Green look at Rachel Kolb through Lytro's camera.

Seven years ago, when Rachel Kolb was 20, her friends pitched in to help her learn how to hear music. She was born profoundly deaf and had recently received a cochlear implant to give her partial hearing. “They were so gracious — they made a playlist and annotated it: At this time in the song, it’s this instrument coming in,” she recalled. “So I learned to recognise, oh, that’s a piano, because my friend wrote down, ‘At 35 seconds the piano starts to play.’ ”

Rachel is a former Rhodes scholar and current doctoral student whom I met through Peter Catapano, the editor of the Disability series, on the Opinion desk. I work as a senior producer in the VR department at The Times, creating both long- and short-form virtual reality videos. Part of my job is constantly looking for stories that will fit our uniquely experiential medium.

Peter introduced me to Rachel after she submitted an essay to him about her experiences of music both before and after receiving the implant. She described music as tactile and visual — not something that you just hear. We thought her story was a great match for the immersive treatment that virtual reality provides. We started to adapt Rachel’s article into a storyboard and quickly settled on a VR piece that would be a mix of animation and live-action, with narration from Rachel.

When we met Rachel in person, we were excited to see that she exhibited a natural on-camera presence — vibrantly intelligent and self-possessed. When we worked with her, she read our lips, since none of our team members knew American Sign Language (ASL). But lip-reading for more than two hours is tiring and it’s a less effective method for large groups, so Rachel often uses a sign language interpreter. During our continuing conversations about the development of the piece, she suggested that we try to find another deaf collaborator for our team. This made a lot of sense, especially because we wanted to keep the piece deaf accessible and we knew that more deaf perspectives during the creative process would strengthen the final product. After some research, we found James Merry, an animator for the production company Squint/Opera in London.

VIDEO FEATURE

Sensations of Sound:

For those who are deaf, music is not just about sound. In this immersive virtual reality experience, the scholar Rachel Kolb shares her experience of hearing music for the first time after receiving a cochlear implant.


In person, James is quiet, but his animations are fast and lively. We nudged him toward a style that was loose and hand-drawn. (VR is very high-tech, and we wanted the animations to feel warm and approachable.) Luckily, James had already worked in VR and knew how to adapt his animations to an immersive environment. He was an easygoing collaborator, bringing new ideas to the table during each stage of development. James does not have a cochlear implant; he uses hearing aids. Like Rachel, he reads lips. Communicating on the phone with a deaf person is often not ideal, so we used video chat or email to correspond and give creative notes. I went to London to work with James and his colleagues for a few days, and interacting in person helped our process — but I could say that about any collaboration.

“When I work with voice-over, I’m hardly ever able to work out what’s being said from the voice track alone,” James explained. “I can hear when something is being said, but not so much what is being said. So I use a combination of the audio track and the transcript with time stamps. The audio waveform, which I can see on the computer screen, helps me to sync things up. If I still can’t work it out, there’s usually a friendly producer nearby who can help me fill in the gaps.”

At one point, before we had a crucial sound cue built in, James threw in some sound cues himself, with the volume levels set high enough for him to hear them. When we watched and listened to that version together, the hearing collaborators dove for the volume control. We all laughed about it and James apologised, but it taught us a lesson: James's sound cues were the only ones we felt as well as heard. That mattered for our sound design. For the moment when Rachel's cochlear implant is turned on, we aimed to have a sound cue that jolted us physically.

We also wanted an acoustically extraordinary setting. Since there is a detail in Rachel’s op-ed about her first experiences hearing live music after getting her implant, we asked her where those experiences happened and which ones were the most impactful. The Santa Fe Opera, near her home in New Mexico, was at the top of her list. It generously opened its doors, and the back wall of its open-air stage, so we could film Rachel there with a twilit desert backdrop.

Rachel Kolb stands in front of Lytro's camera while Maureen Towey looks on.

While the animations were being developed, we landed a partnership with the technical wizards at Lytro to create our live-action scenes. Their light field technology can bring an extraordinary amount of depth and detail to a virtual reality image. The biggest VR camera we use regularly at The Times is about the size of a basketball. Lytro’s camera is the size of a sumo wrestler. It gathers about 475 times more visual information than we do for a standard VR piece. Lytro’s system also features Six Degrees of Freedom (6DoF), which enables the viewer in a headset to move around within the piece. If you are standing in front of a person, you can close the distance to them or step farther away. If you are watching an animation, you can crouch down or swivel to the side to see what the image looks like from a different angle. This freedom of movement increases the sense that you are right there with Rachel. It’s what we call “presence” — the magic of VR.

During this process, we were also working closely with Brendan Baker, a sound designer whom I knew from his work on the podcast “Love + Radio,” which is known for its innovative use of sound in storytelling. I thought of him as the mad scientist of the podcast world, which was exactly what we needed to create a sound design that could accurately represent the sounds that Rachel was hearing when she first turned on her cochlear implant. The production company Q Department came in on the back end to spatialise the sound design so the sound would adjust as you moved your head in the headset.

When Rachel got that playlist from her friends, she was trying to do something she hadn’t done before: hear music. In working on this piece, we were trying to do some things we hadn’t done before. We were trying to create a VR piece that was animated, that incorporated new 6DoF technologies and that told Rachel’s story with the depth and sensitivity it deserved. In the VR department, our jobs are the most exciting when we can use a new technology to shine a light on an important story.

There are several ways to experience “Sensations of Sound” and other NYT VR pieces. You can watch through the NYT VR apps in Oculus and Daydream headsets for an immersive experience. You can watch on your phone through the NYT VR app or by clicking on this link and waving your phone around to explore. Or you can watch on your computer and use your mouse to scroll around in the 360 video.

Nov 2017 University of Washington News and MedicalXpress

A study examining how the brain decodes pitch could inform further development of cochlear implants.

Picture yourself with a friend in a crowded restaurant. The din of other diners, the clattering of dishes, the muffled notes of background music, the voice of your friend, not to mention your own – all compete for your brain’s attention. For many people, the brain can automatically distinguish the noises, identifying the sources and recognising what they “say” and mean thanks to, among other features of sound, pitch.

But for someone who wears a cochlear implant, pitch is only weakly conveyed. For decades, scientists have debated how, exactly, humans perceive pitch, and how the ear and the brain transmit pitch information in a sound. There are two prevalent theories: place and time. The “time code” theory argues that pitch is a matter of auditory nerve fibre firing rate, while the “place code” theory focuses on which part of the inner ear a sound activates. Now a new study bolsters support for the place code and could inform further development of the cochlear implant. The paper’s lead author is Bonnie Lau, a speech-language pathologist and postdoctoral fellow at the University of Washington Institute for Learning & Brain Sciences.

Pitch is one of the basic aspects of sound. It roughly corresponds to the periodicity of sound waves; sounds with a higher pitch have a higher repetition rate. Most often associated with music and voices, pitch contributes an aesthetic quality to what we hear. Think of an ocean, or a symphony; without pitch, Lau explained, “the things we love to listen to, those aesthetic aspects will be changed.” Pitch also functions as a cue to distinguish sounds, especially in noisy places. “It makes listening in real-world environments like restaurants and public transportation more difficult when we don’t have access to pitch,” she added. Pinning pitch perception on a “place code” provides opportunities for improving cochlear implants that would not be possible if pitch were perceived only through a “time code.”
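As a quick worked example of that relationship (ours, not the study's): frequency and repetition period are reciprocals, so higher-pitched sounds repeat faster.

```python
# A tone's repetition period T is the reciprocal of its frequency f: T = 1/f.
for f_hz in (110.0, 440.0, 8000.0):   # low A, concert A, and the study's frequency floor
    period_ms = 1000.0 / f_hz         # duration of one cycle, in milliseconds
    print(f"{f_hz:7.1f} Hz tone repeats every {period_ms:.3f} ms")
```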

Here’s how the two codes work (a toy sketch of both follows below):

In a time code, which relies on a phenomenon called “phase locking,” auditory nerve fibres respond to the temporal pattern in a sound wave by firing at the same phase of every cycle, transmitting timing information to the brain. This works only up to a point: beyond a certain repetition rate, the auditory nerve fibres can’t follow the periodicity in a sound.

In a place code, different frequencies activate different parts of the inner ear, with pitch organised from high to low, like a musical scale. Where along the cochlea the activation occurs indicates the pitch of a sound.
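To make the contrast concrete, here is a minimal sketch in Python. It is our illustration: the phase-locking cutoff and the cochlear-map constants are textbook approximations, not figures from this study.

```python
import math

# Toy sketch of the two candidate pitch codes. The phase-locking limit and
# the Greenwood map constants are textbook approximations (assumptions for
# illustration only), not values taken from the study described above.

PHASE_LOCK_LIMIT_HZ = 4000.0  # rough upper limit of auditory-nerve phase
                              # locking; the exact human value is debated

def time_code_available(freq_hz):
    """Time code: fibres fire at the same phase of every cycle, but cannot
    follow repetition rates above the phase-locking limit."""
    return freq_hz <= PHASE_LOCK_LIMIT_HZ

def place_code_position(freq_hz):
    """Place code: map frequency to a position along the cochlea
    (0 = apex / low pitch, 1 = base / high pitch) by inverting Greenwood's
    human cochlear map, f = A * (10**(a * x) - k)."""
    A, a, k = 165.4, 2.1, 0.88
    return math.log10(freq_hz / A + k) / a

for f in (200.0, 1000.0, 8000.0, 12000.0):
    print(f"{f:8.0f} Hz: time code usable = {time_code_available(f)}, "
          f"cochlear position ~ {place_code_position(f):.2f}")
```

In this sketch, the tones above 8,000 Hz fall past the assumed phase-locking limit, yet each still maps to a distinct cochlear position, which is the intuition behind testing such tones.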

For Lau’s experiment, researchers tested 19 people (average age: 22) with a range of musical training (from no formal training to 15 years’ worth). Musical experience, Lau said, turned out to have no clear correlation with pitch perception in this study. The participants listened to pairs of high-frequency tones (greater than 8,000 Hz) through specialised headphones in a soundproof booth; after each pair, they used a computer to indicate which tone was higher in pitch. Researchers chose only very high-frequency tones, embedded in background noise, to rule out any contribution from a time code and to isolate whether a place code was at work. And when these ultra-high-frequency pure tones were combined into a harmonic complex (think musical notes), participants’ pitch perception improved significantly. “Our findings show that even when timing information is not available, you can still hear pitch,” Lau said.
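As a rough sketch of that kind of stimulus (the sample rate, fundamental and component numbers here are our assumptions, not the study's exact design), a harmonic complex can be built by summing pure tones that all sit above the 8,000 Hz floor, with a low-level noise background:

```python
import numpy as np

# Illustrative stimulus sketch; all parameters are assumed for the example.
SR = 48000                     # sample rate, Hz
DUR = 0.2                      # tone duration, seconds
F0 = 2000.0                    # nominal fundamental (assumed, illustrative)

t = np.arange(int(SR * DUR)) / SR
harmonics = [5, 6, 7, 8]       # components at 10, 12, 14, 16 kHz: all above 8 kHz
complex_tone = sum(np.sin(2 * np.pi * n * F0 * t) for n in harmonics)
complex_tone /= len(harmonics)             # normalise amplitude
noise = 0.05 * np.random.randn(len(t))     # low-level background noise
stimulus = complex_tone + noise            # one interval of a comparison pair
```

Even though every component lies beyond the usual phase-locking range, such a complex can still evoke a pitch at the common repetition rate, which is the effect the study reports.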

The design of new technologies, then, could benefit from this finding, she said. Cochlear implants currently convey little pitch information to the user, but these results suggest that enhancing place information alone has the potential to improve pitch perception from a cochlear implant.