Feb 2020 The Conversation UK
When we hear a car hurtling towards us, we usually immediately know where it is coming from so we can get out of the way. Our brains have an amazing ability to rapidly separate the sound of the car from background sounds and track its location in the world – an ability called spatial hearing.
To do this, our brains exploit the differences in the sounds arriving at our two ears. For example, if a sound comes from our left side, it is louder in our left ear and, because sound takes time to travel through the air, it arrives at our left ear first. Using these cues, our brain builds a continuously evolving picture of where sound-emitting things in the world are, which it uses to locate sound sources and avoid threats, such as that out-of-control car. Unfortunately, for many people with hearing impairments, spatial hearing is often severely limited. This is particularly true for people who use a cochlear implant, who often find locating and separating different sounds very difficult.
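The timing cue described above can be approximated with simple geometry: for a distant sound at an angle to one side, the extra path to the far ear is roughly the ear separation times the sine of that angle. The sketch below is a simplified illustration; the head-width figure and the straight-path model are assumptions, not measurements from this research.

```python
import math

SPEED_OF_SOUND = 343.0   # m/s, sound in air at room temperature
EAR_SEPARATION = 0.18    # m, a typical adult head width (illustrative)

def interaural_time_difference(azimuth_deg):
    """Approximate arrival-time difference between the ears, in seconds,
    for a faraway source at the given azimuth (0 = straight ahead,
    90 = directly to one side). Uses a simple straight-path model."""
    return EAR_SEPARATION * math.sin(math.radians(azimuth_deg)) / SPEED_OF_SOUND

# A source directly to one side gives the largest delay:
print(round(interaural_time_difference(90) * 1e6))  # microseconds, ~525
```

Even at its largest, the delay is around half a millisecond, which gives a sense of how finely tuned the brain's timing comparison has to be.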
We are trying a different approach. We take the information usually given by the difference in sound between the ears and present it through the sense of touch instead. The idea is that by providing the missing sound information through vibrations on the skin, the brain will be able to merge the two senses to improve perception.
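One way to picture this mapping: measure the level of the sound in each channel of a stereo signal and drive a vibration motor on each wrist with a strength proportional to that channel's level, so the between-ear loudness cue is reproduced on the skin. Everything in this sketch — the RMS level measure, the linear mapping — is a hypothetical illustration, not the actual signal processing used in the study.

```python
import math

def rms(samples):
    """Root-mean-square level of a block of audio samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def haptic_drive(left_samples, right_samples):
    """Map the left/right level difference onto two wrist vibration
    amplitudes in [0, 1]. Purely illustrative: the louder side gets the
    stronger vibration, carrying the spatial cue through touch."""
    left, right = rms(left_samples), rms(right_samples)
    total = (left + right) or 1.0  # avoid dividing by zero in silence
    return left / total, right / total

# A source on the left: louder left channel -> stronger left-wrist vibration.
left_amp, right_amp = haptic_drive([0.8, -0.8, 0.8], [0.2, -0.2, 0.2])
print(left_amp > right_amp)  # True
```

In practice a real device would process the signal in short blocks so the vibrations track sounds as they move, but the core idea is this simple level comparison.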
Cochlear implants work by bypassing the damaged parts of the outer and middle ear and directly stimulating the auditory nerve. For thousands of people with severe hearing impairments, this technology has had an incredible impact on their lives, restoring some of their hearing and allowing them to follow conversations in quiet places much as people with normal hearing can.
Unfortunately, most users only have an implant in one ear, usually because of the additional expense and risk of a second implantation surgery. Having only one implant means that those tiny differences in loudness and timing of sounds between the ears cannot be used for spatial hearing.
Although one implant works very well in quiet places, the loss of these spatial cues makes busy sound environments difficult to cope with. Imagine trying to listen to your friend talk to you in a crowded restaurant, with clattering crockery and loud conversations all around, when you can't tell which direction the different sounds are coming from.
Researchers have previously tried several approaches to improve spatial hearing in cochlear implant users by improving implant technology, but with limited success. We tested whether providing spatial hearing cues to the wrists as vibrations would help cochlear implant users locate sounds. To conduct our experiment, we set up a ring of loudspeakers. The participants were asked to identify which loudspeaker played the sound of a voice saying: “Where am I speaking from?” As expected, many implant users struggled when they only had the audio. But we found that when they had haptic stimuli (vibrations) alongside the audio, they could identify the location of the sound far more accurately.
After around 15 minutes of training with audio and haptic cues together, participants were able to effectively combine the two signals. They performed better with combined audio and haptic stimulation than with either sense alone. This suggests that their brains were able to rapidly merge the information arriving through the two senses to improve spatial hearing.
These results highlight the huge potential of using haptics to aid hearing. In other work, we have already shown that haptics can improve speech perception in noisy environments in cochlear implant users.
In future work, we want to use haptics to help hearing-impaired listeners identify multiple sounds coming from different locations. For example, you might want to hear your friend's voice to one side, music from the radio from another side, and the patter of rain against the windowpane. This could open up new ways to give hearing-impaired people a richer and more accurate picture of the world around them.