The Hearing Journal, March 2020

Do you recall the last time you tried to locate your car in a cavernous parking garage by following its honking, triggered by your remote key? You were relying on your sound localisation ability. Unlike other sensations, such as feeling where a mosquito lands on your skin or distinguishing low from high frequencies, the direction of a sound cannot be read directly by your sensory organs, the ears. It must be computed, a feat your brain accomplishes by interpreting how much sooner a sound reaches one ear than the other, the so-called interaural time difference (ITD). The resulting spatial cues not only enable us to identify sound direction but also help us comprehend speech in background noise, where spatial cues let us separate a target sound from the noise we are trying to ignore. Many studies over the past decades have therefore focused on understanding how ITD is represented in the brain, how our ability to utilise it is disrupted by hearing loss, and how it can be restored.
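
To make the geometry concrete, here is a minimal sketch of how a source angle translates into an ITD, assuming Woodworth's classic spherical-head approximation; the head radius, speed of sound, and function name are illustrative choices of ours, not values from the study.

```python
import numpy as np

# Minimal sketch: Woodworth's spherical-head approximation of ITD,
# itd = (r / c) * (theta + sin(theta)). All constants are assumed,
# illustrative values, not parameters from the study.
HEAD_RADIUS_M = 0.0875        # average adult head radius, ~8.75 cm
SPEED_OF_SOUND_M_S = 343.0    # speed of sound in air at ~20 C

def itd_seconds(azimuth_deg: float) -> float:
    """Approximate ITD for a far-field source at the given azimuth."""
    theta = np.radians(azimuth_deg)
    return (HEAD_RADIUS_M / SPEED_OF_SOUND_M_S) * (theta + np.sin(theta))

# A source 30 degrees to the right of the nose arrives roughly 260
# microseconds earlier at the right ear than at the left.
print(f"{itd_seconds(30.0) * 1e6:.0f} us")
```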

In people with normal hearing, ITD reliably signals the direction of a sound source in both anechoic and reverberant environments. Individuals with hearing loss, however, are often less capable of utilising ITD, a problem that is only partially due to the limitations of hearing devices.

Similarly, cochlear implant users struggle to interpret ITD, consistent with altered processing in the brain rather than just the ears. The inability to benefit from acoustic ITD cues delivered to the ears may be rooted, in part, in how the brain decodes auditory direction from them.

Engineers have suggested that humans decode sound direction through a scheme akin to a spatial map or compass in the brain, with ITD-sensitive neurons aligned from left to right, each firing when activated by a sound coming from a given angle, say, 30 degrees to the right of your nose. This neural compass is equivalent to a mathematical operation known as interaural cross-correlation. Excitingly, neurophysiologists discovered early on that biological mechanisms for calculating the interaural cross-correlation function exist in the avian brain. This discovery spawned computational models of human sound localisation that can now predict with high accuracy, based on interaural cross-correlations, where listeners with normal hearing localise sound.
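
As an illustration of the cross-correlation idea (a sketch under our own assumptions about the signals and sample rate, not one of the models cited above), one can correlate the two ear signals over a range of candidate delays and take the best-matching delay as the ITD estimate:

```python
import numpy as np

# Sketch of interaural cross-correlation: slide one ear's signal against
# the other and pick the delay with the strongest match. Signals and
# parameters here are illustrative assumptions.
fs = 44_100                        # sample rate, Hz
delay = 12                         # ~270 us; the right ear hears the sound
rng = np.random.default_rng(0)     # later, as for a source on the left
left = rng.standard_normal(4096)
right = np.roll(left, delay)

max_lag = 40                       # search window, ~0.9 ms
lags = np.arange(-max_lag, max_lag + 1)
xcorr = [np.dot(left[max_lag:-max_lag],
                np.roll(right, -lag)[max_lag:-max_lag]) for lag in lags]

best_lag = lags[int(np.argmax(xcorr))]
print(f"estimated ITD: {best_lag / fs * 1e6:.0f} us")   # ~272 us
```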

However, it turns out that there is no hard evidence that the mammalian brain decodes location from an interaural cross-correlation map the way birds do. Instead, mammals appear to rely on a more dynamic neural code in which different neurons fire at varying rates depending on directional cues. Computational models that assume the brain compares these rates across populations of neurons can also predict human perception of sound direction with high accuracy. They do this by recognising neural response patterns that correspond to different sound directions and dynamically building new maps that link acoustic ITDs to a perceived location, depending on the context and the environment. A dynamic map is also plausible from an evolutionary perspective, since our early mammalian ancestors could only interpret sound level differences across the ears as spatial cues; the ability to interpret ITDs evolved later.
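
One common instantiation of such a rate code is a two-channel, hemispheric model: each hemisphere carries a broadly tuned population whose firing rate grows for sounds leading at the opposite ear, and location is read out by comparing the two rates. The sketch below assumes sigmoidal tuning with made-up parameters; it illustrates the coding principle, not the study's fitted model.

```python
import numpy as np

# Sketch of a hemispheric population rate code for ITD. Tuning shape,
# peak rate, and slope are illustrative assumptions.
MAX_RATE = 50.0     # assumed peak firing rate, spikes/s
SLOPE_US = 200.0    # assumed tuning slope, microseconds

def channel_rate(itd_us: float, preferred_sign: int) -> float:
    """Sigmoidal firing rate of one hemisphere's channel."""
    return MAX_RATE / (1.0 + np.exp(-preferred_sign * itd_us / SLOPE_US))

def decode_itd(itd_us: float) -> float:
    """Read out ITD from the normalised left/right rate difference."""
    diff = (channel_rate(itd_us, +1) - channel_rate(itd_us, -1)) / MAX_RATE
    return 2.0 * SLOPE_US * np.arctanh(diff)   # inverts the sigmoid pair

for itd in (-300.0, 0.0, 150.0):               # microseconds
    print(f"true ITD {itd:6.0f} us -> decoded {decode_itd(itd):6.0f} us")
```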

While both interaural cross-correlation maps and dynamic population rate coding models can predict sound localisation with high accuracy, no neuroimaging modality offers sufficient resolution to determine which mechanism humans actually use. To date, evidence for either model has only been indirect.

We recently noticed, however, that the two models make different predictions depending on sound intensity. The dynamic rate coding model predicts that, for faint low-frequency sounds, humans should make systematic errors, hearing sounds slightly closer to the midline of the head than they truly are. In contrast, the interaural cross-correlation model predicts no such intensity-dependent bias. We first confirmed this idea computationally by reconstructing neuronal responses to ITD in rhesus macaque monkeys (representing rate coding) and barn owls (a contender for the interaural cross-correlation-based compass). Next, we tested human ITD-based sound localisation behaviourally and discovered that humans do make systematically biased response errors, confirming the prediction of the dynamic rate coding model.
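
The intuition can be seen in a toy extension of the rate-code sketch above (our own illustration, with assumed gain values): if faint sounds scale down both channels' firing rates but the decoder keeps inverting the tuning learned at a comfortable level, its estimate shrinks toward the midline, whereas scaling both ear signals leaves a cross-correlation peak at the same delay.

```python
import numpy as np

# Toy illustration of the level-dependent bias predicted by a rate code.
# Gains and tuning parameters are assumptions, not data from the study.
MAX_RATE = 50.0
SLOPE_US = 200.0

def channel_rate(itd_us: float, preferred_sign: int, gain: float) -> float:
    return gain * MAX_RATE / (1.0 + np.exp(-preferred_sign * itd_us / SLOPE_US))

def decode_itd(itd_us: float, gain: float) -> float:
    """Decode with the tuning curve learned at gain = 1, ignoring level."""
    diff = (channel_rate(itd_us, +1, gain)
            - channel_rate(itd_us, -1, gain)) / MAX_RATE
    return 2.0 * SLOPE_US * np.arctanh(diff)

true_itd = 300.0                      # microseconds, source off to one side
for gain in (1.0, 0.5, 0.25):         # loud -> faint
    # Decoded ITD: 300 us, ~132 us, ~64 us: biased toward the midline.
    print(f"gain {gain:.2f}: decoded {decode_itd(true_itd, gain):5.0f} us")
```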

We still cannot restore ITD-based spatial perception for many people with hearing aids and cochlear implants. However, our recent data suggest that this perceptual skill rests on a dynamic neural code, encouraging the notion that retraining people's brains is a worthwhile pursuit. To restore ITD-based perception, we could program hearing aids and cochlear implants to compensate for an individual's hearing loss, and offer targeted rehabilitation strategies that leverage a person's ability to retrain themselves to use the spatial cues delivered by their devices. This would be particularly important in background noise, where most people with hearing loss cannot single out a target sound and where restored spatial cues could help.
