September 2016, Science Daily and Lehigh University
When it comes to hearing, precision is important. Vertebrates such as birds and humans have two ears, and a sound from one side travels a different distance to reach each of them, so localising a sound involves discerning subtle differences in when it arrives at each ear. The brain has to keep time better than a Swiss watch to work out where a sound is coming from.
In fact, this timing precision is a limiting factor in how well one locates a sound and perceives speech. In order for birds and mammals to hear, hair cells in the cochlea (the auditory portion of the inner ear) vibrate in response to sounds and thereby convert sound into electrical activity. Each hair cell is tuned to a particular frequency, which humans ultimately experience as pitch. Every hair cell in the cochlea is partnered with several neurons that convey information from the ear to the brain in an orderly way. The tone responses in the cochlea are, essentially, "remapped" onto the cochlear nucleus, the first brain centre to process sounds. This spatial arrangement, in which sounds of different frequencies are processed at different locations in the brain, is called tonotopy. It can be visualised as a kind of sound map: tones that are close in frequency are represented by neighbouring neurons of the cochlear nucleus.
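The "sound map" idea can be illustrated with a toy model (purely illustrative numbers, not physiological measurements): if each frequency is assigned a position along a log-spaced axis of neurons, tones that are close in frequency land on neighbouring positions, while distant tones land far apart.

```python
import math

def tonotopic_index(freq_hz, f_min=20.0, f_max=20000.0, n_neurons=100):
    """Map a frequency to a position on a toy log-spaced tonotopic axis.

    Nearby frequencies map to neighbouring indices, mimicking how tones
    close in frequency are handled by neighbouring cochlear nucleus
    neurons. All parameter values here are illustrative assumptions.
    """
    x = math.log(freq_hz / f_min) / math.log(f_max / f_min)
    return round(x * (n_neurons - 1))

# Two tones a semitone apart land on neighbouring "neurons"...
print(tonotopic_index(440), tonotopic_index(466))
# ...while a much higher tone lands far away on the map.
print(tonotopic_index(8000))
```

The log spacing reflects the general observation that pitch is perceived roughly logarithmically; the neuron count and frequency range are arbitrary choices for the sketch.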
THE DIFFERENTIAL PROCESSING BETWEEN HIGH AND LOW CHARACTERISTIC FREQUENCY NEURONS SUGGESTS THAT SYNAPTIC INTEGRATION MAY DIFFER ALONG THE TONOTOPY.
Timing precision is important to cochlear nucleus neurons because their firing pattern is specific for each sound frequency. That is, their output pattern is akin to a digital code that is unique for each tone. "In the absence of sound, neurons fire randomly and at a high rate," says Burger. "In the presence of sound, neurons fire in a highly stereotyped manner known as phase-locking--which is the tendency for a neuron to fire at a particular phase of a periodic stimulus or sound wave."
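Phase-locking can be made concrete with a small simulation. The sketch below (a toy model, not the researchers' analysis; all parameters are assumptions) compares a neuron that fires near a fixed phase of a 500 Hz tone with one that fires at random times, using vector strength, a standard measure of phase-locking that is near 1 for perfectly locked firing and near 0 for random firing.

```python
import math
import random

def vector_strength(spike_times, freq_hz):
    """Vector strength: 1.0 = perfectly phase-locked, ~0 = random firing."""
    phases = [2 * math.pi * freq_hz * t for t in spike_times]
    n = len(phases)
    c = sum(math.cos(p) for p in phases) / n
    s = sum(math.sin(p) for p in phases) / n
    return math.hypot(c, s)

random.seed(0)
freq = 500.0            # Hz; illustrative tone
period = 1.0 / freq     # seconds per cycle

# Phase-locked neuron: one spike per cycle near a fixed phase, small jitter.
locked = [i * period + 0.1 * period + random.gauss(0, 0.02 * period)
          for i in range(200)]
# Spontaneous neuron: spikes scattered at random over the same interval.
spontaneous = [random.uniform(0, 200 * period) for _ in range(200)]

print(vector_strength(locked, freq))       # close to 1
print(vector_strength(spontaneous, freq))  # close to 0
```

The jitter level and firing rate are arbitrary; the point is only that stereotyped firing at a fixed phase of the sound wave is easily distinguished from random firing.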
The researchers investigated auditory brain cell membrane selectivity and observed that the neurons "tuned" to receive high-frequency sound preferentially select faster input than their low-frequency-processing counterparts, and that this preference is tolerant of changes to the inputs being received. A low-frequency cell will tolerate a slow input and still be able to fire, but a high-frequency cell requires a very rapid input and rejects slow input. Hair cells aren't very good at responding to high-frequency tones and introduce a lot of timing errors. Because of this, and because high-frequency inputs arrive at such a high rate, averaging them is not an option: it would smear information across multiple cycles of the sound wave. So, instead, the high-frequency-processing cells use an entirely different strategy: they are as picky as possible to avoid averaging.
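One common way to think about this selectivity is a leaky membrane: a "leakier" cell forgets its input quickly, so only fast, sharply timed input can push it to firing threshold, while a slower-integrating cell can sum a gradual input over time. The sketch below is a minimal leaky-integrator model under that assumption (not the study's actual model; time constants and amplitudes are made-up illustrative values). The same total input is delivered either as a brief strong pulse or as a slow weak ramp.

```python
def reaches_threshold(input_current, tau_ms, dt_ms=0.01, threshold=1.0):
    """Leaky integrator: True if the input ever drives the membrane
    voltage to threshold. tau_ms sets how fast the cell 'forgets'."""
    v = 0.0
    for i in input_current:
        v += dt_ms * (i - v / tau_ms)   # leak toward 0, driven by input
        if v >= threshold:
            return True
    return False

dt = 0.01                 # ms per simulation step
charge = 2.0              # same total input in both conditions
fast_in = [charge / 0.2] * int(0.2 / dt) + [0.0] * int(4.8 / dt)  # brief, strong
slow_in = [charge / 5.0] * int(5.0 / dt)                          # slow, weak

FAST_TAU, SLOW_TAU = 0.5, 5.0   # ms; leaky "high-CF-like" vs slow "low-CF-like"

print(reaches_threshold(fast_in, FAST_TAU))  # fast input drives the leaky cell
print(reaches_threshold(fast_in, SLOW_TAU))  # and the slow cell
print(reaches_threshold(slow_in, FAST_TAU))  # leaky cell rejects slow input
print(reaches_threshold(slow_in, SLOW_TAU))  # slow cell tolerates it and fires
```

In this toy model the leaky cell fires only for the rapid pulse, while the slowly integrating cell fires for both, mirroring the article's description of high-frequency cells demanding rapid input and low-frequency cells tolerating slow input.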
Understanding the mechanisms that allow cells of the cochlear nucleus to compute with temporal precision has implications for understanding the evolution of the auditory system. It's really the high-frequency-processing cells that have uniquely evolved in mammals. Understanding these processes may also be important for advancing the technology used to make cochlear implants. Though an established and effective treatment for many, cochlear implants cannot currently simulate the precision of sound experienced by those with a naturally developed auditory system. The sound processing lacks the clarity of natural hearing, especially across frequencies.