The Hearing Journal, November 2020

Age is the strongest predictor of hearing loss in adults. Roughly 25 percent of people over age 65 (some 360 million people worldwide) have some form of hearing impairment. Receptive communication problems are associated with social isolation, depression, and dementia in the elderly. Among the auditory hallmarks of ageing, difficulty perceiving speech in noise (SIN) is one of the most consistent challenges. Unfortunately, even when hearing aids restore audibility, they often fail to improve these real-world listening skills. Moreover, while pathological changes in the inner ear are well established, less is known about how the rest of the brain, which is actually responsible for interpreting speech, language, and cognitive signals, is affected by hearing loss. This has guided emerging brain imaging work to identify changes in nervous system function (sometimes called central presbycusis) that might account for older adults’ SIN processing deficits. But how does one identify changes in the vast neural networks that process speech and language as our auditory system begins to fade?

To address these questions, our group has recently been harnessing big data science techniques to identify changes in brain organisation that accompany hearing loss. These tools, including machine learning, neural decoding, and functional connectivity, have allowed us to identify subtle changes in listeners’ brain activity that are related to not only their SIN perception but also the severity of their hearing impairment. The approach is entirely data-driven, capitalising on the rich complexity of EEG brainwaves that we record during simple perceptual tasks (e.g., phoneme identification).

Figure 1: Decoding hearing loss via EEG. (a) The underlying sources of EEG can be localised to different regions of the brain based on established atlases. (b) Evoked potentials extracted from representative regions reflect the brain's response to speech in normal hearing (NH) and hearing-impaired (HI) listeners. The behavioural audiogram provides the ground truth as to which hearing group (NH vs. HI) a listener falls into. Properties of the EEG signals (e.g., peak amplitudes, latencies, location) are then measured and input as features to neural classifier algorithms. (c) Classifiers attempt to optimally segregate the data measurements and predict a listener's group membership (i.e., NH vs. HI) based on their EEG alone. Neural predictions can then be compared with the original audiogram to determine classifier performance. (d) Classifying hearing status from EEG is >80 percent accurate using data from the full brain. Hearing loss is also better decoded using left compared with right hemisphere activity, consistent with the leftward dominance of speech-language processing.

HARNESSING BIG DATA FROM THE BRAIN

We recruited 32 older adults (aged 52-72 years), roughly half of whom had normal hearing (NH) while the other half had hearing impairment (HI), defined on the basis of their behavioural audiogram. Age and gender were matched between groups. HI listeners had relatively mild hearing loss with the typical high-frequency threshold elevations characteristic of age-related presbycusis. We then recorded multichannel (32-electrode) EEGs as they performed rapid speech and SIN perception tasks. We used neural classifiers to “decode” (i.e., classify) the EEG data and predict listeners’ hearing status based on their brain responses to speech alone. This also allowed us to determine when and where the brain best differentiates NH from HI listeners (Fig. 1).
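For readers curious about what the decoding step looks like in practice, here is a minimal sketch in Python (scikit-learn). The feature matrix, labels, and leave-one-out scheme are placeholder assumptions for illustration, not the exact pipeline used in our studies.

```python
# Minimal decoding sketch (illustrative only): predict hearing group (NH vs. HI)
# from per-subject EEG features such as ERP amplitudes/latencies per brain region.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
n_subjects, n_features = 32, 40                  # hypothetical: 32 listeners, 40 EEG measures
X = rng.normal(size=(n_subjects, n_features))    # placeholder feature matrix
y = np.repeat([0, 1], n_subjects // 2)           # 0 = NH, 1 = HI (ground truth from the audiogram)

# Standardise features, then classify; leave-one-subject-out cross-validation
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
print(f"Decoding accuracy (chance = 50%): {acc:.0%}")
```

With real EEG-derived features rather than random placeholders, the same cross-validated accuracy is what is compared against the audiogram-based group labels.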

Our data showed that listeners could be correctly classified as having (or not having) hearing loss at over 80 percent accuracy. Interestingly, left hemisphere responses were also more predictive of hearing impairment than those of the right hemisphere, consistent with the brain's leftward dominance of speech-language processing. This confirmed there is ample information in the richness of EEG signals to determine a person's hearing status objectively and without the audiogram. But which brain areas drive these hearing-related changes in the cortex?

To address this question, we applied “variable/feature” selection tools from machine learning, which aim to identify the most important structures among speech-sensitive brain areas that differ between hearing groups. For clean speech, this analysis identified a core set of 12 brain areas from among more than 1,428 EEG measurements that differed between groups. These included the usual suspects of the speech-language system: auditory, inferior frontal (i.e., Broca's area), and parietal cortices.
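The sketch below illustrates one common way such feature selection can be done, using an L1-penalised (sparse) logistic regression in scikit-learn; the synthetic data and this particular selection method are assumptions for demonstration, not the exact procedure we used.

```python
# Illustrative feature selection: keep only the EEG measures with nonzero weights
# under an L1 (sparsity-inducing) penalty.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.feature_selection import SelectFromModel

rng = np.random.default_rng(1)
X = rng.normal(size=(32, 1428))          # hypothetical: 1,428 EEG measurements per listener
y = np.repeat([0, 1], 16)                # 0 = NH, 1 = HI

sparse_model = LogisticRegression(penalty="l1", solver="liblinear", C=0.05)
selector = SelectFromModel(sparse_model).fit(StandardScaler().fit_transform(X), y)
kept = np.flatnonzero(selector.get_support())
print(f"{kept.size} features retained out of {X.shape[1]}")   # ideally a small core set
```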

More interestingly, in both groups, an overlapping but more expansive network was engaged for SIN processing, including the motor system (precentral gyrus) and areas in the right hemisphere. The involvement of the “non-language” side of the brain as well as non-auditory regions suggests that older adults require additional neural resources to help compensate for and aid in the analysis of degraded speech. These findings are exciting because they suggest a misallocation of brain resources18 that might explain why older adults expend more listening effort to understand speech in noisy environments.19

BRAIN GRAPHS: WINDOW INTO THE CEREBRAL EFFECTS OF HEARING LOSS

In related studies, we applied techniques from a branch of mathematics called graph theory to map changes in brain network organisation due to hearing loss. Doing so allows us to visualise the web of neural circuitry involved in, for example, speech perception and characterise how different brain areas communicate with one another during those behaviours (i.e., functional connectivity).      
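As a concrete, if simplified, example of functional connectivity, the sketch below correlates regional EEG time series and thresholds the result into a graph. The synthetic data, the Pearson correlation metric, and the threshold are illustrative assumptions; published connectivity work often uses more sophisticated measures.

```python
# Illustrative functional-connectivity graph: correlate regional EEG time series,
# threshold the correlation matrix, and treat the result as an undirected graph.
import numpy as np
import networkx as nx

rng = np.random.default_rng(2)
n_regions, n_samples = 12, 2000
region_ts = rng.normal(size=(n_regions, n_samples))   # placeholder source waveforms

conn = np.corrcoef(region_ts)                  # pairwise Pearson correlation between regions
np.fill_diagonal(conn, 0)
adjacency = (np.abs(conn) > 0.1).astype(int)   # arbitrary illustrative threshold
G = nx.from_numpy_array(adjacency)
print("graph density:", round(nx.density(G), 2))
print("global efficiency:", round(nx.global_efficiency(G), 2))
```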

Figure 2. Graph theory applied to EEG reveals changes in brain network organisation with hearing loss. (a) Listeners with hearing impairment (HI) have more chain-like network configurations, suggesting less integration and more long-range neural signalling during speech perception; normal hearing (NH) listeners show more integrated (star-like) network organisation and improved perception. (b) Within the auditory-linguistic pathway (i.e., auditory cortex → Broca’s area), the strength of efferent communication is stronger in listeners with hearing loss compared with those with normal hearing, suggesting increased top-down compensation.

These experiments have revealed large-scale changes in the topology of the brain's networks even with mild degrees of hearing loss (Fig. 2a). For example, we found that HI listeners have more extended, chain-like brain networks whereas NH listeners have a more integrated, star-like organisation. A chain-like graph (HI) is less efficient at circulating information than a star-like (NH) graph. Therefore, the more extended neural pathways in HI listeners might again reflect a form of compensatory processing, where additional cortical resources are marshalled to make up for lost sensory clarity from the cochlea. At the very least, our findings provide intriguing evidence that the brain starts to reorganise at a fairly global level with age-related hearing loss. Whether the same reorganisation occurs in younger HI listeners remains to be seen.
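The intuition about chain-like versus star-like topologies can be checked directly with standard graph metrics; the toy example below (networkx, hypothetical 12-node graphs) simply compares the global efficiency of the two canonical configurations.

```python
# Toy comparison of the two canonical topologies described above.
import networkx as nx

n = 12
chain = nx.path_graph(n)        # chain-like: nodes connected end to end
star = nx.star_graph(n - 1)     # star-like: one hub connected to all other nodes

print("chain efficiency:", round(nx.global_efficiency(chain), 2))
print("star efficiency: ", round(nx.global_efficiency(star), 2))
# The star graph's shorter average path length yields higher global efficiency,
# consistent with the interpretation that chain-like (HI) networks circulate
# information less efficiently than star-like (NH) networks.
```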

Similarly, we have looked in more detail at how hearing loss affects specific auditory and language circuits of the brain (Fig. 2b). The connection between primary auditory cortex (PAC) and inferior frontal gyrus (IFG; canonical Broca's area) supports the encoding of complex sounds and the further linguistic interpretation of speech signals, respectively. Interestingly, this important language circuit is engaged in both NH and HI listeners during SIN perception tasks, but to varying degrees. Transmission from auditory to language areas (i.e., PAC → IFG) is similar between groups, but information flow in the reverse direction (IFG → PAC) is stronger in HI listeners. These findings suggest listeners with age-related hearing loss rely on stronger top-down communication than their NH peers when processing speech; the relative weighting between afferent and efferent neural signalling seems to change. More top-down control might be needed in HI listeners to compensate for poorer sensory information from the inner ear and help maintain adequate speech understanding.
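One common way to quantify such directional effects is a Granger-style analysis of two regional time series, sketched below with synthetic signals; the data and the use of statsmodels' grangercausalitytests are assumptions for illustration, not necessarily the estimator used in our published analyses.

```python
# Illustrative directed-connectivity sketch: does IFG activity help predict PAC
# activity (top-down, IFG -> PAC) more than the reverse (bottom-up, PAC -> IFG)?
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(3)
n = 2000
ifg = rng.normal(size=n)
pac = np.roll(ifg, 5) + rng.normal(size=n)   # toy data: PAC lags IFG by 5 samples

def granger_p(target, source, maxlag=8):
    """Smallest p-value of the F-test that `source` Granger-causes `target`."""
    data = np.column_stack([target, source])
    res = grangercausalitytests(data, maxlag=maxlag, verbose=False)
    return min(res[lag][0]["ssr_ftest"][1] for lag in res)

print("IFG -> PAC (top-down):  p =", granger_p(pac, ifg))
print("PAC -> IFG (bottom-up): p =", granger_p(ifg, pac))
```

In a real analysis, the two inputs would be source-localised PAC and IFG waveforms, and the group comparison would ask whether the top-down direction is reliably stronger in HI than NH listeners.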

CLINICAL OUTLOOK

Unfortunately, these types of analyses cannot determine the cause of the hearing-related changes we see in the EEG. In addition to peripheral damage, changes in central auditory pathways, decreased cortical gray and white matter, and eventually age-related atrophy that limits cognitive capacities all contribute to hearing issues in older adults. Regardless of the underlying etiology, it is clear that changes in hearing manifest as widespread neural reorganisation that is decodable in scalp-recorded cortical potentials. But is there any utility for brain decoding in clinical hearing assessment?

Gold-standard hearing diagnostics rely on the behavioural audiogram. These are threshold (detection) measures and, arguably, do not tap the complex processing relevant for robust SIN understanding. In contrast, objective techniques can provide diagnostics for difficult-to-test or uncooperative patients (e.g., infants). Fortunately, several physiological measures are available in the audiologists’ test battery (e.g., ABR, OAEs). However, these tools offer only a limited snapshot of certain hearing subsystems (e.g., cochlear or brainstem integrity) rather than the perceptual-cognitive processes of speech communication. Moreover, while the ABR is normally recognised as having 90/80 percent sensitivity/specificity in detecting hearing loss, it has difficulty distinguishing audiometric configurations and is largely insensitive to hearing losses within the slight to mild range (i.e., <35 dB HL). Speech-evoked EEG might circumvent several of these shortcomings.

While multichannel EEG and cortical response testing are outside the typical audiologist's scope of practice, we hope that these objective measures and brain decoding techniques will become more widely available in the near future. Additional research is needed to see whether these techniques can identify not only the presence or absence of hearing loss but also different degrees of loss and audiometric configurations. Still, the use of wearable technologies for digital health care is rapidly growing, and mobile (wireless) EEG is becoming mainstream for at-home monitoring of various aspects of brain health. Conceivably, such portable devices, coupled with ever-expanding developments in machine learning and artificial intelligence, might offer new neurodiagnostics to identify early hearing issues, perhaps even before they are apparent via current clinical measures.
