The Hearing Journal, November 2020

Innovation, stewardship, and equality will be crucial to producing a sustainable framework for care delivery going forward. Applying novel analytic techniques, such as artificial intelligence (AI), may play a key role in identifying opportunities to achieve these aims and in developing new methods of care delivery.

One of the key advantages of AI is that its algorithms are largely free from linearity assumptions and can cope with messy datasets containing hundreds to thousands of variables. When we think of industries with complex, messy data, health care is often the poster child. As such, considerable interest in the application of AI to medicine has blossomed over the past decade. A variety of techniques exists, with the AI toolkit ranging from natural language processing for translating and extracting unstructured text to deep learning for pattern recognition in medical imagery. Deep learning techniques offer a unique opportunity for prediction and pattern recognition in nontraditional data formats such as images, video, and sound.

The proportion of the worldwide population with hearing loss, and of those expected to require an audiologist's expertise to diagnose it, is growing. The increase in demand for audiologic and hearing care services is running up against limited capacity to deliver those services. Without more human resources to deliver hearing care, many people suffering from hearing loss will be left without care. Such a supply and demand gap presents an opportunity to re-engineer the process of diagnosis and case triage. We devised an approach to augment hearing loss diagnosis by integrating a deep learning model for automatic classification of hearing loss type. The goal of our efforts was to generate a proof-of-concept model using a deep learning algorithm to facilitate automatic and accurate interpretation of hearing loss types from audiogram plot images.

Figure 1: Schematic of the deep learning modelling workflow.

Deep learning algorithms require large amounts of data to train an accurate model. In our study, we procured 1,007 audiogram plot images from adult patients from the electronic medical record system at the Sunnybrook Health Sciences Centre in Toronto, Ontario, Canada. The goal of the proof-of-concept model was to rapidly interpret the audiogram plot graphic as representing normal hearing or conductive, mixed, or sensorineural hearing loss, all without human guidance. The audiogram images had previously been interpreted and labeled by audiologists.
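To make the setup concrete, imagine the labelled plots exported as PNG files and organised into one folder per audiologist-assigned class. The paths and folder names below are hypothetical illustrations, not the study's actual storage scheme:

```python
from pathlib import Path

# Hypothetical layout, one subfolder per audiologist-assigned label:
#   audiograms/normal/*.png
#   audiograms/conductive/*.png
#   audiograms/mixed/*.png
#   audiograms/sensorineural/*.png
CLASSES = ["normal", "conductive", "mixed", "sensorineural"]

counts = {c: len(list(Path("audiograms", c).glob("*.png"))) for c in CLASSES}
print(counts, "| total:", sum(counts.values()))  # 1,007 images in our study
```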

The plots from the audiogram reports were the primary data input for our deep learning modelling. The plots were cropped from the diagnostic reports and resized to a standard 500 by 500 pixels. We randomly split our image database into two subsets: training (n = 806) and validation (n = 201). To make the learning task more difficult and the resulting model more robust, we randomly applied several image transformations, including rotation, warping, contrast and lighting adjustment, and zoom. We chose ResNet, a previously successful convolutional neural network architecture, for our model. After the neural network was trained, the held-out validation image set was used to evaluate the classification accuracy of the trained model.
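A minimal sketch of such a pipeline in PyTorch/torchvision follows, assuming the hypothetical layout above has been divided into audiograms/train/ and audiograms/valid/ directories mirroring the 806/201 split, each keeping the per-class subfolders. The augmentation parameters, ResNet depth, optimiser, and learning rate are illustrative choices of ours; the exact values used in the study are not specified here:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Standard ImageNet normalisation, appropriate for a pretrained ResNet.
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])

# Augmentations approximating those described: rotation, warping (shear),
# contrast/lighting jitter, and zoom (random resized crop).
train_tfms = transforms.Compose([
    transforms.Resize((500, 500)),
    transforms.RandomRotation(degrees=10),
    transforms.RandomAffine(degrees=0, shear=5),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.RandomResizedCrop(500, scale=(0.85, 1.0)),
    transforms.ToTensor(),
    normalize,
])
valid_tfms = transforms.Compose([
    transforms.Resize((500, 500)),
    transforms.ToTensor(),
    normalize,
])

train_ds = datasets.ImageFolder("audiograms/train", transform=train_tfms)
valid_ds = datasets.ImageFolder("audiograms/valid", transform=valid_tfms)
train_dl = DataLoader(train_ds, batch_size=16, shuffle=True)
valid_dl = DataLoader(valid_ds, batch_size=16)

# Transfer learning: a ResNet pretrained on ImageNet, with its final
# layer replaced by a four-way hearing loss classifier.
model = models.resnet34(weights=models.ResNet34_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_ds.classes))

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(10):
    model.train()
    for xb, yb in train_dl:
        opt.zero_grad()
        loss_fn(model(xb), yb).backward()
        opt.step()

    # Evaluate classification accuracy on the held-out validation set.
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for xb, yb in valid_dl:
            correct += (model(xb).argmax(dim=1) == yb).sum().item()
            total += yb.numel()
    print(f"epoch {epoch}: validation accuracy {correct / total:.3f}")
```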

Our fully trained model had a peak classification accuracy of 97.0 percent in correctly assigning an audiogram plot as representing normal hearing or conductive, mixed, or sensorineural hearing loss. After this successful result, we dove deeper into the mechanics of our model to see how the network learned to differentiate between the hearing loss types, using an interpretation step called Gradient-weighted Class Activation Mapping (Grad-CAM). Much as human experts do, the neural network learned that the separation between the air conduction line and the bone conduction line represented a conductive loss. Among the few errors made by the prediction model, it failed to recognise bone conduction thresholds that fell below the normal range, which would have classified an audiogram as a mixed hearing loss.
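Grad-CAM itself can be implemented with a single forward hook: the gradients flowing back into the last convolutional block are spatially averaged to weight that block's feature maps, and the ReLU-ed weighted sum highlights the image regions that drove a given prediction. The sketch below is our own minimal version, not the study's code; the target layer shown (model.layer4[-1]) is the customary choice for a ResNet:

```python
import torch
import torch.nn.functional as F

def grad_cam(model, image, target_layer, class_idx=None):
    """Compute a Grad-CAM heatmap for a single (C, H, W) image tensor."""
    store = {}

    def forward_hook(module, inputs, output):
        store["maps"] = output.detach()
        # Tensor hook: capture the gradient flowing back into this output.
        output.register_hook(lambda grad: store.update(grads=grad.detach()))

    handle = target_layer.register_forward_hook(forward_hook)

    model.eval()
    logits = model(image.unsqueeze(0))           # add a batch dimension
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()  # explain the top prediction
    model.zero_grad()
    logits[0, class_idx].backward()
    handle.remove()

    # One weight per feature map (spatially averaged gradient), then a
    # ReLU-ed weighted sum: regions with positive evidence for the class.
    weights = store["grads"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * store["maps"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[1:], mode="bilinear",
                        align_corners=False)
    return (cam / (cam.max() + 1e-8)).squeeze()  # (H, W) heatmap in [0, 1]

# Usage with the ResNet sketched above (layer4[-1] is its last residual block):
# heatmap = grad_cam(model, image_tensor, model.layer4[-1])
```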

IMPLICATIONS

Given the marked deficit in audiology workforce capacity, an expected exponential increase in the incidence and prevalence of hearing loss worldwide, and the success of deep learning in diverse diagnostic tasks, a deep learning-based approach to the automatic interpretation of audiograms has the potential to be a useful innovation. Classifying a hearing test by the type of hearing loss does not fully serve the fundamental task of determining an individual's total hearing deficit; such an approach does not integrate speech intelligibility and recognition. However, an enhanced neural network model that could also determine hearing loss severity, coupled with speech intelligibility, could be deployed to help triage patients with hearing loss for aural rehabilitation candidacy in regions with limited or no audiologist availability. Such a solution may seem limited in the context of first-world settings flush with audiologic resources, but much of the world does not have ready access to hearing care. More to the point, deep learning technology is scalable and can be deployed wherever a smartphone or laptop computer can be used, gaining access to remote and isolated populations.

Despite the success of these algorithms and the rapidly expanding volume of published manuscripts in medicine using machine learning approaches, a considerable distance remains between current efforts and wide adoption in everyday clinical practice. The audiologic and medical communities have yet to establish who might be held accountable for an algorithm that produces an erroneous prediction leading to harm. Who might be liable if an automated system misses the opportunity to refer a patient with a cholesteatoma or vestibular schwannoma? Are the individuals who trained the algorithm accountable, or is it the end-user who relies on the algorithm's output without individual and situational context? There are also significant technical hurdles. The training and deployment of deep learning solutions require significant computational resources. Depending on the use case, training deep learning models can take hours to complete even with state-of-the-art computing power. We have also not begun to rigorously investigate the environmental impact of using such energy-intensive resources. Another major roadblock to the development and refinement of models is the availability of sufficient high-quality data. We and other stakeholders in the AI and medical communities are wrestling with this challenge.

AI offers the prospect of pushing the analytical boundaries of big data beyond the constraints of traditional statistical methods. This endeavour has the potential to drive significant health systems change, especially when these techniques are applied to address substantial supply and demand gaps in care delivery. Hearing loss will continue to be a significant global public health issue. Re-engineering the hearing care delivery process with novel deep learning approaches may help enhance access for the growing global population expected to require hearing care.
