March 2017  Blake S. Wilson  IEEE Pulse

Even as recently as the mid-1980s, many experts in otology and auditory science thought that restoration of useful hearing with crude and pervasive electrical stimulation of the cochlea was a fool’s dream. What the “experts” missed at the time is the brain’s awesome power to process a highly impoverished and otherwise unnatural input and make sense of it. In retrospect, the main task in developing a useful hearing prosthesis for deaf or nearly deaf people was to provide enough information in the right form for the brain to take over and do most of the job. That is not to say that any input would do, as different strategies for stimulation at the periphery produce different results and the initial results were no better than what the experts had predicted. However, once a threshold of quantity and quality of information presented at the periphery was exceeded, the brain could indeed take over and do the rest. Designers needed somehow to exceed the threshold, and that is the story of the modern cochlear implant (CI).

Today, the CI is widely acknowledged as one of the great advances in medicine and something that even the most ardent proponents of CIs could not have foreseen at the beginning. The decades-long path to today included four steps:

  1. the pioneering step to implant the first patients and develop devices that were safe and could be used for many years in patients’ daily lives

  2. the development of devices that provide multiple sites of stimulation in the cochlea to take advantage of the tonotopic (frequency) organisation of the cochlea and the auditory pathways in the brain

  3. the development of processing strategies that utilised these multiple sites far better than before and thereby enabled high levels of speech recognition for the great majority of CI users

  4. stimulation in addition to that provided by a CI on one side, either with a second CI on the opposite side or with acoustic stimulation for people who have useful residual hearing in one or both ears, usually hearing at low frequencies only.

William F. House contributed the most in achieving the first of these steps and got us started on this great journey. He persisted in the face of criticism, and, without that determination, the development of the CI certainly would have been delayed, if initiated at all.

House was a physician and was assisted by Jack Urban, an electrical engineer, in designing and implementing the earliest devices starting in the mid-1960s. House was working at the House Ear Institute in Los Angeles (founded in 1946 by House’s older half-brother Howard), and Urban was president of an aerospace research company in Burbank. This partnership between a physician and an engineer presaged larger teams that included one or more physicians (usually more) and sometimes a goodly number of engineers, plus auditory and speech scientists, audiologists and oftentimes additional professionals. Many teams worldwide participated in those subsequent efforts. The modern CI most certainly could not have been developed without the engineers or the physicians. Such partnerships are, of course, what biomedical engineering is all about.

The approximate times for completion of the steps are shown below, along with the cumulative number of implant recipients between 1957 and December 2012. The dots in the graph show published data points, and an exponential fit to the data has a correlation higher than 0.99. If that exponential growth continues as expected, a million people will have received a CI or bilateral CIs by early 2020; according to unpublished industry records, the number of recipients had already reached a half million in early 2016.
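To make the kind of exponential fit described above concrete, here is a minimal sketch in Python. The year/count pairs are invented placeholders, not the published data points, so the fitted doubling time and the extrapolation are purely illustrative.

```python
import numpy as np

# Hypothetical cumulative CI recipient counts (placeholders only; the
# published series runs from 1957 to December 2012).
years = np.array([1980, 1985, 1990, 1995, 2000, 2005, 2010, 2012])
recipients = np.array([300, 3000, 12000, 30000, 70000, 150000, 280000, 320000])

# Fit recipients ~= a * exp(b * t) via linear regression on log(recipients).
t = years - years[0]
b, log_a = np.polyfit(t, np.log(recipients), 1)
a = np.exp(log_a)

# Correlation in log space (the article reports a correlation above 0.99).
r = np.corrcoef(t, np.log(recipients))[0, 1]
print(f"doubling time ~ {np.log(2) / b:.1f} years, correlation r = {r:.3f}")

# Extrapolate to the year the fitted curve reaches one million recipients.
year_one_million = years[0] + np.log(1_000_000 / a) / b
print(f"fitted curve reaches 1,000,000 around {year_one_million:.0f}")
```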

Further improvements in performance were made with adjunctive stimulation (step 4) for people who had enough residual hearing to benefit from combined electric and acoustic stimulation (EAS) and those receiving a second implant. Both combined EAS and bilateral CIs produced statistically significant increases in speech-reception scores, especially for difficult test items or speech presented in competition with noise or other talkers. In addition, bilateral CIs could reinstate at least some sound-localisation abilities, and combined EAS produced large gains in music reception and appreciation. The sound-localisation abilities are no doubt due to representations of the interaural level differences the brain uses to infer the positions of sounds in the horizontal plane, and the better music reception may be due to representations with the acoustic stimulus of the first several harmonics of periodic sounds, as those harmonics are vital for robust perception of fundamental frequencies and thus melodic contours.
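As a toy illustration of that localisation cue, the interaural level difference (ILD) is simply the level of one ear's signal relative to the other's, in decibels. The signals below are made up for the example; nothing here is drawn from a CI study.

```python
import numpy as np

def ild_db(left: np.ndarray, right: np.ndarray) -> float:
    """Interaural level difference in dB; positive means louder at the left ear."""
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    return 20.0 * np.log10(rms(left) / rms(right))

# Made-up example: a 2-kHz tone attenuated at the right ear by head shadow.
t = np.linspace(0.0, 0.01, 480)
left = np.sin(2 * np.pi * 2000 * t)
right = 0.5 * left  # about 6 dB quieter at the right ear
print(f"ILD = {ild_db(left, right):+.1f} dB -> source inferred to be on the left")
```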

Over the past several years, the development of the modern CI has been recognised by many prestigious awards and honours, including the 2013 Lasker–DeBakey Clinical Medical Research Award and the 2015 Fritz J. and Dolores H. Russ Prize, which is the world’s top award in bioengineering and one of three prizes conferred by the U.S. National Academy of Engineering that are popularly known as the “Nobel Prizes for Engineering.” Similarly, the Lasker awards are second only to the Nobel Prize in Physiology or Medicine in recognising advances in medicine and medical science; in fact, more than a third of Lasker laureates later go on to win a Nobel Prize. The engineering and medical prizes for the CI reflect the partnerships that made the CI possible and indicate the importance of the CI to both fields.

The CI is by far the most effective and most utilised neural prosthesis to date. And thus, not surprisingly, it has become the principal model for the development (or further development) of other types of neural prostheses and a foremost exemplar of the power of engineering to improve human health. With respect to the latter point, the design of the CI is taught in almost every biomedical engineering program worldwide. In addition, it is a core component of the curricula for budding audiologists, auditory scientists, speech scientists, and otologists. But the path to success hasn’t been easy. Joshua Boger, one of the developers of ivacaftor (a drug for the treatment of cystic fibrosis), offers the following cogent and insightful observation about medical breakthroughs, which certainly captures the experience with CIs as well: “The development of ivacaftor was a high-wire act from beginning to end.… If you are looking for dramatic changes in medicine, you are not looking to be comfortable in research; every breakthrough project I know about has passionate detractors.”

Some lessons biomedical engineers can learn from the development of the CI are that the experts are not always correct and that perseverance and teamwork matter. Thanks to that perseverance and teamwork, most of today’s CI users can communicate fluently via the telephone, even with previously unfamiliar people at the other end and even with unpredictable and changing topics. That wonderful outcome could not reasonably have been imagined at the outset or, indeed, up to the early 1990s, when new processing strategies were introduced into clinical practice and the number of implant recipients began to skyrocket. Although room remains for improvement, the present-day devices “allow children to be mainstreamed into regular schools, adults to have a wide range of job opportunities, and for all recipients to connect in new and important ways with their families, friends, and society at large”. The resulting human and economic benefits have been immense—benefits that were made possible by grit, brilliance, key discoveries, exquisite engineering, and multidisciplinary teams.

March 2017 Preston Leader

All-Star athlete Sam Cartledge is leading the charge for deaf athletes on and off the field.

Cartledge, 23, was recently named Male Athlete of the Year by Deaf Sports Australia, capping an impressive 12 months. He was vice-captain of the Deaf Men’s Basketball team, the Goannas, who snared gold at the Asia Pacific Deaf Games in Taiwan.

Despite the award, the 23-year-old remains humble in his approach to the game.

“When I’m not playing sport, I’m fundraising and seeking ways that deaf athletes can represent their country on the world stage without having to pay out of their own pockets,” he said. Cartledge was born deaf but received a cochlear implant in his left ear when he was two years old. The gold medal in Taiwan was the first Australia had won in a team event in deaf sporting history.

March 2017 NIHR HSRIC

The NIHR Horizon Scanning Research and Intelligence Centre has published a horizon scanning review of new and emerging technologies being developed to manage and reduce the negative consequences of hearing loss. More than 11 million people (approximately one in six) in the UK are affected by hearing loss, with the majority (92%) experiencing mild to moderate hearing loss. The likelihood of hearing loss increases with age: more than 70% of 70-year-olds experience some form of hearing loss. Hearing loss is, however, not uncommon in children; more than 45,000 children in the UK have a profound hearing loss.

We identified 55 technologies that fitted the identification criteria: five educational programmes, six auditory and cognitive training programmes, five assistive listening devices, eleven hearing aids (HAs) and alternative listening devices, eight implants and devices, twelve drugs, one regenerative medicine approach, and seven surgical procedures. Most of the developments were in early or uncertain clinical research and would require additional evaluation before widespread adoption by patients and the NHS.

Experts and patients picked out technologies of interest, including apps for converting speech to text and sign language to speech, hearing aids and alternative listening devices to support listening in different environments, a fully implantable cochlear implant (CI) system, a closed-loop CI system, and three developments to support the tuning and optimisation of HAs. If successful, these have the potential to change the CI landscape for patients, improve patient experience and use of HAs, and affect service delivery and provision.

March 2017 The Register and Medical Xpress

Kids who've never heard need 'habilitation' – they've never had a skill to rehabilitate

Getting a computer to understand speech is already a tough nut to crack. A group of Australian researchers wants to take on something much harder: teaching once-deaf babies to talk. Why so tough? Think about what happens when you talk to Siri or Cortana or Google on a phone: the speech recognition system has to distinguish your “OK Google” (for example) from background noise; it has to react to “OK Google” rather than “OK something else”; and it has to parse your speech to act on the command. And you already know how to talk.

The Swinburne University team working on an app called GetTalking can't make even that single assumption, because they're trying to solve a different problem. When a baby receives a cochlear implant to take over the work of their malfunctioning inner ear, he or she needs to learn something brand new: how to associate the sounds they can now hear with the sounds their own mouths make. Getting those kids started in the world of conversation is a matter of “habilitation” – no “rehabilitation” here, because there isn't a capability to recover.

Children interact well with apps. Can one teach children to talk?

GetTalking is the brainchild of Swinburne senior lecturer Belinda Barnet, and the genesis of the idea was her own experience as mother to a child with a cochlear implant. As she explained, “With my own daughter – she had an implant at 11 months old – I could afford to take a year off to teach her to talk. This involves lots of repetitive exercises.” That time and attention, she explained, is the big predictor of success. In the roughly 10 years since it became standard practice to provide implants to babies at or before 12 months of age, 80 per cent of recipients have achieved speech within the normal range. What defines the 20 per cent that don’t get to that point? Inability, either because of family income or distance from the city, to “spend a year sitting on the carpet with flash-cards”. That makes it hard for parents in rural or regional locations and for low-income mothers, Barnet said.

  


The idea for which Barnet and associate professor Rachael McDonald sought funding looks simple: an app, running on something like an iPad, that gives the baby a bright visual reward for speaking. However, it tests the boundaries of artificial intelligence (AI) and speech recognition because of a very difficult starting point: how can an app respond to speech when the baby has never learned to speak? Barnet elaborated on other ways child development interplays with what the app and the AI need. “When a child has not heard any sound, they don’t understand that a noise has an effect on the environment. So the first thing has to be a visual reward for an articulation.” At 12 months, she continued, children respond well to visual rewards – and even an “ahhh” or “ohhh” should get a response from the app, if (a big if, even for machine learning) it’s a deliberate articulation.
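For a sense of what “a visual reward for an articulation” demands of the software, here is a crude sketch of the first gate: deciding that a frame of audio is a deliberate articulation rather than background noise. The article does not describe GetTalking’s actual method; the energy-thresholding approach and every constant below are invented for illustration.

```python
import numpy as np

def detect_articulation(frame: np.ndarray, noise_floor: float,
                        ratio: float = 4.0) -> tuple[bool, float]:
    """Flag a frame as a deliberate articulation when its short-term energy
    clearly exceeds a running estimate of the background noise floor."""
    energy = float(np.mean(frame ** 2))
    is_speech = energy > ratio * noise_floor
    if not is_speech:
        # Update the floor only on quiet frames so speech doesn't raise it.
        noise_floor = 0.9 * noise_floor + 0.1 * energy
    return is_speech, noise_floor

# Toy loop: quiet frames, then one loud burst standing in for an "ahhh".
rng = np.random.default_rng(0)
floor = 1e-4  # invented starting estimate, as if from a calibration period
for i in range(10):
    frame = rng.normal(0.0, 0.2 if i == 7 else 0.01, size=400)
    voiced, floor = detect_articulation(frame, floor)
    if voiced:
        print(f"frame {i}: articulation detected -> show the visual reward")
```

A real implementation would of course need far more than an energy gate – not least the “deliberate” judgement Barnet flags as a big if even for machine learning.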

Leon Sterling, a Swinburne computer science researcher, had his interest piqued as a member of the university panel assessing the project and is now bringing long experience of AI research to bear. He explained the hidden complexities behind what needs to present itself as a simple app: “You’ve got to get the signal, you have to extract the signals, separate them from the background noise, the parents speaking, et cetera.”

Swinburne's Leon Sterling

Most of those problems have precedent, but GetTalking needs yet more machine learning – like trying to measure the child's engagement with the app. “You've got to look at the ability to observe, to tag video strings together with audio strings.” The team understands that an app can't replace a speech therapist or parent, but only support them – and that adds new complexities like “building in the knowledge of how children interact with physiotherapists. You need to understand the developmental stages of children when they're interacting with the app.”

After distinguishing between speech and “the kid threw a bit of pumpkin at the screen”, the app has to respond at a second stage, called “word approximation”. Here, the system has to at once recognise that “da” might be an approximation of “daddy” (with reward) and support the child’s development from approximation to whole words. “That’s quite difficult,” Sterling said, adding another layer the system has to learn: “Is ‘da’ today the same ‘da’ as the same child said the other day?” Once the app recognises any kind of speech from the infant, it then has to recognise what word the child was trying to say, rewarding them for speaking words and approximations of words. “That needs to be cross-matched with thousands of articulations from normally speaking babies,” Barnet explained. Swinburne’s BabyLab will help here, with a large collection of speech samples the GetTalking team needs. Those samples will help GetTalking respond to the word approximation by re-articulating the correct word “and show the baby a picture of what they’re saying”.
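As a purely hypothetical illustration of that word-approximation stage, a naive matcher might score an utterance against the leading sounds of a small target vocabulary. The vocabulary, syllable splits, and threshold below are invented; the real system would presumably match statistically against BabyLab’s recorded articulations rather than rely on string similarity.

```python
from difflib import SequenceMatcher

# Hypothetical target vocabulary, split into rough syllables.
TARGETS = {"daddy": ["da", "dy"], "mummy": ["mu", "my"], "ball": ["ball"]}

def best_approximation(utterance: str, threshold: float = 0.6):
    """Return the target word the utterance most plausibly approximates,
    scored against the word's leading sounds (infants tend to produce
    word onsets, e.g. "da" for "daddy")."""
    best_word, best_score = None, 0.0
    for word, syllables in TARGETS.items():
        prefix = "".join(syllables)[: max(len(utterance), 2)]
        score = SequenceMatcher(None, utterance, prefix).ratio()
        if score > best_score:
            best_word, best_score = word, score
    return (best_word, best_score) if best_score >= threshold else (None, best_score)

print(best_approximation("da"))  # ('daddy', 1.0) -> reward, re-articulate "daddy"
print(best_approximation("zz"))  # (None, 0.0)    -> no reward
```

Note that this says nothing about Sterling’s harder question – whether today’s “da” is the same “da” as yesterday’s – which would require comparing the audio itself, not transcriptions.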

Sterling’s previous experience with AI to help children comes from a Swinburne project teaching an off-the-shelf “NAO” robot to help with physiotherapy. Since 2015, he’s been part of a team using the familiar “humanoid” robots to keep children recovering from injuries engaged with their physio. The university’s input is to write software specific to physio – for example, demonstrating exercises to children and giving them encouragement to keep up with it. That work has given Swinburne a handle on how children interact with technology, and while it’s not a replacement for a physiotherapist, “you can’t have a health professional with you 24/7”.

As both Barnet and Sterling emphasised, it’s impossible to replace the role of the speech therapist or parent. “I’ve been working in AI research for 35 years,” Sterling said. “People have consistently overestimated what they expect.” Rather than outright automation, Sterling says, most of the time what matters is to provide AI as an aid for people – “how to make a richer experience for people, to help people with their environment”. In the case of GetTalking, one thing he reckons the AI behind the app will do well is diagnose whether or not the child is making progress. “It’s a co-design problem; you work with speech therapists, parents, kids – and see what works,” Sterling said.

GetTalking is in its early stages, with support from the National Acoustic Laboratories (the research arm of Hearing Australia). After the app development stages, GetTalking will need a clinical trial to demonstrate its effectiveness. Trials aren’t cheap, but Barnet said she hopes to secure federal funding at that point. Since disadvantage is so strongly associated with holding back children who receive the implants, Barnet’s hope is that GetTalking could be free to those who need it.

It’s possible that not everything the GetTalking team needs has to be written from scratch. For example, while their speech recognition might be built from the ground up, both Barnet and Sterling said the team is looking at how a long-standing project, LENA, could lighten the development load. The LENA Project has its own focus: measuring a child’s early language development from birth to 48 months. Some of its components, however, look tantalising: speech recognition and analysis directed towards GetTalking’s target age group.

Apple never revealed the price it paid to acquire the team that developed Siri, but rumours of US$150 million don’t sound unreasonable – and Siri takes its input from someone who already knows how to speak. For all the effort that’s gone into speech recognition and AI, the task remains so difficult that it has been automated for only a couple of per cent of the world’s languages.