March 2019 Macquarie University Lighthouse
Children with hearing loss face challenges learning language, but a new Macquarie University project will explore how to help them more effectively learn to listen and speak.
Ten-year-old Hana is doing well at school, and is fluent in both Japanese and English – despite being born with profound hearing loss in both ears. Hana has had dual cochlear implants since she was a baby. Her language skills are all the more exceptional given that most kids with hearing loss lag well behind their hearing peers in language development, and many children come home from school exhausted from the effort of listening, understanding and interacting with others.
Kids with hearing loss also typically have poorer grammar and conversational ability, along with long-term problems with communication, educational performance and social wellbeing. Most kids with severe hearing loss now receive cochlear implants, which send sound signals to their brains – but the implants miss many nuances of sound, and that makes learning to speak really difficult if you’ve only ever ‘heard’ electronic versions of language.
Katherine Demuth is a Professor in Linguistics and the chief investigator on a new three-year ARC Linkage project that’s being run out of Macquarie University's Child Language Lab, which is part of the Australian Hearing Hub. She hopes to uncover what words and sounds, and which parts of language, are the hardest for children with hearing loss to understand. Her team includes audiologist Associate Professor Mridula Sharma, special education experts and speech pathologists from Macquarie, and partner organisations including Cochlear, the Shepherd Centre, Parents of Deaf Children, Royal Institute for Deaf and Blind Children and Australian Hearing.
The project includes three key studies which will measure how children with hearing aids or cochlear implants process language.
Researchers at the Child Language Lab have extensive experience with the psycholinguistic techniques that will be used in these studies. These include eye-tracking and pupillometry (measuring the size and reaction of pupils), as well as discourse interactions – exploring the way children speak and respond in conversation.
On the case: Associate Professor Mridula Sharma and Professor Katherine Demuth.
Cochlear implants have become far more sophisticated since their first incarnation in 1978 but still replace acoustic hearing with electric hearing – something that the brain must learn to decipher, explains Demuth. “Neither amplification of speech with hearing aids, nor cochlear implants, transmit normal speech sounds,” she says. “Unlike wearing glasses, which lets you see what fully-sighted people see, the brain has to learn to interpret the electric sounds it is hearing – and that takes some effort.” That effort is even greater for children, she adds. “Adults who lose hearing already have a language model, they can already speak, and can use ‘top down’ information to fill in the things they don’t hear very well.” But children with impaired hearing don’t have that advantage, she says. “Their challenge is to develop language, to learn the words they need and put them together in sentences to be able to communicate effectively, not only in their own speech, but also so that when they hear something, they can understand what it means and respond to it in a discourse-appropriate way.”
Demuth’s new project will help clinicians find better ways to teach language to the 1500 or so deaf children in Australia who get cochlear implants as babies each year, so that more deaf children can be as well-adjusted as Hana. When Hana was diagnosed with profound hearing loss in both ears, her parents decided that her best outcomes would come from two cochlear implants – one for each ear. “But the decision was an uncommon one in Japan at the time, which didn’t make a whole lot of sense to my wife Hitomi and me,” says her dad, Professor Jason Hollowell, who is a Visiting Fellow at Macquarie’s Child Language Lab.
Early opportunities: Hana, who had cochlear implants as a baby, with her dad, Professor Jason Hollowell.
“One big reason to have two cochlear implants was localisation – so that Hana could identify the direction that sounds were coming from, such as an approaching car,” he says. “But I was also aware of research at the time that showed that having two sources of sound aids considerably with listening and sound boundaries, as well as language, which made a lot of sense to me.” Mainstream thinking has shifted since then and the importance of two sources of sound – binaural hearing – is now far more accepted, he says.
Hollowell is a linguist who initially specialised in second language acquisition, but shifted his main research interest to first language acquisition after his daughter’s birth. Hana learned Japanese from her mum Hitomi, a university administrator, and English from her dad as she grew up, though Hollowell says that growing up in Japan has made Hana’s Japanese far stronger than her English. The gap is not apparent when talking to Hana, whose English is flawless. She says that attending school in Australia for the past year has been fun – she especially loves doing art and sport – and that she can mostly hear things in the classroom, now that she is used to her teachers’ Australian accents. She says that the hardest thing to hear is other students whispering – but now that her friends know to whisper into the transmitter for her cochlear device, which sits on her skull, Hana can hear them much better.
Hollowell came to Macquarie University during a sabbatical from Musashi University in Tokyo, partly to work with experts at the Child Language Lab. Demuth says that Hana has been a willing participant in pilot studies which show the potential language skills for children with profound hearing loss given the right opportunities in early life. “Raising Hana bilingual may have helped her language acquisition, because it’s given her more linguistic awareness, a better understanding of what it means to pull apart some of the sounds to convey meaning,” says Demuth. The Child Language Lab has extensive studies on how normal-hearing children build a language model, Demuth says. “That serves as a baseline for our understanding of what typical language learning processes are. From there we can start to identify the particular challenges for hearing-impaired children who depend on a device, where the sounds are sometimes not as well transmitted.
“Our first studies will look at how children with hearing aids and cochlear implants distinguish certain sounds and words,” she says. For example, words like ‘pear’ and ‘bear’ can be hard to distinguish via a cochlear implant; the project will allow researchers to build a better understanding of the challenges children with hearing loss face when hearing these words in everyday conversation.
Understanding irony, or responding to questions, can also be challenging when the change in intonation is difficult to distinguish, Demuth adds; and part of the project will look at ‘discourse interactions’ to identify how these aspects of language are eventually learned. “For clinicians working with hearing-impaired children, this will give clearer guidelines on where to put their time and effort to maximise the outcomes for these children.”