June 2018 The Week India
At first glance, Ananya Nakra is a normal 16-year-old. She pays attention and speaks with confidence. “I want to be part of the UN and help people,” says Ananya. Nothing about her betrays her disability: hearing impairment. If she can dream big today, she mainly has her cochlear implant to thank; it enables her to listen clearly and differentiate sounds. It also makes it possible for her to talk on the phone and watch television. “Ananya was a healthy child and she was achieving all her milestones early. Once, when she was one and a half, a balloon burst and she did not react at all. We rushed to the paediatrician, and a few tests later it was identified that she had severe to profound bilateral hearing loss,” recalls Ritu, Ananya’s mother. It meant that both her ears were affected. Ananya got a cochlear implant when she was two years and nine months old. She then underwent therapy for a year and a half. “In the beginning I had to carry the device in a backpack and lug it around; now I wear a small, light device on my ear. Sounds are also a lot clearer now,” she says.
Ananya Nakra with her mother, Ritu
Many like Ananya have benefitted from advanced cochlear implants and are able to lead normal lives. But sadly, many more do not get help in time and live on with the disability. According to a national survey, four out of every 1,000 children in India are born deaf, amounting to about 25,000 babies born deaf every year. Only a very small percentage of them get implants, say experts.
Both hearing aids and implants have improved technologically in the last few years. Breakthroughs in bioinert materials have been among the greatest advancements in this technology, says Dr Mohammad Naushad, director, Dr Naushad’s ENT Hospital and Research Centre, Kochi. “Silicone and titanium, materials which are compatible with the body, have made implants a lot better,” he says. An increase in the number of electrodes has also made sounds clearer.
The sound processor also has better speech discrimination, points out Dr Shankar Medikeri, who heads Medikeri’s Super Speciality ENT Center in Bengaluru. “It also has better wireless accessories,” he says. “The TV streamer can be coupled with the implant, and so can a phone. There are also aqua accessories that can be used in water.” The processor has Bluetooth-like technology that connects to different devices. “In a classroom, a lapel mic can be given to the teacher which is directly connected to the implant so that the child can hear clearly.”
While hearing aids and implants have improved technologically, the impairment should be detected early to make the most of the advancements. Testing for hearing loss can be done as early as the third day after birth. Kerala has made newborn screening mandatory in government hospitals, and the government hopes to make it mandatory in private hospitals, too.

Age is the most important factor when it comes to cochlear implants for congenital hearing loss. Intervention must start at six months of age, says Sameer Pootheri, assistant professor of audiology and head, Centre for Audiology and Speech Pathology, department of ENT, Government Medical College, Kozhikode. “First a hearing aid is fitted and if the patient is not getting enough benefit from it, an implant is done,” he says. Shalabh, who has been doing implant surgeries for about 16 years now, says his youngest patient was a nine-month-old. “If you implant before one year, they grow up completely normal,” he says. “In the first three years, there is maximum brain development for speech and language. So if done before three, the child can pick up language skills well. After three it is difficult to learn language and even rigorous therapy may not help as much,” adds Sameer.

Mansi Jain, a 12-year-old from Punjab, is struggling to talk even three years after her implant. “While the problem was detected early, we got the implant done only when she was nine. Till then she would manage with lip reading, but she could not speak,” says Shikha, Mansi’s mother. “After the implant, when I called out her name even from a distance of 10ft she could hear and respond. But even after the therapy, she finds it difficult to understand fast speech. She also cannot speak complete sentences. If implanted late, speech distortion will be there.”
An important part of being able to hear and talk after an implant is therapy. “Once an implant is in, rigorous speech therapy is needed to help the patient understand language,” says Dr Haneesh M.M., junior consultant ENT, Government General Hospital, Ernakulam. “Surgery is 20 per cent of the work done, while the other 80 per cent is therapy,” he says. After an implant, the patient hears sounds, but they are new and the brain does not know how to interpret them. “This is why auditory verbal therapy plays a crucial role in helping the patient learn and interpret language,” explains Sameer. Along with therapy, mapping needs to be done; the audiologist and therapist work together on a patient for almost two years after the implant. “The electrical impulse going to the internal device needs to be adjusted. This process is called mapping and is done by the audiologist,” says Sameer.
If hearing impairment occurs in adulthood, owing to causes such as infection, an implant followed by very little therapy will help, since the person already knows language. “This is called acquired hearing loss. Since they have already learnt language, if they get an implant within a year or two of the hearing loss, they should be able to function normally,” says Sameer.
Many doctors believe the future lies in implantable devices with no external unit. While this already exists in the form of TIKI (Totally Implantable Cochlear Implant), it has a few drawbacks. “Currently, the outside unit has a battery and the microphone. When the unit with the microphone is also implanted inside, the person can hear even bodily sounds like the sound of a blood vessel or movement of hair,” says Shankar. The processor can currently be recharged from outside; once it is implanted, however, a dead battery will require re-implantation. Experts believe the technology needs to improve so that effectiveness is not compromised for the sake of cosmetics.
In the future, implanting both ears will be popular, feels Naushad. “Currently, very few bilateral implants are being done simultaneously. But bilateral implants are better as language hearing will improve and so will hearing in noisy situations,” he says.
Another development that is going to change the face of hearing technology is stem cell research, says Naushad. “If we are able to successfully cultivate stem cells inside the cochlea, which can connect with the nerve, we will not need any implant,” he says. But many experts believe it may be a long time before this becomes a reality. Along with these developments, Haneesh believes it is important that the cost of the implant comes down. “There has to be social emphasis and government programmes to help bring the cost down or help people get the treatment,” he says. “There also has to be more awareness, and parents should get their kids screened at a young age.”
June 2018 Mic Network Inc
When Apple killed the headphone jack, it offered AirPods as a convenient, wireless alternative to traditional headphones. Now the wireless earbuds are getting a new feature. Apple’s next operating system, iOS 12, will allow audio detected by the iPhone’s microphone to be passed through to the AirPods in real time. Many dedicated devices already do this, like the Starkey Halo, the ReSound Cala and 68 others that support the Made for iPhone hearing aid standard.
Though, technically speaking, AirPods still won’t be hearing aids. Instead, they will join a class of gadgets known as personal sound amplification products. PSAPs don’t address all facets of hearing loss, but they are able to amplify the sounds around the user. For example, iPhone users can set their device on the table in front of them while at a bar or in a meeting and, with their AirPods in, hear more clearly.
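Under the hood, the effect resembles piping live microphone input straight to the audio output. As a rough illustration only (Apple has not published Live Listen’s implementation), a bare-bones passthrough on iOS might look like the following Swift sketch, assuming an app that already has microphone permission:

```swift
import AVFoundation

// A minimal sketch of mic-to-earbud passthrough, the basic idea behind
// Live Listen. Illustrative only; Apple's actual implementation is not public.
func startPassthrough() throws -> AVAudioEngine {
    let session = AVAudioSession.sharedInstance()
    // .playAndRecord captures the mic while playing to Bluetooth earbuds.
    try session.setCategory(.playAndRecord, mode: .default,
                            options: [.allowBluetoothA2DP])
    try session.setActive(true)

    let engine = AVAudioEngine()
    let mic = engine.inputNode      // the iPhone's built-in microphone
    let out = engine.outputNode     // whatever is connected, e.g. AirPods

    // Wire the microphone directly to the output in its native format,
    // so ambient audio is relayed to the earbuds in near real time.
    engine.connect(mic, to: out, format: mic.inputFormat(forBus: 0))
    try engine.start()
    return engine  // the caller must retain the engine to keep audio flowing
}
```

A production feature would also handle audio-route changes and Bluetooth latency; the sketch simply shows why a phone-plus-earbuds pair can act as a remote microphone.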
Apple first added support for hearing aid-like devices back in 2013, with the first iPhone-certified hearing device arriving in 2014. Because the AirPods are a first-party solution to hearing help, they offer tighter integration with the phone’s operating system, allowing for features like the new widget in Control Centre.
The Live Listen feature for AirPods in iOS 12 will receive a Control Centre widget.
PSAPs aren’t full-on hearing aids, but they have their advantages. In addition to being available over the counter, the fact that they connect to your phone allows for additional features like taking calls and listening to music. When it comes to helping those with hearing loss, many attest to their usefulness. David Grissam, a 911 dispatcher who has been legally deaf since the age of six, wouldn’t be able to do his job without his Cochlear Baha 5, a bone-anchored hearing device implanted in his skull, CNET reports. “I’m able to hear more than others in the room because of that direct link,” said Grissam about the implant.
There are some notable downsides to PSAPs, too. Neil DiSarno, an audiologist with the American Speech-Language-Hearing Association, told the Wall Street Journal that hearing aids are designed to treat the specific type of hearing loss a person is diagnosed with, while PSAPs are a simple increase in volume. And as Consumer Reports points out, all ambient sounds are amplified, even that loud emergency vehicle going by. Apple does document volume controls for hearing devices on its hearing aid support page; whether PSAP users can reach the volume slider before the ambulance passes by is another story.
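The distinction DiSarno draws can be pictured in code. A PSAP applies one blanket gain to everything it hears, while a hearing aid shapes gain by frequency band to match a diagnosed loss. A hedged sketch, again using AVAudioEngine, with the band frequency and gain values invented purely for illustration:

```swift
import AVFoundation

// Contrast between PSAP-style flat gain and hearing-aid-style shaping.
// Session setup from the previous sketch is assumed.
let engine = AVAudioEngine()
let mic = engine.inputNode

// PSAP-style: a flat boost applied to everything, sirens included.
let flat = AVAudioUnitEQ(numberOfBands: 1)
flat.globalGain = 15  // dB across the board (assumed value)

// Hearing-aid-style: boost only the band where an audiogram shows loss.
let shaped = AVAudioUnitEQ(numberOfBands: 1)
shaped.bands[0].filterType = .parametric
shaped.bands[0].frequency = 3_000  // Hz; a common loss region (assumed)
shaped.bands[0].gain = 25          // dB, as a prescription might set (assumed)
shaped.bands[0].bandwidth = 1.0    // octaves
shaped.bands[0].bypass = false

// Swap `flat` for `shaped` below to hear the two approaches differ.
engine.attach(flat)
engine.connect(mic, to: flat, format: mic.inputFormat(forBus: 0))
engine.connect(flat, to: engine.mainMixerNode,
               format: mic.inputFormat(forBus: 0))
```

Neither variant is a medical device; the sketch only makes DiSarno’s point concrete: turning everything up is easy, while tailoring amplification to a diagnosis is what hearing aids are built for.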
While hearing aids continue to be a better solution, AirPods’ new feature may prove useful in a pinch. The Live Listen option helps justify the AirPods’ relatively high $159 price tag for people who are hard of hearing or those who just want to spy on folks in the other room.
June 2018 Business Insider Australia
Apple will reportedly add Live Listen technology to its AirPods later this year.
Live Listen allows you to harness your iPhone and AirPods to improve what you hear in crowded situations.
It could represent the beginning of the long-awaited era of in-ear computing.
When iOS 12 comes to iPhones everywhere later this year, Apple’s very popular $US159 AirPods will get Live Listen, a nifty feature that makes it easier to hear conversations in noisy places. Live Listen has been around since 2014, but only on select Apple-certified hearing aids. Essentially, Live Listen turns your iPhone into a microphone: If you’re in a crowded bar, point your iPhone’s microphones at the person across the table from you, or even slide it over, and you’ll hear what they have to say in your hearing aid – or, soon, your Apple AirPods.
The AirPods, which have been hailed as one of Apple’s greatest inventions in recent memory, will expand the reach of Live Listen, and let far more people take advantage of a potentially very handy feature. That said, people with hearing loss should still get an actual medical device, and not rely on a pair of consumer earbuds like the AirPods.
The really exciting part is what this could mean for the future of the AirPods, and for Apple itself. When Apple first launched the AirPods, they were referred to as “Apple’s first ear computer.”
Apple’s Live Listen feature, as it exists today.
Indeed, the sky seemed to be the limit. Because AirPods give users one-touch access to the Siri virtual assistant, and because they link up with the iPhone’s tremendous galaxy of apps, pundits were hopeful that the AirPods could enable all kinds of superpowers beyond what any other headphones could do. Almost two years later, though, those superpowers have yet to manifest, and the AirPods are still best suited for music and maybe phone calls.
Still, we’ve gotten a glimpse of what the future could look like, thanks to some of Apple’s competitors. Doppler Labs, a startup, released the Here One, a pair of earbuds that could do everything from boosting the bass at a concert to quieting the sounds of a crying baby. Google, for its part, recently launched the Pixel Buds, which feature real-time language translation. Those products may have been too far ahead of the curve: Doppler Labs went out of business in 2017, after its cool technologies couldn’t overcome the inherent challenges of the hardware market. The Google Pixel Buds, meanwhile, received lukewarm reviews and haven’t become nearly as ubiquitous among gadgetheads as the Apple AirPods.
So it’s no wonder that Apple, which famously prefers being right to being first, has been slow to push nontraditional uses of the AirPods. The addition of Live Listen, though, suggests that Apple is still on track to bring so-called audible computing to the masses, even if it’s happening more slowly than some would like. Once it gets going, things are going to get wild. It’s not hard to imagine Apple’s App Store getting apps built specifically for the AirPods: language translation is an obvious one, but what about putting Apple’s Shazam acquisition to work by automatically cataloguing every song you hear in a day? Or prank apps that make it sound like your boss has inhaled helium during your big weekly meeting?
So yes, Live Listen is one little feature, but it’s one that points to a bold new future for Apple, where your headphones actually help you do things you couldn’t before. You may just have to wait a little while for it to fully come to pass.
June 2018 Albuquerque Journal
Dana Suskind of the University of Chicago is a paediatric surgeon who places cochlear implants in hearing-impaired babies and toddlers. From the beginning of her surgical practice in 2005, Suskind encountered a frustration: while her paediatric patients from middle- and upper-income families rapidly caught up in language acquisition and speech, the children of low-income families did not. She set out to discover why. That is when she discovered the seminal work of psychologists Betty Hart and Todd Risley, who in the 1980s were the first to identify the language gap.
Their research followed 42 families in Kansas City, Kan., over three years, observing baby development from 9 months to nearly 4 years old. Based on characteristics like parental occupation, maternal education and income, they divided the families into three groups: high, middle and low socioeconomic status families. After recording and analysing everything “done by the children, to them and around them” for an hour per family each month over the course of three years, Hart and Risley found remarkable similarities in parenting approaches and goals. Parents all “socialised their children to a common cultural standard,” and the kids all learned to talk.
But the difference in the language they heard – the quality and quantity of words – was stunning. On average, in the course of one hour, the highest socioeconomic status children heard 2,000 words; the children of low-income families heard only 600. The highest-income parents responded to their kids an average of 250 times an hour; the lowest-income parents about 50 times.
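Those hourly figures are where the famous headline number comes from. Extrapolating the 1,400-word hourly difference over four years – assuming, say, 14 waking hours of exposure a day, an assumption of ours rather than the researchers’ exact method – gives roughly:

$$(2000 - 600)\,\tfrac{\text{words}}{\text{hour}} \times 14\,\tfrac{\text{hours}}{\text{day}} \times 365\,\tfrac{\text{days}}{\text{year}} \times 4\ \text{years} \approx 28.6 \text{ million words}$$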
The gap by 4 years old? Thirty million words. “But the most significant and most concerning difference? Verbal approval,” Suskind wrote in her book, “Thirty Million Words: Building a Child’s Brain.” “Children in the highest socioeconomic status heard about 40 expressions of verbal approval per hour. Children in welfare homes, about four.” Some scholars have questioned Hart and Risley’s findings, criticising the small sample size and challenging the idea that altering language at home could help children overcome extreme social inequality.
But Suskind focused on a subtle point in the study: the essential factor that determined a child’s future learning trajectory wasn’t socioeconomic status. It was the quality – and positive nature – of the language spoken. Money didn’t matter; words did. “Children in homes in which there was a lot of parent talk, no matter the educational or economic status of that home, did better,” Suskind wrote. “It was as simple as that.”
In Chicago, the Thirty Million Words project today teaches new parents to “tune in, take turns and talk more” – the three T’s for paying attention to a child’s cues, taking conversational turns and talking more. Suskind envisions the program being implemented in prenatal care, at birthing hospitals, in paediatric clinics and home visits around the city.
In Pensacola, hospitals were an obvious place to start, since virtually every birth in the city occurs in one of the three facilities. Baptist Hospital, Sacred Heart Hospital and West Florida Hospital have been cutthroat competitors in nearly every medical specialty. But when the CEOs were individually approached, each agreed to collaborate.
Brain Bags are the brainchild of a Pensacola nonprofit called the Studer Community Institute, founded by health care guru Quint Studer, who made a fortune in hospital consulting before setting his sights on improving Pensacola. Each bag contains a binder with a bib and rattle reminding parents to “talk, talk, talk.” There is a picture book, “P is for Pelican,” and a workbook with developmental milestones for parents. The bags, free to parents, cost $25 apiece to produce. The $108,000 project is privately funded by a network of women donors and embraced by the business community; its outcomes will be tracked once babies born in the past year hit kindergarten. “In health care, the majority of money is spent on symptoms – the same thing in education,” said Studer, who has led the “Early Learning City” effort.
“In Florida, we tried to get some money for (age) 0 to 5, but it all went to ‘K’ and above because that is who has got all the lobbying power: the public school system and the universities,” he said. “But the reality is, if 85 percent of the brain is developed by age 3, that is where we need to be focused. What we’re really trying to do is treat the cause.”
Pensacola has given up waiting on the state and federal governments. Business leaders a year ago were persuaded by emerging brain science showing that about 85 percent of a child’s brain – including its 100 billion neurons – is hard-wired by the end of age 3. Language is what builds these brain connections and enhances a child’s capacity to learn; the most important component for building strong brains is parent talk.
Brain Bags emerged as one potential solution, as well as an effective fundraising tool for the philanthropic and private sectors. The Studer Community Institute acknowledges that it will take more than a bag and a delivery room conversation to change decades of inequality. It plans to reinforce the “tune in, take turns and talk more” message to parents wherever it will have the most impact. Reggie Dogan, who helped shape the institute’s mission, calls the outreach effort “a day-to-day struggle.” Parents, especially mothers in poverty, “may increase their reading today, but will they continue three years, five years, down the road?” he asked. “Or will this child fall by the wayside just like the parent did? That is what scares me.”