New articles are published every month under the headings below.

Lend an ear: Technological advancements have made life much easier for the hearing impaired

June 2018 The Week India

At first glance, Ananya Nakra is a normal 16-year-old. She pays attention and speaks with confidence. “I want to be part of the UN and help people,” says Ananya. Nothing about her betrays her disability—hearing impairment. If she can dream big today, she has mainly her cochlear implant to thank; it enables her to listen clearly and differentiate sounds. It also makes it possible for her to talk on the phone and watch television. “Ananya was a healthy child and she was achieving all her milestones early. Once, when she was one and a half, a balloon burst and she did not react at all. We rushed to the paediatrician, and a few tests later it was identified that she had severe to profound bilateral hearing loss,” recalls Ritu, Ananya’s mother. It meant that both her ears were affected. Ananya got a cochlear implant when she was two years and nine months old. She then underwent therapy for a year and a half. “While in the beginning I had to wear the device in a backpack and lug it around, now I wear a small, light device on my ear. Sounds are also a lot clearer now,” she says.

Ananya Nakra with her mother, Ritu

Many like Ananya have benefitted from advanced cochlear implants and are able to lead normal lives. But sadly, many more do not get help in time and live on with the disability. According to a national survey, four out of every 1,000 children in India are born deaf, with about 25,000 babies born deaf every year. Only a very small percentage of them get implants, say experts.

Both hearing aids and implants have improved technologically in the last few years. Breakthroughs in bioinert materials have been among the greatest advancements, says Dr Mohammad Naushad, director, Dr Naushad’s ENT Hospital and Research Centre, Kochi. “Silicone and titanium, materials which are compatible with the body, have made implants a lot better,” he says. An increase in the number of electrodes has also made sounds clearer.

The sound processor also offers better speech discrimination, points out Dr Shankar Medikeri, who heads Medikeri’s Super Speciality ENT Center in Bengaluru. “It also has better wireless accessories,” he says. “The TV streamer can be coupled with the implant, and so can a phone. There are also aqua accessories that can be used in water.” The processor has Bluetooth-like technology that connects to different devices. “In a classroom, a lapel mic can be given to the teacher which is directly connected to the implant so that the child can hear clearly.”

While hearing aids and implants have improved technologically, the impairment should be detected early to make the most of these advancements. Testing for hearing loss can be done as early as the third day after birth. Kerala has mandatory newborn screening in government hospitals, and the government hopes to make it mandatory in private hospitals, too. Age is the most important factor when it comes to cochlear implants for congenital hearing loss. Intervention must start at six months of age, says Sameer Pootheri, assistant professor of audiology and head, Centre for Audiology and Speech Pathology, department of ENT, Government Medical College, Kozhikode. “First a hearing aid is fitted, and if the patient is not getting much benefit from it, an implant is done,” he says. Shalabh, who has been performing implant surgeries for about 16 years, says his youngest patient was nine months old. “If you implant before one year, they grow up completely normal,” he says.

“In the first three years, there is maximum brain development for speech and language. So if done before three, the child can pick up language skills well. After three it is difficult to learn language, and even rigorous therapy may not help as much,” adds Sameer. Mansi Jain, a 12-year-old from Punjab, is struggling to talk even three years after her implant. “While the problem was detected early, we got the implant done when she was nine. Till then she would manage with lip reading, but she could not speak,” says Shikha, Mansi’s mother. “After the implant, when I called out her name even from a distance of 10ft she could hear and respond. But even after the therapy, she finds it difficult to understand fast speech. She also cannot speak complete sentences. If implanted late, speech distortion will be there.”

An important part of being able to hear and talk after an implant is therapy. “Once an implant is in, rigorous speech therapy is needed to help the patient understand language,” says Dr Haneesh M.M., junior consultant ENT, Government General Hospital, Ernakulam. “Surgery is 20 per cent of the work done, while the other 80 per cent is therapy.” After an implant, the patient hears sounds, but they are new and the brain does not know how to interpret them. “This is why auditory verbal therapy plays a crucial role in helping the patient learn and interpret language,” explains Sameer. Along with therapy, the implant needs to be mapped, and the audiologist and therapist work together with a patient for almost two years after the surgery. “The electrical impulse going to the internal device needs to be adjusted. This process is called mapping and is done by the audiologist,” says Sameer.
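To make the idea of mapping a little more concrete, the sketch below is a highly simplified illustration (not any manufacturer’s actual fitting software, and with made-up units): for each electrode the audiologist sets a threshold (T) level and a comfort (C) level, and sound energy in that electrode’s frequency band is scaled into that electrical range. The electrode numbers, levels and scaling here are purely illustrative.

```python
# Simplified sketch of a cochlear implant "map": per-electrode T and C levels,
# and how band energy might be scaled into that range. Illustrative only.
from dataclasses import dataclass

@dataclass
class ElectrodeMap:
    electrode: int      # electrode number along the array
    t_level: float      # softest stimulation the patient can detect (arbitrary units)
    c_level: float      # loudest stimulation that is still comfortable

def stimulation_level(band_energy: float, m: ElectrodeMap) -> float:
    """Map normalised band energy (0.0-1.0) into the electrode's T-C range."""
    band_energy = max(0.0, min(1.0, band_energy))
    return m.t_level + band_energy * (m.c_level - m.t_level)

# Example: the audiologist raises the comfort level on electrode 5 during mapping.
before = ElectrodeMap(electrode=5, t_level=100, c_level=180)
after = ElectrodeMap(electrode=5, t_level=100, c_level=200)
print(stimulation_level(0.5, before), stimulation_level(0.5, after))
```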

If hearing impairment happens in adulthood due to reasons like infections, then an implant followed by very little therapy will help, since the person already knows language. “This is called acquired hearing loss. Since they already have learnt language, if they get an implant within a year or two of the hearing loss, they should be able to function normally,” says Sameer.

Many doctors believe the future lies in implantable devices with no external unit. While this already exists in the form of TIKI (Totally Implantable Cochlear Implant), it has a few drawbacks. “Currently, the outside unit has the battery and the microphone. When the unit with the microphone is also implanted inside, the person can hear even bodily sounds like the sound of a blood vessel or the movement of hair,” says Shankar. The processor can be recharged from outside, but once it is implanted, replacing a dead battery requires re-implantation. Experts believe the technology needs to improve so that effectiveness is not compromised for the sake of cosmetics.

In the future, implanting both ears will become more common, feels Naushad. “Currently, very few bilateral implants are being done simultaneously. But bilateral implants are better, as language hearing will improve and so will hearing in noisy situations,” he says.

Another development that is going to change the face of hearing technology is stem cell research, says Naushad. “If we are able to successfully cultivate stem cells inside the cochlea, which can connect with the nerve, we will not need any implant,” he says. But many experts believe it may be a long time before this becomes a reality. Along with these developments, Haneesh believes it is important that the cost of the implant comes down. “There has to be social emphasis and government programmes to help bring the cost down or help people get the treatment,” he says. “There also has to be more awareness, and parents should get their kids screened at a young age.”

The new iPhone update will turn Apple’s AirPods into pseudo-hearing aids

June 2018 Mic Network Inc

When Apple killed the headphone jack, it offered AirPods as a convenient, wireless alternative to traditional headphones. Now the wireless earbuds are getting a new feature. Apple’s next operating system, iOS 12, will allow audio detected by the iPhone’s microphone to be passed through to the AirPods in real time. Many dedicated devices already do this, like the Starkey Halo, the ReSound Cala and 68 others that support the Made for iPhone hearing aid standard.

Though, technically speaking, AirPods still won’t be hearing aids. Instead, they will join a class of gadgets known as personal sound amplification products. PSAPs don’t address all facets of hearing loss, but they are able to amplify the sounds around the user. For example, iPhone users can set their device on the table in front of them while at a bar or in a meeting and, with their AirPods in, hear more clearly.
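As a rough illustration of the pass-through idea behind PSAPs and Live Listen (this is not Apple’s implementation), the sketch below captures microphone audio, applies a simple gain and plays it back to the listener. It assumes the third-party Python library sounddevice; the gain value is arbitrary, and real hearing devices shape gain per frequency rather than amplifying everything equally.

```python
# Minimal microphone-to-earphone pass-through with gain (PSAP-style), using
# the sounddevice library. Illustrative only; gain and sample rate are guesses.
import numpy as np
import sounddevice as sd

GAIN = 4.0  # simple broadband amplification

def callback(indata, outdata, frames, time, status):
    if status:
        print(status)
    amplified = indata * GAIN
    outdata[:] = np.clip(amplified, -1.0, 1.0)  # avoid clipping distortion

# Full-duplex stream: microphone in, earphones out.
with sd.Stream(channels=1, samplerate=48000, callback=callback):
    print("Live pass-through running; press Enter to stop.")
    input()
```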


Apple first added support for hearing aid-like devices back in 2013, with the first iPhone-certified hearing device arriving in 2014. Because the AirPods are a first-party solution to hearing help, they offer tighter integration with the phone’s operating system, allowing for features like the new widget in Control Centre.

The Live Listen feature for AirPods in iOS 12 will receive a Control Centre widget.


PSAPs aren’t full-on hearing aids, but they have their advantages. In addition to being available over the counter, they connect to your phone, which allows for additional features like taking calls and listening to music. When it comes to helping those with hearing loss, many attest to their usefulness. David Grissam, a 911 dispatcher who has been legally deaf since the age of six, wouldn’t be able to do his job without his Cochlear Baha 5, a bone-conduction hearing device implanted in his skull, CNET reports. “I’m able to hear more than others in the room because of that direct link,” said Grissam about the implant.

There are some notable downsides to PSAPs, too. Neil DiSarno, an audiologist with the American Speech-Language-Hearing Association, told the Wall Street Journal that hearing aids are designed to treat the specific type of hearing loss a person is diagnosed with, while PSAPs simply increase volume. And as Consumer Reports points out, all ambient sounds are amplified, even that loud emergency vehicle going by. Apple does offer volume controls for hearing devices on its hearing aid support page, but whether PSAP users can reach the volume slider before the ambulance passes is another story.

While hearing aids continue to be a better solution, AirPods’ new feature may prove useful in a pinch. The Live Listen option helps justify the AirPods’ relatively high $159 price tag for people who are hard of hearing or those who just want to spy on folks in the other room.

Apple's amazing AirPods are taking a baby step towards their full potential

June 2018 Business Insider Australia


Apple will reportedly add Live Listen technology to its AirPods later this year.
Live Listen allows you to harness your iPhone and AirPods to improve what you hear in crowded situations.
It could represent the beginning of the long-awaited era of in-ear computing.

When iOS 12 comes to iPhones everywhere later this year, Apple’s very popular $US159 AirPods will get Live Listen, a nifty feature that makes it easier to hear conversations in noisy places. Live Listen has been around since 2014, but only on select Apple-certified hearing aids. Essentially, Live Listen turns your iPhone into a microphone: If you’re in a crowded bar, point your iPhone’s microphones at the person across the table from you, or even slide it over, and you’ll hear what they have to say in your hearing aid – or, soon, your Apple AirPods.

The AirPods, which have been hailed as one of Apple’s greatest inventions in recent memory, will expand the reach of Live Listen, and let far more people take advantage of a potentially very handy feature. That said, people with hearing loss should still get an actual medical device, and not rely on a pair of consumer earbuds like the AirPods.

The really exciting part is when you look at what this could mean for the future of the AirPods, and for Apple itself. When Apple first launched the AirPods, they were referred to as “Apple’s first ear computer.”

Apple’s Live Listen feature, as it exists today.

Indeed, the sky seemed to be the limit. Because AirPods give users one-touch access to the Siri virtual assistant, and because they link up with the iPhone’s tremendous galaxy of apps, pundits were hopeful that the AirPods could enable all kinds of superpowers beyond what any other headphones could do. Almost two years later, though, those superpowers have yet to manifest, and the AirPods are still best suited for music and maybe phone calls.

Still, we’ve gotten a glimpse of what the future could look like, thanks to some of Apple’s competitors. Doppler Labs, a startup, released the Here One, a pair of earbuds that could do everything from boosting the bass at a concert to quieting the sounds of a crying baby. Google, for its part, recently launched the Pixel Buds, which feature real-time language translation. Those products may have been too far ahead of the curve: Doppler Labs went out of business in 2017, after its cool technologies couldn’t overcome the inherent challenges of the hardware market. The Google Pixel Buds received lukewarm reviews and haven’t become nearly as ubiquitous among gadgetheads as the Apple AirPods.

So it’s no wonder that Apple, which famously prefers being right to being first, has been slow to push nontraditional uses of the AirPods. The addition of Live Listen, though, suggests that Apple is still on track to bring so-called audible computing to the masses, even if it’s happening more slowly than some would like. Once it gets going, though, things are going to get wild. It’s not hard to imagine Apple’s App Store getting apps specifically for the AirPods – language translation is an obvious one, but what about putting Apple’s Shazam acquisition to work by automatically cataloguing every song you hear in a day? Or prank apps that make it sound like your boss has inhaled helium during your big weekly meeting?

So yes, Live Listen is one little feature, but it’s one that points to a bold new future for Apple, where your headphones actually help you do things you couldn’t before. You may just have to wait a little while for it to fully come to pass.

Teaching new parents to talk to their babies

June 2018 Albuquerque Journal

Dana Suskind of the University of Chicago is a paediatric cochlear implant surgeon who fits hearing-impaired babies and toddlers with implants. From the beginning of her surgical practice in 2005, Suskind encountered a frustration: While her paediatric patients from middle- and upper-income families rapidly caught up in language acquisition and speech, the children of low-income families did not. She set out to discover why. That is when she discovered the seminal work of psychologists Betty Hart and Todd Risley, who in the 1980s were the first to identify the language gap.

Their research followed 42 families in Kansas City, Kan., over three years, observing baby development from 9 months to nearly 4 years old. Based on characteristics like parental occupation, maternal education and income, they divided the families into three groups: high, middle and low socioeconomic status families. After recording and analysing everything “done by the children, to them and around them” for an hour per family each month over the course of three years, Hart and Risley found remarkable similarities in parenting approaches and goals. Parents all “socialised their children to a common cultural standard,” and the kids all learned to talk.

But the difference in the language they heard – the quality and quantity of words – was stunning. On average, in the course of one hour, the highest socioeconomic status children heard 2,000 words; the children of low-income families heard only 600. The highest-income parents responded to their kids an average of 250 times an hour; the lowest-income parents about 50 times.
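As a rough back-of-the-envelope check (assuming roughly 14 waking hours a day, a figure not given in the article), a difference of 1,400 words per hour compounds to about 1,400 × 14 × 365 × 4 ≈ 29 million words over a child’s first four years.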

The gap by 4 years old? Thirty million words. “But the most significant and most concerning difference? Verbal approval,” Suskind wrote in her book, “Thirty Million Words: Building a Child’s Brain.” “Children in the highest socioeconomic status heard about 40 expressions of verbal approval per hour. Children in welfare homes, about four.” Some scholars have questioned Hart and Risley’s findings, citing the small sample size and challenging the idea that altering language at home could help children overcome extreme social inequality.

But Suskind focused on a subtle point in her study: The essential factor that determined a child’s future learning trajectory wasn’t socioeconomic status. It was the quality – and positive nature – of the language spoken. Money didn’t matter; words did. “Children in homes in which there was a lot of parent talk, no matter the educational or economic status of that home, did better,” Suskind wrote. “It was as simple as that.”

In Chicago, the Thirty Million Words project today teaches new parents to “tune in, take turns and talk more” – the three T’s for paying attention to a child’s cues, taking conversational turns and talking more. Suskind envisions the program being implemented in prenatal care, at birthing hospitals, in paediatric clinics and home visits around the city.

In Pensacola, hospitals were an obvious place to start, since virtually every birth in the city occurs in one of the three facilities. Baptist Hospital, Sacred Heart Hospital and West Florida Hospital have been cutthroat competitors in nearly every medical specialty. But when the CEOs were individually approached, each agreed to collaborate.

Brain Bags are the brainchild of a Pensacola nonprofit called the Studer Community Institute, founded by health care guru Quint Studer, who made a fortune in hospital consulting before setting his sights on improving Pensacola. Each bag contains a binder with a bib and rattle reminding parents to “talk, talk, talk.” There is a picture book, “P is for Pelican,” and a workbook with developmental milestones for parents. The bags, free to parents, cost $25 apiece to produce. The $108,000 project is privately funded by a network of women donors and embraced by the business community; its outcomes will be tracked once babies born in the past year hit kindergarten. “In health care, the majority of money is spent on symptoms – the same thing in education,” said Studer, who has led the “Early Learning City” effort.

“In Florida, we tried to get some money for (age) 0 to 5, but it all went to ‘K’ and above because that is who has got all the lobbying power: the public school system and the universities,” he said. “But the reality is, if 85 percent of the brain is developed by age 3, that is where we need to be focused. What we’re really trying to do is treat the cause.”

Pensacola has given up waiting on the state and federal governments. Business leaders a year ago were persuaded by emerging brain science showing that about 85 percent of a child’s brain – including its 100 billion neurons – is hard-wired by the end of age 3. Language is what builds these brain connections and enhances a child’s capacity to learn; the most important component for building strong brains is parent talk.

Brain Bags emerged as one potential solution, as well as an effective fundraising tool for the philanthropic and private sectors. The Studer Community Institute acknowledges that it will take more than a bag and a delivery room conversation to change decades of inequality. It plans to reinforce the “tune in, take turns and talk more” message to parents wherever it will have the most impact. Reggie Dogan, who helped shape the institute’s mission, calls the outreach effort “a day-to-day struggle.” Parents, especially mothers in poverty, “may increase their reading today, but will they continue three years, five years, down the road?” he asked. “Or will this child fall by the wayside just like the parent did? That is what scares me.”

‘Awesomely different’

June 2018 School News Network

Local family visits classroom to teach students about personal differences

Ridgeview Elementary students’ hands shot into the air. “I have braces,” said one. “My grandma has to wear hearing aids,” said another. Others added: “He is very tall.” “She has glasses.” “I saw someone with lots of scars once.”

“Everyone is different in one way or another,” Kellie Hetler said. “Some differences are just more obvious than others.”

Her mission as a mother of “one who is awesomely different” is to help children find ways to explore differences in a positive way and to get involved with children they see as different.

Hetler and her husband, Joe, visited all first- and second-grade classrooms at Ridgeview with daughters Addy, a first-grader there, and Gabby, who was born in July 2015 with some physical differences. Gabby is profoundly deaf and has Duane syndrome, which causes irregular eye movements. She was also born with congenital defects in her arms, limiting her arm and hand movements.

Hetler’s goal is to make children’s early encounters with someone who looks different from them positive. “Then maybe when they see a child at the park or at the store with a limb difference or in a wheelchair it won’t be as awkward or scary for them,” said Hetler. “Maybe these kids will go up to these ‘different’ children and become their friends. Each child that we reach is one more chance that these different but awesome kids have a friend and one more chance that they will be included.”

Kellie Hetler explains to students how Gabby’s cochlear implant works; the Hetler family poses for a picture in the Ridgeview classroom.

Hetler showed students Gabby’s cochlear implant, a surgically implanted device, and explained how it helps Gabby hear. Students peppered her with questions and told stories about others they knew with hearing problems. Gabby was all smiles as she demonstrated how she manages tasks with limbs that do not function in the same way theirs do. “It isn’t hard for her; it is the way she picks things up,” said her mother. But the students found it difficult to open a package of fruit snacks using only the fingers Gabby is able to use. A few managed to get a snack successfully to their mouths, but others scrambled onto the floor to pick up snacks that scattered when the package was clumsily opened.

“There is no reason to feel sorry for her. It is the only way she has ever done it,” Hetler said.

Dad Joe Hetler sported a T-shirt that said “High-four” in honour of a family tradition that keeps Gabby included in the game. Keeping children with differences involved is why the family visits school classrooms. “Some kids may play a little differently, but they all like to play,” Kellie Hetler told students. “If you see a child that is different at the park, maybe you could ask them if they want to play with you.”

Having the Hetlers visit is part of an ongoing effort at Ridgeview, said special education teacher Mary Kuzawa, noting that last year Todd Pasick, “Hockey Todd,” an amputee who excelled at the game, visited the school. “Repeat exposure is important, as is allowing the students to ask questions.”

Many female preschool teachers suffer from hearing-related problems

June 2018 News-Medical.net

Seven out of ten female preschool teachers suffer from sound-induced auditory fatigue, one out of two has difficulty understanding speech and four out of ten become hypersensitive to sound. This is a considerably higher share than among women in general, and also higher than in occupational groups exposed to noise, according to research at Sahlgrenska Academy, Sweden. “We have an occupational group with a much higher risk for these symptoms, and if nothing is done about it, it’s really alarming. We have to lower sound levels and have a calmer preschool,” says Sofie Fredriksson, an audiologist with a doctorate from the Occupational and Environmental Medicine Department at Sahlgrenska Academy. She previously attracted attention with a study of hearing-related symptoms, such as tinnitus, among obstetric personnel exposed to the screams of women giving birth. In continued work on her dissertation, she has studied preschool teachers.

Of the preschool teachers surveyed (4,718 women), 71 percent experienced sound-induced auditory fatigue, making them unable to listen to the radio, for example, after a day at work. The corresponding share in the control group (4,122 women) was 32 percent. Almost half, 46 percent, had trouble understanding speech, compared with 26 percent of the controls. Thirty-nine percent said that at least once a week they experienced discomfort or physical pain in their ears from everyday sounds that are not necessarily loud at all. The corresponding share with hyperacusis in the control group was 18 percent.

Preschool teachers are exposed to voices and screams that often convey important information, communication-intensive noise that is difficult to screen out. Unlike a machine in an industrial environment, children have to be listened to, even if one's hearing takes a beating. "Preschool teachers have a much higher risk than those who work in environments with a similar noise rating. The symptoms can be triggered by the boisterous environment, and it's also difficult to use hearing protection," says Sofie Fredriksson.

Hearing loss and tinnitus were the second most common symptoms affecting preschool teachers, but in this case the differences with women in general were not as pronounced.

The solutions to the preschool teachers’ problems are complex, Sofie Fredriksson emphasises. It is not just about how large the groups of children are, but also about opportunities for good periods spent outdoors and much more. “Hearing protection devices are normally the main intervention if the sound level cannot be reduced in another way, and they may be necessary if you have a child who subjects your ears to crying for a whole day during their introductory period at preschool. But the design of the premises and room acoustics also have to be considered. In a large room with solid walls, it becomes noisy no matter how educational and strategic you are in your work,” she says.

Hearing Aids: Limitations and Opportunities

May 2018 The Hearing Journal 

As technology advances, hearing aids continue to improve. But in recent years, most improvements have been limited to aesthetics, comfort, or secondary functions (e.g., wireless connectivity). With respect to their primary function—improving speech perception—the performance of hearing aids has remained largely unchanged. While audibility may be restored, intelligibility is often not, particularly in noisy environments (Hearing Health Care for Adults. National Academies Press, 2016).

Why do hearing aids restore audibility but not intelligibility? To answer that question, we need to consider what aspects of auditory function audibility and intelligibility depend on. For a sound to be audible, it simply needs to elicit a large enough change in auditory nerve activity for the brain to notice; almost any change will do. But for a sound to be intelligible, it needs to elicit a very particular pattern of neural activity that the language centres of the brain can recognise.


The key problem is that hearing loss doesn’t just decrease the overall level of neural activity; it also profoundly distorts the patterns of activity such that the brain no longer recognises them. Hearing loss isn’t just a loss of amplification and compression; it also impairs many other important and complex aspects of auditory function. A good example is the creation of distortions: When a sound with two frequencies enters the ear, an additional sound is created by the cochlea itself at a third frequency that is a complex combination of the original two. These distortions are, of course, what we measure as distortion product otoacoustic emissions (DPOAEs), and their absence indicates impaired cochlear function.
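For readers who want the specifics: the distortion product most commonly measured clinically is the cubic difference tone, so for two input tones at frequencies f1 and f2 (with f1 < f2), the healthy cochlea generates energy at f_DP = 2·f1 − f2.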

But these distortions aren't only transmitted out of the cochlea into the ear canal. They also elicit neural activity that is sent to the brain. While a hearing aid may restore sensitivity to the two original frequencies by amplifying them, it does not create the distortions and, thus, does not elicit the neural activity that would have accompanied the distortions before hearing loss.

These distortions themselves may not be relevant when listening to broadband sounds like speech, but they are representative of the complex functionality that hearing aids fail to restore. Without this functionality, the neural activity patterns elicited by speech are very different from those that the brain has learned to expect. Because the brain does not recognise these new patterns, perception is impaired.

A useful analogy is to think of the ear and brain as two individuals having a conversation. The effect of hearing loss is not simply that the ear now speaks more softly to the brain, but rather that the ear now speaks an entirely new language that the brain does not understand. Hearing aids enable the ear to speak more loudly, but make no attempt to translate what the ear is saying into the brain's native language. In this sense, hearing aids are like tourists who hope that by shouting they will be able to overcome the fact that they are speaking the wrong language.

Why don't hearing aids correct for the more complex effects of hearing loss? In severe cases of extensive cochlear damage, it may be impossible. Even when hearing loss is only moderate, it is not yet clear how a hearing aid should transform incoming sounds to elicit the same neural activity patterns as the original sounds would have elicited before hearing loss.

But there is reason for optimism. In recent years, advances in machine learning have been used to transform many technologies, including medical devices. In general, machine learning is used to identify statistical dependencies in complex data. In the context of hearing aids, it could be used to develop new sound transformations based on comparisons of neural activity before and after hearing loss. But machine learning is not magic; to be effective, it needs large amounts of data. Fortunately, there have also been recent advances in experimental tools for recording neural activity. These new tools allow recordings from thousands of neurons at the same time and, thus, should be able to provide the required “big data.”
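As a sketch of how such a comparison might be set up (an assumption-laden illustration, not a method described in the article), one could train a sound transformation so that a model of the impaired ear’s neural response to the processed sound matches a model of the healthy ear’s response to the original sound. Both ear models, the network architecture and all parameters below are placeholders.

```python
# Illustrative sketch: learn a hearing-aid transformation that makes a simulated
# impaired ear's response to processed sound match a simulated healthy ear's
# response to the original sound. The "ear" functions are toy placeholders.
import torch
import torch.nn as nn

def healthy_ear(x):      # stand-in for a model fit to pre-hearing-loss recordings
    return torch.tanh(torch.cumsum(x, dim=-1) * 0.1)

def impaired_ear(x):     # stand-in for a model fit to post-hearing-loss recordings
    return torch.tanh(torch.cumsum(0.3 * x, dim=-1) * 0.1)

transform = nn.Sequential(          # candidate hearing-aid sound transformation
    nn.Conv1d(1, 16, kernel_size=33, padding=16),
    nn.ReLU(),
    nn.Conv1d(16, 1, kernel_size=33, padding=16),
)
opt = torch.optim.Adam(transform.parameters(), lr=1e-3)

for step in range(1000):
    sound = torch.randn(8, 1, 2048)              # stand-in for recorded speech snippets
    target = healthy_ear(sound)                  # neural pattern the brain "expects"
    actual = impaired_ear(transform(sound))      # pattern the damaged ear would produce
    loss = nn.functional.mse_loss(actual, target)
    opt.zero_grad()
    loss.backward()
    opt.step()
```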

The combined power of machine learning and large-scale electrophysiology provides an opportunity for an entirely new approach to hearing aid design. Instead of relying on simple sound transformations that are hand-designed by engineers, the next generation of hearing aids will have the potential to perform sound transformations that are far more complex and subtle. With luck, these new transformations will enable the design of hearing aids that can restore both audibility and intelligibility—at least for a subset of patients with mild-to-moderate hearing loss.

iHEAR Medical Launching OTC Hearing Solutions in Drugstores Nationwide

May 2018 Bristol Herald Courier

iHEAR Medical announced plans to launch advanced over-the-counter (OTC) hearing solutions in major drugstore chains and independent pharmacies across the United States. The company also announced the appointment of John Luna as its Chief Executive Officer to lead iHEAR's expansion into mass retail, offering affordable, high quality hearing products to millions of Americans currently denied access to effective hearing solutions.

The OTC Hearing Aid Act of 2017 was recently passed into law to improve the affordability and accessibility of hearing aids. About 86% of Americans with hearing loss, over 30 million people, currently go untreated, making hearing impairment one of the most common forms of disability in the United States. The consequences of untreated hearing loss include lower income and higher incidence of depression, social isolation and cognitive decline. Recent reports have highlighted how hearing aid use can be a life changing experience for many. iHEAR will launch the TReO, the first prescription-quality hearing amplifier for OTC markets, and the iHearTest kit, the first FDA-cleared home hearing screener, in 500 drugstores in June 2018. The launch will expand to over 1,300 stores by the end of 2018, including major drugstore chains and independent pharmacies served by leading wholesalers. The TReO will retail for $299, compared to $2,400 on average for comparable programmable hearing devices sold in traditional hearing aid centers. The iHearTest will retail for $69 and is eligible for FSA (Flexible Spending Account) reimbursement.

“I am excited to join iHEAR at this pivotal time in the hearing industry. Offering our advanced hearing solutions over-the-counter will break persisting barriers and improve the accessibility and affordability of high quality hearing solutions,” stated Luna. Luna brings over 25 years of leadership experience in the sales and marketing of innovative hearing health products, including executive and senior management roles at InSound Medical, Unitron, Bernafon, Sonic Innovations, ReSound and Phonak. Most recently, he founded Luna Family Hearing, a retail chain of 15 audiology and hearing aid centers offering premium hearing aid products in the Pacific Northwest.


Become a Member

Become a Cicada member
For only A$10 per year, you will receive a copy of Buzz magazine and can attend events.

Deafblindness

Here is a link to Deafblindness support and information. They are based in Western Australia and supported by Senses Australia.


Hear For You


Hear For You web site

Vision Statement: “For all young people who are deaf to reach their potential in life.”
