It’s not every day that I get to test an a11y app that later became a product. Google recently launched two apps to make life easier for people who are deaf or hard of hearing: Live Transcribe and Sound Amplifier.
At the end of last year, I attended the DeafIT conference in Munich. This is my report about this awesome conference.
The DeafIT conference is targeted towards IT professionals who are deaf or hard of hearing. From what I heard during the conference, the event has two main goals. One is to enable deaf IT professionals to network with each other and with potential employers. The other is to make IT content and knowledge accessible to them (which is not necessarily the case at other IT conferences).
DeafIT started small, as a tiny conference back in 2013 in Munich. In 2018 they celebrated their 5th anniversary, which included a nice gala dinner program in the evening at the rather fancy conference hotel.
The Audience and Speakers
While the conference is called DeafIT, the audience is mixed with respect to the level of hearing loss. Besides the Deaf, hearing and hard of hearing people are also welcome as speakers and attendees. However, those latter two groups were in the minority at this event.
This led to the – for me – very interesting experience of being in a room of ~100 people where everyone is talking, yet you still barely hear a sound. I often feel I am in the minority among hearing people, but this time I was in the minority in a different way.
I had heard about this conference a few years back already, but so far I had been rather hesitant to attend. Mostly because I am not deaf (yet?) and as such not part of the Deaf community. While there was no indication of this on the conference page, I was just not sure whether I would be welcome there (which, it turns out, was complete nonsense).
While I am hard of hearing, and profoundly so, I would not even call myself part of the hard of hearing community, mainly because I lost my hearing rather late (in my late 20s) and never actively sought out the community by attending meetups etc.
Nevertheless, I am somewhat active in the media with respect to hearing loss, one example being this very blog. Over the years, I have built up a network of people who are hard of hearing or professionals in that area. And they sent me links to the DeafIT conference a couple of times.
After my initial hesitation, and in particular when I learned that the 2018 conference would take place in Munich, I thought I would give it a try. And it was totally worth it.
When entering the conference room, the first thing you notice is the highly sophisticated technical setup. The room is equipped to accommodate the needs of speakers and attendees who are hearing, hard of hearing, or deaf.
That means for speakers presenting in German Sign Language, a translator sat in the first row, speaking what they read into a microphone in spoken German. Some signing speakers did not sign in German Sign Language but in their respective mother tongue, in this case Russian and Brazilian sign language. Here too, a translator read the foreign sign language and simultaneously translated it into spoken German. Additionally, for the people in the audience who communicate in German Sign Language, a second translator was located on stage and simultaneously signed the content in German Sign Language.
When a presenter was presenting in spoken language, a sign language translator would accompany them on stage and translate simultaneously into German Sign Language as well.
Before I attended this event, it had never occurred to me that in this setup, when you want to ask a question in sign language, you actually have to go on stage, because otherwise the audience would not be able to read your signing.
Additionally, a remote captioning service (provided by Verbavoice) was used during the talks. That means somewhere else on the globe, somebody was listening in on the conference’s audio stream and typing the spoken words really, really fast. These live subtitles were projected onto the screen below the presentation slides.
Last but not least, there was an additional service for the hard of hearing. If you were wearing hearing aids with a T-coil, you could borrow receivers to connect your hearing aids directly to the audio stream. I could have used that, but the crystal clear audio signal from the speakers together with the live captions covered everything I needed.
I was really overwhelmed by the effort this small community makes to include everyone. While I was in the minority here, among people whose primary language is sign language, I felt a lot more welcome than I usually feel among the hearing. I wish other IT conferences had the willingness to include people as much as DeafIT does.
The conference program consisted of a single track of presentations about various areas of IT. The keynote started at 8:45 and everyone was up and running at that time already. While I was still trying to wake up with a coffee, I wondered whether being an early bird is somehow related to hearing loss, or whether they just started so early because the agenda was so packed.
While the program covered a lot of general IT topics ranging from web development to IT security, a large part of the talks were related to sign language or hearing loss. Some of the general talks were really interesting, but the ones related to hearing and signing were the most interesting to me. I guess mostly because I attend other IT conferences as well, and those never cover such topics.
You can find the entire schedule here, the slides for download here, and the official report here.
My favorite talks
Gebärdenspracherkennung mit Deep Learning – Sign Language Recognition with Deep Learning (Slides)
This was a summary of a PhD thesis in which the author used machine learning to recognize (one-handed) signs and created a system that translates them simultaneously into a sign transcription language. He gave a very impressive demo in which he formed different gestures with one hand and the computer translated them immediately and correctly.
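For the curious, the basic idea behind such a recognizer can be sketched in a few lines: reduce a hand pose to a numeric feature vector (e.g. joint angles) and classify it against known signs. Everything below is invented for illustration; the actual thesis used deep learning on camera input, not this toy nearest-centroid approach.

```python
# Toy sketch of sign classification over hand-pose feature vectors.
# All feature values and sign labels here are made up for illustration.
import math

def centroid(vectors):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def classify(sample, centroids):
    """Return the sign label whose centroid is closest to the sample."""
    return min(centroids, key=lambda label: math.dist(sample, centroids[label]))

# Tiny made-up training set: two signs, three joint-angle features each.
training = {
    "sign_A": [[0.1, 0.9, 0.2], [0.2, 0.8, 0.1]],
    "sign_B": [[0.9, 0.1, 0.8], [0.8, 0.2, 0.9]],
}
centroids = {label: centroid(vs) for label, vs in training.items()}

print(classify([0.15, 0.85, 0.15], centroids))  # close to sign_A
```

A real system replaces the hand-crafted centroids with a trained neural network and streams predictions frame by frame, which is what made the live demo so impressive.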
HAND TALK – Digitale Übersetzung mit Hugo / HAND TALK – Digital Translation with Hugo (Slides)
Hugo is a cute little avatar and the face of an app that helps translate spoken or written language into sign language. It is developed in Brazil, primarily for Brazilian Sign Language. In Brazil, illiteracy is higher than in Europe, for example, particularly among the deaf, many of whom never learn a spoken language in their life.
Chancen und Risiken der Künstlichen Intelligenz in der Gehörlosengemeinschaft / Chances and risks of Artificial Intelligence in the Deaf Community (Slides)
This was not a presentation but a panel discussion with several of the conference’s speakers. The chances AI offers the Deaf community were rather easy to understand, like all the cool projects that use machine learning to translate between sign language, sign transcriptions, spoken language, and written language. What I found more eye-opening were the risks. Until then it had not occurred to me that technology which relies on the ability to speak and on standard grammar is not necessarily accessible to the Deaf. This includes Siri and Alexa, which are trained on hearing people’s voices and grammar and hence fail when addressed by someone whose voice or grammar deviates too much from the hearing majority. The understandable fear of the community is that these technologies will spread to all areas of life and that other channels might be deprecated in favor of them. Imagine a doctor’s office where you can only make appointments by talking to a speech bot that does not understand you.
I really loved this conference. The people were awesome and the talks were interesting. I do not regret at all having to get up at 6am to get there on time. I learned a lot about the community, about sign language, and some things about IT as well. I felt welcome and included, and despite the packed agenda, for once I was not totally exhausted after attending talks for eight hours straight. I wish more conferences were like that.
I attended the 32c3 last year and watched the talk “Unpatchable“, which is about hacking medical devices. In this case it wasn’t hearing aids but pacemakers. Interestingly, the speakers raised questions similar to the ones I raised in my talk at 28c3.
The questions being for example:
This device is part of my body, why do I not know what code is in it?
How can I trust that the device is not vulnerable from the outside?
Does a doctor have to tell me when they flash the firmware, or that the device is tracking my very personal data?
Admittedly, the consequences for patients wearing pacemakers are more severe than for patients wearing hearing aids or cochlear implants. However, I still found the talk worth watching; I hope you do too.
It has been nearly a year since 28c3, the Chaos Communication Congress where I gave my talk “Bionic Ears”. It has been an interesting time since then, with lots of developments that I hadn’t anticipated when I handed in the proposal for the talk. I have been planning to write a “what happened since then” post for a while, and now, shortly before 29c3, here it is.
I love to point to this article about a boy who was unhappy about having to wear hearing aids. His mother asked Marvel Comics whether there are any superheroes with hearing aids, and they sent her a copy. Things like that make me smile.
The following article proposes using hearing aid technology to enhance consumer electronics, specifically to tune out annoying noises from your environment.
I personally think the author overestimates the current state of the art in hearing aid technology, especially the quality of today’s signal processing algorithms, but I like the idea. I’d love to see the two markets merge in the future, since that would most probably result in dropping prices for hearing aids and awesome features for consumer headphones.
Most hearing aids are not waterproof. That leads to a lot of situations which are perfectly normal for hearing people but exclude hearing-impaired ones. For example: water sports, pool parties, sauna with friends, a trip to the beach, watching a movie with wet hair right after a shower, open air concerts in the rain, muddy festivals, listening to audiobooks or watching TV while lying in the bathtub. I could go on and on. Even simple sweat is a problem for many people, especially those who do a lot of sports.
There are a few hearing aids on the market that claim water resistance; I hope this will be standard and affordable soon.
When my hearing aids break, I am not able to go to work. I mean, I could go there, but I would not be able to communicate with my coworkers properly. I also have to spend time at the audiologist’s to hand in my hearing aids and get the spare pair (roughly) tuned. Wearing poorly tuned hearing aids causes me headaches, which also reduces my work performance. The audiologist visit takes time, but I have no idea whether I can officially call in sick for it. I wonder whether my employer can actually fire me if that happens too often. I have asked several audiologists this question, and none of them could give me a definitive answer.
The same applies when my hearing aids break while I am doing something that might be considered “risky” with respect to the hearing aids. For example, am I allowed to attend a martial arts class with my hearing aids? Can my insurance refuse to pay for the repair if they break during that class? What about when I accidentally take a shower with my hearing aids on? (It happens: when you wear them every day, you forget you are wearing them.) What about when I attend an open air concert, it starts raining, and I do not seek cover because I do not want to miss the awesome performance? Those situations might sound constructed, but they actually happen if you are not an old grandpa but a young person with an active life.
There are a lot of situations related to hearing aids where there is no legal certainty for the patient. Sick leave and repair costs are only two examples.
In order for hearing aids to improve a patient’s life, they have to be tuned correctly. The tuning is done over many appointments with an audiologist. The process takes months, and even then most patients are not entirely happy with the result.
From my own experience, I guess one reason for this is that the tuning never takes place in a realistic hearing setting, but in the audiologist’s soundproof booth. The only information the audiologist has is what I describe about that situation where I was not able to understand my friend at that party last week. In comparison, if you bring your car to a repair shop and describe in what situations it makes problems, they will certainly try to reproduce the situation in order to examine it and fix the problem. The only thing audiologists sometimes do is take you out onto the street for a minute or two.
So getting your hearing aids tuned is a frustrating and time-consuming experience. As a patient you are totally dependent on your audiologist and spend a lot of your valuable time in their office without being happy with the result afterwards. It is no surprise that quite a number of people have started tuning their hearing aids themselves.
The problem with self-tuning is that you need special hardware and software, and you need the knowledge. The knowledge is out there; although it might not be easy, it is possible to teach yourself the required audiology.
The hardware and software, on the other hand, are hard to get. Officially they are only sold to audiologists and doctors. There is no way to buy them on the free market, like on eBay, because the hardware is classified as a medical device and as such is not obtainable by patients. But for every market there is a black market, and when people are frustrated, they find a way. So there are quite a few people out there who tune their own hearing aids, because then they can do it wherever and whenever they want, and not just when their audiologist is willing to give them an appointment and to do the tuning outside their office. The situation, then, is that people are self-tuning, but because they have to do it unofficially, they get no support for it. Support means software updates, manuals, maintenance material for the hardware, training for the software, warranty for the tuning hardware and their hearing aids, etc.
What I want is for interested and skilled patients to be allowed to get a “hearing aid tuning license”. Similar to a driving license, I imagine you take some classes, maybe have to pass a test, and in the end you are allowed to tune your own hearing aids. Most hearing conditions are permanent, which means that as a young person you are facing several decades of wearing hearing aids. In that situation you might as well spend a few weeks learning how to tune your hearing aids yourself in order to be more independent.
Also, it would be great if the software complied with common standards, meaning it had open APIs that everyone can use to extend its functionality. The hardware should comply with open standards and be legally purchasable by patients. It would be even better if you did not need special hardware at all but could use consumer hardware. Why do you need a special device when hearing aids will be able to talk Bluetooth in the near future? Why can’t I just tune my hearing aids using my smartphone or tablet while sitting on the subway?
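To make that wish a bit more concrete, here is a tiny sketch of what a patient-facing tuning API might look like. Everything here is hypothetical: no such open standard exists today, and the class name, band layout, and safety limit are all invented for illustration. The point is simply that a tuning interface could be a small, well-defined data structure with built-in safety limits rather than a proprietary black box.

```python
# Hypothetical sketch of an open hearing-aid tuning API; no such
# standard exists today. Band layout, method names, and the safety
# cap below are all invented for illustration.
from dataclasses import dataclass, field

MAX_GAIN_DB = 40.0  # hypothetical safety limit enforced by the device

@dataclass
class HearingAidProfile:
    # Per-frequency-band gain in dB, e.g. {500: 10.0, 1000: 15.0}.
    gains: dict = field(default_factory=dict)

    def set_gain(self, band_hz: int, gain_db: float) -> None:
        """Set the gain for one band, clamped to the safety limit."""
        self.gains[band_hz] = max(0.0, min(gain_db, MAX_GAIN_DB))

profile = HearingAidProfile()
profile.set_gain(2000, 25.0)   # boost speech-relevant frequencies
profile.set_gain(4000, 55.0)   # request above the limit: clamped
print(profile.gains)           # {2000: 25.0, 4000: 40.0}
```

With a device-enforced cap like this, an open API would not have to mean unsafe self-experimentation: the hearing aid itself could refuse settings that might damage residual hearing, while still letting the patient adjust everything within safe bounds from a phone.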
I have heard rumors that some audiologists visit patients at home and adjust their hearing aids there. Of the four audiologists I have seen so far, none offered that.