
Why 90% is Not Cool

I recently stumbled upon this talk by professional real-time captioner Mirabai Knight about why human captioning (still) matters:

I highly recommend watching the talk in its entirety. I found it super interesting and learned a lot. However, if you only have 5 minutes, I suggest starting at minute 10:38, which contains my personal highlight.


Just Use a Microphone Already

I recently stumbled upon this article which totally resonates with me: A Note From Your Colleagues With Hearing Loss: Just Use a Microphone Already

It is awkward enough to have to ask a speaker to use a microphone in front of the full audience. It is even more awkward if they refuse to do so. You’d be surprised how often “No, thanks, I am good” is the answer.

However, my lowlight so far was: ‘No, I don’t like to use the microphone, I don’t want to feel tied to the podium and prefer to walk around.’ It seems to be hard to weigh ‘convenience for me’ against ‘necessity for someone else’.

Google Live Transcribe

It’s not every day that I get to test an a11y app that later turns into a product. Google recently launched two apps to make life easier for people who are deaf or hard of hearing: Live Transcribe and Sound Amplifier.

Official announcement:
https://www.blog.google/outreach-initiatives/accessibility/making-audio-more-accessible-two-new-apps/

I am a fan of Live Transcribe in particular. The app listens to your calls or to your phone’s surroundings and generates captions automatically. It even works with several languages.
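If you are curious what automatic captioning looks like in the most basic sense, here is a tiny sketch using the off-the-shelf SpeechRecognition Python library. This is emphatically not how Live Transcribe works internally, and the audio file name below is made up.

```python
# Toy illustration of automatic captioning (NOT Live Transcribe's internals):
# transcribe a recording with the SpeechRecognition library.
# "meeting.wav" is a hypothetical file; other languages can be captioned by
# passing a different BCP-47 tag, e.g. "de-DE".
import speech_recognition as sr

recognizer = sr.Recognizer()

with sr.AudioFile("meeting.wav") as source:
    audio = recognizer.record(source)  # read the entire file into memory

try:
    caption = recognizer.recognize_google(audio, language="en-US")
    print(caption)
except sr.UnknownValueError:
    print("Could not understand the audio.")
```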

I had the honor of testing it early and I am happy with the result. I hope it does a good job for you too.

DeafIT 2018

At the end of last year, I attended the DeafIT conference in Munich. This is my report about this awesome conference.

The Conference

The DeafIT conference is targeted towards IT professionals who are deaf or hard of hearing. From what I heard during the conference, the event has two main goals. One is to enable deaf IT professionals to network with each other and with potential employers. The other is to make IT content and knowledge accessible to them (which is not necessarily the case at other IT conferences).

DeafIT started small, as a tiny conference in Munich back in 2013. In 2018 they celebrated their 5th anniversary, which included a nice gala dinner program in the evening at the rather fancy conference hotel.

The Audience and Speakers

While the conference is called DeafIT, the audience is mixed with respect to the level of hearing loss. Besides Deaf people, hearing and hard of hearing people are welcome as speakers and attendees. However, those latter two groups were in the minority at this event.

This led to the – for me – very interesting experience of being in a room with ~100 people where everyone is talking, yet you still barely hear a sound. I often feel like I am in the minority among hearing people, but this time I was in the minority in a different way.

My Motivation

I had heard about this conference a few years back already, but so far I had been rather hesitant to attend, mostly because I am not deaf (yet?) and as such not part of the deaf community. While there was no indication of this on the conference page, I was just not sure whether I would be welcome there (which, it turns out, was complete nonsense).

While I am hard of hearing, and profoundly so, I would not even call myself part of the hard of hearing community, mainly because I lost my hearing rather late (in my late 20s) and never actively sought out the community by attending meetups etc.

Nevertheless, I am somewhat active in the media with respect to hearing loss, one example being this very blog. Over the years, I have built up a network of people who are hard of hearing or professionals in that area. And they sent me links to the DeafIT conference a couple of times.

After my initial hesitation, and in particular when I learned that the 2018 conference would take place in Munich, I thought I would give it a try. And it was totally worth it.

The Setup

When entering the conference room, the first thing you notice is the highly sophisticated technical setup. The room is equipped to accommodate the needs of speakers and attendees who are hearing, hard of hearing, or deaf.

That means that for speakers presenting in German sign language, a translator sat in the first row and spoke what they read into a microphone in spoken German. Some signing speakers did not sign in German sign language but in their respective mother tongue, in this case Russian and Brazilian sign language. Here too, a translator was able to read the foreign sign language and simultaneously translated it into spoken German. Additionally, for the people in the audience who communicate in German sign language, a second translator was located on stage and simultaneously signed the content in German sign language.

When a presenter was presenting in spoken language, a sign language translator would accompany them on stage and simultaneously translate into German sign language as well.

Before I attended this event, it never occurred to me that in this setup, when you want to ask a question in sign language, you actually have to go on stage, because otherwise the audience would not be able to read you.

Additionally, a remote captioning service (provided by Verbavoice) was used during the talks. That means that somewhere else on the globe, somebody was listening in on the conference’s audio stream and typing the spoken words really, really fast. These live subtitles were projected onto the screen below the presentation’s slides.

Last but not least, there was an additional service for the hard of hearing. If you were wearing hearing aids with a T-coil, you could borrow receivers to connect your hearing aids directly to the audio stream. I could have used that, but the crystal clear audio signal from the speakers together with the live captions covered everything I needed.

I was really overwhelmed by the effort this small community makes to include everyone. While I was in the minority here, among people whose primary language is sign language, I felt a lot more welcome than I usually feel among the hearing. I wish other IT conferences were as willing to include people as DeafIT is.

The Program

The conference program consisted of a single track of presentations about various areas of IT. The keynote started at 8:45 and everyone was up and running at that time already. While I was still trying to wake up with a coffee, I wondered whether being an early bird is somehow related to hearing loss, or whether they just started so early because the agenda was so packed.

While the program covered a lot of general IT topics, ranging from web development to IT security, a large part of the talks were related to sign language or hearing loss. Some of the general talks were really interesting, but the ones related to hearing and signing were the most interesting to me, I guess mostly because I attend other IT conferences as well and those never cover these topics.

You can find the entire schedule here, the slides for download here, and the official report here.

My favorite talks

Gebärdenspracherkennung mit Deep Learning – Sign Language Recognition with Deep Learning (Slides)

This was a summary of a PhD thesis in which the author used machine learning to recognize (one-handed) signs and created a system that translates them simultaneously into a sign transcription language. He gave a very impressive demo in which he formed different gestures with one hand and the computer translated them immediately and correctly.
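For the technically curious, here is a rough sketch of what the core of such a recognition pipeline could look like. This is not the presenter’s system: I am assuming hand-landmark coordinates as input features, and the training data and sign labels below are pure placeholders. The point is merely that classifying each frame independently and quickly is what makes a simultaneous translation feasible.

```python
# Minimal sketch (not the presenter's system): classify static one-handed
# gestures from hand-landmark coordinates with a simple scikit-learn model.
# The landmark extraction step (e.g. from a camera frame) is assumed to exist
# elsewhere; random placeholder data stands in for real labelled frames.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

N_LANDMARKS = 21                     # e.g. 21 hand keypoints, (x, y, z) each
N_FEATURES = N_LANDMARKS * 3
SIGNS = ["A", "B", "C", "L", "Y"]    # hypothetical one-handed sign labels

# Placeholder training data: in a real system these vectors would come from a
# hand-tracking model applied to labelled video frames.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, N_FEATURES))
y = rng.choice(len(SIGNS), size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
clf.fit(X_train, y_train)

# At inference time, each new frame's landmark vector is classified
# immediately, which is what allows a "simultaneous" translation.
frame_features = rng.normal(size=(1, N_FEATURES))
print("Predicted sign:", SIGNS[clf.predict(frame_features)[0]])
```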

HAND TALK – Digitale Übersetzung mit Hugo / HAND TALK – Digital translation with Hugo (Slides)

Hugo is a cute little avatar and the face of an app that helps translate spoken or written language into sign language. It is developed in Brazil, primarily for Brazilian sign language. In Brazil, illiteracy is higher than in Europe, for example, in particular among the deaf, many of whom never learn a spoken language in their life.

Chancen und Risiken der Künstlichen Intelligenz in der Gehörlosengemeinschaft / Chances and risks of Artificial Intelligence in the Deaf Community (Slides)

This was not a presentation but a panel discussion with a couple of the conference’s speakers. The opportunities AI offers the Deaf community were rather easy to understand, like all the cool projects that use machine learning to translate between sign language, sign transcriptions, spoken language and written language. What I found more eye-opening were the risks. It had not occurred to me before that technology that relies on the ability to speak and on standard grammar is not necessarily accessible to the Deaf. This includes Siri and Alexa, which are trained on hearing people’s voices and grammar and hence fail when addressed by someone whose voice or grammar deviates too much from the hearing majority. The understandable fear of the community is that these technologies will spread to all areas of life and other channels might be phased out in favor of them. Imagine a doctor’s office where you can only make appointments by talking to a speech bot that does not understand you.

Summary

I really loved this conference. The people were awesome, the talks were interesting. I do not regret at all having had to get up at 6 am to get there on time. I learned a lot about the community and sign language, and some stuff about IT as well. I felt welcome and included, and despite the packed agenda, for once I was not totally exhausted after attending talks for eight hours straight. I wish more conferences were like that.

Take-away from 32c3

I attended 32c3 last year and watched the talk “Unpatchable“, which is about hacking medical devices. In this case it wasn’t hearing aids but pacemakers. Interestingly, the speakers raised questions similar to the ones I raised in my talk at 28c3.

The questions were, for example:

  • This device is part of my body, why do I not know what code is in it?
  • How can I trust that the device is not vulnerable from the outside?
  • Does a doctor have to tell me when he flashes the firmware or that the device is tracking my very personal data?

Admittedly, the consequences for patients with pacemakers are more severe than for patients with hearing aids or cochlear implants. However, I still found the talk worth watching; I hope you do too.

What happened since 28c3?

It has been nearly a year since 28c3, the Chaos Communication Congress where I gave my talk “Bionic Ears”. It’s been an interesting time since then, with lots of developments that I hadn’t anticipated when I handed in the proposal for the talk. I have been planning to write a “what happened since then” post for a while, and now, shortly before 29c3, here it is.


Hearing aid technology in consumer electronics?

The following article proposes to use hearing aid technology to enhance consumer electronics, specifically to tune out annoying noises from your environment.

I personally think he overestimates the current state of the art in hearing aid technology, and especially the quality of today’s signal processing algorithms, but I like the idea. I’d love to see the two markets merge in the future, since that would most probably result in dropping prices for hearing aids and awesome features for consumer headphones.

http://chrismaury.com/post/20294864914/its-time-to-level-up-headphone-tech
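To give a rough idea of the kind of signal processing involved in “tuning out annoying noises”, here is a toy sketch of basic spectral subtraction: estimate the magnitude spectrum of the background noise and subtract it from each frame of the signal. Real hearing aids and noise-cancelling headphones do considerably more than this, and all numbers and the toy input below are made up.

```python
# Toy sketch of spectral subtraction: estimate a noise spectrum from a
# noise-only stretch of audio and subtract it, frame by frame, from the
# rest of the signal. A crude illustration, not a production algorithm.
import numpy as np

def spectral_subtraction(signal, noise_sample, frame_len=512):
    """Attenuate stationary background noise in `signal`, frame by frame."""
    # Average magnitude spectrum of the noise-only sample
    noise_frames = noise_sample[: len(noise_sample) // frame_len * frame_len]
    noise_frames = noise_frames.reshape(-1, frame_len)
    noise_mag = np.abs(np.fft.rfft(noise_frames, axis=1)).mean(axis=0)

    out = np.zeros_like(signal)
    for start in range(0, len(signal) - frame_len + 1, frame_len):
        frame = signal[start:start + frame_len]
        spec = np.fft.rfft(frame)
        mag = np.maximum(np.abs(spec) - noise_mag, 0.0)   # subtract noise floor
        cleaned = mag * np.exp(1j * np.angle(spec))       # keep original phase
        out[start:start + frame_len] = np.fft.irfft(cleaned, n=frame_len)
    return out

# Toy usage: a 440 Hz tone buried in white noise, with a noise-only lead-in
# second that serves as the noise estimate.
sample_rate = 16000
t = np.arange(sample_rate) / sample_rate
noise = 0.3 * np.random.randn(2 * sample_rate)
tone = np.concatenate([np.zeros(sample_rate),
                       0.5 * np.sin(2 * np.pi * 440 * t)])
noisy = noise + tone
cleaned = spectral_subtraction(noisy, noise_sample=noisy[:sample_rate])
```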