Users with impairments will soon be able to access Amazon Alexa using sign language.
With smart speakers becoming a go-to device in the everyday home, more people are using some form of voice-assistant technology, such as Amazon Alexa, Google Assistant, Apple's Siri or Microsoft's Cortana.
According to NHS Digital, 56% of digitally excluded people have a disability or long-term condition, and 27% of adults with a disability (3.3 million people) have never been online.
Dedicated developers are adapting existing technology to make it more accessible to people with impairments who were previously excluded by the original product's design.
It is important that the most vulnerable members of society are not held back from accessing advancements in technology.
Abhishek Singh has gone the extra mile by creating an American Sign Language (ASL) interface that detects hand signs using a laptop and its onboard webcam.
The end user can sign directly to the laptop and receive Amazon Alexa's responses back as text, in real time.
Currently this works only for American Sign Language, although with deep-learning technology it could be adapted for other sign languages, such as British Sign Language (BSL).
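To make the idea concrete, the pipeline described above can be sketched in a few lines of Python. Everything here is illustrative: the `Frame` type, `classify_sign` and `assistant_response` are stand-in stubs invented for this sketch, not part of Singh's actual system, which reportedly runs a trained deep-learning model on live webcam images.

```python
# Illustrative sketch of the sign-to-text assistant loop described above.
# The classifier and assistant back-end are stubs; a real system would run
# a trained model on webcam frames and relay queries to the assistant.

from dataclasses import dataclass


@dataclass
class Frame:
    """Stand-in for a webcam frame (a real pipeline would hold pixel data)."""
    label_hint: str  # placeholder used instead of image data in this sketch


def classify_sign(frame: Frame) -> str:
    """Stub classifier: a real system would run a deep-learning model here."""
    known_signs = {
        "weather": "what is the weather",
        "time": "what time is it",
    }
    return known_signs.get(frame.label_hint, "")


def assistant_response(query: str) -> str:
    """Stub for the assistant round trip (query in, spoken reply transcribed out)."""
    canned = {
        "what is the weather": "Today's forecast is sunny.",
        "what time is it": "It is ten o'clock.",
    }
    return canned.get(query, "Sorry, I didn't catch that.")


def sign_to_text(frame: Frame) -> str:
    """Full loop: a sign goes in, a text response comes back."""
    query = classify_sign(frame)
    return assistant_response(query)
```

For example, `sign_to_text(Frame("weather"))` returns the canned weather reply, while an unrecognised sign falls through to a polite error, mirroring how the real interface presents Alexa's answers as on-screen text.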
This opens doors and opportunities for many individuals who previously could not use smart speakers, largely because voice-only control made the technology inaccessible to them.
How can you access this technology?
The developer has indicated that he will release an open-source version of his work. We are looking to support individuals in accessing this technology in the near future.