© 2019 Strange Loop
In the US alone, approximately 3% of the population (10 million people) are deaf or have moderate to profound hearing loss. That is three times as many people as wheelchair users, yet reasonable disability accommodations for the deaf or hearing impaired only require an ASL (American Sign Language) interpreter in certain circumstances, such as official political, legal, educational, law enforcement, and employment events and situations. The problem is that only a fraction of the functionally deaf (250-500 thousand) actually speak ASL (they are called "signers"), and where the law allows it, written notes are treated as a sufficient replacement for synchronous forms of communication, which deprives the hearing impaired of real-time engagement and of what is really said, verbatim. And this covers only circumstances where reasonable accommodation is required at all. What about meetups? Debates? Conferences?
Existing solutions focus strictly on closed caption services (when available), on video relay services (which are better suited to two-way communication), or on speech recognition, which can work very well but appears on a separate screen and so does not let the deaf person engage with the event as just another audience member.
How can machine learning methods, applied to lip reading, solve this problem?
A graduate student in data science, a computer scientist, a former webmaster for one of the largest poker sites in the world, the founder of a search engine startup focused on user behavior and decision-making, and someone passionate about using data and technology to improve people's lives.