Real Time Prediction of American Sign Language Using Convolutional Neural Networks
American Sign Language (ASL) was developed in the early 19th century at the American School for the Deaf in the United States. It is a natural language inspired by French Sign Language and is used by around half a million people worldwide, the majority in North America. Deaf culture views deafness as a difference in human experience rather than a disability, and ASL plays an important role in this experience. In this project, we used Convolutional Neural Networks to build a robust model that recognizes 29 ASL characters (26 letters and 3 special characters). We host the model locally behind a real-time video interface, which displays the predicted English characters on the screen like subtitles. We view the application as a one-way translator from ASL to English at the level of the alphabet. We describe the whole procedure in this paper and explore some useful applications that can be implemented.
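The subtitle step described above can be sketched as a mapping from the network's 29 class indices to on-screen characters. This is a minimal illustration, not the paper's implementation: the abstract does not name the 3 special characters, so the tokens "space", "del", and "nothing" (common in public ASL alphabet datasets) and the confidence threshold are assumptions.

```python
import string

# Hypothetical label set: 26 letters plus three special tokens.
# The paper mentions 3 special characters but does not name them;
# "space", "del", and "nothing" are assumed here.
LABELS = list(string.ascii_uppercase) + ["space", "del", "nothing"]

def index_to_subtitle(pred_index, probs=None, threshold=0.5):
    """Map a predicted class index to the character shown on screen.

    probs: optional per-class probabilities; if the top probability
    falls below `threshold`, no character is emitted (an assumed
    behavior, useful for suppressing uncertain frames).
    """
    if probs is not None and max(probs) < threshold:
        return ""
    label = LABELS[pred_index]
    if label == "space":
        return " "
    if label in ("del", "nothing"):
        return ""  # no printable character for these tokens
    return label

# Example: appending per-frame predictions to a running subtitle string.
subtitle = ""
for idx in [7, 4, 11, 11, 14, 26]:  # H, E, L, L, O, space
    subtitle += index_to_subtitle(idx)
print(subtitle)  # -> "HELLO "
```

In a live system, `pred_index` would come from running each video frame through the trained CNN; the loop above stands in for that per-frame prediction stream.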
More on this paper can be found here.