AMERICAN SIGN LANGUAGE RECOGNITION SYSTEM

ABSTRACT

The inability to speak is a significant disability. People with this disability rely on other modes of communication; among the many methods available, one of the most common is sign language. Developing a sign language application for deaf people is therefore valuable, as it allows them to communicate easily even with those who do not understand sign language. Our project takes a basic step toward bridging the communication gap between hearing and deaf people through sign language recognition. The main focus of this work is to create a vision-based system that identifies fingerspelled letters of American Sign Language (ASL). We chose a vision-based approach because it provides a simpler and more intuitive way for a human to communicate with a computer.
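The pipeline of such a vision-based system can be illustrated with a minimal sketch: preprocess a camera frame into a small normalized feature vector, then classify it against known letter examples. Everything below (the 8×8 downsampling, the nearest-centroid classifier, the synthetic frames) is an illustrative assumption, not the project's actual implementation.

```python
# Illustrative sketch only: the preprocessing and the nearest-centroid
# model are assumptions, not the method used in this project.
import numpy as np

def preprocess(frame: np.ndarray) -> np.ndarray:
    """Turn an RGB frame into a flat, normalized grayscale feature vector."""
    gray = frame.mean(axis=2)                     # naive RGB -> grayscale
    h, w = gray.shape
    side = min(h, w)                              # center-crop to a square
    y0, x0 = (h - side) // 2, (w - side) // 2
    crop = gray[y0:y0 + side, x0:x0 + side]
    step = side // 8                              # downsample to 8x8 by block averaging
    small = crop[:8 * step, :8 * step].reshape(8, step, 8, step).mean(axis=(1, 3))
    return (small / 255.0).ravel()                # 64-dimensional feature vector

class NearestCentroid:
    """Toy classifier: one mean feature vector per letter class."""
    def fit(self, X, y):
        self.labels_ = sorted(set(y))
        self.centroids_ = np.array(
            [np.mean([x for x, lab in zip(X, y) if lab == c], axis=0)
             for c in self.labels_])
        return self

    def predict(self, x):
        dists = np.linalg.norm(self.centroids_ - x, axis=1)
        return self.labels_[int(np.argmin(dists))]

# Usage with synthetic "frames"; a real system would read frames from a webcam
# and train on many labeled hand images per letter.
rng = np.random.default_rng(0)
dark = rng.integers(0, 60, size=(48, 64, 3)).astype(float)      # stand-in for letter "A"
light = rng.integers(200, 255, size=(48, 64, 3)).astype(float)  # stand-in for letter "B"
clf = NearestCentroid().fit([preprocess(dark), preprocess(light)], ["A", "B"])
print(clf.predict(preprocess(light)))  # → B
```

In practice the hand region would be segmented first and a stronger model (e.g., a CNN) would replace the centroid classifier, but the capture → preprocess → classify structure stays the same.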




PRESENTATION




DOWNLOADS


> Project Report

> Code

> GitHub



KEY REFERENCES

> Jesus Suarez, Robin R. Murphy, “Hand gesture recognition with depth images: A review”, 2012 IEEE RO-MAN: The 21st IEEE International Symposium on Robot and Human Interactive Communication

> Christian Vogler, Dimitris Metaxas, “Toward Scalability in ASL Recognition: Breaking Down Signs into Phonemes”, International Gesture Workshop, GW 1999: Gesture-Based Communication in Human-Computer Interaction

> Ying Wu, Thomas S. Huang, “Vision-Based Gesture Recognition: A Review”, International Gesture Workshop, GW 1999: Gesture-Based Communication in Human-Computer Interaction

> Christian Vogler, Dimitris Metaxas, “Handshapes and Movements: Multiple-Channel American Sign Language Recognition”, International Gesture Workshop, GW 2003: Gesture-Based Communication in Human-Computer Interaction

> Becky Sue Parton, "Sign Language Recognition and Translation: A Multidisciplined Approach From the Field of Artificial Intelligence", The Journal of Deaf Studies and Deaf Education, Volume 11, Issue 1, Winter 2006, Pages 94–101