Using Computer Vision and Deep Learning to Aid the Deaf

EasyChair Preprint 6816, 5 pages. Date: October 9, 2021

Abstract

This paper presents a sign-language translator, built with computer vision and machine learning, that allows deaf people to convey their message to the general public. The majority of the world's population does not understand sign language, which makes everyday interaction harder for the deaf. Applying these technologies can bridge this gap and make life easier for the deaf and hearing alike. By detecting the gestures made by a person using sign language, the system translates them into a spoken language, making sign language translatable like any other language. Giving a voice to people who were not born with one is the primary goal of this technology. We propose a solution that requires no bulky equipment and no new modifications for the translation.

Keyphrases: Convolutional Neural Networks, Gesture Detection, OpenCV, Python, Sign language translator, computer vision, contour detection, deep learning, gesture recognition, machine learning, sign language