SIGNSPEAK: BRIDGING COMMUNICATION THROUGH DEEP LEARNING
Sathvik Rao*, Shruthi D. V. (Assistant Professor)*, Sandeepa T. N., Anirudha R. and Keerthan V.
ABSTRACT
As the primary communication channel for individuals with hearing and speech impairments, sign language bridges the auditory gap through a nuanced tapestry of visual gestures and signs. However, seamless interaction between these individuals and the hearing population necessitates a shared understanding of the specific sign language dialect in use. Unfortunately, the widespread adoption of Indian Sign Language (ISL), characterized by its intricate blend of static and dynamic, uni- and bi-manual expressions, remains limited within the general populace. Further complicating this landscape are regional variations in ISL interpretations of even basic alphabetic symbols. This underscores the urgent need for technological interventions to bridge this persistent communication divide within the community. With this objective in mind, this study undertakes a comprehensive investigation of diverse approaches for ISL recognition. We examine various image and video preprocessing techniques, including noise attenuation, segmentation algorithms, and feature extraction methodologies that capture the essence of these visual expressions. Furthermore, we explore the efficacy of established machine and deep learning algorithms in accurately recognizing the dynamic vocabulary of ISL signs. This survey illuminates the knowledge gaps and challenges that persist within the domain of ISL recognition, paving the way for future advancements in this critical field.
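To make the surveyed pipeline concrete, the following is a minimal sketch of a typical static-sign preprocessing chain (noise attenuation, segmentation, and feature extraction) of the kind examined in this survey. It assumes OpenCV is available; the input file name, Otsu thresholding choice, and 64x64 feature size are illustrative placeholders, not details drawn from the paper.

```python
# Illustrative preprocessing sketch for a static ISL sign image (assumptions noted above).
import cv2
import numpy as np


def preprocess(frame: np.ndarray, size=(64, 64)) -> np.ndarray:
    """Noise attenuation, segmentation, and simple feature extraction."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)            # drop colour information
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)               # attenuate sensor noise
    _, mask = cv2.threshold(blurred, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # segment hand region
    hand = cv2.bitwise_and(gray, gray, mask=mask)              # keep only the segmented pixels
    resized = cv2.resize(hand, size)                           # fixed-size feature map
    return resized.astype(np.float32) / 255.0                  # normalised features for a classifier


if __name__ == "__main__":
    frame = cv2.imread("sign.jpg")                             # placeholder input image
    if frame is None:
        raise SystemExit("sign.jpg not found")
    features = preprocess(frame)
    print(features.shape)                                      # (64, 64), ready for a CNN or classical model
```

The resulting fixed-size feature map would then be fed to a machine or deep learning classifier, the stage whose published variants this survey reviews.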