Not Lost in Translation: AI Increases Sign Language Recognition Accuracy
Additional data can help differentiate subtle gestures, hand positions, facial expressions
The Complexity of Sign Languages
Sign languages have developed around the world to suit local communication styles, and each consists of thousands of signs. This makes them challenging to learn and understand. Using artificial intelligence to automatically translate signs into words, known as word-level sign language recognition, has now received a boost in accuracy through the work of an Osaka Metropolitan University-led research group.
It’s All about Accuracy
Previous research methods focused on capturing information about the signer’s general movements. Accuracy has suffered because a sign’s meaning can change with subtle differences in hand shape and in the relative position of the hands and the body.
Associate Professor Katsufumi Inoue and Associate Professor Masakazu Iwamura of the Graduate School of Informatics worked with colleagues, including at the Indian Institute of Technology Roorkee, to improve AI recognition accuracy. They added data on the signer’s hands and facial expressions, as well as skeletal information on the position of the hands relative to the body, to the information on the general movements of the signer’s upper body.
“We were able to improve the accuracy of word-level sign language recognition by 10-15% compared to conventional methods,” Professor Inoue said. “In addition, we expect that the method we have proposed can be applied to any sign language, hopefully leading to improved communication with speaking- and hearing-impaired people in various countries.”
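The approach can be pictured as a multi-stream network: one stream encodes the signer’s overall upper-body movement, others encode local regions such as the hands and face, and another encodes skeletal keypoints, with the streams fused before the sign word is classified. The PyTorch sketch below is a minimal, hypothetical illustration of that idea; the stream names, feature dimensions, GRU encoders, and concatenation-based fusion are assumptions made for clarity, not the architecture reported in the IEEE Access paper.

# Hypothetical sketch of a multi-stream word-level sign language recognizer.
# Stream names, dimensions, and the fusion strategy are illustrative assumptions,
# not the authors' exact architecture.
import torch
import torch.nn as nn

class StreamEncoder(nn.Module):
    """Encodes one input stream (e.g., upper-body frames, hand/face crops,
    or skeletal keypoints) into a fixed-size feature vector per clip."""
    def __init__(self, in_dim: int, hidden_dim: int = 256):
        super().__init__()
        self.proj = nn.Linear(in_dim, hidden_dim)
        # A GRU summarizes the sequence of per-frame features over time.
        self.temporal = nn.GRU(hidden_dim, hidden_dim, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, in_dim) per-frame features for this stream
        h = torch.relu(self.proj(x))
        _, last = self.temporal(h)
        return last.squeeze(0)  # (batch, hidden_dim)

class MultiStreamSignClassifier(nn.Module):
    """Fuses global-motion, local-region (hands/face), and skeletal streams,
    then predicts a sign word from a fixed vocabulary."""
    def __init__(self, dims: dict, num_words: int, hidden_dim: int = 256):
        super().__init__()
        self.encoders = nn.ModuleDict(
            {name: StreamEncoder(d, hidden_dim) for name, d in dims.items()}
        )
        self.classifier = nn.Linear(hidden_dim * len(dims), num_words)

    def forward(self, streams: dict) -> torch.Tensor:
        feats = [self.encoders[name](x) for name, x in streams.items()]
        return self.classifier(torch.cat(feats, dim=-1))  # word logits

# Toy usage: 8 clips, 32 frames each, with assumed per-frame feature sizes.
dims = {"upper_body": 512, "hands_face": 512, "skeleton": 99}  # e.g., 33 joints x 3
model = MultiStreamSignClassifier(dims, num_words=2000)
batch = {name: torch.randn(8, 32, d) for name, d in dims.items()}
logits = model(batch)  # (8, 2000) scores over the sign vocabulary

In a setup along these lines, the dedicated local-region and skeleton streams give the classifier an explicit view of the subtle hand-shape and hand-position cues that a single whole-body stream tends to blur together.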
About OMU
Established in Osaka as one of the largest public universities in Japan, Osaka Metropolitan University is committed to shaping the future of society through “Convergence of Knowledge” and the promotion of world-class research. For more research news, visit https://www.omu.ac.jp/en/
Original Publication
Journal: IEEE Access
Article Title: Word-Level Sign Language Recognition With Multi-Stream Neural Networks Focusing on Local Regions and Skeletal Information
Article Publication Date: 11 November 2024
DOI: 10.1109/ACCESS.2024.3494878
Media Contact
Yung-Hsiang Kao
Osaka Metropolitan University
Email ID: koho-ipro@ml.omu.ac.jp
Source: EurekAlert!