4 November 2013
BEIJING: US software giant Microsoft has developed a new cost-effective sign language translator that converts signs into spoken and written language - and vice versa.
Created in collaboration with researchers in China, the prototype, called the Kinect Sign Language Translator, understands the gestures of sign language and converts them to spoken and written language, and can also work in the opposite direction.
The translator uses a computer and a Kinect camera that recognises signing gestures, then produces a spoken and written translation for people who can hear.
The system captures the conversation from both sides: the deaf person's signs are translated into written and spoken language in real time, while the hearing person's spoken words are converted into accurate, understandable signs performed by an on-screen avatar.
Originally developed for gaming, the Kinect's sensors read a user's body position and movements and, with the help of a computer, translate them into commands.
It has tremendous potential for understanding the complex gestures that make up sign language and for translating the signs into spoken or written words and sentences.
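The article does not describe how the system maps the sensor's joint data to words, so the following is only a minimal illustrative sketch of one common approach to gesture recognition: comparing an observed trajectory of a hand joint against stored templates with dynamic time warping. The vocabulary, trajectories, and function names are hypothetical and are not taken from the Microsoft/CAS system.

```python
# Illustrative sketch: matching a signed gesture against stored templates
# using dynamic time warping (DTW) over 3D hand-joint trajectories.
# Assumes skeleton data (e.g. from a depth sensor) has already been
# reduced to per-frame (x, y, z) positions of the signing hand; the
# gesture vocabulary and templates below are toy examples, not the
# actual translator's data or method.

from math import dist
from typing import Dict, List, Tuple

Frame = Tuple[float, float, float]   # (x, y, z) of one joint in one frame
Trajectory = List[Frame]             # one gesture = a sequence of frames


def dtw_distance(a: Trajectory, b: Trajectory) -> float:
    """Dynamic-time-warping distance between two joint trajectories."""
    n, m = len(a), len(b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = dist(a[i - 1], b[j - 1])              # frame-to-frame distance
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]


def recognise(sign: Trajectory, templates: Dict[str, Trajectory]) -> str:
    """Return the word whose template trajectory is closest to the input."""
    return min(templates, key=lambda word: dtw_distance(sign, templates[word]))


if __name__ == "__main__":
    # Hypothetical two-word vocabulary with toy trajectories.
    templates = {
        "hello": [(0.0, 0.0, 1.0), (0.1, 0.2, 1.0), (0.2, 0.4, 1.0)],
        "thanks": [(0.0, 0.0, 1.0), (0.0, -0.2, 0.9), (0.0, -0.4, 0.8)],
    }
    observed = [(0.0, 0.1, 1.0), (0.1, 0.3, 1.0), (0.2, 0.5, 1.0)]
    print(recognise(observed, templates))   # -> "hello"
```

A production sign language recogniser would track many joints, hand shape and facial expression, and would use statistical models trained on large signing corpora rather than nearest-template matching, but the sketch shows the basic idea of turning sensor trajectories into words.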
"We knew that information technology, especially computer technology, has grown up very fast. So from my point of view, I thought this is the right time to develop some technology to help (the deaf community)," said Professor Xilin Chen, deputy director of the Institute of Computing Technology at the Chinese Academy of Sciences.
Chen has spent much of the past decade studying sign language recognition, hoping to devise a way to enable signed communication between people with hearing loss and their hearing neighbours.
"That's the motivation," Chen said. The project was a collaboration between the Chinese Academy of Sciences, Beijing Union University, and Microsoft Research Asia.
Published by: The Times Of India