
Tianjin University of Technology develops AI sign language translation system to make more people with hearing impairment “heard”


Alwihda Info | By peoplesdaily - July 18, 2022


By Li Jiading, People’s Daily

Members of the R&D team discuss development issues. (Photo courtesy of the Technical College for the Deaf, Tianjin University of Technology)
Tianjin University of Technology (TUT) has developed a real-time sign language translation system that works across comprehensive scenarios, and it is expected to be put into use soon.

To make more people with hearing impairment “heard” and bring more convenience to the world through AI, the system's R&D team spent nearly five years building a corpus containing over 300,000 sign language video clips.

Zhang Yibin is a class-of-2019 student majoring in network engineering from the TUT Technical College for the Deaf. He's also a member of the R&D team that developed the sign language translation system.

The team consists of nearly 60 people, and over half of them are students with hearing impairments like Zhang.

According to statistics, China has 27.8 million people with hearing impairment.

“Sign language is still their ‘mother tongue’ today,” said Yuan Tiantian, deputy dean of the Technical College for the Deaf, adding that although speech recognition applications are widely employed, they still cater mainly to hearing people.

“Apart from being understood, what people with hearing impairment really want is to be ‘heard,’” said Yuan.

Sign language is a visual language that has its own specific grammar and order. It combines gestures, facial expressions, and body movements.

Sign language recognition and speech recognition share one similarity: both rely on a large corpus to work. Today, the corpus for speech recognition is very mature, but that is not the case for sign language recognition, Yuan said.
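To make the corpus idea concrete, the sketch below models what a single entry in such a dataset might look like: a video clip paired with gloss annotations and a Chinese translation. The format, field names, and example sentence are illustrative assumptions; the article does not describe the team's actual data schema.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class SignCorpusEntry:
    """One hypothetical corpus entry: a sign language clip with annotations."""
    video_path: str        # path to the recorded sign language video
    glosses: List[str]     # sign-by-sign labels, in signing order
    chinese_text: str      # the Chinese sentence the signs express
    scenario: str          # e.g. education, law, dining, transport

# A toy entry. Note that the gloss order follows the signing itself,
# not standard Chinese word order.
entry = SignCorpusEntry(
    video_path="clips/000001.mp4",
    glosses=["你", "名字", "什么"],   # roughly: YOU NAME WHAT
    chinese_text="你叫什么名字？",     # "What is your name?"
    scenario="daily life",
)
print(entry.chinese_text)
```

The gap between the gloss order and the Chinese word order in this toy entry is exactly the mismatch the team's algorithm has to bridge, as Sun Yue explains below.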

Wang Jianyuan is a class-of-2018 student majoring in network engineering from the TUT Technical College for the Deaf and a founding member of the R&D team.

Wang’s job on the team is to enrich the sign language corpus, which consists of video materials. According to him, he and his teammates have collected over 300,000 videos, and the corpus covers roughly as much vocabulary as HSK Level 4, a test that assesses test takers’ ability to use everyday Chinese.

Sun Yue, who’s in her first year of postgraduate study at the School of Computer Science and Engineering, TUT, is responsible for algorithm building in the team.

“The word order of sign language and that of Chinese can differ considerably, even for the same sentence,” she explained, adding that she and her partners have gradually established an algorithm framework for sign language recognition.

“In plain language, we made a sign language textbook for the computer,” she noted. This “textbook” tells the computer how to translate sign language into Chinese.
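The article does not detail the framework, but a common pattern for this kind of video-to-text task is an encoder-decoder network: one module summarizes the sequence of video frames, and another generates Chinese tokens conditioned on that summary. The PyTorch sketch below is a minimal hypothetical illustration of that pattern, not the team's actual system; every layer choice, dimension, and name is an assumption.

```python
import torch
import torch.nn as nn

class SignTranslator(nn.Module):
    """Hypothetical sketch: encode per-frame video features with a GRU,
    then decode a sequence of Chinese tokens conditioned on the video."""

    def __init__(self, feat_dim=512, hidden=256, vocab_size=4000):
        super().__init__()
        # Encoder: compresses the frame sequence into a single state.
        self.encoder = nn.GRU(feat_dim, hidden, batch_first=True)
        # Decoder: emits one Chinese token at a time.
        self.embed = nn.Embedding(vocab_size, hidden)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, frame_feats, token_ids):
        # frame_feats: (batch, frames, feat_dim); token_ids: (batch, length)
        _, state = self.encoder(frame_feats)    # final encoder state
        emb = self.embed(token_ids)             # (batch, length, hidden)
        dec_out, _ = self.decoder(emb, state)   # condition on the video
        return self.out(dec_out)                # logits over the vocabulary

# Toy forward pass with random stand-in "video" features and token ids.
model = SignTranslator()
feats = torch.randn(2, 100, 512)           # 2 clips, 100 frames each
tokens = torch.randint(0, 4000, (2, 8))    # 2 target sentences, 8 tokens
logits = model(feats, tokens)
print(logits.shape)                        # torch.Size([2, 8, 4000])
```

In a real system, the random frame features would instead come from a visual backbone that also captures facial expressions and body movements, and translation would decode token by token rather than using teacher-forced targets as here.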

At present, the algorithm framework is largely able to translate sign language across comprehensive scenarios.

In 2019, the system was listed as a key project supported by the Ministry of Industry and Information Technology and received a grant from the government, which accelerated the pace of the R&D team.

In May last year, the team brought its research outcomes to the World Intelligence Congress held in Tianjin. “By that time, the system covered education, law, dining, transport and other scenarios. Under sufficient lighting, it could recognize 95 percent of signs,” Yuan said.

Furthermore, the system was employed to provide news reporting services during the Beijing 2022 Winter Olympics, presenting the charm of winter sports to people with hearing impairment.

Now the team is seeing more and more cooperation partners, which signals a bright future for the sign language recognition system.

“Our goal is to build a corpus that includes a million video materials and basically covers all common scenarios of daily life,” Yuan noted.
