A new AI-powered platform developed by researchers in China aims to improve communication access for people with hearing impairments by translating written text or speech into sign language. The system, currently in development, uses lightweight artificial intelligence models capable of running on mobile devices and smart glasses, making the technology portable and adaptable to everyday environments.
The platform features virtual avatars that interpret text or speech input and render it as sign language in real time. Alternatively, it can display text transcriptions to facilitate direct communication between deaf and hearing individuals. The goal is to address communication barriers in areas such as education, healthcare, and the workplace.
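The article does not describe the system's internals, but pipelines of this kind are typically staged as text or transcribed speech, an intermediate sign-gloss representation, and an avatar animation step. The sketch below is purely illustrative: the gloss lexicon, function names, and avatar stub are assumptions for demonstration, not the researchers' actual components.

```python
# Hypothetical sketch of a text-to-sign pipeline like the one described above.
# Every name here is an illustrative assumption, not the actual design.

from dataclasses import dataclass


@dataclass
class SignGloss:
    """One sign-language gloss: an intermediate token an avatar can animate."""
    token: str


def text_to_glosses(text: str) -> list[SignGloss]:
    # Production systems use trained sequence models; a toy word-level
    # lookup stands in here to show the shape of the mapping.
    gloss_lexicon = {"hello": "HELLO", "doctor": "DOCTOR", "help": "HELP"}
    return [SignGloss(gloss_lexicon.get(w.lower(), w.upper()))
            for w in text.split()]


def render_with_avatar(glosses: list[SignGloss]) -> None:
    # Stand-in for driving a 3D avatar: an on-device model would map each
    # gloss to a pose sequence and stream it to the glasses' display.
    print(" ".join(g.token for g in glosses))


if __name__ == "__main__":
    render_with_avatar(text_to_glosses("Hello doctor help"))
```

The value of the intermediate gloss layer, in designs like this, is that the same representation can drive either the signing avatar or a plain text transcription, matching the two output modes the platform reportedly offers.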
Designed for practical use, the system can be integrated into wearable devices like smart glasses, allowing for on-the-go translation in live scenarios. Early discussions are underway with local authorities and innovation hubs in China to support pilot programs and future deployment. The technology is intended not only for consumer use, but also as a tool for reducing social inequality and improving access to services for people with disabilities.
According to Interesting Engineering, beyond sign language translation, the research team is developing other assistive technologies, including lip-reading systems that convert visual speech into text, and early-stage brain-computer interfaces (BCIs) capable of converting neural signals into written language. These technologies are being positioned as part of a broader framework for human-AI interaction, with future use cases extending to transportation, education, and digital health.
One contributing factor to the pace of this development is the availability of large-scale datasets in China, particularly in healthcare. Hospitals often provide medical data for research, accelerating AI training in areas such as emotion recognition, diagnostic imaging, and patient support.
In collaboration with robotics developers, the AI system is also being adapted for emotional interaction, using machine learning to read facial cues and support users with cognitive or developmental conditions. As accessibility regulations expand globally, such platforms could play a role in reshaping how assistive communication technologies are delivered and adopted across sectors.
The development was first reported by the South China Morning Post.