
Talk Without Talking: AI Neckband Turns Silent Words Into Your Voice; Here’s How It Works

A smart AI neckband reads silent speech from neck movements and turns it into your own voice. Photo Credit: POSTECH

Scientists at Pohang University of Science and Technology (POSTECH) in South Korea have developed a new wearable device that lets people speak without making a sound.

The soft silicone neckband detects subtle neck movements when a person silently mouths words. It then converts those movements into speech that sounds like the user’s own voice.

The idea behind the device is simple but powerful. When we speak, we do not only produce sound. Our neck muscles and skin also move in very specific ways. These movements are small, but they follow clear patterns for each word. Researchers describe them as a kind of ‘silent fingerprint’ of speech.


Earlier technologies tried to capture this hidden signal using methods such as electromyography (EMG), which measures muscle activity, and electroencephalography (EEG), which records brain activity. However, these systems often required bulky machines and sticky electrodes. They were uncomfortable and usually worked only in controlled lab settings.

The POSTECH team built a lightweight neckband using soft silicone, a tiny camera, and motion sensors. This design makes the device more comfortable and easier to wear in daily life. It also avoids the limitations of older systems.

At the center of the device is a Multiaxial Strain Mapping Sensor. This sensor tracks how the skin stretches and moves when a person forms words. It measures not only how much the skin moves but also in which direction, giving the system a much richer picture of speech patterns.

To make the readings accurate, the researchers printed small reference markers directly onto the silicone surface. A miniature camera tracks these markers in real time. This allows the device to measure even the smallest changes in skin movement with precision.


The team also solved another common problem. Wearable devices often shift slightly when you put them on. This can affect performance. To fix this, the researchers created an algorithm that adjusts for small changes in position. This ensures the device works reliably even if it is not worn in exactly the same way each time.
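The paper's actual compensation algorithm is not described in the article. One common approach to this kind of position drift, offered purely as an illustrative sketch with made-up toy data, is to express the tracked markers relative to their own centroid, so a rigid shift of the whole band cancels out:

```python
import numpy as np

def centroid_normalize(frame):
    """Subtract the markers' centroid, so a rigid shift of the whole band
    (worn slightly higher or lower) does not change relative marker positions."""
    return frame - frame.mean(axis=0)

# Toy data: 3 markers in 2D, and the same pose with the band shifted by (0.5, -0.2).
pose = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
shifted = pose + np.array([0.5, -0.2])

# After normalization the two frames are identical:
print(np.allclose(centroid_normalize(pose), centroid_normalize(shifted)))  # True
```

This only removes uniform translation; the researchers' algorithm presumably handles subtler placement differences as well.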

Once the neckband collects the movement data, an AI model processes it. The model has been trained to recognize specific patterns linked to words. In testing, the system focused on the NATO phonetic alphabet, which includes words like ‘Alpha,’ ‘Bravo,’ and ‘Charlie.’ These words are designed to be clear and easy to understand, even in difficult conditions.

Across 26 words, the system achieved an accuracy of 85.8 percent. This shows strong performance, especially for a wearable device. After recognizing a word, the system sends it wirelessly to a server. There, a text-to-speech model converts it into audio.
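The 85.8 percent figure is simply the fraction of trials where the predicted word matched the word the user silently mouthed. A minimal sketch with hypothetical example data (not the study's dataset):

```python
def word_accuracy(predictions, labels):
    """Fraction of trials where the predicted word matches the spoken word."""
    correct = sum(p == t for p, t in zip(predictions, labels))
    return correct / len(labels)

# Toy example over a few NATO-alphabet trials:
labels      = ["Alpha", "Bravo", "Charlie", "Delta", "Echo"]
predictions = ["Alpha", "Bravo", "Charlie", "Echo",  "Echo"]
print(word_accuracy(predictions, labels))  # 0.8
```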

What makes this system unique is that the voice sounds like the user. The AI model is trained using less than 10 minutes of the wearer’s recorded speech. It then recreates the user’s tone and speaking style with high similarity. The researchers say the generated voice closely matches real speech waveforms.


The device also performs well in noisy environments. In tests with white noise at around 90 decibels, similar to a construction site, the system maintained a strong signal. It reached a signal-to-noise ratio of up to 33.75 decibels. According to the team, this is better than many commercial EMG systems under similar conditions.
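The decibel figure follows the standard signal-to-noise definition, 10 · log10(signal power / noise power). A minimal sketch (the power values below are illustrative, not measurements from the device):

```python
import math

def snr_db(signal_power, noise_power):
    """Signal-to-noise ratio in decibels: 10 * log10(P_signal / P_noise)."""
    return 10 * math.log10(signal_power / noise_power)

# A signal roughly 2,371x stronger than the noise floor corresponds
# to about the 33.75 dB figure reported for the neckband.
print(round(snr_db(2371.0, 1.0), 2))  # 33.75
```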

Professor Sung-Min Park, who led the research, highlighted its potential impact. He said the technology can help people with speech disorders regain their ability to communicate. He also pointed to its use in noisy workplaces and silent communication scenarios.

Beyond healthcare, the applications are wide. The system can be useful in factories, emergency response situations, aviation, maritime work, and military operations. The team even tested it during a gas blowback rifle demonstration, where both noise and vibration were present.

Despite its promise, the technology still has limits. Right now, it only works with a fixed set of 26 words. It cannot yet handle free, natural conversations. Accuracy also drops when the user moves a lot, such as while walking or turning their head.

In some cases, performance fell to about 39.72 percent during heavy movement. The researchers are working to improve this. Future plans include expanding the vocabulary, testing with more users, and improving motion compensation.


This is not the first attempt at silent speech technology. Researchers at the University of Cambridge previously developed a similar choker-based system. Their version achieved higher accuracy and allowed more flexible speech input. They also expanded it to detect emotional states.

However, the POSTECH team stands out for focusing on voice personalization. By recreating the user’s own voice, their system adds a more human touch to digital communication. It brings silent speech closer to feeling natural and personal.

As development continues, this technology may change how people communicate in challenging environments. It also opens new possibilities for those who cannot speak. The idea of talking without sound is no longer science fiction; it is quickly becoming a reality.
