AI-Equipped Eyeglasses Read Silent Speech

By Editorial Team
April 7, 2023


Summary: Researchers have developed a wearable interface called EchoSpeech, which recognizes silent speech by tracking lip and mouth movements through acoustic sensing and AI. The device requires minimal user training and recognizes up to 31 unvocalized commands. The system could give a voice to people who are unable to vocalize sound, or let users communicate silently with others.

Source: Cornell University

Cornell University researchers have developed a silent-speech recognition interface that uses acoustic sensing and artificial intelligence to continuously recognize up to 31 unvocalized commands based on lip and mouth movements.

The low-power, wearable interface – called EchoSpeech – requires just a few minutes of user training data before it will recognize commands and can be run on a smartphone.

Ruidong Zhang, a doctoral student in information science, is the lead author of “EchoSpeech: Continuous Silent Speech Recognition on Minimally-obtrusive Eyewear Powered by Acoustic Sensing,” which will be presented at the Association for Computing Machinery Conference on Human Factors in Computing Systems (CHI) this month in Hamburg, Germany.

“For people who cannot vocalize sound, this silent speech technology could be an excellent input for a voice synthesizer. It could give patients their voices back,” Zhang said of the technology’s potential use with further development.

In its present form, EchoSpeech could be used to communicate with others via smartphone in places where speech is inconvenient or inappropriate, like a noisy restaurant or quiet library. The silent speech interface can also be paired with a stylus and used with design software like CAD, all but eliminating the need for a keyboard and a mouse.

Image: A man wearing the EchoSpeech glasses. The image is in the public domain.

Outfitted with a pair of microphones and speakers smaller than pencil erasers, the EchoSpeech glasses become a wearable AI-powered sonar system, sending and receiving soundwaves across the face and sensing mouth movements. A deep learning algorithm then analyzes these echo profiles in real time, with about 95% accuracy.
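
To make the sensing idea concrete, here is a minimal Python sketch of how such an “echo profile” might be formed: the speaker emits a short near-ultrasonic chirp, and cross-correlating the microphone signal with that chirp reveals reflections arriving at different delays from the moving mouth and face. The chirp parameters, sample rate, and matched-filter approach below are illustrative assumptions, not details published by the researchers.

```python
# Illustrative sketch only: chirp parameters, sample rate, and the
# matched-filter (cross-correlation) step below are assumptions chosen to
# mimic a generic active-sonar pipeline, not the published EchoSpeech code.
import numpy as np

FS = 48_000          # assumed speaker/microphone sample rate (Hz)
CHIRP_MS = 12        # assumed duration of each transmitted chirp

def make_chirp(f0=16_000.0, f1=21_000.0):
    """Near-ultrasonic linear chirp that the glasses' small speaker would emit."""
    t = np.arange(int(FS * CHIRP_MS / 1000)) / FS
    phase = 2 * np.pi * (f0 * t + 0.5 * (f1 - f0) * t**2 / t[-1])
    return np.sin(phase)

def echo_profile(mic_frame, chirp):
    """Cross-correlate one frame of microphone audio with the transmitted chirp.

    Peaks in the result mark reflections arriving at different delays,
    i.e. echoes from different parts of the moving mouth and face.
    """
    return np.abs(np.correlate(mic_frame, chirp, mode="valid"))

# Toy demo: a 50 ms frame containing the chirp delayed by 2 ms and attenuated,
# as if it had bounced off the skin and returned to one microphone.
chirp = make_chirp()
frame = np.zeros(int(FS * 0.05))
delay = int(FS * 0.002)
frame[delay:delay + chirp.size] += 0.3 * chirp
profile = echo_profile(frame, chirp)
print(f"strongest echo at ~{1000 * profile.argmax() / FS:.2f} ms")
# In the full system, sequences of such profiles (one per chirp) would be fed
# to a deep learning classifier that maps them to one of the 31 silent commands.
```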

“We’re moving sonar onto the body,” said Cheng Zhang, assistant professor of information science and director of Cornell’s Smart Computer Interfaces for Future Interactions (SciFi) Lab.

“We’re very excited about this system,” he said, “because it really pushes the field forward on performance and privacy. It’s small, low-power and privacy-sensitive, which are all important features for deploying new, wearable technologies in the real world.”


Most technology in silent-speech recognition is limited to a select set of predetermined commands and requires the user to face or wear a camera, which is neither practical nor feasible, Cheng Zhang said. There also are major privacy concerns involving wearable cameras – for both the user and those with whom the user interacts, he said.

Acoustic-sensing technology like EchoSpeech removes the need for wearable video cameras. And because audio data is much smaller than image or video data, it requires less bandwidth to process and can be relayed to a smartphone via Bluetooth in real time, said François Guimbretière, professor in information science.

“And because the data is processed locally on your smartphone instead of uploaded to the cloud,” he said, “privacy-sensitive information never leaves your control.”
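
To see why the acoustic stream is so much lighter than video, a rough back-of-the-envelope comparison helps; the sample rate, channel count, and camera resolution below are generic assumptions rather than figures from the study.

```python
# Back-of-envelope sketch of the bandwidth point above. The sample rate,
# channel count, and camera resolution are generic assumptions, not figures
# reported by the Cornell team.
AUDIO_CHANNELS = 2           # two microphones on the glasses
AUDIO_RATE_HZ = 48_000       # assumed audio sample rate
AUDIO_BITS = 16              # bits per sample

VIDEO_W, VIDEO_H = 640, 480  # assumed small grayscale camera
VIDEO_FPS = 30
VIDEO_BITS_PER_PIXEL = 8

audio_kbps = AUDIO_CHANNELS * AUDIO_RATE_HZ * AUDIO_BITS / 1000
video_kbps = VIDEO_W * VIDEO_H * VIDEO_FPS * VIDEO_BITS_PER_PIXEL / 1000

print(f"raw audio stream: {audio_kbps:,.0f} kbit/s")   # ~1,536 kbit/s
print(f"raw video stream: {video_kbps:,.0f} kbit/s")   # ~73,728 kbit/s
print(f"video is ~{video_kbps / audio_kbps:.0f}x larger")
# Even uncompressed, the two-microphone stream is dozens of times smaller than
# a modest camera feed, which is why it can be sent to a phone over Bluetooth
# and classified locally in real time.
```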

Summary written with the assistance of ChatGPT AI technology.

About this AI research news

Author: Becka Bowyer
Source: Cornell University
Contact: Becka Bowyer – Cornell University
Image: The image is in the public domain

Original Research: The findings will be presented at the Association for Computing Machinery Conference on Human Factors in Computing Systems (CHI).


