VisuAssist is a web app that provides real-time environmental descriptions to support visually impaired users. Activated by voice command and using the phone's camera, it applies image recognition and machine learning to describe the user's surroundings. Users can say commands like “show me what you see” to hear a spoken description, with follow-up prompts for more detail. It is especially helpful in complex situations where traditional aids fall short, and it runs in both Android and iOS browsers for broad accessibility.
TA: Mehrdad Eshraghi Dehaghani
Advisor: Mehdi Morad
Team members: Yimo Ning, Caleb Chung, Sunkyo Yoon, Jiehao Fu
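
The description above amounts to a simple loop: listen for a voice command, grab a camera frame, send it to an image-captioning service, and speak the result back to the user. Below is a minimal browser-side sketch of that loop, assuming a hypothetical /api/describe captioning endpoint and the standard Web Speech and MediaDevices APIs; the actual VisuAssist implementation and model are not specified here.

```typescript
// Minimal sketch of the voice-to-description loop, not the project's actual code.
// Assumptions: a backend endpoint at /api/describe that accepts a JPEG frame and
// returns { description: string }, plus browser Web Speech APIs for recognition
// and synthesis (prefixed as webkitSpeechRecognition in some browsers).
const SpeechRecognitionCtor =
  (window as any).SpeechRecognition || (window as any).webkitSpeechRecognition;

async function startCamera(): Promise<HTMLVideoElement> {
  // Prefer the rear-facing camera on mobile devices.
  const stream = await navigator.mediaDevices.getUserMedia({
    video: { facingMode: "environment" },
  });
  const video = document.createElement("video");
  video.srcObject = stream;
  await video.play();
  return video;
}

function captureFrame(video: HTMLVideoElement): Promise<Blob> {
  // Draw the current video frame onto a canvas and encode it as JPEG.
  const canvas = document.createElement("canvas");
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  canvas.getContext("2d")!.drawImage(video, 0, 0);
  return new Promise((resolve, reject) =>
    canvas.toBlob((b) => (b ? resolve(b) : reject(new Error("capture failed"))), "image/jpeg")
  );
}

async function describeScene(frame: Blob): Promise<string> {
  // Hypothetical captioning endpoint; the real model/service is not specified here.
  const res = await fetch("/api/describe", { method: "POST", body: frame });
  const { description } = await res.json();
  return description;
}

function speak(text: string): void {
  // Read the description aloud with the browser's speech synthesis.
  window.speechSynthesis.speak(new SpeechSynthesisUtterance(text));
}

async function main() {
  const video = await startCamera();
  const recognition = new SpeechRecognitionCtor();
  recognition.continuous = true;

  recognition.onresult = async (event: any) => {
    const transcript =
      event.results[event.results.length - 1][0].transcript.toLowerCase();
    // Trigger on the "show me what you see" command; follow-up prompts for more
    // detail could be handled the same way with additional phrases.
    if (transcript.includes("show me what you see")) {
      const frame = await captureFrame(video);
      speak(await describeScene(frame));
    }
  };

  recognition.start();
}

main();
```

Keeping everything in the browser (camera capture, speech recognition, speech synthesis) and sending only a single frame per command keeps the interaction hands-free and works across Android and iOS browsers; the heavier image-recognition step is assumed to live behind the server endpoint.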