NeuroSpeak

AI Assistive Communication Device for Non-Verbal Users

The Mission

NeuroSpeak is a multi-modal communication device designed to empower non-verbal users. It integrates real-time eye-tracking using OpenCV with TensorFlow-powered predictive speech modeling, creating an intuitive interface for seamless communication and personal expression.

The Challenge

The primary challenge was building a system accurate and responsive enough to interpret diverse input patterns while keeping latency low enough for real-time communication, and doing so while integrating hardware sensors with AI models on resource-constrained IoT devices.

Technologies Used

TensorFlow
OpenCV
React Native
Firebase
IoT
Python
Eye-Tracking
Speech Synthesis

Key Features

👁️ Real-time Eye Tracking

OpenCV-powered eye-gaze detection for natural, intuitive control without additional input devices
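As a rough illustration of the control side of such a pipeline, the sketch below maps a normalized gaze point (as an OpenCV eye-region detector might report it) onto a grid of on-screen selection cells, with simple exponential smoothing to tame sensor jitter. The function names, grid size, and smoothing factor are illustrative assumptions, not taken from the NeuroSpeak codebase.

```python
# Hypothetical sketch: turn a normalized pupil position into a
# (col, row) selection cell on a communication board.

def gaze_to_cell(gaze_x: float, gaze_y: float,
                 cols: int = 4, rows: int = 3) -> tuple[int, int]:
    """Map a gaze point in [0, 1] x [0, 1] to a (col, row) grid cell."""
    # Clamp to the unit square so jitter at the edges stays on-screen.
    gaze_x = min(max(gaze_x, 0.0), 1.0)
    gaze_y = min(max(gaze_y, 0.0), 1.0)
    col = min(int(gaze_x * cols), cols - 1)
    row = min(int(gaze_y * rows), rows - 1)
    return col, row

def smooth(prev: float, new: float, alpha: float = 0.3) -> float:
    """Exponential moving average: higher alpha tracks faster, lower
    alpha filters jitter harder."""
    return alpha * new + (1 - alpha) * prev
```

In a real gaze pipeline the raw pupil coordinates would come from frame-by-frame detection, with calibration mapping them into the unit square before this step.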

🧠 Predictive Speech

TensorFlow models predict next words with 97% accuracy for faster communication
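To show the shape of the prediction interface, here is a deliberately simple stand-in: a bigram frequency model trained on a tiny corpus. The actual system uses a learned TensorFlow sequence model; this sketch (class name, corpus, and API all assumed for illustration) only demonstrates the "given the last word, suggest the next few" contract.

```python
# Hypothetical stand-in for the TensorFlow next-word model:
# a bigram frequency table over a toy corpus.
from collections import Counter, defaultdict

class BigramPredictor:
    def __init__(self) -> None:
        # Maps each word to a Counter of the words that follow it.
        self.table: dict[str, Counter] = defaultdict(Counter)

    def train(self, sentences: list[str]) -> None:
        for sentence in sentences:
            words = sentence.lower().split()
            for prev, nxt in zip(words, words[1:]):
                self.table[prev][nxt] += 1

    def predict(self, word: str, k: int = 3) -> list[str]:
        """Return up to k most frequent next words."""
        return [w for w, _ in self.table[word.lower()].most_common(k)]

model = BigramPredictor()
model.train(["i want water", "i want food", "i need help"])
```

Calling `model.predict("i")` then surfaces the most frequent continuations first, which is the behavior a prediction bar in an AAC interface relies on.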

📱 Mobile Companion App

React Native health monitoring app for caregiver integration and usage analytics

Results & Impact

  • 97% accuracy in predictive speech modeling and eye-gaze detection
  • Reduced communication latency to under 150ms for real-time interaction
  • Improved communication speed by 3x for non-verbal users compared to traditional AAC devices
  • Featured in national assistive technology exhibitions and research conferences
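A latency target like the one above is typically enforced by timing each pipeline stage against a fixed budget. The sketch below is a minimal, assumed harness (the `measure` helper and the stage placeholder are illustrative, not from the project) showing how a sub-150 ms end-to-end budget could be checked.

```python
# Hypothetical timing harness: run a pipeline stage and report its
# latency in milliseconds against a fixed end-to-end budget.
import time

BUDGET_MS = 150.0  # real-time target from the latency requirement

def measure(stage_fn, *args):
    """Run stage_fn(*args), returning (result, elapsed_ms)."""
    start = time.perf_counter()
    result = stage_fn(*args)
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    return result, elapsed_ms
```

Summing the per-stage timings across detection, prediction, and synthesis, and alerting when the total exceeds `BUDGET_MS`, keeps regressions visible as the models evolve.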