
🧠 From Code to Care: Building NeuroAI in the Act Phase

  • Writer: hejer ayedi
  • May 19, 2025
  • 2 min read

In the final stretch of the Challenge-Based Learning journey, our team at Esprit Engineering School turned ambition into application. The Act Phase of the NeuroAI project was all about transformation — transforming raw data into intelligent insights, concepts into working systems, and theoretical models into a full-scale real-time mental health platform. Here's how we did it.


🎯 Tackling Multiple Objectives

The Act Phase was split into multiple technical objectives, with each member of our team focusing on a distinct angle of emotion recognition and analysis:

  1. Facial Emotion Recognition (Real-time, via webcam)

  2. EEG + Keypress Emotion Classification

  3. Brain-to-Text Prediction using EEG

  4. ECG-based Stress Classification

  5. Speech Tone Emotion Detection

  6. Conversational AI Agent Integration

  7. Deployment via Microservices & Web Interface

Each task aimed to push the boundaries of how emotion can be interpreted from human data — whether it comes from brainwaves, voice tone, facial expression, or heart rate.


📷 Real-time Facial Emotion Recognition

Using the AffectNet dataset, we trained deep learning models (such as ResNet50 and DenseNet121) to classify facial emotions. We experimented with Keras and PyTorch pipelines, applied image augmentation, and evaluated results with confusion matrices and classification reports. For explainable AI (XAI), we used Grad-CAM, LIME, and Integrated Gradients to interpret which regions of the face influenced the model's predictions.
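The augmentation step can be sketched in plain NumPy (the function name and jitter ranges here are illustrative, not our exact training configuration):

```python
import numpy as np

def augment(img, rng):
    """Randomly flip and brightness-jitter one face crop (H, W, C) in [0, 1]."""
    if rng.random() < 0.5:
        img = img[:, ::-1, :]                                  # horizontal flip
    img = np.clip(img * rng.uniform(0.8, 1.2), 0.0, 1.0)       # brightness jitter
    return img

rng = np.random.default_rng(0)
batch = np.full((4, 224, 224, 3), 0.5)                         # toy batch of faces
out = np.stack([augment(im, rng) for im in batch])
```

In practice the same idea is applied on the fly inside the Keras/PyTorch data loaders so that each epoch sees slightly different crops.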


🧠 EEG + Keypress Emotion Detection

We tackled multi-modal physiological data, combining EEG signals with keypress interactions to classify 14 emotional states. We applied advanced preprocessing (denoising, normalization, SMOTE balancing), used a custom LSTM model, and validated performance using classification accuracy and LIME interpretability.
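The SMOTE balancing step interpolates new minority-class samples between a point and one of its nearest neighbours. A minimal sketch of that idea (our pipeline used a library implementation; this toy version is ours):

```python
import numpy as np

def smote_oversample(X_minority, n_new, k=3, seed=0):
    """Synthesize new minority samples by interpolating toward a random
    one of the k nearest neighbours (a minimal SMOTE sketch)."""
    rng = np.random.default_rng(seed)
    synth = []
    for _ in range(n_new):
        i = rng.integers(len(X_minority))
        x = X_minority[i]
        d = np.linalg.norm(X_minority - x, axis=1)
        nbrs = np.argsort(d)[1:k + 1]          # skip the point itself
        nb = X_minority[rng.choice(nbrs)]
        synth.append(x + rng.random() * (nb - x))  # random step along the segment
    return np.array(synth)

X_min = np.random.default_rng(1).normal(size=(10, 8))  # toy minority class
X_new = smote_oversample(X_min, n_new=20)
```

Balancing the 14 emotion classes this way keeps the LSTM from collapsing onto the most frequent states.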


💬 Brain-to-Text Prediction

Using spike power EEG data, we designed a sequence-to-sequence model to predict the intended text from neural activity. The architecture included LSTM encoders and decoders, paired with tokenized textual targets — an early step toward brain-computer interfaces for expressive communication.
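The data preparation for such a model pairs each trial's spike-power matrix with a tokenized target sentence. A simplified sketch (the vocabulary, marker characters, and helper names are illustrative):

```python
import numpy as np

# '<' = start-of-sequence, '>' = end-of-sequence (an illustrative convention)
VOCAB = {ch: i for i, ch in enumerate("<> abcdefghijklmnopqrstuvwxyz")}

def encode_target(text):
    """Map a target sentence to token ids framed by SOS/EOS markers."""
    return [VOCAB['<']] + [VOCAB[c] for c in text.lower()] + [VOCAB['>']]

def make_pair(spike_power, text):
    """Pair one trial's spike-power matrix (time x channels) with token ids."""
    return spike_power.astype(np.float32), np.array(encode_target(text))

X, y = make_pair(np.zeros((500, 128)), "hello world")   # toy trial: 500 steps, 128 channels
```

The LSTM encoder consumes `X` step by step; the decoder is trained to emit `y` one token at a time, conditioned on the encoder's final state.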


❤️ ECG-Based Stress Detection

Using the WESAD dataset, we built a Deep Neural Network to classify stress vs. non-stress states from ECG signals. Feature engineering included heart rate variability, RMSSD, and frequency-domain metrics. The model achieved strong results (F1 = 0.81, Accuracy = 92%) and demonstrated real-world potential for wearable stress monitoring.
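RMSSD, one of the HRV features mentioned above, is the root mean square of successive differences between RR intervals. It can be computed directly from an RR series:

```python
import numpy as np

def rmssd(rr_intervals_ms):
    """Root mean square of successive RR-interval differences (ms),
    a standard time-domain HRV feature."""
    diffs = np.diff(rr_intervals_ms)
    return float(np.sqrt(np.mean(diffs ** 2)))

rr = np.array([800.0, 810.0, 790.0, 805.0])   # toy RR series in ms
value = rmssd(rr)
```

Lower RMSSD generally indicates reduced parasympathetic activity, which is why it is informative for stress-vs-non-stress classification.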


🔊 Speech Tone Emotion Classification

With data from RAVDESS, CREMA-D, SAVEE, and TESS, we trained CNN-based models using MFCC, RMSE, and ZCR features. Data augmentation (noise, pitch shift, time-stretching) improved generalization. The final model achieved reliable classification with audio-only input and was integrated into the NeuroAI platform.
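Two of those features, RMS energy and zero-crossing rate, are simple per-frame statistics of the waveform. A NumPy sketch (frame and hop sizes here are placeholders, not our exact settings; MFCCs would come from a library such as librosa):

```python
import numpy as np

def frame_features(signal, frame_len=512, hop=256):
    """Per-frame RMS energy and zero-crossing rate for a mono waveform."""
    feats = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        f = signal[start:start + frame_len]
        rms = np.sqrt(np.mean(f ** 2))                     # frame energy
        zcr = np.mean(np.abs(np.diff(np.sign(f))) > 0)     # sign-change rate
        feats.append((rms, zcr))
    return np.array(feats)

t = np.linspace(0, 1, 4096, endpoint=False)
sig = np.sin(2 * np.pi * 5 * t)                            # 5 Hz toy tone
feats = frame_features(sig)
```

Stacking these frame-level features (alongside MFCCs) gives the 2D input the CNN classifies.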


🤖 The Empathetic Conversational Agent

We didn't stop at emotion recognition — we embedded emotional intelligence into a chatbot. By feeding the tone and facial emotion outputs into the agent, we let it adapt its responses to the user's affective state, creating an emotionally aware dialogue system ready for therapy or companionship scenarios.
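The conditioning idea can be sketched in a few lines. Everything here (the fusion rule, the template prefixes, the function names) is an illustrative simplification, not the project's actual agent code:

```python
# Illustrative empathy templates keyed by detected emotion.
EMPATHY_PREFIX = {
    "sad":     "I'm sorry you're feeling down. ",
    "angry":   "I hear your frustration. ",
    "happy":   "Glad to hear things are going well! ",
    "neutral": "",
}

def fuse_emotions(face_label, tone_label):
    """Naive fusion rule: trust the facial label unless it is neutral."""
    return face_label if face_label != "neutral" else tone_label

def respond(user_text, face_label, tone_label):
    """Prefix the reply with an empathy cue matching the fused emotion."""
    emotion = fuse_emotions(face_label, tone_label)
    return EMPATHY_PREFIX.get(emotion, "") + f"You said: {user_text}"

msg = respond("I failed my exam", face_label="neutral", tone_label="sad")
```

In the real agent the fused emotion conditions the language model's response rather than a fixed template, but the data flow is the same.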


🚀 Full-Stack Deployment

All models were packaged as Flask microservices, containerized with Docker, and served via a central gateway. The frontend, built with Next.js, features:

  • A real-time emotion dashboard

  • Separate pages per modality (EEG, ECG, etc.)

  • Patient session summaries

  • Downloadable PDF reports

  • A live chat interface with the emotion-aware agent
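At the heart of the gateway is a route map from modality to backing microservice. A minimal sketch of that routing logic (service names, ports, and paths are hypothetical, not our deployed configuration):

```python
# Hypothetical route map for the central gateway.
SERVICES = {
    "face":   "http://face-svc:5001",
    "eeg":    "http://eeg-svc:5002",
    "ecg":    "http://ecg-svc:5003",
    "speech": "http://speech-svc:5004",
    "agent":  "http://agent-svc:5005",
}

def resolve(path):
    """Map an incoming gateway path like '/api/ecg/predict' to the
    backing microservice URL, or None if no service matches."""
    parts = path.strip("/").split("/")
    if len(parts) >= 2 and parts[0] == "api" and parts[1] in SERVICES:
        return SERVICES[parts[1]] + "/" + "/".join(parts[2:])
    return None

url = resolve("/api/ecg/predict")
```

Each Flask microservice only needs to expose its own `/predict` endpoint; Docker networking and the gateway handle the rest, so models can be redeployed independently.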


📌 Final Takeaway

The Act Phase turned our CBL challenge into a comprehensive AI platform capable of recognizing, analyzing, and reacting to human emotion across multiple modalities. NeuroAI is not just a technical project; it's a vision of a future where AI listens, understands, and supports mental health professionals and patients alike.
