National Science Foundation Case Study

Building AI That Listens

Speech Analytics for Inclusive Classroom Communication

Ariana Abramson, MSc
NSF Vital Prize Winner • Published November 2025
40% Higher Engagement • 15% Comprehension Gains • 8-Month Timeline
EXECUTIVE SUMMARY

During the pandemic, classroom communication broke down. Teachers couldn't gauge understanding through screens, while students—especially those from linguistically diverse backgrounds—were getting lost in translation.

The verbal cues that make learning work were disappearing into digital static.

The Solution

We developed speech analytics technology that analyzes classroom dialogue in real time, identifying verbal signals that indicate true comprehension and communication patterns that inspire learning.

Key Results

- 40% higher engagement: students more likely to ask follow-up questions
- 15% comprehension improvement: mathematical word problems
- 5 communication categories: Inclusion, Integrity, Courtesy, Translation, Charisma
- NSF Vital Prize: AI educational technology

The Problem: When Communication Breaks Down, Learning Stops

"Without dialogue, there is no communication, and without communication, there can be no true education."

Paulo Freire's words hit differently when you're watching a 5th-grade math class struggle through Zoom.

During the pandemic, I watched teachers try to gauge understanding through glitchy screens, while students—especially those from linguistically diverse backgrounds—were getting lost in translation. The verbal cues that make classroom communication work were disappearing into digital static.

Between 2023 and 2024, our research team at Columbia University tackled this challenge head-on. That work led to a question that changed my approach to AI: What if AI could help teachers and students communicate more effectively, rather than replacing human connection?

The Vision: AI as Communication Bridge

Traditional educational AI often focuses on replacing human judgment—automated grading, algorithmic assessment, one-size-fits-all feedback.

We flipped the script.

Our research focused on a deceptively simple question: Can we build AI that helps teachers understand when students actually "get it," and helps students decode what their teachers are really saying?

The answer led us to develop a speech analytics system that analyzes classroom dialogue in real time, identifying the verbal signals that indicate true comprehension and the communication patterns that inspire learning.

The Technical Challenge: Making AI Understand Context, Not Just Words

Building AI for classroom communication meant solving problems that don't show up in typical NLP applications.

1. Context-Aware Classification

We developed binary classification models using transformer-based architectures that could distinguish between "effective" and "ineffective" verbal signaling cues across five categories: Inclusion, Integrity, Courtesy, Translation, and Charisma.

Instead of just transcribing speech, we focused on turn-level context analysis: what was said, who said it, and how it functioned within the conversational flow. Our models processed 30-second dialogue windows to capture the full context of teacher-student exchanges.
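To make the turn-level windowing concrete, here is a minimal sketch of scoring one roughly 30-second dialogue window with an off-the-shelf transformer classifier. The checkpoint name, the speaker-tag format, and the label mapping are illustrative placeholders, not our actual models or data.

```python
# Minimal sketch: score a turn-level dialogue window with a transformer classifier.
# The base checkpoint is a placeholder; in practice a model fine-tuned on annotated
# classroom dialogue would be used, and label index 1 is assumed to mean "effective".
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

MODEL_NAME = "distilbert-base-uncased"  # placeholder checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

def classify_window(turns):
    """Score one ~30-second window of (speaker, utterance) pairs.

    Keeping the speaker tag in the text lets the model see who said what,
    not just a bare transcript.
    """
    text = " ".join(f"[{speaker}] {utterance}" for speaker, utterance in turns)
    inputs = tokenizer(text, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()  # P(effective signaling)

window = [
    ("teacher", "What I hear you saying is that we need a common denominator first."),
    ("student", "Yeah, so both fractions are out of twelve before we add them?"),
]
print(f"P(effective signaling) = {classify_window(window):.2f}")
```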

2. Dialect and Cultural Sensitivity

Traditional speech recognition often fails students who speak African American Vernacular English (AAVE) or come from multilingual households. Left unchecked, that failure becomes structural bias.

We integrated specialized benchmarks for AAVE assessment, dialect-aware evaluation metrics, and culturally responsive training data so the system wouldn't pathologize the way students naturally speak. The goal was simple: ensure the AI worked for these students, not against them.
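One practical way to hold a system to that goal is to break evaluation out by dialect subgroup rather than reporting a single aggregate score, so a model that quietly underperforms for AAVE speakers cannot hide behind its average. The sketch below shows the general pattern; the subgroup labels and record format are illustrative, not our evaluation harness.

```python
# Sketch of dialect-aware evaluation: report metrics per subgroup, not one aggregate.
from collections import defaultdict
from sklearn.metrics import accuracy_score, f1_score

def evaluate_by_dialect(examples):
    """`examples` is an iterable of dicts with 'dialect', 'label', 'prediction' keys."""
    groups = defaultdict(lambda: {"y_true": [], "y_pred": []})
    for ex in examples:
        groups[ex["dialect"]]["y_true"].append(ex["label"])
        groups[ex["dialect"]]["y_pred"].append(ex["prediction"])

    report = {}
    for dialect, ys in groups.items():
        report[dialect] = {
            "n": len(ys["y_true"]),
            "accuracy": accuracy_score(ys["y_true"], ys["y_pred"]),
            "f1": f1_score(ys["y_true"], ys["y_pred"]),
        }
    return report

# Tiny illustrative records; real evaluation data would come from annotated classroom audio.
examples = [
    {"dialect": "AAVE", "label": 1, "prediction": 1},
    {"dialect": "AAVE", "label": 1, "prediction": 0},
    {"dialect": "General American", "label": 0, "prediction": 0},
    {"dialect": "General American", "label": 1, "prediction": 1},
]
for dialect, stats in evaluate_by_dialect(examples).items():
    print(dialect, stats)
```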

3. Human-in-the-Loop Design

Rather than replacing teacher judgment, our system amplified it.

Teachers and students became annotators, helping the AI learn what good communication actually looks like in their specific context. This human-in-the-loop design was essential—not just for performance, but for trust and model reliability.
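A small but essential part of that workflow is checking whether annotators actually agree before their labels feed back into training. Here is a minimal sketch of an inter-annotator reliability check using Cohen's kappa; the two-annotator setup and the labels are illustrative.

```python
# Sketch of an inter-annotator reliability check over the same dialogue windows.
from sklearn.metrics import cohen_kappa_score

teacher_labels = ["effective", "ineffective", "effective", "effective", "ineffective"]
second_labels  = ["effective", "ineffective", "effective", "ineffective", "ineffective"]

kappa = cohen_kappa_score(teacher_labels, second_labels)
print(f"Cohen's kappa = {kappa:.2f}")  # values above ~0.6 are usually read as substantial agreement
```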

Implementation Timeline

Phase | Timeline | Deliverables | Status
1. Foundation | Month 1 | Data collection standards, evaluation guidelines | Complete
2. Annotation | Month 2 | Teacher and student annotation framework | Complete
3. Data Collection | Months 3-4 | Pilot classroom data, initial annotations | Complete
4. Validation | Month 5 | Interannotator reliability analysis | Complete
5. Model Training | Months 6-8 | LLM-based speech models, accuracy testing | Complete
NSF VITAL PRIZE AWARDED
Month 8: Successful deployment and recognition for educational AI innovation

What We Learned: The Power of Inclusive Communication

Our research revealed something profound: The way teachers communicate doesn't just affect what students learn—it affects whether they believe they can learn.

Through our pilot implementations across multiple 5th-grade classrooms, we discovered measurable patterns:

When teachers increased "translation" language—phrases like "What I hear you saying is..." or "Let me put that in different words"—students were 40% more likely to attempt follow-up questions and showed significantly higher engagement in problem-solving discussions.
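For intuition only, here is a deliberately simplified illustration of spotting "translation" cues in a transcript with fixed phrases. The phrase list is hypothetical; the actual system learned these cues from annotated dialogue rather than matching strings.

```python
# Toy illustration: count teacher turns containing translation-style rephrasing cues.
# The phrase list is hypothetical and far cruder than a learned classifier.
import re

TRANSLATION_CUES = [
    r"what i hear you saying is",
    r"let me put that in different words",
    r"in other words",
    r"so what you mean is",
]
pattern = re.compile("|".join(TRANSLATION_CUES), flags=re.IGNORECASE)

def count_translation_cues(transcript_lines):
    """Count lines that contain at least one translation-style cue."""
    return sum(1 for line in transcript_lines if pattern.search(line))

lesson = [
    "Teacher: What I hear you saying is that the remainder confused you.",
    "Student: Yeah, I didn't know where it goes.",
    "Teacher: Let me put that in different words: the remainder is what's left over.",
]
print(count_translation_cues(lesson))  # -> 2
```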

Student Outcomes

Outcome | Baseline | With System | Improvement
Class Participation | Measured by turn frequency | 40% increase in follow-up questions | +40%
Math Comprehension | Word-problem baseline | 15% improvement over baseline | +15%
Confidence Expression | Limited verbal reasoning | Greater articulation of the thinking process | Qualitative

Teacher Outcomes

Teachers in the pilot reported shifts in their own practice:

"I became more aware of my own communication patterns and how they affected student engagement."

"The system helped me recognize subtle signals of student understanding I was missing before."

"I developed more effective strategies for reaching my linguistically diverse learners."

The Bigger Picture: Responsible AI in High-Stakes Environments

This project taught me something crucial about AI development: The higher the stakes, the more intentional we have to be about ethics.

In classrooms, you're not just processing data—you're influencing a child's relationship with learning. Every algorithmic choice can either reinforce bias or break down barriers.

In practice, that meant three commitments:

- Bias mitigation at every level
- Transparent data practices
- Accessibility-first design

Why This Matters for AI's Future

This work convinced me that the most powerful AI applications won't be the ones that replace human capability—they'll be the ones that amplify human connection.

The real breakthrough wasn't just the technology. It was demonstrating that AI can be designed to strengthen one of the most fundamentally human activities: learning through dialogue.

As we build increasingly sophisticated AI systems, the lessons from this classroom work are only becoming more relevant.

What's Next

The research continues at Columbia University, where the team is expanding the framework to support more grade levels and subject areas.

For me, this project sparked a broader mission: helping organizations implement AI in ways that amplify human capability rather than replacing it.

The same principles that guided this classroom work now guide how I help startups and enterprises implement AI in complex, human-centered environments.

Ready to Build Responsible AI Systems?

Partner with Ariana to architect AI solutions that amplify human capability and create measurable positive impact in your organization.

45-minute strategy session • Human-centered AI approach • Implementation roadmap included