From Data to Dialogue: Reclaiming Learning in the Age of AI
Part 1 — When AI Gets Learning Wrong
AI is rapidly reshaping how we teach, assess, and understand learning. Yet for all its power, it often misses the heart of education: the human processes that make learning meaningful. This three-part series explores how we can reclaim that human dimension by moving from data to dialogue: designing AI that deepens reflection, strengthens motivation, and nurtures authentic understanding.
Part 1: When AI Gets Learning Wrong
Part 2: The Generative Turn — From Prediction to Participation
Part 3: The Conversational Classroom — Building Dialogue with AI
Part 1
Artificial intelligence has become education’s favorite new assistant. It grades essays, generates quizzes, and tracks progress with a precision no spreadsheet could match. Yet beneath the surface, a question keeps nagging at me:
Does AI actually understand learning—or just measure it?
A recent publication by Hooshyar and colleagues (2025) suggests the latter. The authors argue that most AI-driven learner-modeling systems “ignore essential learning processes such as motivation, emotion, and (meta)cognition and their contextual nature.” In other words, AI can see behavior but not learning.
That insight explains why so many “intelligent” systems still misread what’s really happening in classrooms.
The Missing Middle: What AI Can’t See
Take Amira, a student who completes every algebra exercise on her laptop. The dashboard lights up with green check marks, signaling mastery. What it doesn’t show is the quiet anxiety that keeps her glued to procedural routines: she’s memorizing, not understanding. The AI’s analytics celebrate her accuracy, but they can’t sense the hesitation behind each click or the growing fear of making a mistake. To the system, she’s thriving. To her teacher, she’s surviving.
Then there’s Luca, whose screen time looks like disengagement. He hesitates, stares into space, doodles in his notes, and then types a single, brilliant explanation that connects two concepts no worksheet ever asked him to compare. His learning is slower, reflective, meandering. It’s the kind of thinking that needs space and silence.
For the algorithm, Amira is a success and Luca a problem. The data paints her as efficient and him as distracted. But for a teacher, the story is far more complex: one student is achieving without understanding, the other is struggling toward insight. What looks like productivity on a dashboard may in fact be a symptom of fear; what looks like idleness may be the visible form of deep thought.
AI can’t see any of that because its models treat learning as a linear path to mastery, not the messy, emotional, self-correcting journey that it really is.
The Cognitive Bias of Our Machines
This isn’t just a technical oversight; it’s a philosophical one. Learning is not a linear data stream but a recursive process shaped by feedback, emotion, and context (D’Mello & Graesser, 2013).
Yet AI systems are typically built on static correlations. They assume that if “practice leads to the correct answer,” learning has occurred. But research in educational psychology shows that mastery is inseparable from metacognition: the learner’s awareness and regulation of their own thinking.
When AI ignores that dimension, it risks rewarding surface performance and penalizing productive struggle. Students who learn reflectively may look inefficient to a machine trained on speed and accuracy.
A Richer Model of Learning
To fix this, we need AI that reflects the complex ecology of learning: its cognitive, emotional, motivational, and social layers.
Imagine an intelligent tutor that doesn’t simply mark an answer wrong but asks,
“You changed strategies here. What made you decide to try that?”
The system might detect hesitation or frustration in a student’s text responses and prompt encouragement or self-explanation instead of another hint. That’s not science fiction; it’s a design philosophy grounded in responsible, hybrid human–AI approaches (Hooshyar et al., 2025).
Hybrid systems blend data-driven modeling with expert knowledge from educators and psychologists, ensuring that algorithms are guided by real learning theory, not just pattern recognition.
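To make the idea concrete, here is a minimal sketch of what such a hybrid step might look like in code. Everything in it is hypothetical: the `Attempt` record, the thresholds, and the `tutor_response` rules are illustrative assumptions, not an implementation from Hooshyar et al. The point is only the architecture: raw behavioral signals (correctness, time, strategy shifts) are interpreted through expert-authored pedagogical rules rather than scored directly.

```python
# Hypothetical sketch of a hybrid tutor step: data-driven signals are
# interpreted through expert-authored rules, so hesitation can trigger a
# metacognitive prompt instead of another hint. Names and thresholds are
# illustrative assumptions, not from the cited paper.

from dataclasses import dataclass

@dataclass
class Attempt:
    correct: bool
    seconds: float          # time spent on the item
    strategy_changed: bool  # did the learner switch approaches mid-problem?

def tutor_response(a: Attempt, slow_threshold: float = 60.0) -> str:
    """Expert-authored rules sit on top of the raw behavioral signals."""
    if a.strategy_changed:
        # Surface the learner's own reasoning rather than judging the answer.
        return "You changed strategies here. What made you decide to try that?"
    if not a.correct and a.seconds > slow_threshold:
        # Slow and wrong may signal productive struggle, not failure.
        return "Take your time. Can you explain your thinking so far?"
    if a.correct and a.seconds < 5.0:
        # Fast and right may be procedural fluency without understanding.
        return "Nice. Why does that method work?"
    return "Good effort. What would you try next?"
```

Notice that the same behavioral data (a slow, incorrect attempt) that a purely statistical model might flag as failure is reframed here, by a rule an educator wrote, as an invitation to self-explain.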
What Educators Can Do Now
While we wait for better tools, teachers can begin reclaiming the conversation about AI and learning by asking three simple questions about any technology they use:
What kind of learning does this system see?
If it only measures performance, pair it with human reflection tasks that capture emotion and reasoning.
How does it handle motivation?
Does it encourage curiosity and persistence, or merely completion?
Does it support metacognition?
Are students prompted to explain their thinking, set goals, or evaluate their own progress?
When we start asking these questions, we move from data collection to dialogue creation.
Why It Matters
Education is not a data-processing problem; it’s a meaning-making process. As Hooshyar and colleagues warn, AI built without a human model of learning can reinforce inequities and misclassify students who don’t fit statistical norms.
Our task as educators is not to resist AI, but to teach it what learning truly looks like: messy, emotional, and profoundly human.
Reflection Prompt for Educators
Where in my teaching do I already capture learners’ motivation or metacognition, and how could AI tools help me amplify those insights instead of replacing them?
Next up: The Generative Turn — From Prediction to Participation
In Part 2, we’ll explore how educators can design AI not as a grading assistant but as a creative collaborator. We’ll unpack the principles of Generativism, the idea that learning deepens through creation, and see how teachers are using AI to help students generate, not just consume, knowledge.
Because if Part 1 asks what AI gets wrong about learning, Part 2 will ask how we can teach it to get learning right.
References
D’Mello, S., & Graesser, A. (2013). AutoTutor and affective AutoTutor: Learning by talking with cognitively and emotionally intelligent computers that talk back. ACM Transactions on Interactive Intelligent Systems (TiiS), 2(4), 1–39.
Hooshyar, D., Šír, G., Yang, Y., Kikas, E., Hämäläinen, R., Kärkkäinen, T., Gašević, D., & Azevedo, R. (2025). Towards responsible AI for education: Hybrid human-AI to confront the elephant in the room. arXiv preprint arXiv:2504.16148.