AI Isn’t Cheating. Our Curriculum Is!
How Misaligned Assessment Designs Invite Shortcut Thinking and Undermine Deep Learning.
When I first began writing The AI Educator, it wasn’t because I wanted to add another voice to the hype or the panic. It was because I kept meeting teachers who were quietly terrified.
Not terrified of robots taking their jobs, but terrified of getting AI wrong.
Terrified of accidentally lowering learning standards.
Terrified of letting something slip through the cracks.
Terrified of being judged for even trying to use AI.
Underneath all that fear was a simple truth:
Educators want to do the right thing. They just don’t want to break anything in the process.
My goal with the book was to help relieve that fear. To give teachers a clear, calm, grounded starting point and show them that they can use AI confidently, ethically, and meaningfully without feeling overwhelmed or exposed. AI isn’t something to fear; it’s something to understand, to work with, and to shape with intention.
But reassurance doesn’t mean pretending the risks don’t exist.
Confidence with AI requires clarity about its pitfalls.
And when we’re designing curriculum and assessment, we need to stay alert to the places where AI can quietly undermine learning if we’re not paying attention.
That’s why a recent systematic review on the harms of generative AI in computing education caught my attention. Not because it contradicts the message of The AI Educator, but because it reinforces it. The paper, Beyond the Benefits: A Systematic Review of the Harms and Consequences of Generative AI in Computing Education (Bernstein et al., 2025), names some of the hidden, structural risks we’ve been sensing for a while now, and its findings align closely with the themes woven throughout the book: process over product, epistemic integrity, and the fragility of traditional assessment in an AI-rich world.
Below, I break down what the review found and why these insights matter so much for curriculum design going forward.
1. AI Is Making Learning Look Better Than It Actually Is
One of the boldest claims in the review is also the simplest:
Students can now produce work that looks competent without actually being competent.
This is the core of the new learning-loss problem. Not post-pandemic, but post-GenAI.
When a tool can generate polished code, tidy up conceptual explanations, or reorganise an essay into something coherent, it creates a convincing surface that masks the underlying gaps. It’s not cheating; it’s a quiet erosion of the learning process.
In The AI Educator, I talk about how education has always wrestled with fears surrounding new technologies, from calculators to the internet, but GenAI introduces something distinct. It allows students to bypass the entire cognitive journey of learning, not just parts of it.
“When assessment focuses primarily on the end product… GenAI can act as a shortcut that bypasses understanding entirely.” Chapter 5, The AI Educator
The work looks fine. The learning doesn’t.
That’s the danger.
2. Cognitive Offloading Is Turning Into Cognitive Dependency
Every tool encourages some level of cognitive offloading. Writing did. Google did.
But GenAI accelerates it in ways that are pedagogically disruptive.
Students in the review reported relying heavily on AI for:
debugging
problem-solving
generating ideas
structuring arguments
clarifying concepts
And the more they relied on AI, the more their confidence dropped.
The danger isn’t that students get help. It’s that they stop trying before they start.
“What was once an opportunity to wrestle with complex ideas becomes a button-click exercise in outsourcing thought.” Chapter 6, The AI Educator
3. Students Are Losing Motivation, Not Just Skills
This is where the review gets especially interesting.
Students weren’t just losing knowledge. They were losing:
agency
ownership
pride
clarity about what they actually understood
AI didn’t make learning easier. It made learning feel less theirs.
If students feel disconnected from their own thinking, then no amount of polished output will matter.
“AI may generate content, but it cannot replace the human work of interpretation, empathy, and relational care.” Chapter 8, The AI Educator
4. Hallucinations Become Misconceptions That Stick
One of the clearest risks in the paper is the problem of “false fluency.”
Generative AI often produces outputs that:
sound right
look right
feel right
…but are structurally or conceptually wrong.
And novices, who are the heaviest users, are the least able to spot those errors.
AI doesn’t just hide gaps.
It creates new ones.
The risk is not exposure to bad information; it’s the illusion of accuracy.
“AI outputs often sound good but are structurally incorrect or conceptually flawed, embedding new misconceptions rather than resolving existing ones.” Chapter 6, The AI Educator
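To make “false fluency” concrete in the computing context the review studies, here is a small, hypothetical Python sketch (my own illustration, not an example from the paper) of the kind of output a novice might accept at face value, alongside a corrected version:

```python
from statistics import median as reference_median

def median_naive(values):
    """Plausible AI-style output: clean, confident, and subtly wrong."""
    values.sort()                    # side effect: reorders the caller's list
    return values[len(values) // 2]  # picks one element; never averages

def median_fixed(values):
    """Corrected version: no side effects, handles even-length input."""
    ordered = sorted(values)         # work on a copy of the list
    mid = len(ordered) // 2
    if len(ordered) % 2 == 1:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

print(median_naive([3, 1, 2]))         # 2   -- agrees with the real median
print(median_naive([4, 1, 3, 2]))      # 3   -- wrong; the median is 2.5
print(median_fixed([4, 1, 3, 2]))      # 2.5 -- correct
print(reference_median([4, 1, 3, 2]))  # 2.5 -- stdlib reference
```

A novice who tests only odd-length lists would watch the naive version “work,” which is exactly how a hallucinated pattern hardens into a misconception.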
5. Traditional Assessment Is No Longer Fit for Purpose
The review reinforces something we’ve already been feeling: assessment cannot continue as usual.
Tasks that rely on:
take-home writing
unguided problem-solving
surface-level explanation
open-ended homework
…are now vulnerable by design.
The problem isn’t student misconduct.
It’s misalignment.
The task asks for something AI can do better than a beginner.
That’s a design flaw, not a moral one.
“Traditional assessment models… may inadvertently encourage shortcuts when students perceive assignments as mere hurdles rather than meaningful learning experiences.” Chapter 3, The AI Educator
“Assessment becomes performative rather than diagnostic.” Chapter 5, The AI Educator
6. The Equity Issue That’s Easy to Miss
Finally, the review points to a subtle but important pattern:
Students with strong self-regulation, critical thinking skills, or access to human support tend to use AI as a thinking partner, a point I made in my last blog post.
Students without those advantages use AI as a replacement.
And that widens gaps we are already struggling with.
Chapters 2 and 3 of The AI Educator both highlight this tension:
Without guardrails, AI shifts from tool to tutor, and not always in ways that preserve equity, agency, or epistemic integrity.
The more powerful the technology, the more powerful the disparity.
So What Do We Do With This?
Here’s the hopeful part. The review doesn’t argue that AI harms learning.
It argues that poorly aligned pedagogy harms learning in the presence of AI.
And that’s a crucial distinction.
Across the book, one message keeps resurfacing:
AI is not the threat.
Misaligned curriculum is the threat.
Product-focused assessment is the threat.
Lack of guardrails is the threat.
Uncritical adoption is the threat.
GenAI isn’t breaking education.
It’s revealing where it was already fragile.
And in that sense, it’s giving us an opportunity.
If we design for process over product…
If we make students show their thinking, not just their answers…
If we teach AI literacy rather than fear…
If we integrate guardrails instead of outsourcing judgment…
Then AI becomes a catalyst, not a shortcut.
A Practical, Forward-Facing Call to Action
If you’re an educator designing curriculum right now, here’s where I’d start:
1. Identify the thinking that AI cannot do.
That’s what your curriculum must foreground.
2. Shift assessments toward visible processes, not polished products.
Think logs, drafts, steps, reflections, decisions.
3. Normalise disclosure rather than detection.
Make AI use part of the assignment, not something to police.
4. Build AI literacy into the curriculum itself.
Students need to critique AI, not copy it.
5. Make human judgment, empathy, and dialogue central.
These are your competitive advantages, and AI amplifies them.
Closing Thought
When I set out to write The AI Educator, my intention was simple:
to help educators use AI without fear.
This new research doesn’t undermine that goal.
It strengthens it.
Because to use AI confidently, we need clarity.
To design with AI responsibly, we need honesty.
And to build curriculum for an AI-rich world, we need to guard the parts of learning that matter most.
Not because AI threatens them.
But because they have always been worth protecting.
Reference
Bernstein, S., Rahman, A., Sharifi, N., Terbish, A., & MacNeil, S. (2025). Beyond the Benefits: A Systematic Review of the Harms and Consequences of Generative AI in Computing Education. arXiv preprint arXiv:2510.04443.