FFS, don't give them the answers!!
Students are using AI all wrong and we are facilitating it.
One of the biggest misconceptions I see in education is the assumption that our students are “naturally skilled” with generative AI. They’re not.
What they are skilled at is Googling. And even that is questionable. For more than two decades there’s been a myth that millennials and younger generations are “digital natives” with an intuition for technology. They aren’t, but that’s a rant for another day.
Generative AI isn’t a search engine.
And treating it like one barely scratches the surface of its potential.
A Google-style query retrieves information.
An AI dialogue builds understanding.
When students interact with AI the same way they interact with Google, they miss the real power of these tools: the power to critique, challenge, refine, simulate perspectives, test assumptions, and stretch their thinking.
Let me show you exactly what I mean.
The Before/After: What Students Actually Ask vs. What They Should Ask
These examples illustrate the gap between search queries and learning dialogues, a gap educators can close with good scaffolding.
Example 1: Understanding a Concept
BEFORE (Google-style):
“Explain disruptive innovation.”
This produces a neat definition, but no thinking, no struggle, no conceptual development.
Why is struggle important to learning? Check out this post.
AFTER (dialogic, critical, Metis-style):
“Here’s what I think disruptive innovation means:
‘A new technology that replaces an old one by being cheaper or more convenient.’
Where is my explanation oversimplified? Ask me two questions that help me deepen it.”
Now the student is:
articulating their prior understanding
inviting challenge
stepping into a dialogue
refining their mental model
This is learning, not retrieval.
Example 2: Preparing for a Discussion or Workshop
BEFORE:
“What are the pros and cons of XR in higher education?”
Again, a list, not a learning moment.
AFTER:
“I’m preparing for a debate on whether XR is transformative in higher education.
Challenge my position that it’s transformative.
Give me a counterargument from the perspective of a university CFO.”
Now the AI becomes a sparring partner, not a list-maker.
The student practices argumentation, perspective-taking, and justification.
Example 3: Working With Ambiguity (instead of avoiding it)
BEFORE:
“Will quantum computing be disruptive?”
This frames AI as a prediction oracle.
AFTER:
“Give me two plausible interpretations of whether quantum computing is disruptive, and explain what assumptions each interpretation relies on.
Then ask me which interpretation I find more convincing and why.”
Students learn to:
recognise ambiguity
articulate criteria
evaluate competing views
confront uncertainty (instead of fleeing from it)
Example 4: Strengthening Reasoning (instead of bypassing it)
BEFORE:
“Is AI a sustaining innovation or a disruptive one?”
A fast question gets a fast answer, and fast answers make for shallow learning.
AFTER:
“Help me classify generative AI by asking me three diagnostic questions first.
Don’t offer your own definition until I’ve attempted one.
Then identify two assumptions in my reasoning.”
This mirrors the kind of structured Socratic inquiry that deep learning thrives on.
Students aren’t misusing AI out of laziness; they’re misusing it because no one has taught them the difference between searching with a machine and thinking with one.
This is where scaffolding becomes essential.
Why Scaffolding Responsible AI Use Is Non-Negotiable
Every semester, I see two kinds of students:
The ones who rush straight to polished AI-generated answers.
The ones who feel anxious and afraid of “using AI wrong.”
Neither group is learning deeply.
And without guidance, generative AI doesn’t just become a shortcut; it becomes a bypass around critical thinking, reflection, and productive struggle. These are the exact processes higher education is meant to cultivate.
Scaffolding AI use isn’t about restriction. It’s about helping students think better with these tools rather than think less.
A Light Touch of Metis: A Guide, Not a Crutch
In my own course, I introduce Metis, a fictional guide inspired by the Greek figure of strategic wisdom. Metis appears at the end of each topic to model how students can use AI to:
test arguments
explore counterpoints
identify blind spots
embrace uncertainty
refine reasoning
His prompts look like:
“Ask the AI to critique your explanation.”
“What assumptions are you, or the model, making here?”
“Find a counterexample that challenges your classification.”
Metis doesn’t replace thinking. He provokes it.
He’s a light reminder that AI is most powerful when used deliberately, reflectively, and dialogically.
Four Practical Scaffolds You Can Add to Any Course
These are the scaffolds that consistently work across disciplines, and they don’t require rewriting your entire curriculum.
1. Clear Permission Levels
Students use AI more responsibly when they know exactly what’s allowed.
A simple four-level model works:
AI-0: No AI
AI-1: AI for brainstorming or light editing only
AI-2: AI allowed, but students must verify and document
AI-3: AI-integrated tasks where prompting is part of the learning
Clarity reduces anxiety and prevents accidental misconduct.
2. Require “Process Evidence,” Not Just Outputs
If assessment only rewards polished products, AI will produce polished products.
But if you require:
drafts
screenshots
prompt logs
annotations
reflect-and-revise notes
…students must show their thinking, not hide behind a model’s fluency.
This is one of the strongest safeguards for learning integrity.
Check out my AI Disclosure Proforma
3. Teach Students to Challenge AI
Most students assume AI outputs are “correct enough.”
But higher education requires more than “correct enough.” It requires critical judgement.
Teach students to ask:
“Where might this be oversimplifying?”
“What assumptions shape this explanation?”
“Whose perspective is missing?”
“What uncertainties or disagreements exist here?”
These questions build what I call meta-AI literacy, the ability to interrogate AI as a partner, not a source of truth.
4. Encourage Structured Dialogue, Not One-Shot Prompts
The biggest shift educators can make is teaching students to stay in conversation with AI.
A good structure is:
1. Attempt your own answer
2. Ask AI to critique it
3. Defend or refine
4. Seek counterexamples
5. Summarise how your understanding changed
This moves students from consumption to co-construction.
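For colleagues who build their own course tools or chatbots, here is a minimal sketch of how this five-step structure could be wired into a guided chat loop. It assumes the OpenAI Python SDK; the model name, system prompt, and sample student turns are illustrative placeholders, not a prescription.

```python
# A minimal sketch of the five-step dialogue as a guided chat loop.
# Assumes the OpenAI Python SDK (pip install openai); the model name,
# system prompt, and sample turns are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM = (
    "You are a Socratic tutor. Critique the student's attempt, ask "
    "diagnostic questions, and offer counterexamples. Never supply a "
    "model answer before the student has attempted their own."
)

def dialogue_round(history: list[dict], student_turn: str) -> str:
    """Send the student's turn with the running history; return the tutor's reply."""
    history.append({"role": "user", "content": student_turn})
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "system", "content": SYSTEM}, *history],
    ).choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

history: list[dict] = []

# Steps 1-2: the student attempts an answer, then asks for critique.
print(dialogue_round(history, "My attempt: disruptive innovation means a cheaper "
                              "technology replacing an old one. Critique this."))

# Steps 3-4: the student defends or refines, then seeks a counterexample.
print(dialogue_round(history, "I'd refine it: it also serves overlooked markets first. "
                              "Give me a counterexample that tests my revision."))

# Step 5 stays with the student: summarise how your understanding changed.
```

The system prompt does the same job Metis does in my course: the model critiques and questions, but it doesn’t hand over answers before the student has tried.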
A Simple Example You Can Use Tomorrow
Task: Classify generative AI as sustaining, disruptive, or transformative.
Scaffolded version:
1. Student writes their classification first.
2. Student asks AI: “Where are the weaknesses in my explanation?”
3. Student challenges the model: “Provide a counterexample from a regulator’s perspective.”
4. Student revises their classification.
5. Student submits a short reflection describing what shifted in their understanding.
This takes a simple task and turns it into a thinking exercise.
Why All of This Matters
Our students are entering a world where generative AI will sit alongside them in every intellectual task they perform.
If they learn to rely on it uncritically, they risk weakening their judgement, their reasoning, and their intellectual independence.
But when we scaffold AI use intentionally, something powerful happens:
students think more deeply
their reasoning becomes more defensible
their confidence grows
and they learn to collaborate with AI rather than outsource their thinking to it
This is the future of responsible AI use in higher education.
Not avoidance.
Not overuse.
But deliberate, transparent, structured engagement.
Want More Guidance Like This?
I’m currently completing my book, The AI Educator, which explores scaffolding, assessment redesign, ethical AI literacy, and practical teaching strategies for the era of generative AI.
If you’d like to be notified when it’s released, subscribe here; I’d love to keep the conversation going.