The Part of the AI Marking Debate We Keep Missing
The AI marking debate is often framed as a choice between slow but personal human feedback and fast but impersonal automation. The more useful model is different: AI drafts first, and the teacher reviews, adjusts and approves what the student actually sees.
Quick Summary
- The real question is not whether AI can replace teacher judgement, but which parts of marking are best done by AI and which by the teacher.
- Teacher-reviewed AI feedback is different from full automation because the teacher still decides what the student sees.
- Reviewing a draft is usually faster and less draining than creating feedback from scratch for every script.
- Faster feedback often matters more than people admit because students are more likely to act on it while the work is still fresh.
- The strongest AI marking workflows reduce repetitive drafting work without removing teacher oversight.
The debate is usually framed in the wrong way
If you spend any time reading about AI in education, you start to notice that the conversation about marking tends to fall into a very familiar pattern.
On one side is the teacher who marks everything by hand. Every comment is written from scratch. Every judgement is personal. The teacher knows the class, knows the context, knows which student has suddenly gone quiet, whose writing has improved, and who is capable of more than they showed on the page.
That kind of marking matters.
It is also exhausting.
For a lot of teachers, the reality is that careful handwritten or typed feedback comes with a cost. The pile grows. Turnaround slows. Work comes home. Students get their books or essays back long after the lesson has moved on. The feedback may be thoughtful, but it arrives late, and late feedback is often much less useful than we like to admit.
On the other side is the image of AI marking. Instant. Efficient. Consistent. An AI tool reads the response, applies the mark scheme, generates detailed comments, and never gets tired. No backlog. No Sunday-night slog. No variation between the first script and the fifteenth.
That matters too.
But it is also incomplete.
Because AI does not know the student in the way a teacher does. It does not know that this answer, while still weak in absolute terms, is the first time a particular pupil has structured a paragraph properly. It does not know that one student has rushed because things are difficult at home, or that another has slipped because their confidence has gone. It does not know what tone this student will actually respond to.
So the debate gets framed as a choice between two unsatisfying extremes.
Either the teacher does everything manually and preserves the personal element, but pays for it in time and energy.
Or the AI does everything quickly, but loses the human judgement that makes feedback meaningful.
I think that is the wrong frame entirely.
The real question is not whether AI can replace teachers
I do not think the most useful question is whether AI can mark work instead of a teacher.
For me, the better question is this:
What parts of the marking process are best done by AI, and what parts are best done by the teacher?
That is a much more practical question, and it leads to a much better workflow.
The model that makes sense to me is not AI replacing teacher judgement. It is AI producing a first draft, and the teacher reviewing, adjusting, and approving what actually goes back to the student.
That is a very different thing.
In practice, the AI does the first pass. It reads the question, applies the mark scheme, drafts a mark, identifies strengths and weaknesses, and produces feedback. In a good system, it does this consistently and in line with the assessment criteria. It can be especially useful for structured tasks like AI essay marking, where the same rubric needs to be applied repeatedly across a class set.
Then the teacher steps in.
Not to begin from a blank page, but to review the draft.
Sometimes the AI mark is right and the feedback is clear, so the teacher makes only minor edits or none at all. Sometimes the mark needs nudging. Sometimes the wording needs changing because the student needs a firmer message, or a gentler one, or something more specific than the AI has produced. Sometimes the teacher spots that the answer deserves more credit than the AI gave it. Sometimes they spot the opposite.
That is where the teacher adds the value that matters most.
The teacher is still the one exercising judgement. Still the one who knows the class. Still the one deciding what the student sees.
The AI has not replaced that judgement. It has reduced the amount of low-level drafting work needed to get there.
That changes the job in a useful way
What often gets missed in this debate is that reviewing a draft is not the same task as creating feedback from scratch.
That distinction matters.
Starting from nothing is cognitively expensive. You have to reread the answer, hold the mark scheme in your head, decide on a mark, work out what to say, phrase it clearly, and make it useful for the student. Then you do it again. And again. And again.
Reviewing a draft is different.
You are evaluating. Checking. Editing. Refining. Making professional judgements on top of a starting point.
That is still real work. But it is usually faster, and it is much less draining.
I think this is one of the most overlooked benefits of AI feedback for teachers. The gain is not simply that the machine is quicker. It is that the teacher's mental effort gets redirected towards the part of the process where their expertise matters most.
That is a much better use of teacher time than writing every sentence of feedback from scratch at the end of a long day.
Faster feedback is not a small win
There is another part of this that matters more than people sometimes realise.
When teachers are overloaded, feedback slows down. That is not a character flaw. It is just what happens when the volume of marking exceeds the time available.
The problem is that delayed feedback is often weaker feedback, even when the comments themselves are thoughtful.
By the time students get their work back, the lesson has moved on. The thinking that produced the answer has faded. The mistakes are less vivid. The motivation to improve that exact piece of work is lower. The moment where feedback could most easily shape the next attempt has already passed.
One of the strongest arguments for using AI marking tools for teachers is not that they produce magic. It is that they can help teachers return feedback while the work is still fresh.
That matters.
Students are far more likely to act on feedback when they still remember what they wrote, why they wrote it, and where they were unsure. A useful comment on Tuesday is often worth much more than a beautifully phrased one three weeks later.
So when people talk about AI marking as if the only issue is whether the comments are identical to what a human would have written from scratch, I think they miss something important.
Speed matters in teaching.
Not speed for its own sake. Speed because it changes whether feedback still has any force when it reaches the student.
The false choice at the heart of the debate
A lot of the public discussion still jumps from one statement to another without noticing the gap in between.
It goes something like this:
- AI cannot replace a teacher's judgement.
- Therefore, AI should not be used for marking.
But that conclusion does not follow.
I agree that AI should not replace teacher judgement. I do not think a student should receive important feedback that no teacher has looked at. I do not think a school should hand over assessment entirely to an automated system and pretend the human layer is optional.
But none of that means AI has no place in marking.
It just means the teacher needs to remain in the loop.
That is the model that feels both educationally sensible and practically useful. The AI does the heavy lifting on the first draft. The teacher reviews, corrects, sharpens, and approves. The final feedback still carries human oversight, but it no longer demands the same amount of blank-page labour from the teacher.
For me, that is where teacher-reviewed AI feedback starts to make real sense.
The middle ground is where the value is
I suspect part of the problem is that extreme positions are easier to argue about.
"AI will solve teacher workload" is a simple headline.
"AI should never be anywhere near marking" is also a simple headline.
The reality is messier, but much more useful.
The best use of AI in education is often not full automation. It is structured assistance.
That is true in lesson planning, resource creation, and increasingly in feedback. The pattern that keeps working is this: let the AI do the first pass at the repetitive, time-consuming part, then let the teacher do the interpretive, contextual, human part.
That is not a compromise in the weak sense. It is a better design.
And in marking, it solves a real problem.
Teachers do not just need accurate comments. They need a workflow that is sustainable. They need students to get feedback soon enough for it to matter. They need to stay close enough to the work to understand what is happening in the class. And they need a process that does not quietly eat every evening.
A good AI marking workflow can help with that, but only if it is built around the teacher rather than around the fantasy of removing them.
Where I have landed
The position I have landed on is fairly simple.
I do not want AI to replace teacher judgement.
I do want AI to reduce the amount of repetitive drafting work teachers have to do in order to give students useful feedback.
That feels like the right balance.
The teacher still reads the work. The teacher still decides what is fair. The teacher still adjusts the tone, the emphasis, and the final message. The teacher still brings the knowledge of the pupil, the class, and the curriculum.
But the teacher is no longer doing every part of the process alone.
That is not a small difference. It changes the experience of marking quite a lot.
It makes the workflow quicker. It reduces mental load. It helps feedback go back faster. And it leaves more energy for the parts of teaching that only a human being can do well.
That, to me, is the part of the AI marking debate we should probably be talking about more.
Not whether AI can replace teachers.
But whether it can help teachers do the job in a way that is more manageable, more responsive, and ultimately better for students.
Gary Roebuck is an A Level Economics teacher and founder of TeachEdge, an AI-powered marking and feedback platform built for UK secondary schools.