Straight from the Source: What GCSE Business Students REALLY Think About AI Essay Feedback (and why it matters)
A GCSE Business teacher asked students for honest, unfiltered views on TeachEdge's AI feedback. The themes were clear: speed, detail, confidence — plus a few sharp suggestions on how we can improve.
Quick Summary
- Students value fast feedback because it keeps revision momentum going.
- They highlighted the depth and actionability of TeachEdge feedback as a genuine driver of improvement.
- Some want a 'Goldilocks' level of detail — helpful without being overwhelming.
- Their suggestions point to clearer strand-level feedback, more student control, and stronger handwriting recognition.
A Year 11 GCSE Business class gave candid feedback on our AI marking tool. The positives were striking — speed, detail, and confidence. But they were also clear-eyed about what needs improving: better control over how much feedback they get, more strand-level precision, and fewer technical hiccups.
Why I'm sharing this
As teachers, we spend countless hours planning lessons, delivering content and, most time-consuming of all, marking.
We want feedback that genuinely helps students improve. But does it always land? And in a world where time is tight, can technology genuinely help — without turning learning into something impersonal?
I recently received an email from Stephen, an Economics & Business teacher using TeachEdge with his Year 11 GCSE students. As part of a review for his school's Digital Strategy Working Group, he asked selected students for honest, unfiltered views on using our AI marking tool.
He anonymised their responses and gave permission for me to share them. And honestly, reading feedback like this is gold — it's the clearest way to see whether we're actually helping students learn.
So, what did they say?
1) The need for speed: instant feedback wins big
One theme came through repeatedly: timeliness.
In the fast-paced GCSE revision cycle, waiting days (or weeks) for feedback can break momentum. Students really valued the instant loop AI can offer:
- Student B: "Gives feedback almost instantly, which is great for quick improvements."
- Student D: "I have found the instantly generated feedback very useful because it allows me to move on quickly with my revision to see which areas I need to improve."
- Student C: "…gives a large amount of feedback in a timely manner…"
This matters because feedback is most useful when the task is still fresh. Traditional marking often can't deliver that at scale.
2) Detail that drives improvement
Beyond speed, students frequently highlighted the depth and actionability of the feedback.
They weren't describing it as "nice comments" — they described it as targets and steps they could act on:
- Student A: "I think the detail of feedback is very good and I have noticed improvements within my answers… The detail of feedback is very clear about where your answer needs changing and it also acts as a set of targets on what to include next time."
- Student E: "I think Ai is very helpful as it gives me detailed feedback in what areas i need to focus on and improve… i also like the fact it can write an improved answer which can help me if i decide to rewrite or on future similar answers…"
- Student F: "I think its feedback has greatly helped my understanding of how to answer big questions."
A few students also mentioned that seeing an improved answer helps them understand the "gap" between what they wrote and what the mark scheme wants.
3) Confidence and structure
What I found most encouraging was that students described the feedback as something that changed how they approached questions.
Not just "I got a mark", but "I understand how to do this better next time":
- Student D: "After tackling many questions by now, Teach Edge has improved my confidence in answering questions."
- Student B: "It has made me more aware of how to structure better answers."
- Student E: "…increasing my confidence as i feel as though i have a better understanding of the mark scheme."
That's the whole point: improving students' ability to self-correct and build better answers over time.
Honest feedback: where can we do better?
The best kind of feedback includes criticism. And the students didn't hold back — which I genuinely appreciate.
Finding the "Goldilocks zone" for detail
A couple of students felt the feedback could sometimes be too much.
If feedback over-explains, it can bury the key message. One student noted that sometimes the AI rewrites the entire answer, which isn't always the most practical next step.
This is a real balancing act we're constantly working on: enough detail to drive improvement, but not so much that students disengage.
More granular feedback and control
Some students wanted feedback broken down more explicitly into the "strands" within their answer (closer to what a teacher might do in class).
Others wanted more agency — the ability to set their own questions or even run something like a tutoring-style session. That points towards future improvements around customisation and student control.
Technical hiccups
A couple of practical issues came up:
- handwriting/photo upload errors ("misspells my writing")
- occasional missing marks on specific questions
These are the unglamorous parts of edtech, but they matter. Reliability is non-negotiable, and we keep these areas under constant review.
The teacher's ally (and students notice)
One student made a point that really stuck with me:
- Student F: "I bet it's really helpful for you as well because you can just make little tweaks to its marking."
That's exactly the goal.
TeachEdge isn't built to replace teacher judgement. It's built to give a strong first draft of marking and feedback, so teachers can spend their limited time on:
- tweaks that matter
- targeted interventions
- patterns across a class
- actual teaching, rather than endless drafting
Thank you
A huge thank you to Stephen — and especially to his Year 11 students — for taking the time during a busy revision period to share such thoughtful, honest feedback.
This kind of dialogue is what shapes TeachEdge into a tool that genuinely serves both students and teachers in the UK system.
Gary Roebuck is Head of Economics at Holy Cross School, New Malden and the creator of TeachEdge.ai.