
Stop Trying to Catch Them: Why AI Detection is a Dead End for UK Secondary Schools

AI detection tools cannot reliably prove whether a GCSE or A Level student used generative AI. Schools will get further by modelling good AI use and protecting supervised writing time.

10 February 2026 • 7 min read

AI in Education • Assessment • Academic Integrity • Secondary Schools • GCSE • A Level • Teacher Workload • Classroom Practice

Quick Summary

  • AI detection scores are not reliable enough to use as evidence in schools.
  • False positives can punish capable writers and damage trust with students.
  • Teach students what good AI use looks like, as a revision and coaching tool.
  • Use regular supervised writing to create a clear baseline of independent ability.

One of the most common questions I get asked about TeachEdge is: "Can it detect if a student used AI to write their essay?"

The short answer is no.

The longer answer is that no tool can do this reliably enough for schools to use it as evidence, and I think it's the wrong question to be asking in the first place.

I should be clear: I'm talking specifically about UK secondary schools. Universities face a different set of challenges and have different levers to pull (longer-form dissertations, academic integrity frameworks, viva voce examinations). What follows is about the reality of teaching GCSE and A Level students, right now, in 2026.

For years we had plagiarism detectors. Turnitin and the like worked reasonably well. Copy-paste from a website? Caught. Lift a paragraph from a classmate's essay? Flagged. There was a reasonably clear boundary between "your work" and "someone else's work", and software could police it.

Generative AI blurs that boundary beyond usefulness, and no detection tool is going to put it back.

The detection arms race isn't winnable

I've tested the major AI detection tools (Turnitin's AI detector, GPTZero, Originality.ai) on work from my Economics classes. The results are, to put it kindly, unreliable.

  • A well-prompted AI response that's been lightly edited can sail through undetected.
  • Genuine student work can be flagged as "likely AI-generated" simply because the student wrote clearly and used sophisticated vocabulary.

That's the core problem. These tools are trying to distinguish between "sounds like AI" and "sounds like a capable student", and that distinction is getting thinner every month.

Students are quick learners too. Many already know how to prompt AI to write in their style, feeding it examples of previous work and asking it to match their tone, vocabulary level, and sentence patterns. Once a student is doing that, what exactly is a detection tool meant to be detecting?

And even if detection got better, you hit the next issue: the outputs aren't actionable evidence.

A 67% probability score isn't proof. You can't sit down with a Year 12 student and open with, "An algorithm thinks you might have cheated." That's not a healthy basis for a classroom relationship, and it's a risky way to make serious accusations.
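To see why a probability score is such weak evidence, it helps to run the numbers on what a flag is actually worth. Here's a minimal sketch in Python. All three rates are illustrative assumptions I've picked for the example, not measurements from any real detector:

```python
# Hypothetical worked example: what does a "flagged" essay actually tell you?
# All three rates below are illustrative assumptions, not measured figures.

base_rate = 0.20       # suppose 20% of submitted essays were AI-assisted
sensitivity = 0.80     # detector flags 80% of genuinely AI-assisted essays
false_positive = 0.10  # detector wrongly flags 10% of honest essays

cohort = 1000
ai_essays = cohort * base_rate        # 200 AI-assisted essays
honest_essays = cohort - ai_essays    # 800 honest essays

true_flags = ai_essays * sensitivity          # 160 correctly flagged
false_flags = honest_essays * false_positive  # 80 honest students flagged

# Bayes' rule: probability that a flagged essay was actually AI-assisted
p_ai_given_flag = true_flags / (true_flags + false_flags)
print(f"A flag means roughly a {p_ai_given_flag:.0%} chance of AI use")
print(f"Honest students flagged per {cohort} essays: {false_flags:.0f}")
```

Even on those fairly generous assumptions, one flagged essay in three belongs to an honest student: 80 false accusations per thousand essays. Assume fewer students are using AI, or a less accurate detector, and it gets worse.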

The plagiarism detector model simply doesn't translate. We need a different approach.

To be clear: I'm not relaxed about students outsourcing their thinking. I'm relaxed about students using AI to learn, and strict about what counts as assessed writing.

What actually works: two practical shifts

After a year of experimenting with AI in my own classroom, I've landed on two strategies that address the real concern (ensuring students are genuinely learning) without the false promise of detection.

1) Model what good AI use looks like

The reason students use AI badly is that nobody has shown them how to use it well.

Here's what this looks like in practice. I say to my students:

"You've got a timed essay on market failure tomorrow. Go home tonight and use AI to prepare for it."

Not to write the essay, but to get ready for it.

Use it to:

  • explain the different types of market failure
  • quiz you on real-world examples
  • test your definitions and diagrams
  • critique a practice introduction in an examiner's voice
  • help you spot gaps in your evaluation

It's the way an athlete uses a training partner the night before a race.

Or take the student who's been quietly lost on elasticity for three weeks but won't put their hand up. AI has infinite patience and energy. It will explain price elasticity of demand five different ways, with five different analogies, at 10pm on a Sunday night, without sighing or moving on.

That's not cheating. That's a resource no previous generation of students had.

But here's the principle that matters: good AI use still requires the student to do the thinking. If AI hands them a finished product, it has bypassed the entire point of the task. The learning happens in the struggle: the false starts, the corrections, the "I don't get it… oh, now I do."

Good AI use preserves that journey. It guides, prompts, challenges, and checks understanding, rather than delivering a polished answer to copy.

This is something we've built directly into TeachEdge's one-to-one Socratic tutoring. Rather than giving students answers, it asks them questions. It pushes them to develop their own reasoning, identify gaps in their arguments, and work through problems step by step. The AI does the scaffolding. The student does the thinking.

Instead of AI being a secret shortcut, it becomes a visible part of the learning process.

2) Protect what matters: supervised writing in class

Regular timed essays, completed in class under exam conditions, achieve something no detection tool ever could.

They show you what a student can actually do, independently, under pressure.

No ambiguity. No probability scores. No awkward accusations. Just a clear, honest picture of where each student is.

I've increased the proportion of assessed writing done in supervised conditions. Not every piece. That would be excessive and counterproductive. But enough that I have a reliable baseline for each student.

If their homework essays suddenly read like polished academic papers and their in-class writing is still full of basic gaps, I don't need an algorithm to tell me something doesn't add up. I just need a conversation and a plan:

  • "Show me how you produced this."
  • "Talk me through the argument."
  • "Let's write another version together."

This has a useful side effect too. It makes homework better. When students know their in-class writing is the benchmark, they're more motivated to use homework as genuine practice rather than an exercise in outsourcing. AI becomes a training partner rather than a substitute.

For secondary schools, this is practical and achievable. We already run timed assessments. We already know our students' voices well enough to notice when something doesn't fit. We don't need a detector to do what professional judgement already does. We need routines that protect valid assessment.

It's also why we built Exam Mode into TeachEdge: a clean, supervised writing environment designed to give teachers confidence that what they're seeing is genuinely the student's own work.

The real question isn't "did they use AI?"

It's "are they learning?"

Detection tools answer the wrong question. They try to police a boundary that no longer exists, using technology that can't keep pace with the tools it's trying to detect. The result is false confidence, false accusations, and a lot of wasted energy.

Modelling good AI use and protecting supervised assessment time answers the right question. It accepts that AI is part of the landscape, gives students a framework for using it well, and keeps the conditions we need to genuinely assess understanding.

The plagiarism era had clear rules: don't copy someone else's work. The AI era needs new rules, and they're not about catching students out.

They're about teaching students to think, and creating the conditions where we can see that thinking happen.


I'm an A Level Economics teacher and founder of TeachEdge, where we build AI tools that work with teachers rather than replacing them. If you're navigating AI in your department, I'd love to hear what's working for you.
