April 4, 2026

Why Every AI-Assisted Grade Needs a Paper Trail: The Case for AI Use Declarations in Education

AI Use Declarations create a defensible, fair, and auditable grading trail for AI-assisted feedback in schools.

ai-governance · assessment · educators

Imagine this: a student disputes a grade. She says the feedback she received was generic, unfair, and clearly machine-generated. The teacher insists she reviewed every comment personally. There is no record either way.

Who wins that appeal?

In most schools today, nobody — because the evidence doesn't exist. That's about to change.


AI is already in your classroom. The question is whether it's documented.

Teachers everywhere — from Manila to Boston — are using AI tools to help draft feedback, check for plagiarism, suggest rubric scores, and generate comments on student work. Most are doing it thoughtfully. Some are doing it carelessly. But almost none of them are documenting it.

That gap — between AI use and AI documentation — is where accountability breaks down.

Two very different education systems have now arrived at the same conclusion. In the Philippines, the Department of Education issued Order No. 003, s. 2026 — Foundational Guidelines on Artificial Intelligence in Basic Education with a formal AI Use Declaration requirement built in (see also Inquirer coverage). In the United States, the Department of Education's own guidance on AI warns explicitly that "AI detection tools should inform educator judgment, not replace it. No student should face academic consequences based solely on automated detection." Harvard and Stanford now require explicit disclosure of AI assistance. Ohio passed legislation in 2025 mandating that every public school district have a formal AI policy by July 2026.

The language is different. The legal frameworks are different. But the underlying demand is the same: someone needs to be accountable for what AI does in a classroom, and that accountability needs a paper trail.


What is an AI Use Declaration?

An AI Use Declaration is a formal record, attached to a specific deliverable, that states:

  • Which AI tool was used
  • What task it assisted with
  • What prompts or instructions were given to the AI
  • That the teacher (or learner) reviewed, edited, and validated the final output
  • That no confidential student information was shared with the AI platform

It is not a confession. It is not an admission that the teacher "cheated." It is a professional record — the same kind of record a doctor keeps when they use diagnostic software, or a lawyer keeps when they use contract review tools.

And critically: it is tied to a specific deliverable, not a course or a semester. One assignment, one grading session, one declaration.
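In software terms, a declaration is just a small structured record attached to one deliverable. A minimal sketch in Python of what such a record might look like (the class and field names here are illustrative assumptions, not a mandated schema from DepEd or any other body):

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative AI Use Declaration record.
# Field names are assumptions for this sketch, not a standard.
@dataclass
class AIUseDeclaration:
    deliverable: str            # the specific assignment or grading session
    tool_name: str              # which AI tool was used
    task: str                   # what task the AI assisted with
    prompts: list[str]          # prompts or instructions given to the AI
    reviewed_by_teacher: bool   # output was reviewed, edited, and validated
    no_confidential_data: bool  # no student PII shared with the platform
    declared_on: date = field(default_factory=date.today)

    def is_complete(self) -> bool:
        # A declaration only stands if both attestations —
        # human review and safe data handling — are affirmed.
        return self.reviewed_by_teacher and self.no_confidential_data
```

The key design point mirrors the prose above: the record is scoped to one deliverable, and the two boolean attestations are what turn it from a usage log into a statement of professional accountability.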


Three reasons this matters beyond compliance

1. It protects teachers

When a parent or administrator questions whether feedback was AI-generated, a teacher with an AI Use Declaration has a complete answer:

"Yes, I used an AI tool to assist with drafting feedback. Here is the tool I used, here is what I asked it to do, and here is my confirmation that I reviewed and revised every comment before it was released to students."

Without that record, the same teacher is left defending themselves on memory alone — vulnerable to accusations they cannot disprove.

The declaration doesn't just satisfy a compliance requirement. It is evidence of professional judgment. It shows that the teacher was in control of the process, not the AI.

2. It makes grading appeals fair — for everyone

Grading appeals are one of the most friction-filled moments in education. A student believes their grade was wrong. The teacher believes it was right. Without documentation of how the grade was determined, both parties are arguing from memory.

AI Use Declarations change this. When a grading session is documented, the appeal process has a foundation:

  • What rubric was applied?
  • What did the AI flag or suggest?
  • What did the teacher override or accept?
  • Was the final grade a human judgment or an automated output?

This isn't about catching teachers doing something wrong. It's about giving both teachers and students a fair process when disagreements arise. A documented grading trail protects teachers from unfair appeals just as much as it protects students from undocumented decisions.

Under DepEd DO 003, AI detectors are explicitly prohibited from being used as sole evidence of academic dishonesty. The U.S. Department of Education's guidance says precisely the same thing: no student should face consequences based solely on an automated flag. Both systems recognize that human judgment must be traceable — and the declaration is how you trace it.

This matters enormously in the context of grading appeals, which are governed differently in each system but share the same problem. Under FERPA in the United States, student work is classified as an educational record. That means any AI tool that processes student submissions — including grading assistants — is subject to strict data handling and disclosure requirements. A grading decision made with AI assistance, without documentation, creates an accountability gap that is both a legal exposure and a fairness problem.

3. It builds institutional trust in AI tools

One of the biggest barriers to AI adoption in schools — in the Philippines and the United States alike — isn't cost or access. It's trust. School administrators, parents, and accreditation bodies are watching how AI is used, and they are nervous.

The AI Use Declaration is the answer to that nervousness. It says: we use AI, we document it, we validate it, and we take accountability for the result.

In the Philippines, schools that build this practice early will be ahead of the curve when DepEd's AI Registry becomes operational. In the United States, institutions in California, Ohio, Tennessee, and a growing list of other states are already facing formal AI governance requirements. Across both systems, documentation is the difference between AI adoption that looks reckless and AI adoption that looks professional.

As one higher education policy guide put it plainly in 2026: "Every AI tool that processes student data must be vetted before deployment — and the institution needs to be able to explain how AI was used in any grading decision that is challenged."


What a good AI Use Declaration looks like in practice

For a teacher using an AI-assisted grading tool on a batch of student essays, the declaration should capture:

  • Assignment: English Argumentative Essay — Grade 10, Section A
  • AI tool used: Humans But Guided (HBG)
  • Task: Generate initial feedback on argument structure, evidence use, and grammar
  • Prompts used: [auto-populated by the tool]
  • Teacher review: All feedback reviewed, 14 of 30 comments modified before release
  • Data handling: No student names or identifying information submitted to AI
  • Declaration: I confirm that the final feedback reflects my professional judgment and that AI was used as a support tool only.

This takes less than a minute to generate when the tool is designed to support it. It can be exported as a PDF, stored in the school's records system, and produced immediately if an appeal arises.
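A tool that supports this workflow only needs to render the fields it already captured into a plain-text record ready for export or archiving. A hedged sketch (the dictionary keys and output format are assumptions for illustration, not HBG's actual internals):

```python
# Sketch: render a captured grading-session declaration as plain text,
# ready to convert to PDF or attach to gradebook records.
# Keys and wording are illustrative, not a mandated format.

def render_declaration(record: dict) -> str:
    lines = [
        f"Assignment: {record['assignment']}",
        f"AI tool used: {record['tool']}",
        f"Task: {record['task']}",
        f"Teacher review: {record['review_summary']}",
        f"Data handling: {record['data_handling']}",
        "Declaration: I confirm that the final feedback reflects my "
        "professional judgment and that AI was used as a support tool only.",
    ]
    return "\n".join(lines)

record = {
    "assignment": "English Argumentative Essay - Grade 10, Section A",
    "tool": "Humans But Guided (HBG)",
    "task": "Generate initial feedback on argument structure, "
            "evidence use, and grammar",
    "review_summary": "All feedback reviewed, 14 of 30 comments "
                      "modified before release",
    "data_handling": "No student names or identifying information "
                     "submitted to AI",
}
print(render_declaration(record))
```

Because every field is populated as a side effect of the grading session itself, generating the record adds essentially no work for the teacher.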


The shift that's happening everywhere

AI in education is not a question of if — it's already here. The question is whether schools will manage it reactively, scrambling to explain their practices after a dispute, or proactively, building documentation habits now that protect everyone involved.

The Philippines has codified this through DepEd DO 003. The United States is building it through a growing patchwork of state laws, federal guidance, and institutional policy — with the same destination in mind. The EU AI Act, fully effective in 2026, classifies AI-assisted grading as high-risk and requires transparency by law. These aren't isolated policy experiments. They are converging on a global norm: AI use in education must be disclosed, documented, and subject to human review.

Teachers who document their AI use aren't admitting weakness. They're demonstrating exactly what every major education framework now asks for: that human agency remains at the center of every educational decision, with AI playing a supporting role.

That's not just compliance. That's good teaching.


How HBG handles this

Humans But Guided was built with the AI Use Declaration as a core feature, not an afterthought. Every grading session automatically generates a declaration that captures the tool used, the prompts run, and the teacher's confirmation of review. It exports as a PDF, ready to attach to gradebook records or produce in an appeal.

Because the best time to document a grading decision is before anyone questions it.


Humans But Guided is an AI-assisted grading platform built for educators. It is designed in alignment with DepEd Order No. 003, s. 2026, the National Privacy Commission's data privacy framework, and FERPA requirements for U.S. institutions.
