Supporting academic integrity with the Integrity Checker

The Integrity Checker helps you support academic integrity by analysing completed submissions for signals that may warrant a follow-up conversation with the student. Use this page to run a check, interpret the Integrity Report, and understand what the tool can and cannot do.

What this guide helps you do

This guide helps you run the Integrity Checker confidently, interpret Integrity Reports accurately, and understand the hard limits of AI-generated assessments before you take any action.

Expected outcome

You can run an Integrity Check, open the Integrity Report for any student, read the verdict and evidence correctly, use the suggested conversation guidance, and know what is and is not appropriate to do with the output.

For

Teachers

Before you begin

  • Submissions must be graded (status: Completed) before running an Integrity Check. Run grading first.
  • The Integrity Checker is a School/District plan feature. You will be prompted to upgrade if your plan does not include it.
  • Each eligible submission uses 1 credit. Check your credit balance before running a check on a large class.

Feature requirements

  • School/District plan or above (the Integrity Checker is not available on Starter or Pro plans).
  • At least one submission in Completed status.
  • Sufficient credit balance (1 credit per submission; waived for institution-managed accounts).

Applies to

Grading • Submission review • Academic integrity

Last verified: 2026-05-11

Terminology

The Integrity Checker is the feature. Running an Integrity Check is the action. The Integrity Report is the per-student output you read. A check analyses all eligible submissions in the class at once — you cannot run it for a single student.

Step by step

  1. Open the assessment and make sure all submissions you want to check have been graded. Only completed submissions are eligible.
  2. Click the More menu (⋯) in the top-right corner of the assessment detail page.
  3. Select Check Integrity from the menu. If the option is greyed out, no eligible submissions exist yet.
  4. Review the confirmation message. It shows the number of eligible submissions and the credit cost (1 credit per submission for non-institution accounts). Click Confirm to proceed.
  5. Wait for the check to complete. The menu item will show Checking integrity… while it runs. You will see a success toast when it finishes.
  6. Once complete, open any student row to see their Integrity Report. The Integrity Report tab will appear next to the Grading tab in the submission panel.
  7. The Integrity Risk column on the assessment table also updates to show a risk badge for each student.

Keep in mind

  • The check runs across the whole class at once — you do not need to open each submission individually.
  • You can re-run the check after new submissions are graded. The menu shows the last checked date.
  • Institution-managed accounts are not charged per submission. All other plans use 1 credit per eligible submission.
  • The Integrity Checker is available on the School/District plan only. If you see an upgrade prompt when selecting Check Integrity, your current plan does not include this feature.
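The credit rules above can be sketched as a small function. This is purely illustrative (the function name is hypothetical, and it is not the product's billing code); the 1-credit-per-eligible-submission rule and the waiver for institution-managed accounts come from this guide.

```python
def integrity_check_cost(eligible_submissions: int, institution_managed: bool) -> int:
    """Illustrative only: 1 credit per eligible (Completed) submission,
    waived entirely for institution-managed accounts."""
    if eligible_submissions < 0:
        raise ValueError("submission count cannot be negative")
    if institution_managed:
        return 0  # per-submission charge is waived
    return eligible_submissions  # 1 credit each

# A class with 28 graded submissions:
print(integrity_check_cost(28, institution_managed=False))  # 28 credits
print(integrity_check_cost(28, institution_managed=True))   # 0 credits
```

Only submissions in Completed status count toward the total, which is why the confirmation dialog may show fewer submissions than your class size.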

How the Integrity Checker works

The Integrity Checker is an AI-powered feature that analyses completed submissions for signals inconsistent with authentic student work. Running an Integrity Check produces an Integrity Report for each eligible student — a per-submission analysis with a risk verdict, supporting evidence, and a suggested teacher conversation. It is not a plagiarism detector and does not produce a finding of misconduct.

Key points

  • Run an Integrity Check from the More menu on the assessment detail page. Results appear as the Integrity Report tab on each student's submission panel.
  • The check runs across the full class at once. It is not triggered per student.
  • It analyses submission text, expected grade-level standard, past submissions on file, and peer similarity data when available.
  • The output is decision support. No action should be taken based on the report alone.

Keep in mind

The Integrity Checker does not determine whether academic misconduct occurred. That determination requires a direct conversation with the student and your own professional judgment.

How signals are weighed

The AI evaluates the submission against several signal types and weighs them in order of reliability. Understanding this helps you interpret findings correctly.

Key points

  • Peer similarity — the strongest signal. Shared unusual errors, identical phrasing, or idiosyncratic structures that appear across two or more submissions.
  • Baseline shift — a significant deviation from the student's previous verified work, or from the expected grade-level standard when no prior work is available.
  • Policy mismatch — the submission appears to violate a policy stated in your custom instructions (for example, a no-AI policy).
  • Hallucinated logic — the submission reaches a correct answer using reasoning that is impossible or fabricated.
  • Format anomaly — structural irregularities inconsistent with authentic work for that assessment type.
  • Genericity — language that is polished but generic. This is the weakest signal and is always capped at Weak strength on its own.

Notes

  • When no past submissions exist for a student, the AI uses a "phantom baseline" built from the expected grade level and subject. This reduces reliability and is reflected in a lower Confidence rating.

Keep in mind

Polished academic writing, strong grammar, or formal register are not treated as suspicious on their own. The AI is calibrated to avoid penalising strong students.

Reading the verdict

The verdict banner at the top of the report shows two values: the overall risk level and the confidence rating. These are distinct and mean different things.

Risk levels

  • Low: the submission is consistent with authentic student work. Any signals are weak or are well explained by counterevidence. Suggested next step: no conversation required unless other concerns exist.
  • Moderate Risk: the submission shows mixed signals that raise enough concern to merit a clarifying conversation. This does not mean misconduct occurred. Suggested next step: have a supportive conversation with the student using the suggested questions in the Teacher Guidance section.
  • High Risk: multiple strong signals converge (for example, high peer similarity combined with a significant baseline shift) and a conversation is required. Suggested next step: follow the required workflow below, and do not proceed to any formal action without speaking to the student first.
  • Insufficient Evidence: there is not enough reliable information to make an assessment (for example, a very short submission or one with no baseline available). Suggested next step: treat the submission as you would any unreviewed piece of work. The absence of a finding is not a clearance.

Notes

  • Confidence (Low / Medium / High) reflects how complete the inputs were, not how likely misconduct is. A High Confidence Low-risk report means the AI had good data and found nothing concerning. A Low Confidence Moderate Risk report means the AI had limited data and you should weight the finding accordingly.
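Treating risk and confidence as separate axes can be sketched as a small decision aid. The pairing logic below is purely illustrative, based on the risk levels and confidence note in this guide — it is not the product's algorithm, and the function name is hypothetical.

```python
def suggested_action(risk: str, confidence: str) -> str:
    """Illustrative pairing of risk level and confidence rating.
    Confidence reflects input completeness, not likelihood of misconduct."""
    if risk == "Insufficient Evidence":
        return "treat as unreviewed work"
    if risk == "Low":
        return "no conversation required"
    # Moderate Risk / High Risk: always speak with the student first,
    # but weight the findings by how complete the inputs were.
    if confidence == "Low":
        return "have a conversation; weight findings lightly (limited data)"
    return "have a conversation; follow the required workflow"

print(suggested_action("Moderate Risk", "Low"))
```

The key point the sketch encodes: a Low Confidence verdict changes how much weight you give the findings, never whether a Moderate or High Risk result skips the conversation step.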

Reading the evidence

The Evidence section lists each individual signal the AI identified. Each finding includes a location, an excerpt, an explanation, and any counterevidence the AI considered.

Step by step

  1. Findings are sorted from strongest to weakest, so start with the first card.
  2. Open the finding card and read the full explanation under Why this was flagged.
  3. Read the Could also be explained by section carefully. It lists the innocent explanations the AI weighed before assigning the signal.
  4. Note the signal type on each card (for example, Peer similarity or Baseline shift) — this tells you what kind of evidence you are looking at.
  5. Use the evidence to inform the questions you ask the student, not as proof of anything.

Keep in mind

  • A finding marked Weak is a low-weight signal. It contributes to the overall picture but should not be treated as significant on its own.
  • Counterevidence notes are as important as the finding itself. If the AI raised a signal and then found a plausible innocent explanation, that context matters.

Using the Teacher Guidance

The Teacher Guidance section gives you a suggested conversation starter, questions to ask, and recommended next steps. It is designed to help you have a productive, non-accusatory conversation with the student.

Step by step

  1. Read the Opening script before you speak to the student. It is written in the recommended tone (Curious & Non-Accusatory, Supportive & Clarifying, or Document & Review) based on the risk level.
  2. Use Copy to copy the script to your clipboard if you want to paste it into a message or note.
  3. Work through the Questions to ask during your conversation. These are grounded in the specific findings, not generic.
  4. After the conversation, review the Recommended next steps and decide which are appropriate given what the student told you.

Keep in mind

  • The tone recommendation is chosen by the AI based on the overall risk level and signal types. You can adjust your approach based on what you know about the student.
  • The opening script is a starting point, not a script you must follow word for word.
  • The Teacher Guidance section is itself AI-generated. Use your professional judgment when deciding how to approach the conversation.

Limits of the Integrity Checker

The Integrity Checker and its Integrity Reports are decision support tools with hard limits. Understanding these limits is essential before you act on any finding.

Key points

  • It cannot determine whether academic misconduct occurred. That is a human judgment.
  • It cannot access the internet, detect AI-generated writing tools by name, or compare submissions against external databases.
  • It cannot account for every legitimate explanation for a signal. Counterevidence notes are not exhaustive.
  • A Low risk result does not guarantee authenticity; a submission that gives the AI too little to assess can also come back with few signals.
  • Confidence reflects input completeness, not AI accuracy. Even a High Confidence result can be wrong.

Warnings

  • Never use Integrity Checker findings as sole or primary evidence in a formal academic misconduct process. The feature is not designed or validated for that purpose.

Required workflow for Moderate and High Risk results

For any submission where the risk level is Moderate Risk or High Risk, the following steps are required before any consequential action is taken.

Decision checklist

  • Read the full report including all evidence cards and their counterevidence.
  • Consider what you know about the student independently of the report.
  • Have a direct, private conversation with the student using the suggested questions as a guide.
  • Evaluate the student's explanation against the findings.
  • Only proceed to further action if, after the conversation, your own professional judgment confirms a concern.
  • Document the conversation outcome before taking any formal step.

Keep in mind

This workflow applies at both Moderate Risk and High Risk. A High Risk result is not a finding of misconduct — it is a prompt to investigate further.