Human Oversight as a Foundation for AI in Education

AI offers great opportunities for education: more efficient administration, personalised learning, and support with marking or detecting learning delays. Yet these opportunities come with a critical responsibility: humans must always remain in control.

The principle of Human-In-The-Loop (HITL) ensures that AI does not autonomously make decisions about pupils or educational processes. AI can analyse and advise, but the teacher or school leadership decides.


Updated 2 October 2025 · 3-minute read

TL;DR Summary

  • Human-In-The-Loop: AI output is always subject to human oversight.
  • This safeguards reliability, safety, and pedagogical quality.
  • Four key roles: school leadership, ICT coordinator, teacher, and pupil – with AI as a supporting factor.
  • Without oversight, risks include: flawed advice, loss of autonomy, inequality, or legal issues.
  • Core message: AI is a tool, not a substitute for human judgement.

What Is Human-In-The-Loop?

Human-In-The-Loop means that AI output can always be reviewed or overruled by humans. AI systems may provide advice or suggestions, but decisions with direct impact on pupils or education remain human.

Example:

  • AI might flag that “pupil X struggles with fractions”.
  • The teacher then decides why this is the case (motivation, home situation, dyslexia) and how to respond.

Without human oversight, context, accountability and pedagogical fairness are missing.
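The pattern above can be sketched in code. The following is a minimal, hypothetical illustration (the function names, data, and structure are invented for this sketch, not taken from any real system): the AI produces a flag and a suggestion, but only the teacher's review turns it into a decision.

```python
from dataclasses import dataclass

@dataclass
class Flag:
    pupil: str
    topic: str
    suggestion: str

def ai_flag() -> Flag:
    # Stand-in for a model's output: a signal plus a suggested action.
    return Flag(pupil="pupil X", topic="fractions", suggestion="assign extra exercises")

def teacher_review(flag: Flag, approve: bool, context_note: str) -> dict:
    # The human decision point: the AI suggestion is only ever input.
    if approve:
        return {"action": flag.suggestion, "decided_by": "teacher", "context": context_note}
    return {"action": "adjust learning pathway", "decided_by": "teacher", "context": context_note}

decision = teacher_review(ai_flag(), approve=False, context_note="low motivation, not ability")
print(decision["decided_by"])  # the teacher, never the AI, owns the decision
```

The design choice is that no code path leads from the AI's suggestion to an action without passing through `teacher_review`.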

Roles and Responsibilities

Table 1. Roles and responsibilities

School leadership
  • Responsibility: strategic and policy oversight.
  • Practical actions: define vision and policy; ensure AI is used only where it adds clear pedagogical value; provide resources and training.
  • Risk without oversight: lack of clarity, legal risks, loss of trust.

ICT coordinator
  • Responsibility: technical assurance and monitoring.
  • Practical actions: select safe, explainable AI tools; implement audit logs and controls (where possible); ensure GDPR compliance.
  • Risk without oversight: insecure systems, black-box decisions, data breaches.

Teacher
  • Responsibility: pedagogical assessment and daily supervision.
  • Practical actions: review and adapt AI feedback; apply contextual knowledge to advice; communicate transparently with pupils and parents.
  • Risk without oversight: misleading advice, loss of professional autonomy.

Pupil
  • Responsibility: critical user and feedback contributor.
  • Practical actions: learn to review AI outputs; report errors or issues; develop digital literacy.
  • Risk without oversight: passive use, lack of ownership and digital skills.

AI
  • Role: supporting tool.
  • Practical actions: analyse data, provide suggestions, speed up processes; must be explainable and controllable.
  • Risk without oversight: unchecked or flawed decisions without human correction.
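The audit logs mentioned under the ICT coordinator's role can be sketched as follows. This is a hypothetical illustration of the idea, pairing each AI suggestion with the human decision that followed it, not a reference to any specific tool:

```python
import json
from datetime import datetime, timezone

audit_log = []  # in practice: persistent, append-only storage

def log_entry(ai_suggestion: str, human_decision: str, decided_by: str) -> None:
    # Each record pairs the AI output with the human decision that followed it,
    # so decisions remain traceable for later review (e.g. a GDPR access request).
    audit_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "ai_suggestion": ai_suggestion,
        "human_decision": human_decision,
        "decided_by": decided_by,
    })

log_entry("assign extra fraction exercises", "adjusted learning pathway instead", "teacher")
print(json.dumps(audit_log[-1], indent=2))
```

Keeping the AI suggestion and the human decision in the same record is what makes the oversight auditable: anyone reading the log can see where the AI advised and where a human decided otherwise.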

Example from Educational Practice

A pupil monitoring system flags that a child is falling behind in reading comprehension. The AI advises assigning extra exercises.

  • Without human oversight: the pupil automatically receives extra work, while the true cause – motivation or home environment – remains invisible.
  • With Human-In-The-Loop: the teacher reviews the advice, investigates the context, and adjusts the learning pathway. AI provides input; the teacher safeguards quality.

Challenges and Solutions

  • Over-reliance on AI: train teachers to critically assess AI advice.
  • Loss of skills: maintain human tasks such as analysis and assessment.
  • Lack of time: integrate AI and oversight into existing workflows.
  • Legal and ethical risks: align policies with GDPR and the EU AI Act.

Towards a Culture of Shared Oversight

Human-In-The-Loop is not a technical add-on but an educational and governance imperative. It requires:

  1. Implementation: establish policies and oversight structures.
  2. Monitoring: gather feedback from teachers and pupils.
  3. Adjustment: refine policies and systems based on practice.

This fosters a culture in which AI is a powerful ally, but humans always set the direction. The key: trust what AI can do, but above all, rely on the professional judgement of teachers and the strategic frameworks set by school leadership. This also raises broader questions about moral responsibility in AI use — who bears the moral burden?
