How Schools Use AI Safely, Transparently and Professionally

AI Policy per Domain

AI is increasingly present in lesson design, feedback, differentiation, learner support, administration and school-wide decision-making. Each new use brings the same question: what rules apply here, and how do we protect learners, staff and the organisation?

The six AI domains show where AI appears in everyday school practice. AI policy defines the goals, risks, norms and rules that keep that use responsible, fair, transparent and professional.


17 November 2025 · 3-minute read

TL;DR Summary

The six AI domains show where AI enters school practice; AI policy sets the goals, risks, norms and rules that keep that use safe, transparent and human-centred.

Domains = practice; policy = goals, risks, norms and rules.

A Practical Framework for Schools

Many schools want “one AI policy for everything”, yet AI behaves very differently in lesson design, assessment, differentiation, learner support, administration and leadership.

That is why policy works best per domain: concrete, actionable, and aligned with the responsibilities of each team. This creates not an abstract policy document, but a practical decision-making framework that can be used directly in lessons, processes and governance.

The Six Domains: AI Policy at a Glance

1. Lesson Planning & Materials Development

AI may support lesson materials, but human control and transparency are mandatory.

  • Policy goals: quality, didactic appropriateness, clear authorship.
  • Risks: hallucinations, incorrect explanations, unclear what counts as original work.
  • Norms: quality checks, attribution, transparency about AI-generated content.
  • Example rule: “AI may generate draft lesson materials; the teacher verifies accuracy and cites AI contributions in the final version.”

2. Assessment, Feedback & Evaluation

AI may support feedback, but may never determine marks or final judgements.

  • Policy goals: fairness, reliability, human judgement.
  • Risks: bias, incorrect rubric matching, automated marking without context.
  • Norms: human-in-the-loop, assessment validity, auditable feedback.
  • Example rule: “AI may generate feedback, but grading and assessment remain fully the teacher's responsibility.”

3. Differentiation & Personalised Learning

AI may help differentiate, as long as recommendations are explainable, fair and non-binding.

  • Policy goals: equal opportunities, safe personalisation, transparency.
  • Risks: algorithmic exclusion, black box advice, uneven learning pathways.
  • Norms: explainability, transparency, human verification of AI advice.
  • Example rule: “AI-based route suggestions are not binding; teachers always determine whether the proposed pathway is appropriate and fair.”

4. Learner Support & Didactic Assistance

Learners may use AI within safe, guided boundaries that strengthen critical thinking.

  • Policy goals: safety, digital literacy, pedagogical oversight.
  • Risks: inappropriate output, misinformation, privacy concerns.
  • Norms: safe input, supervision, escalation procedures for unsafe output.
  • Example rule: “Learners use AI within agreed boundaries; entering personal data or confidential cases is not permitted.”

5. Communication, Organisation & Administration

AI may support administrative drafts, but privacy and accuracy must always be checked by staff.

  • Policy goals: data minimisation, reliability, clarity.
  • Risks: data leaks, factual errors, misinterpreted summaries.
  • Norms: privacy risk analysis, anonymisation, human final checks.
  • Example rule: “AI may generate draft letters; a staff member checks content, tone and privacy sensitivities before sending.”

6. School Development, Policy & Safety

AI may support policy analysis, but may never drive decisions; human interpretation remains required.

  • Policy goals: transparent decision-making, system safety, data literacy.
  • Risks: misread trends, misleading risk scores, technological dependency.
  • Norms: human interpretation, incident response plan, annual policy review.
  • Example rule: “AI-driven analyses may inform policy discussions, but all decisions are made and justified by people.”
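For schools that maintain their policy framework digitally, the six domains above can be captured as structured data so that teams can consult goals, risks, norms and rules directly. The sketch below is a hypothetical schema, not part of the framework itself; the `DomainPolicy` class, the `POLICIES` dictionary and the `lookup` helper are illustrative names only, shown here with the assessment domain as an example.

```python
from dataclasses import dataclass


@dataclass
class DomainPolicy:
    """One AI-policy entry per school domain (hypothetical schema)."""
    domain: str
    goals: list[str]
    risks: list[str]
    norms: list[str]
    example_rule: str


# Illustrative entry for domain 2; the other five domains would follow
# the same goals/risks/norms/rules structure.
POLICIES = {
    "assessment": DomainPolicy(
        domain="Assessment, Feedback & Evaluation",
        goals=["fairness", "reliability", "human judgement"],
        risks=["bias", "incorrect rubric matching",
               "automated marking without context"],
        norms=["human-in-the-loop", "assessment validity",
               "auditable feedback"],
        example_rule=("AI may generate feedback, but grading and assessment "
                      "remain fully the teacher's responsibility."),
    ),
}


def lookup(domain_key: str) -> DomainPolicy:
    """Return the policy entry for a domain so teams can cite it in practice."""
    return POLICIES[domain_key]
```

Keeping the policy in one machine-readable place makes the annual review straightforward: each domain entry can be checked, amended and versioned like any other school document.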

How Policy and PDCA Work Together

AI policy defines what must happen: goals, risks, norms and rules per domain. PDCA defines how it happens: workflows, controls and yearly updates.

Further details appear in the companion article: PDCA per Domain.

Strengthening AI Policy Together

Would you like to develop or review AI policy per domain? We support schools with policy frameworks, risk analysis and safe AI integration aligned with classroom practice and school leadership. We are happy to help you work out what fits your school.
