How Schools Can Ensure Safe, Purposeful and Professional Use of AI

PDCA Across the Six AI Domains

AI is rapidly entering every corner of education - from lesson materials to administration. But once you look beyond the quick wins, one fundamental question emerges: how do you ensure that AI use is not only possible, but also structurally safe, effective and professionally managed?

The answer lies in applying PDCA per domain: small, practice-based improvement cycles that help school teams maintain ownership and quality.

[Image: teachers collaborating in a staff room, with a whiteboard showing the Plan-Do-Check-Act cycle]

17 November 2025 · 4-minute read

TL;DR Summary

The six AI domains describe where AI is used in education; PDCA per domain safeguards how that use remains safe, purposeful and professional.

Domains = practice; PDCA = quality assurance.

PDCA Across the AI Domains

The overview below helps teams improve AI use structurally - without adding unnecessary bureaucracy - by making short-cycle adjustments exactly where practice demands it.

1. Lesson Planning & Learning Material Development

PLAN

  • Define quality criteria for AI-generated materials (accuracy, subject alignment, learning objectives).
  • Set transparency rules: which elements should be marked as AI-generated?

DO

  • Use AI for draft lesson plans, explanations, quizzes and differentiation ideas.
  • Always apply human quality control.

CHECK

  • Review materials regularly for clarity, bias and subject accuracy.
  • Collect feedback from colleagues and learners.

ACT

  • Improve guidelines (prompt templates, quality norms).
  • Share strong examples within the team.

2. Assessment, Feedback & Evaluation

PLAN

  • Decide usage rules: when AI may or may not be used in marking.
  • Establish that human judgement remains decisive.

DO

  • Use AI to generate formative feedback or analyse open responses.
  • Document how AI-generated feedback has been used.

CHECK

  • Compare AI feedback with human samples.
  • Monitor distortion, inaccuracies or bias.

ACT

  • Adjust rubrics, workflows or tool choices.
  • Train teachers to interpret and correct AI output.

3. Differentiation & Personalised Learning

PLAN

  • Define usage points: when AI may provide learning-path recommendations.
  • Ensure transparency towards learners about personalisation.

DO

  • Use AI to adapt level, pacing and practice materials.
  • Keep a human-in-the-loop for all learning decisions.

CHECK

  • Monitor whether learners retain equal opportunities.
  • Check whether AI advice is appropriate and subject-accurate.

ACT

  • Refine rules for fair differentiation.
  • Train teams to recognise bias and misleading patterns.

4. Learner Support & Didactic Assistance

PLAN

  • Specify input boundaries: what learners may and may not input into AI tools.
  • Create rules for safe multimodal use (image/audio).

DO

  • Use AI for explanations, examples and guiding questions.
  • Support learners in safe and critical use.

CHECK

  • Review sample tasks: is the information correct and safe?
  • Check whether learners use AI responsibly.

ACT

  • Update school agreements (privacy, data minimisation).
  • Develop micro-lessons on “critical work with AI”.

5. Communication, Organisation & Administration

PLAN

  • Define agreements on data minimisation, reporting and AI-assisted communication.
  • Establish privacy (DPIA) requirements for administrative AI tools.

DO

  • Use AI for summaries, draft letters, planning and reports.
  • Perform content and privacy checks before sending.

CHECK

  • Verify accuracy, tone and potential data-leak risks.
  • Analyse error patterns or tool dependence.

ACT

  • Improve templates, controls and processes.
  • Align AI use with ICT, privacy officers and leadership.

6. School Development, Policy & Safety

PLAN

  • Set frameworks for dashboards, data use and risk scoring.
  • Develop an AI response plan for misuse, errors and leaks.

DO

  • Use AI for policy notes, trend analysis and quality assurance.
  • Monitor systems for anomalies or cyber-risks.

CHECK

  • Review analyses for correctness and bias.
  • Evaluate incidents and signals.

ACT

  • Update policies, dashboards and data governance.
  • Conduct an annual PDCA review.

“PDCA per domain = less complexity, more quality”

Meta Insight

Many schools try to fix AI policy “all at once” - but that rarely works. PDCA is powerful precisely because it is iterative: small steps, close to daily practice.

Yet there is an important nuance: PDCA was designed for stable processes, while AI evolves quickly and cuts across domains. This article should therefore be seen as a foundation - not a complete system.

The Limits of PDCA for AI

Even a well-executed PDCA cycle has three structural limitations when applied to AI:

  1. Too reactive - it improves what already exists, while AI also requires innovation.
  2. Too linear - AI develops faster than annual or biannual cycles.
  3. Too segmented - the AI domains influence one another, so improving them in isolation risks creating silos.

PDCA remains essential - but not sufficient on its own. Use the Double Diamond approach alongside PDCA to maintain space for exploration, experimentation and innovation - exactly what fast-moving AI applications require.

Creating Your PDCA Cycle Using the AI Self-Scan

Want to get started more quickly - without writing out each domain? Use the AI Self-Scan as the foundation for your PDCA.

In short:

  • Repeat the scan every six months.
  • Sort results into Plan, Do, Check, Act.
  • Choose two improvement actions, one evaluation moment and one policy update.

💡 Creating a PDCA cycle with the AI self-scan: from scan to action in 60 minutes

Taking the Next Step with Guided Support

If you want to make this concrete in your own organisation, we provide PDCA workshops, AI policy sessions and practical AI coaching aligned with lesson practice, quality assurance and school-wide development. Feel free to contact us for a brief, no-obligation intake.
