From Self-Scan to AI Policy

Our AI Self-Scan shows where your school stands - policy determines where you want to go. From isolated initiatives to shared agreements: this is how AI use grows into responsible policy.

[Image: Green plant growing out of a checklist - symbolising the growth from awareness to policy.]

6 November 2025 · 7-minute read

TL;DR Summary

Turn your latest self-assessment into action: sort results by Plan-Do-Check-Act, convert checks into one-line policy statements with owners and review cycles, and add short commitments per stakeholder. With a light PDCA review every six months, AI becomes embedded in everyday practice - responsible, transparent, and firmly in support of learning.

From Awareness to AI Policy

Where our AI self-scan offers the diagnosis - insight into knowledge, attitudes, and practice - the AI policy becomes the roadmap: it helps set direction, secure agreements, and build shared learning around new technologies. As with all parts of our work, this policy is designed to use AI for learning - never instead of learning. This article explains, step by step, how schools can move from awareness to coherent, practical policy aligned with their vision, people, and practice.

1. From Self-Scan to Insight

A self-scan using our checklist helps identify how teachers, students, and school management currently use AI and how aware they are of its opportunities and risks. The results offer two types of information:

  • Checked items: show what is already secured - themes that can be formalised within existing policies.
  • Unchecked items: reveal blind spots - areas needing attention in training, assessment, or policy.

💡 Work together to decide which issues are urgent (e.g. privacy, assessment, transparency) and which can develop over time (curriculum, professional development). This creates a realistic starting point for policy development.

2. Create Space and Stability

Before drafting policy, take time to gain an overview. Use this phase to:

  • Map how AI is currently used by teachers and students
  • Set temporary guidelines for generative AI (what is and isn't allowed in assignments and assessments)
  • Define preliminary boundaries for assessment and privacy

This creates space to think strategically instead of reacting ad hoc. The goal is not a “final” policy but a safe transition phase where experimentation and learning go hand in hand.

3. Build on Existing Frameworks

AI policy does not stand alone. It builds on existing values, documents, and processes:

  • The school's educational vision
  • The public values underpinning your education (fairness, autonomy, inclusion)
  • Existing digitalisation and quality policies

Do not start with a blank page - link new AI arrangements to what already exists. This avoids duplication and strengthens coherence across policy areas.

4. Group the Checks by Policy Level

The self-scan results can be sorted and organised in many ways. The policy level tag offers a simple structure for your document.

Policy levels and their purposes:

  • Strategic - Key question: Why do we use AI, and with which values? Focus: vision and public values.
  • Tactical - Key question: How do we safeguard quality, privacy, and support? Focus: frameworks, policy, and professional learning.
  • Operational - Key question: What do we do daily, and who is responsible? Focus: implementation, reflection, and PDCA.

5. From Check to Policy Statement

Each check can be turned into a short, verifiable policy statement. Use this simple formula:

We ensure that [goal] by [how we secure it].

Example checks and the policy statements they become:

  • Assess new AI tools against core values → We ensure that core values are safeguarded by assessing AI tools in advance using an evaluation framework.
  • Organise training → We ensure that staff receive yearly AI and ethics training by including it in our professional learning plan.
  • Human judgement remains decisive → We ensure that AI is used only in an advisory role by stipulating that final decisions always rest with the teacher.
  • Didactic balance → We ensure that collaboration and critical thinking are preserved by consciously selecting and evaluating AI tools.

💡 One sentence per goal is enough - details can be elaborated later in protocols or manuals.

6. Translate Policy to Stakeholder Groups

Policy only becomes meaningful when everyone understands what it means for them. Include a short paragraph per stakeholder group with key responsibilities and agreements.

Stakeholder groups and typical commitments:

  • Students - fair use of AI, correct source citation, and protection of personal data.
  • Teachers / Trainers - professional freedom to experiment, training in AI literacy, and safe data handling.
  • School Leadership / Board - clear procurement and usage rules for AI tools, defined responsibilities, and regular evaluation.

💡 How to include this in policy:

  • In the school plan: add a short paragraph per group in the AI section.
  • In protocols or appendices: describe examples and guidance for each group.
  • In communication: use this same structure for staff briefings or the school guide.

This keeps policy concrete, visible, and connected to everyday practice.

7. Professionalise and Embed

Use the self-scan results as a starting point for targeted professional learning.

  • Embed AI literacy in existing training, learning pathways, and quality processes - not as a separate theme, but as a natural part of professional growth.
  • Establish an AI working group or knowledge team to monitor policy, share experiences, and translate insights into practice.
  • Encourage teachers and students to explore AI ethics and applications within their own subject areas.

In this way, AI policy is not only written down but lived within the school's professional culture.

8. Communicate and Build Support

Good policy is shared policy.

  • Communicate clearly: include key agreements in the school guide, staff portal, and onboarding materials.
  • Foster dialogue: discuss the opportunities and risks of AI with teachers, students, and parents.
  • Involve the participation council: this increases ownership and legitimacy.

Avoid a “for or against AI” mindset - an open values-based dialogue connects rather than divides.

9. Keep Policy Alive: Update Using the Self-Scan

Repeat the self-scan every six months - this is your starting point for policy updates. Sort the results by the tags Plan, Do, Check, Act and identify where most progress is needed.

Straightforward approach:
Discuss the scan outcomes in a one-hour PDCA meeting. Choose two improvement actions, set one evaluation moment, and update one policy item. Quick, simple, and effective.

More in-depth approach:
Use the scan to set three priorities with your team or AI group. Design small pilots or adjustments in teaching, training, and quality assurance, and integrate the results into the AI section of your school plan. This keeps AI policy manageable, relevant, and directly connected to practice.

Summary

The AI Self-Scan is not an endpoint but a starting point for organisational learning. It helps schools to:

  1. Understand where they stand
  2. Create space for reflection and dialogue
  3. Build policy on existing values
  4. Engage people and strengthen professional capacity
  5. Monitor progress through regular reflection

Thus, AI policy evolves from awareness to accountability - step by step, reflective and human-centred.

Ready to Take the Next Step?

Discover how our AI coaching and team workshops can help you move from self-scan to a shared, values-driven AI policy.
