Protecting Privacy in School AI
ChatGPT, Gemini, and similar AI tools are entering classrooms at high speed. They can be powerful helpers, but they also raise urgent questions: what happens to the data we type in, and how do we keep pupils’ privacy safe?

TL;DR Summary
AI tools like ChatGPT can support learning, but they also store and process data. Schools risk breaching the GDPR if personal details are entered. Key steps: never input pupil data, use education licences, conduct risk assessments, and explain clearly to parents how AI is used.
Why Is This Relevant Now?
Generative AI is no longer science fiction; it’s already in schools. Teachers use ChatGPT to create exercises, and pupils try Gemini to practise writing. But while the benefits are tempting, there’s a hidden cost: pupils’ data privacy.
Text typed into these systems is typically collected and stored on the provider’s servers, and it is sometimes used to train the AI further. In education, that can mean sensitive information about pupils ending up on servers outside Europe, with little control over how it’s used.
For schools, this isn’t just a technical concern. It’s a legal duty under the GDPR, and a matter of trust with parents and pupils.
The Main Privacy Risks
- Data goes further than you think: When you type something into ChatGPT or Gemini, that text doesn’t just disappear. It’s stored on company servers, often outside Europe, and may be reused to improve the system. If you type in pupil names, grades, or personal circumstances, that information could be kept for years.
- Schools carry the legal responsibility: Under the GDPR, schools act as the “data controller.” This means the school is responsible for how pupil data are used, even when using an external tool. For high-risk tools such as AI, the law often requires a data protection impact assessment (DPIA) before use.
- AI systems are not transparent: Many AI tools are black boxes. It’s unclear what data they collect, how long they store it, or who they share it with. Even Microsoft’s Copilot was criticised in the Netherlands for giving “incomplete and incomprehensible” answers about its data use.
What the Regulators Say
- The Dutch Data Protection Authority (AP) has warned: chatbots can easily capture very sensitive information. Schools must prevent staff and pupils from entering personal data.
- Enforcement is strict: companies like Clearview AI have already faced multimillion-euro fines for unlawful data collection.
- The new EU AI Act sets extra rules: schools must do risk assessments and give staff AI literacy training if they use AI for grading, scoring, or other high-impact tasks.
What Schools Should Do
- Never type pupil names or personal details into ChatGPT, Gemini, or other public AI tools (see the redaction sketch after this list).
- Use education licences where available – these often promise not to store or reuse data.
- Check the tool before use: Where are the servers? How long is data stored? Is it reused?
- Do a privacy assessment (DPIA) if the tool is used for grading, scoring, or profiling.
- Explain clearly to parents and pupils which AI tools are used and why.
- Train teachers so they know what is safe to enter and what is not.
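As a concrete illustration of the first rule, here is a minimal Python sketch of how a school could strip obvious pupil identifiers from text before it reaches any external AI tool. The pupil names, the regular expressions, and the redact function are all hypothetical examples for illustration; real PII detection needs a far more thorough, vetted approach.

```python
import re

# Hypothetical roster of pupil names; in practice this would come from
# the school's student information system, not a hard-coded list.
PUPIL_NAMES = ["Anna de Vries", "Tom Bakker"]

# Illustrative patterns for common identifiers (e-mail addresses and
# Dutch-style phone numbers). Real PII detection is much harder than this.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE_RE = re.compile(r"(?:\+31|0)[\d \-]{8,}\d")

def redact(text: str) -> str:
    """Replace known pupil names and obvious identifiers with placeholders."""
    for name in PUPIL_NAMES:
        text = text.replace(name, "[PUPIL]")
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text

if __name__ == "__main__":
    prompt = "Give feedback for Anna de Vries (anna@school.example) on her essay."
    print(redact(prompt))
    # -> Give feedback for [PUPIL] ([EMAIL]) on her essay.
```

Even with such a filter in place, the safest rule remains the first one on the list: personal details simply should not enter the prompt at all.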
Why This Matters
Privacy is not just about avoiding fines – it’s about trust. Parents want to know that schools handle their children’s information responsibly. Pupils have the right to learn in a safe environment without fear that their data will follow them for years.
Handled well, AI can support learning. Handled badly, it risks both pupils’ rights and the school’s reputation.
Symbio6's Privacy-First Policy
Practising what we preach, we maintain a stringent privacy policy built on a privacy-first principle, one that extends beyond what the law requires.