16 July 2025
Navigating the EU AI Act - what does it mean for educators? CAS AI Online event
If you were unable to join us for the Understanding the EU AI Act: Implications for Schools online community meeting, don't worry! You can catch up on the content below, including a recording of the session.
What the EU AI Act Means for Schools: A Recap for Computing Teachers
Key Takeaways
- The EU AI Act sets global benchmarks for AI governance—UK schools aren’t legally bound by it but will still feel its influence.
- AI use in schools will need clear policies around transparency, risk levels, and staff training.
- High-risk AI use cases include student assessment, admissions filtering, and biometric monitoring.
- Tools like ChatGPT and Turnitin need careful consideration due to potential misuse in high-risk contexts.
- Schools should audit current AI use and develop a working party or policy group to oversee safe deployment.
With AI tools becoming increasingly integrated into teaching and school administration, understanding the legal and ethical landscape is more important than ever. That was the focus of the recent CAS online community meeting, Understanding the EU AI Act: Implications for Schools, led by Matthew Wemyss, Assistant School Director at the Cambridge School of Bucharest.
Matthew began by explaining why the EU AI Act, even though not directly applicable to UK schools, is still highly relevant. Since many AI providers will build their products to meet EU standards, these changes will likely affect platforms used in British classrooms too.
Understanding the Risk Levels
The Act categorises AI systems into four levels of risk:
- Unacceptable Risk – These are uses that manipulate users, exploit vulnerabilities, or involve social scoring. Examples include emotional surveillance and AI headbands measuring student attention—tools that no school should be using, regardless of location.
- High Risk – This includes AI systems that assess student learning outcomes, influence admissions, or monitor exams. For example, AI plagiarism detection tools like Turnitin might fall into this category depending on how they are used. If staff rely on AI alone for grading decisions without human oversight, the risk increases.
- Limited Risk – This covers chatbots, virtual assistants, image generators, and writing aids. Schools using chatbots (e.g., for revision support or administrative tasks) need to ensure transparency so students know they are interacting with AI.
- Minimal Risk – These are administrative tools like spell checkers or simple scheduling assistants, requiring only voluntary codes of conduct.
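To make the four tiers above concrete for a computing department, here is a minimal sketch in Python of how an AI working group might record tools in a register alongside an assumed risk level. The tool names and classifications are illustrative assumptions only, not determinations from the session or the Act itself; the right category always depends on how a tool is actually used.

```python
from enum import Enum

class RiskLevel(Enum):
    UNACCEPTABLE = "must not be used in school"
    HIGH = "allowed only with human oversight and documented checks"
    LIMITED = "allowed with transparency (users must know it is AI)"
    MINIMAL = "allowed; a voluntary code of conduct is enough"

# Illustrative register entries; the classifications are assumptions for this
# example and depend entirely on how each tool is used in practice.
ai_tool_register = [
    {"tool": "AI plagiarism detection", "use": "flagging suspected AI-written coursework",
     "risk": RiskLevel.HIGH},
    {"tool": "Revision chatbot", "use": "student self-quizzing",
     "risk": RiskLevel.LIMITED},
    {"tool": "Spell checker", "use": "staff admin documents",
     "risk": RiskLevel.MINIMAL},
]

for entry in ai_tool_register:
    print(f"{entry['tool']} ({entry['use']}): {entry['risk'].name} - {entry['risk'].value}")
```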
Practical Next Steps for Schools
Matthew shared actionable advice for schools, drawing parallels between the EU AI Act and everyday practice in British schools:
- Appoint an AI Lead or Working Group – While the Act doesn’t specify roles, schools are advised to designate someone to oversee AI use, policy, and staff training.
- Audit Current Use – Run a staff survey to find out which AI tools are already in use; a short sketch of how the results might be tallied follows this list. Teachers often adopt AI informally before governance catches up.
- Develop Transparent Policies – Matthew suggested a staff code of conduct with clear guidance on AI tools. He recommends using an appendix listing approved tools and their appropriate use cases, so updates are easy to manage.
- Focus on Training – AI literacy training should match the level of risk. All staff need a basic understanding of AI, while those involved in assessment or admissions may need specialist guidance.
- Consider Student Interaction with AI – AI-generated content, especially deepfakes or historical recreations, must be clearly labelled to avoid misconceptions. For example, Matthew’s school labels any AI-generated historical figures, whether in text, voice, or video.
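As a starting point for the audit step above, a simple script can tally the responses from a staff survey. This is only a sketch: it assumes a hypothetical CSV export named ai_use_survey.csv with columns staff_member, tool and purpose; the filename and columns are invented for this example and are not part of Matthew's audit tool.

```python
import csv
from collections import Counter

# Hypothetical survey export; the filename and column names are assumptions
# made for this example, not taken from Matthew's materials.
tool_counts = Counter()
purposes = {}

with open("ai_use_survey.csv", newline="") as f:
    for row in csv.DictReader(f):
        tool = row["tool"].strip()
        tool_counts[tool] += 1
        purposes.setdefault(tool, set()).add(row["purpose"].strip())

# A first-pass summary of which tools staff already use and for what,
# before the working group assigns a risk level to each one.
for tool, count in tool_counts.most_common():
    print(f"{tool}: {count} staff, purposes: {', '.join(sorted(purposes[tool]))}")
```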
Generative AI and Grey Areas
A key concern raised was generative AI use, especially tools like ChatGPT and Claude. These systems can technically perform high-risk tasks (e.g., marking work), but their terms of service usually prohibit this. If staff informally use generative AI for grading, this exposes the school to significant risk, even if no formal policy is in place.
Next Steps: Reflect and Act
Consider these questions to shape your own school’s AI approach:
- What AI tools are currently being used by staff and students, officially or unofficially?
- Do you have a named person or team responsible for AI governance?
- How do you classify risk levels for the AI tools your school uses?
- Are your students aware when they’re interacting with AI? Are parents?
- How do you handle appeals or redress if an AI tool influences an important decision?
Example Exercises for Schools
- Run an AI Use Audit: Use or adapt Matthew’s survey to find out which AI tools are already in use.
- Create a Staff Training Session: Hold a short INSET introducing the different AI risk levels and your school’s expectations.
- Draft an AI Code of Conduct: Start small with basic principles, then build out specific cases in an appendix.
- Label AI Content: Develop a school-wide standard for labelling AI-generated materials in lessons or assessments.
- Test a Redress Process: Role-play what would happen if a student disputes an AI-based decision. Is there a clear, fair process?
Further Resources
UK DfE Generative AI Guidance for Schools
Matt Wemyss’ Resources Pack and Audit Tool (Free Download) (Use code CASEU for free access)