11 February 2026
Getting the Most Out of Copilot in School
If you were unable to join us for the “CAS AI: Getting the Most Out of Copilot” online community meeting, don't worry! You can catch up on all the content and a recording of the session below.
This online community meeting explored how Microsoft Copilot can support teaching, assessment and whole-school workflows — from quick lesson resources to custom AI agents.
Key Takeaways
- Copilot’s free and 365 versions offer different capabilities, particularly around integration with Microsoft tools.
- Custom instructions and saved “memories” can significantly improve the quality and consistency of AI outputs.
- The Teach module enables rapid creation of lesson plans, quizzes, and study aids within a safeguarded workflow.
- AI can enhance assessment and feedback — but requires careful chunking, checking and human oversight.
- Custom Copilot agents open up powerful possibilities for curriculum, governance and operational support.
The session brought together Jonathan O’Donnell (Harris Federation) and Joe Cozens (Clifton High School) to demonstrate how Copilot is being used across schools — from classroom teaching to trust-wide strategy.
Understanding Copilot in a School Context
Jonathan began by clarifying an important point: Copilot itself is not a standalone large language model. It is built on the same OpenAI GPT models that power ChatGPT, but operates within Microsoft’s environment, including enterprise data protection when accessed via a school account.
For many schools, this is the reason Copilot is permitted where other AI tools may not be. Signing in with a work account and seeing the green enterprise protection indicator signals that data stays within the Microsoft service boundary.
He also highlighted the distinction between:
- Free Copilot (web-based, limited integration)
- Copilot 365 licence (paid, deeply embedded across Word, Excel, PowerPoint, Teams and SharePoint)
This distinction becomes crucial when considering scale and whole-school implementation.
Improving Outputs: Custom Instructions and Memory
One of the most practical parts of the session was the demonstration of Copilot’s custom instructions and saved memory features.
Rather than repeatedly prompting:
“Respond in UK English” or “I teach secondary students in England”
…you can store that information once.
In a CAS context, that raises interesting possibilities:
- Could you create shared prompt libraries across departments?
- Should departments define tone or reading age expectations via custom instructions?
- How might this support accessibility and SEND adaptations consistently?
Jonathan demonstrated how saved memories can be edited or deleted — and how temporary chats can remove context when required.
Pages and Collaborative Editing
Copilot’s “Edit in Pages” function allows users to develop content collaboratively with AI inside a live document space.
Instead of copying text into Word, users can:
- Refine AI outputs interactively
- Insert tables, code blocks, checklists or equations
- Roll back to previous versions
For Computer Science teachers, the ability to embed and refine code snippets directly in this space is particularly compelling.
Image, Video and Infographic Creation
Copilot now connects to Microsoft Designer, allowing image generation and editing — including moving elements and editing AI-generated text within images.
Jonathan demonstrated:
- Editing AI image text to fix common hallucination errors
- Creating infographics (with caveats about layout accuracy)
- Generating short explainer videos, which can be refined in Clipchamp
This opens up interesting questions for departments:
Are we teaching pupils how these tools generate media? Or simply using the outputs?
Prompt Templates for Teaching
A particularly useful strategy was the development of prompt templates with placeholders.
For example:
- Parsons problems in specific programming languages
- Disciplinary literacy prompts including etymology and morphology
- Debate question generators
By using variables (e.g. [exam board], [programming language], [topic]), departments can create reusable prompt frameworks that reduce cognitive load for staff.
This feels especially relevant for CAS members supporting early career teachers.
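The placeholder approach can be sketched in a few lines of Python. This is a minimal illustration, not a Copilot feature: the template wording and the helper function are assumptions, and in practice the filled prompt would simply be pasted into Copilot.

```python
# Illustrative sketch of a reusable prompt template with [placeholder]
# variables, as described in the session. The template text and the
# fill_template helper are hypothetical examples, not a Copilot API.

TEMPLATE = (
    "You are preparing materials for the [exam board] computer science "
    "specification. Create a Parsons problem in [programming language] "
    "on the topic of [topic], suitable for [year group] students. "
    "Shuffle the lines and include the correct ordering as an answer key."
)

def fill_template(template: str, values: dict) -> str:
    """Replace each [placeholder] in the template with its supplied value."""
    for key, value in values.items():
        template = template.replace(f"[{key}]", value)
    return template

prompt = fill_template(TEMPLATE, {
    "exam board": "OCR",
    "programming language": "Python",
    "topic": "iteration with while loops",
    "year group": "Year 10",
})
print(prompt)
```

A department could keep a shared file of such templates, so staff only fill in the bracketed variables rather than writing prompts from scratch.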
Assessment for Learning: What Works (and What Doesn’t)
A candid section of the session focused on limitations.
Handwritten text extraction was demonstrated to be unreliable — even with a paid licence.
Although extraction was around 80% accurate, critical errors made it unsuitable for high-stakes marking.
However, AI was shown to be more effective for:
- Multiple-choice or short-answer questions
- Generating model answers
- Expanding concise teacher feedback
- Producing revision resources
The key message: Always keep the human in the loop.
Jonathan advised breaking marking into small chunks to avoid context-window limitations.
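The chunking advice can be made concrete with a short sketch. This is an assumption-laden illustration: the character budget is arbitrary, and real context limits are measured in tokens, not characters.

```python
# Minimal sketch of breaking a batch of student answers into small
# chunks before pasting each chunk into Copilot with the mark scheme.
# The 4000-character default is illustrative, not a real Copilot limit.

def chunk_answers(answers, max_chars=4000):
    """Group answers into batches whose combined length stays under
    max_chars, so each prompt fits comfortably in the context window."""
    batches, current, size = [], [], 0
    for answer in answers:
        if current and size + len(answer) > max_chars:
            batches.append(current)
            current, size = [], 0
        current.append(answer)
        size += len(answer)
    if current:
        batches.append(current)
    return batches

# Example: 30 short responses split into manageable batches. Each batch
# would be marked separately, then checked by the teacher before release.
answers = [f"Student {n}: a short written response..." for n in range(1, 31)]
batches = chunk_answers(answers, max_chars=200)
```

Smaller chunks also make the human-in-the-loop check easier, since the teacher reviews one batch of feedback at a time.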
Copilot Teach: Lesson Planning and Study Tools
Joe introduced the Copilot Teach module, which provides structured tools such as:
- Lesson planner
- Quiz generator (built directly into Microsoft Forms)
- Fill-in-the-blanks activities
- Flashcards and matching tasks
Of particular interest for assessment was the ability to generate Forms quizzes quickly and use practice mode to encourage retrieval with low stakes.
Teams integration was also highlighted. Copilot can now summarise rubric feedback into concise student-facing comments — potentially increasing the likelihood that pupils actually read and act upon it.
Next Steps: Questions for Your Practice
You might reflect on:
- How are we currently using AI — for efficiency or for pedagogy?
- Are we building staff capability in prompt design?
- Where could prompt templates reduce workload without deskilling teachers?
- How robust is our “human in the loop” approach to assessment?
- Could custom agents support curriculum coherence across departments?
Suggested Classroom Exercises
- Create a Parsons problem generator prompt for your current programming unit.
- Use Copilot Teach to build a retrieval quiz and analyse how pupils respond to practice mode.
- Compare AI-generated disciplinary vocabulary lists with those produced manually.
- Test infographic outputs with pupils and critique hallucinations as a digital literacy task.