
12 March 2026

AI TeachMeet London 9th March

Written by Jonathan O'Donnell | Educational Consultant

The intent and purpose of the day

On 9 March, educators from across schools and trusts came together in London for a full-day AI TeachMeet: a space built not around hype, but around practical classroom reality, strategic leadership, honest reflection and professional curiosity. The purpose of the day was clear from the outset: there is already plenty of noise around AI in education, but too often there is too much ‘why’ and not enough ‘how’, too many overblown claims, not enough discussion of cost, and not enough openness about what has not worked. This event was designed to be different.

The day was framed by a shared agreement: share practice, not hype; be transparent; declare affiliations and biases; challenge ideas, not people; remain open-minded; and protect confidentiality. That tone mattered. It created exactly the kind of professional space educators need if we are going to talk seriously about AI — a space where people can be candid, specific and grounded in the realities of schools and colleges.

Across the day, we heard from colleagues working in primary, secondary, trusts, leadership, classroom teaching and innovation. The result was a rich picture of what responsible AI adoption can look like in practice: curriculum design, governance, workload reduction, strategy, AI literacy, innovation, marking, lesson planning and bespoke tool building.

It was also about community. The day brought together educators who are experimenting thoughtfully, asking difficult questions and trying to balance innovation with safeguarding, pedagogy, equity, workload, governance and professional judgement. In that sense, the event was not simply about AI tools; it was about building a stronger professional culture around how we talk about them.

Presentation highlights from the day

1) Andy Dax – Implementing AI in primary and developing AI literacy from age 4–18

Andy opened with one of the most important themes of the day: AI is not a single computer science problem — it affects every subject across the school. He outlined Wellington College’s work to build a 4–18 AI literacy curriculum, rooted in shared language, shared values and a cross-curricular approach, recognising that staff confidence varies and that schools need a common direction.

His argument for starting early was especially powerful. Andy made the case that children are already encountering AI through home, school and the wider world, so schools need to respond by building critical thinking and ethical understanding from the earliest years. His emphasis on modelling mattered too: teachers showing pupils, live and age-appropriately, how to ask questions of AI and how to think critically about the answers.

One standout idea was the insistence that ‘order of thinking matters’. Pupils should think first, attempt first, and use AI as a reflective partner rather than a substitute for thought. Another was his reminder that schools must build independence, not dependence. That is exactly the right challenge for AI literacy work in schools.

Impactful lines from Andy’s session included: ‘AI is not a single computer science problem.’ ‘We are all a teacher of literacy and AI literacy.’ ‘Order of thinking matters.’

2) Ravi Chagger – Osborne Co-operative Academy Trust’s AI journey so far

Ravi shared a trust-wide perspective on implementation, governance and culture. His presentation showed that meaningful AI adoption does not happen through isolated enthusiasm alone; it needs structures. Osborne’s model includes a Trust IT Sub-Committee, AI Forum, AI Focus Group, AI Champions and AI Leaders for students, alongside a growing knowledge base, approved tools and regular staff training.

A particularly striking aspect of Ravi’s session was the link to AI literacy and accountability. He highlighted the OECD/PISA 2029 focus on Media and AI Literacy, asking a question that should stay with every school leader: ‘Are our students becoming more capable, or simply more exposed?’ That line captured the central tension perfectly. Exposure to AI is inevitable; capability must be intentionally built.

Ravi was also refreshingly honest about the realities of rollout. He noted that some things had been introduced too quickly, that remote approaches had not always landed well, and that not all leaders naturally embrace technology. His closing message was an important corrective to simplistic narratives: transformation is not the result of a software update, but a culture you build and time you dedicate to staff development.

Impactful lines from Ravi’s session included: ‘Are our students becoming more capable, or simply more exposed?’ ‘We did too much too quick – remote was a mistake.’ ‘Transformation is not a result of a software update you download, but a culture you build.’

3) Tim Clarke – Leadership Team: Gen AI Research Study

Tim’s session brought strategic leadership sharply into focus. His research asked how generative AI could be used responsibly, strategically and innovatively by school leadership teams to improve decision-making, self-evaluation and impact for learners, staff and the wider school. He grounded this in clear success criteria, including stronger strategic thinking, better legal compliance, more data-informed leadership and improved access to documentation and messages.

What made this session particularly useful was the range of concrete case studies. Tim shared examples including summarising safeguarding documents, supporting attendance work, improving reports to governors, analysing learning walks and creating ‘critical thinking partners’ for different leadership roles. The time savings were significant, but the deeper message was even more important: AI can improve decision quality, time reallocation and implementation fidelity when used well.

His caution was equally important. Tim reminded everyone to ‘keep the professional in the loop’ and ‘never trust anything AI tells you’. That balance — practical benefit combined with disciplined scepticism — was one of the strongest leadership messages of the day.

Personally, I have already adopted Tim's concept of treating NotebookLM notebooks as critical thinking partners and will be sharing it widely across the Harris Federation.

Impactful lines from Tim’s session included: ‘Keep the professional in the loop.’ ‘Never trust anything AI tells you.’ Generative AI delivered ‘three leadership dividends: decision quality, time reallocation, implementation fidelity.’

4) George Greenbury – Breaking the Mould: how schools can drive genuinely groundbreaking innovation

George’s presentation challenged schools to think bigger about innovation. Using a four-stage model — clarify a problem, ideate solutions, develop a prototype, implement innovation — he showed how schools can move beyond simply consuming edtech and instead design genuinely new and useful solutions.

His work around Inkling, developed in partnership with Clifton College, was particularly thought-provoking. The central idea — ‘Every student deserves a personal tutor. Now they can have one.’ — offered a clear example of schools attempting to solve real problems through purposeful design rather than generic AI enthusiasm. The session also showed the importance of feedback loops, ethics, refinement and the need to keep adapting tools based on how students actually use them.

George’s session was a reminder that innovation in schools does not have to mean waiting for external companies to decide what matters. Schools can identify problems, prototype solutions and shape the tools they need themselves — if they are willing to think differently and stay focused on usefulness.

Impactful lines from George’s session included: ‘New + useful = innovative.’ ‘Every student deserves a personal tutor. Now they can have one.’

5) Adam Lockwood – Building your own AI marking systems

Adam’s session was one of the most practical of the day. He focused on using AI to reduce marking workload, especially in computing, and demonstrated how systems can be built to process student responses, apply criteria and return personalised feedback. His examples covered QLAs (question-level analyses), Google Docs, Google Sheets and even approaches to marking programming work more efficiently.

What stood out was Adam’s honesty about iteration. He did not present AI marking as effortless; he showed the refinements, the prompting, the adjustments to harshness, and the need to align outputs closely to the mark scheme. That honesty mattered because it framed AI not as a magic shortcut, but as a tool requiring thoughtful design and testing.

His core message landed well: AI can help teachers scale timely feedback and reduce repetitive workload, but only if the teacher’s professional judgement remains central. As with other sessions, this was not about replacing the teacher; it was about building better systems around the teacher.

Impactful idea from Adam’s session: every ‘clever’ AI marking workflow still depends on careful prompt design, criteria alignment and human review.
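To make the shape of that workflow concrete, here is a minimal, hypothetical sketch of how a mark-scheme-aligned marking prompt might be assembled before being sent to whichever AI service a school has approved. This is my own illustration, not Adam's actual system; the function and field names are assumptions made for the example.

```python
# Hypothetical sketch of a mark-scheme-aligned marking workflow.
# This only builds the prompt; sending it to an approved AI service
# and reviewing the output remain the teacher's responsibility.

def build_marking_prompt(question, mark_scheme, student_answer, max_marks):
    """Assemble a marking prompt that pins the model to the mark scheme."""
    criteria = "\n".join(f"- {c}" for c in mark_scheme)
    return (
        f"You are marking a student response out of {max_marks}.\n"
        f"Question: {question}\n"
        f"Award marks ONLY against these criteria:\n{criteria}\n"
        f"Student answer: {student_answer}\n"
        "Return the marks awarded, the criteria met, and one piece of "
        "constructive feedback. Do not invent criteria."
    )

prompt = build_marking_prompt(
    question="Explain what a variable is in programming.",
    mark_scheme=[
        "Identifies a variable as a named storage location (1 mark)",
        "Notes that its value can change while the program runs (1 mark)",
    ],
    student_answer="A variable is a name for a value that can change.",
    max_marks=2,
)
print(prompt)
```

Even in this toy form, the sketch reflects the session's core point: the criteria alignment lives in the prompt design, and the human review step sits outside the code entirely.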

6) Carmen Tenorio-Nunez – From classroom problem to AI-powered solution: Building the Slide Engine

Carmen’s presentation was a brilliant example of what happens when a teacher starts with a genuine pain point and builds from there. Her project, The Slide Engine, was designed to automate the process of standardising and populating PowerPoint lesson materials, turning a repetitive planning task into an efficient workflow. The headline figure was striking: 15 minutes per lesson reduced to 15 seconds, with 126 hours reclaimed weekly across staff.

Just as important as the outcome was the process. Carmen showed how AI was used to draft code, debug errors, understand failures, refine logic and suggest improvements, with AI acting as a thinking partner rather than as a replacement for expertise. She also foregrounded responsible development: testing thoroughly, documenting behaviour, considering security, engaging IT early and framing work around pedagogy.

Her opening principle was probably one of the cleanest statements of the day: ‘If it’s repeatable, it’s automatable.’ That line will resonate with many school teams looking at where small, sensible automation can free educators to focus more on teaching.
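As an illustration of that principle, here is a minimal, hypothetical sketch of the kind of templating step a tool like The Slide Engine automates. The real project's internals were not shared in detail, so the data structure below is an assumption: lesson content goes in as plain data, and a standardised slide plan comes out, ready to be rendered to PowerPoint with a library such as python-pptx.

```python
# Hypothetical sketch of the "repeatable, therefore automatable" idea:
# turn structured lesson data into a standardised slide plan. A real
# tool would then render these specs to .pptx (e.g. with python-pptx).

def plan_slides(lesson):
    """Expand one lesson dict into an ordered list of slide specs."""
    slides = [
        {"layout": "title", "title": lesson["title"],
         "subtitle": f"{lesson['subject']} | {lesson['date']}"},
        {"layout": "bullets", "title": "Learning objectives",
         "bullets": lesson["objectives"]},
    ]
    for i, task in enumerate(lesson["tasks"], start=1):
        slides.append({"layout": "bullets",
                       "title": f"Task {i}",
                       "bullets": [task]})
    return slides

lesson = {
    "title": "Introduction to algorithms",
    "subject": "Computing",
    "date": "9 March",
    "objectives": ["Define 'algorithm'", "Trace a simple flowchart"],
    "tasks": ["Write an algorithm for making tea",
              "Swap with a partner and test it"],
}
for spec in plan_slides(lesson):
    print(spec["layout"], "-", spec["title"])
```

The value is not in any one slide but in the consistency: every lesson produced this way follows the same structure, which is exactly where the per-lesson time saving comes from.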

7) Pete Marshman – Playlab AI: teaching and learning tools for teachers and students

Pete introduced Playlab, a free platform designed to help educators and students become not just consumers of AI, but creators too. The emphasis was on hands-on AI literacy and building small, purposeful applications to solve real problems in school-safe ways.

The session showcased how teachers and students can create specialised AI apps rather than relying only on general-purpose chatbots. Pete demonstrated examples such as a maths worksheet generator, a history adventure game and a coding challenge generator, while also giving a simple framework for identifying audience, problem and solution.

One of the strongest themes from this session was empowerment. Playlab’s goals include increasing AI literacy, enabling educators to solve problems by building custom tools, and building a wider community around improving education through AI. That practical maker mindset added a valuable dimension to the day.

8) Page Starr – Paste, Ask, Act: making institutional knowledge executable with AI

Page’s presentation was an excellent example of teacher-led experimentation grounded in subject reality. Working in Computer Science and IT, Page described using AI to support lesson generation, specification delivery, SEND responsiveness and iterative improvement, especially while teaching a newer qualification.

The standout concept from the session was ‘executable knowledge’ — the idea that every failure, friction point or repeated issue can become a rule or instruction that improves the system. As Page put it, ‘Each rule started as a failure. The prompt is a record of what went wrong and how I fixed it.’ That is a remarkably useful way for educators to think about AI-enhanced workflows.

It was a session about craft as much as technology: observing what goes wrong, refining prompts, capturing context and gradually turning experience into repeatable intelligence. For teachers building resources at scale, that was an especially compelling message.
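The 'executable knowledge' idea can be sketched very simply. The structure below is my own illustration, not Page's actual prompt, but it shows the mechanic: each observed failure becomes a standing rule, and the growing rule list is prepended to every future request.

```python
# Hypothetical sketch of "executable knowledge": every failure becomes
# a rule, and the accumulated rules travel with each new request.

class PromptRules:
    def __init__(self, base_instruction):
        self.base = base_instruction
        self.rules = []  # each entry records the fix for a past failure

    def learn_from_failure(self, rule):
        """Record the correction for something that went wrong."""
        self.rules.append(rule)

    def render(self, request):
        """Build the full prompt: base instruction, rules, then request."""
        numbered = "\n".join(f"{i}. {r}" for i, r in enumerate(self.rules, 1))
        return (f"{self.base}\n"
                f"Rules learned from past failures:\n{numbered}\n\n"
                f"Request: {request}")

planner = PromptRules("Generate a lesson resource for GCSE Computer Science.")
planner.learn_from_failure("Match key terms to the exam board specification.")
planner.learn_from_failure("Provide a scaffolded version for SEND students.")
print(planner.render("Create a starter activity on binary addition."))
```

Seen this way, the prompt really is, as Page put it, a record of what went wrong and how it was fixed: the rule list only ever grows from lived classroom friction.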

9) Jonathan O’Donnell – Microsoft updates, live demos and open discussion

The final session pulled together many of the day’s themes through live demonstration and open discussion. I shared examples from the latest Microsoft tools, including Teach and Create experiences, while being very open about both strengths and shortcomings. In particular, there was positive discussion around features such as flashcards, fill-in-the-blanks, matching and supporting examples, alongside honest critique where tools still fall short in classroom usefulness.

I also shared reflections on Microsoft 365 Copilot licensing, cost considerations, why Harris has rolled out paid licences in the way it has, and how tools such as Researcher and Analyst agents are already being used effectively. The session also included examples of Interpreter mode in Teams, custom instructions, saved memories and chat history management, all aimed at showing how staff can get more value from tools when they understand how to shape them properly.

A particularly welcome addition was inviting Elliot Giles to speak about the Enterprise Skills Hub, connecting AI conversations to employability, future skills and how we prepare students for the changing world of work.

My reflections on running a full-day AI TeachMeet

Running an AI TeachMeet like this was extremely rewarding. There is something genuinely powerful about getting educators together in a room to speak honestly, share practical work, and learn from one another without posturing.

That said, a whole-day TeachMeet is also a lot of organising. One of the biggest challenges is making sure presenters have the freedom to share what they genuinely want to share, while also ensuring the agenda has coherence and there is not too much crossover between sessions.

One of my biggest takeaways is that the forum at the end was a great idea, but I wish I had planned more time for it. The appetite for discussion was clearly there. Another practical lesson: if you give presenters 20–25 minutes, they will almost certainly use 25 minutes, so it is wise to build in more time between sessions for discussion, questions and professional conversation.

And one thing I feel strongly about: hold the line. TeachMeets should remain safe spaces for educators to talk openly about AI tools and services. If we want candour, trust and honest professional exchange, then the presentation space itself should be for current educators. That matters.

Final thoughts: we need more of this

If the day proved anything, it is that educators are not short of ideas, insight or ambition when it comes to AI. What they need are the right spaces to speak honestly, learn from one another and test ideas without fear of judgement or pressure to perform certainty.

So my message at the end of this event is simple: create your own AI TeachMeets. Keep this community spirit going. Share what you are doing. Share what you have learned. Share what has gone wrong. Be brutally honest. The profession will be stronger for it.

Thank you to every presenter — Andy Dax, Ravi Chagger, Tim Clarke, George Greenbury, Adam Lockwood, Carmen Tenorio-Nunez, Pete Marshman, Page Starr, and everyone who contributed to the open discussion — for giving so generously and so honestly across the day. Thank you as well to all the guests who engaged so thoughtfully and made the event what it was.

A special thank you to CAS for supporting the event, helping ensure it could remain free for educators to attend, and to the Harris Institute of Teaching and Leadership for providing the venue and hosting the day.
