20 May 2025
Working with Assessment Data - ECT online event
If you were unable to join us for the Working with Assessment Data online community meeting, don't worry! You can catch up on all the content and a recording of the session below.
Making Sense of Assessment Data in Computing Classrooms
- Measuring pupil progress is complex, but pre- and post-tests can offer insights, especially when the same test is used for both.
- Statistical tests such as the paired t-test can help determine whether progress is significant or just down to chance.
- Tools like Excel and Python (via Google Colab) can be used to visualise and analyse assessment data, even by beginners.
- Richer data sets (e.g. SEND status, Pupil Premium eligibility) allow for deeper analysis of equity in learning outcomes.
- National data shows persistent attainment gaps, but also surprising trends, such as EAL pupils outperforming their peers.
In our latest CAS ITT/ECT community session, led by Miles Berry, we took a detailed look at how assessment data can, and can't, be used to evaluate pupil progress, particularly in Computing. While it's tempting to link improved scores directly to specific strategies (like weekly multiple-choice quizzes), the reality is often more complicated.
We began with a simple data set: pre- and post-test scores from a cohort of students using identical 20-question tests. The goal was to assess whether progress had been made, and if so, whether it was statistically significant. Using Excel, Miles demonstrated how to calculate averages, create box-and-whisker plots, and apply the paired t-test to determine whether observed differences were more than random variation. Even a small improvement, he explained, can be statistically significant if the gains are consistent across the cohort.
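If you'd like to repeat that analysis in Python rather than Excel, here is a minimal sketch that runs in Google Colab. The scores are illustrative, not the session's data; scipy's ttest_rel performs the same paired t-test as Excel's T.TEST with type 1 (paired).

```python
# Paired t-test on pre/post scores, mirroring Excel's T.TEST(range1, range2, 2, 1).
# The scores below are illustrative, not the data used in the session.
from scipy import stats

pre = [8, 11, 9, 12, 7, 10, 13, 9, 11, 8]       # marks out of 20 before the unit
post = [11, 13, 10, 15, 9, 12, 14, 12, 13, 10]  # marks on the identical test afterwards

mean_gain = sum(b - a for a, b in zip(pre, post)) / len(pre)
t_stat, p_value = stats.ttest_rel(post, pre)

print(f"Mean gain: {mean_gain:.2f} marks")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A p-value below 0.05 suggests the improvement is unlikely to be down to chance alone.
```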
The session then shifted gears to more advanced data analysis using Google Colab and Python. A more detailed anonymised dataset included demographics such as form group, SEND status, Pupil Premium eligibility, and participation in regular AfL (Assessment for Learning) quizzes. This enabled a deeper dive: did certain groups benefit more? Did the quizzes make a meaningful difference?
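A rough sketch of that kind of subgroup analysis with pandas is below. The file and column names (pre_score, post_score, send, pupil_premium, afl_quizzes) are hypothetical stand-ins for whatever your own anonymised dataset contains.

```python
# Comparing gains across pupil groups with pandas. Column names are hypothetical;
# send, pupil_premium and afl_quizzes are assumed to be True/False columns.
import pandas as pd
from scipy import stats

df = pd.read_csv("assessment_data.csv")  # e.g. a file uploaded to Google Colab
df["gain"] = df["post_score"] - df["pre_score"]

# Mean gain and group size for each subgroup of interest
for column in ["send", "pupil_premium", "afl_quizzes"]:
    print(df.groupby(column)["gain"].agg(["mean", "count"]), "\n")

# Is the quiz group's gain significantly different from everyone else's?
quiz = df.loc[df["afl_quizzes"], "gain"]
no_quiz = df.loc[~df["afl_quizzes"], "gain"]
t_stat, p_value = stats.ttest_ind(quiz, no_quiz, equal_var=False)  # Welch's t-test
print(f"Quizzes vs no quizzes: t = {t_stat:.2f}, p = {p_value:.4f}")
```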
The answer, perhaps surprisingly, was no. While the overall group made statistically significant progress, the quizzes themselves didn't appear to account for it. However, the data showed that students with SEND made greater gains than their peers, while Pupil Premium students made less progress, a disappointing but familiar trend.
In the second half of the session, Miles explored publicly available national data. Using school-level figures, he demonstrated how broader structural inequalities persist: students in more affluent areas tend to achieve higher Attainment 8 and Progress 8 scores. Yet a fascinating insight emerged: pupils with English as an Additional Language (EAL) often made more progress and, in many schools, outperformed native English speakers. This led to a discussion about possible reasons, from parental expectations to the cognitive benefits of bilingualism.
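If you want to explore this yourself, a hedged sketch of how school-level data could be examined in pandas follows; the file name and column names are placeholders to map onto whichever published performance tables you download.

```python
# Exploring school-level outcomes. The file and column names below are
# placeholders, not the exact fields in any published dataset.
import pandas as pd

schools = pd.read_csv("school_performance.csv")

# How do disadvantage and outcomes relate at school level?
print(schools[["pct_disadvantaged", "attainment8", "progress8"]].corr())

# Where EAL figures are reported, how does EAL progress compare with the whole school?
eal_advantage = schools["progress8_eal"] - schools["progress8"]
print(f"Median EAL Progress 8 advantage: {eal_advantage.median():.2f}")
```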
Throughout the session, the emphasis was on helping teachers understand what the data does and doesn't tell us. It’s not just about proving a strategy worked, but about critically examining how different students experience learning and achievement.
This session raises thoughtful questions for anyone involved in assessment:
- Are we assessing what we think we're assessing?
- Which groups of students benefit most, and least, from our current teaching approaches?
- Are we using data to inform teaching, or merely to justify it?
Practical exercises for the classroom:
- Use identical pre- and post-tests in a unit and analyse class data using Excel's AVERAGE, MEDIAN, and T.TEST functions.
- Introduce pupils to data visualisation by creating scatter plots or histograms of their test results (see the sketch after this list).
- Discuss fairness in assessment: why might some pupils make more progress than others?
- Try anonymising class data and letting pupils investigate patterns themselves.
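As a starting point for the visualisation exercise, here is a minimal matplotlib sketch; the scores are again illustrative.

```python
# Scatter plot of pre- vs post-test scores. Points above the dashed line
# represent pupils who improved; the scores here are illustrative.
import matplotlib.pyplot as plt

pre = [8, 11, 9, 12, 7, 10, 13, 9, 11, 8]
post = [11, 13, 10, 15, 9, 12, 14, 12, 13, 10]

plt.scatter(pre, post)
plt.plot([0, 20], [0, 20], linestyle="--", color="grey")  # no-change line
plt.xlabel("Pre-test score (/20)")
plt.ylabel("Post-test score (/20)")
plt.title("Pupil progress across the unit")
plt.show()
```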
Further Resources
Google Colab (for data analysis in Python)