03 November 2023
Why we must (still) teach our children to code
The advent of ChatGPT and GitHub Copilot, based on Large Language Models (LLMs), has led many people to suggest that, since AI can apparently write code, perhaps programmers will soon be out of work; and, in the education arena, perhaps we no longer need to teach our young people to code. For example, in his speech on 6 July 2023, Sir Keir Starmer said "The old way – learning out of date IT, on 20 year old computers – doesn’t work. But neither does the new fashion, that every kid should be a coder, when artificial intelligence will blow that future away."
I don't think that particular soundbite is right at all. Learning programming is not a "new fashion"; it is part of the very fabric of learning computer science as a foundational subject. Not every child will grow up to be a coder, but that is a poor goal: it suggests a narrow, instrumental, purely-vocational view of education. Not every child grows up to be a biologist or an artist, yet we think it is important that every young person has some elementary understanding of biology and of art.
There are two main reasons why I think that programming should remain a fundamental part of what our children learn about computing:
- Powerful knowledge and foundational understanding. Even if AI could write perfect programs (which it can't), there are strong educational reasons for learning to do things that computers do well (Section 1), and programming most of all.
- Human programmers are more important than ever. AI will make programmers more productive, but it will not make them redundant any time soon (Section 2). Software development as a professional discipline is far from being blown away.
This article is tightly focused on the question of whether or not we should teach programming at school. An earlier article takes a broader look at the impact of AI in education.
1. Powerful knowledge and foundational understanding
It is tempting to think that if a computer can do X, then there is no point in teaching X to our young people. But actually much of what children learn at school, especially in mathematics, can be done rather well by computers. That is not an oversight: there are strong educational reasons for doing so (Section 1.1).
So far as coding is concerned, I argue in Section 2 that AI does not make programming redundant (not yet, and not for the foreseeable future). But even if it did, there are particularly strong educational reasons to believe that learning to understand software (including the ability to write it) is fundamental (Section 1.2).
The good news is that ultimately LLMs should open the way to a richer, more exciting pedagogy for computer science (Section 1.3).
1.1 Foundations matter
A humble calculator can multiply and divide multi-digit numbers in a flash, using virtually no energy. And yet our primary schools take every child through hundreds of hours of learning... arithmetic. It's called "numeracy", and if anything the emphasis on numeracy has increased over the last decade or two.
If a dirt-cheap machine can do the job, why do we do that? Because we have learned, through bitter experience, that calculators are not enough; that people who are innumerate have worse life outcomes, lower salaries, and poorer health (National Numeracy makes the case here). Numeracy both underpins deeper learning and is a foundational life skill.
1.2 Programming matters especially
The National Curriculum Programme of Study for computing says this (my bold):
The national curriculum for computing aims to ensure that all pupils:
- can understand and apply the fundamental principles and concepts of computer science, including abstraction, logic, algorithms and data representation
- can analyse problems in computational terms, and **have repeated practical experience of writing computer programs in order to solve such problems**
- can evaluate and apply information technology, including new or unfamiliar technologies, analytically to solve problems
- are responsible, competent, confident and creative users of information and communication technology
Note that our focus here is on what all children should learn, not just a geeky elite. That raises the bar considerably. We think that all children should learn elementary maths and science, but we do not insist that they all learn Japanese.
Why does the national curriculum make such a big deal about programming? Is coding like elementary maths, or like Japanese? Here are two articles that explain why programming matters:
- Why we should teach our children to code (Hello World 10, Oct 2019, page 62)
- Practical programming in computing education (NCCE Academic Board, Feb 2022)
Programming is a classic example of what educationalists call powerful knowledge, knowledge that is systematic, subject-specific, and distinct from common sense. The term was introduced by Michael Young in the context of geography; here's a good summary, and a talk by Michael Young himself. He says "Powerful knowledge refers to what the knowledge can do or what intellectual power it gives to those who have access to it. Powerful knowledge provides more reliable explanations and new ways of thinking about the world and …can provide learners with a language for engaging in political, moral, and other kinds of debates." (Young, 2008).
Here are some more specific reasons that programming is fundamental to learning computer science:
- It blows away the magic. If computers are simply magical black boxes that do wonderful things when you utter the appropriate incantations (mouse-clicks or, these days, prompts to ChatGPT), we are fundamentally disempowered. Someone else makes the machine; we simply use it. If it does not work, we are stuck. If it gives misleading or wrong answers, we risk following its bad advice. In short, we lose our agency: instead of us controlling technology, it controls us.
Even an elementary understanding of computer science dispels the magic. All my phone is doing is relentlessly and mindlessly executing instructions, one after another. This can be a revelation: "you mean, that's all it does?"
- It brings computer science to life, making it concrete, interactive, creative, and playful. Saying "computer science is the study of information, computation, and communication" is true but abstract; writing programs that compute over live data brings it to life. Studying computer science in the abstract would be a dry, eviscerated husk.
There is a false dichotomy between "knowing" and "doing", between "knowledge/understanding" and "skills". In fact, of course, the two are complementary. You can't do much if you don't have the powerful knowledge to direct your actions. But it is virtually impossible to truly internalise deep understanding without a lot of doing to illustrate concepts, exemplify them and put them to work, and move knowledge from memorised "facts" into visceral understanding. In computer science, programming plays a deep role in building understanding.
- It teaches logical thinking and precision. The computer pitilessly (but non-judgmentally) exposes sloppy thinking: the program simply doesn't work. Whether we articulate it or not, in writing programs we exercise the classic scientific method: we form a hypothesis about what is wrong; we devise experiments that will confirm or refute the hypothesis; we update our mental model of how the program works; we modify the program; and we iterate the entire process. (Side note: we need due caution about the extent to which logical thinking about programming "transfers" to logical thinking more generally; see for example Mark Guzdial's blog post on the subject.)
- It gives an informed basis for discussing the social and ethical dimensions of technology. Computer systems open up huge new opportunities for making the world better; but they also open up huge new ways for making it worse. Basing our ethical and societal judgements on actual knowledge, rather than myths and guesswork, will help us all, individually and collectively, to make better decisions.
In particular, it puts AI in perspective. A neural network is just a program that executes, like any other program, performing linear algebra operations over its inputs and a large collection of "weights" (a minimal sketch after this list makes the point concrete). ChatGPT is no more than 200,000,000,000 floating point numbers and a raft of linear algebra. Of course it doesn't have feelings! And there is a real danger in sliding into anthropomorphising AI. (Side note: there is considerable philosophical debate about whether there is any level of complexity at which a computer could be said to possess "feelings" or "consciousness". You might enjoy Heinlein's "The Moon is a Harsh Mistress" (1966), for example. But knowing how AI works is crucial for engaging with this debate in a meaningful way; and I think it's pretty clear that today's LLMs are nowhere near "consciousness".)
- Programming is a unique and rich form of expression. Sometimes programming is a means to an end: "write an app to show weather forecasts on my phone". But quite often it is a medium in which the author directly explores a design space that is not well understood in advance. For example, for a scientist building a model of the spread of Covid in a population, the code is the model; and the scientist may repeatedly modify it to explore different designs and their consequences. Similarly for an artist, the code that generates a picture or a piece of music is the medium of creativity. Coding underpins computational thinking.
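To make that "raft of linear algebra" concrete, here is a minimal sketch in Python (the layer sizes and random weights are invented purely for illustration): a "neural network" is nothing more than stored numbers, combined with the input by matrix arithmetic.

```python
import numpy as np

# A tiny, made-up "neural network": two layers of weights (the numbers that
# training produces), applied to an input using nothing but matrix
# multiplication and a simple non-linearity.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 8))   # layer-1 weights (invented for illustration)
W2 = rng.standard_normal((8, 3))   # layer-2 weights

def forward(x):
    h = np.maximum(0, x @ W1)      # linear algebra, then a ReLU
    return h @ W2                  # more linear algebra

print(forward(np.array([1.0, 0.5, -0.2, 0.3])))
```

A system like ChatGPT does the same kind of thing, only with hundreds of billions of weights rather than a few dozen.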
In short, it is simply beside the point that ChatGPT can easily write a sorting algorithm. What matters for pedagogy is the foundational understanding, creativity, and resilience that programming teaches.
1.3 Richer pedagogy
As Conrad Wolfram points out in his book The Maths Fix (see also Computer Based Maths), even if we believe that all children should become numerate, it is bonkers to teach maths as if computers did not exist. Rather than teaching children to execute long-division algorithms, we should take advantage of computational methods to teach maths in a way that is more ambitious, rich, engaging, and exploratory.
It's the same with coding. I am not arguing that we should ignore LLMs in teaching programming. Quite the reverse: we should exploit their strengths to allow us and our students to focus on the things that matter. For example, we know that many children find programming difficult, and struggle with where to place their semicolons. If a coding co-pilot can automate some of these low-level tasks, it may free more of their cognitive cycles to spend on higher-level goals, such as (for example) critiquing, explaining, and finding errors in draft code produced by an LLM.
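As a hypothetical illustration of that kind of exercise (this snippet is my own invention, not real LLM output), a teacher might show pupils a short, plausible-looking function drafted by a co-pilot and ask them to explain it and find its flaw:

```python
def average(numbers):
    """Return the mean of a list of numbers."""
    total = 0
    for n in numbers:
        total += n
    return total / len(numbers)   # flaw to find: average([]) raises ZeroDivisionError

print(average([4, 8, 6]))   # 6.0
```

The semicolons and loop syntax come for free; the thinking about what the code does, and when it goes wrong, does not.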
But this is all speculation. We are barely a year into the LLM era, so no one has much idea yet of how to build better pedagogy on that technology. That will come. You might, for example, enjoy Promptly: using prompt problems to teach learners how to effectively utilize AI code generators (Denny et al, 2023), or How Novices Use LLM-Based Code Generators to Solve CS1 Coding Tasks in a Self-Paced Learning Environment (Kazemitabaar et al, 2023). Meanwhile, let's avoid the knee-jerk reaction of "kids don't need to learn to code any more".
2. Human programmers are more important than ever
LLMs can certainly write code. But can they reliably write correct code? Not really: they are imitative, predicting the next symbol in the program based on the preceding symbols, the original prompt, and an enormous corpus of existing code, much of which contains bugs. Code written by an LLM is (for now anyway) fundamentally untrustworthy.
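In caricature (and it is only a caricature: this toy "model" just counts which token followed which, whereas a real LLM conditions on the whole prompt and everything generated so far), code generation looks like this:

```python
import random

# Invented, toy statistics standing in for what an LLM learns from its corpus:
# for each token, the tokens seen to follow it, with counts.
FOLLOWERS = {
    "def":  {"sort": 5, "main": 3},
    "sort": {"(xs):": 8},
}

def next_token(prev):
    """Pick the next token in proportion to how often it followed `prev`."""
    choices = FOLLOWERS.get(prev, {"<end>": 1})
    tokens, counts = zip(*choices.items())
    return random.choices(tokens, weights=counts)[0]

def generate(first_token="def"):
    tokens = [first_token]
    while tokens[-1] != "<end>":
        tokens.append(next_token(tokens[-1]))   # predict, append, repeat
    return " ".join(tokens[:-1])

print(generate())   # e.g. "def sort (xs):" -- plausible-looking, not guaranteed correct
```

The output is plausible precisely because it imitates what has been seen before; nothing in the process checks that it is correct.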
2.1. Copilots need pilots
If I ask ChatGPT to "write a sort algorithm in Python", it will do so, and do it well, because there are thousands and thousands of well-labelled sorting algorithms in the training data that ChatGPT has learned from.
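For illustration, here is the kind of textbook answer one might expect for that prompt (this particular merge sort is my own sketch, not actual ChatGPT output):

```python
def merge_sort(xs):
    """Sort a list using the classic merge-sort algorithm."""
    if len(xs) <= 1:
        return xs
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 9, 1, 5, 6]))   # [1, 2, 5, 5, 6, 9]
```

It is exactly this kind of small, well-specified, endlessly-rehearsed task that LLMs handle well.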
A sorting algorithm is a rare example of a task with a simple, easy-to-understand specification that can be stated in a few words. But suppose I ask ChatGPT to "write a system that gathers data from my health-monitor bracelet, and tells me if things are going wrong". Unlike a sorting algorithm, it is far from clear what I mean. How should the data be gathered? How should it be stored? What does "going wrong" mean? What should the user interface be like? How can it be resilient to data loss if the computer crashes? What about privacy concerns? Should the components communicate using cryptographic methods? How should the user be authenticated?
Real systems are complicated, and consist of many thousands of lines of code. Large language models just spit out text that fits the probability density of similar text they have seen before. It seems wildly unlikely that an LLM will generate an entire 100,000-line program any time soon, at least not the program you wanted.
What LLMs can do is act as a sophisticated auto-complete mechanism that works alongside a programmer, making them more productive, and perhaps, over time, a lot more productive. Rather than just auto-completing single words, it may fill in multiple lines of code. For example, rather than looking up the user manual for a data visualisation library, you may be able to say "Draw a graph showing the cumulative number of accidents over time, based on the data in array A".
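For a prompt like that, the assistant might fill in something along these lines (a sketch only: the array name A, its interpretation as accidents per time period, and the choice of matplotlib are all assumptions for illustration):

```python
import numpy as np
import matplotlib.pyplot as plt

# Assumed: A holds the number of accidents recorded in each successive time period.
A = np.array([0, 2, 1, 3, 0, 4, 2])

plt.plot(np.cumsum(A))                           # running total of accidents
plt.xlabel("Time period")
plt.ylabel("Cumulative number of accidents")
plt.title("Cumulative accidents over time")
plt.show()
```

The time saved is in not having to look up the plotting API; the judgement about whether the graph is the right one remains with the programmer.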
However, calling it "super-auto-complete" substantially underplays the role that LLMs can play. Auto-complete offers a single suggestion; an LLM offers a conversation. If the code it offers doesn't do the job, you can discuss changes with the LLM; at its best it can really feel like programming with a partner. To make this concrete, here is an example of a conversation that my colleague Rob Percival had with an LLM. Do take a look: it is hard not to be impressed with the quality of this interaction. Here's another reflective article from a professional software developer.
That is why these systems are often branded as "co-pilots", emphasising the partnership with a person, improving their productivity, but emphatically not replacing them. Co-pilots need pilots! The human pilot knows the goal, understands the code, and (with the help of an LLM partner that is really good at banging out boilerplate) iteratively guides the partnership towards the ill-specified but crucial goal of the software. "Taking flight with Copilot" is an accessible article discussing the effect of LLMs on programmer productivity.
Will this increased productivity mean that we need fewer programmers? No more so than when high-level languages displaced assembly code for most applications. Rather we will adjust upwards the scale and ambition of the software we write.
Programmers' jobs are safe for the foreseeable future. Indeed they will be in more demand than ever. You might enjoy "How coders can survive and thrive in a ChatGPT world" (IEEE Spectrum 2023).
2.2. Large language models write code that may or may not be correct
LLMs that generate English text are deeply prone to "hallucination". Asked to write a CV for you, they will confidently explain that you won the Nobel prize. Twice. They aren't "lying". They are simply producing text that fits the probability space of text they have seen before.
It's the same with programming. LLMs will produce extremely plausible code (in fairly tightly defined situations, as discussed in Section 2.1), but there is little reason to suppose that it is correct code. Every single thing that an LLM generates must be checked by a human, to ensure that it is fit for purpose, especially if it is going to form part of an inscrutable program on which other people will rely. A significant risk of AI-driven coding assistants is that outright-wrong code will find its way into running systems. (See for example "Asleep at the keyboard? Assessing the security of GitHub Copilot's code contributions" (Pearce et al 2022), or "Do users write more insecure code with AI assistants?" (Perry et al 2022).)
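To illustrate what "plausible but wrong" can look like (an invented example, not real Copilot output): the binary search below reads convincingly, yet its loop condition is subtly wrong, so the last remaining candidate is never examined.

```python
def binary_search(xs, target):
    """Return the index of target in sorted list xs, or -1 if absent."""
    lo, hi = 0, len(xs) - 1
    while lo < hi:                 # subtle bug: should be `lo <= hi`, so the
        mid = (lo + hi) // 2       # final remaining candidate is never checked
        if xs[mid] == target:
            return mid
        elif xs[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

print(binary_search([1, 3, 5, 7], 7))   # returns -1, even though 7 is present
```

Code like this will sail through a casual read, and some test cases, which is exactly why a human who understands it must check it.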
In education, one might wonder whether presenting students with lots of highly-plausible but sometimes-subtly-incorrect code is the best way to help them on their learning journey.
Of course, human beings write code with plenty of bugs too, but we are very far from the point where we might say that a big system written by ChatGPT will be less buggy than one written by a human, partly because it just makes stuff up, and partly because it has learned from a corpus of buggy, exploitable code.
But you can only check code if you understand how code works; in short, you must be a programmer.