
My personal top 10 'threshold moments' in 50 years of learning about computing


Last edit: 15 August 2022

I learned what I now consider to be these key threshold concepts over the course of nearly 50 years (the first in 1974, I think), but most of them could be learned in school. Indeed, several of them are now touched on in school.

Some of these ideas will be familiar to most teachers and others might be quite new. I will aim to add some resources that will help make these threshold concepts accessible to more young people.

1) The transistor function

I can remember my sense of awe that the 'S' curve relating the input and output voltages of a simple transistor circuit (or the valve before it) was the key to both audio amplifiers (in which I had an interest in my teens) and to digital logic. For the former you must stick to the central linear region of the curve - if your input signal wanders outside that region you will get distortion. For digital logic you must do the exact opposite: stay out of the linear region and exploit the 'flat' regions at the top and bottom of the curve.

(I've added a spreadsheet resource for this, which provides an interactive graphical simulation of the transistor applied - on two separate tabs of the sheet - to analogue amplification and digital transmission respectively. On the first sheet, alter the two editable values for bias and threshold to try to get the biggest amplification, i.e. the output sine wave with the largest range, without getting any distortion. On the second, try increasing the level of noise. You'll see that for quite high levels of noise distorting the input signal, the digital output remains correct, but eventually you will start to see individual signals being wrongly transmitted. The key learning is to understand how all this comes from the 'S' shaped transistor function. It perhaps needs explaining that the input signal is applied to the X value on the function graph, and the output signal comes from the Y value of the graph.)
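(For readers without the spreadsheet to hand, here is a rough sketch of the same idea in Python - my own illustration, not the resource itself, with a tanh curve standing in for the real transistor characteristic. A small sine wave stays in the linear region and is amplified cleanly, a large one is clipped, and a noisy bit stream pushed into the flat regions survives thresholding until the noise gets big enough to flip individual bits.)

```python
import numpy as np

def transfer(v_in, bias=0.0, gain=5.0):
    # An idealised 'S'-shaped transfer curve (tanh as a stand-in for the real
    # transistor characteristic): X is the input voltage, Y is the output.
    return np.tanh(gain * (v_in + bias))

t = np.linspace(0, 1, 500)

# Analogue amplification: a small signal kept inside the linear region is
# amplified with little distortion; a large one hits the flat regions and clips.
clean = transfer(0.1 * np.sin(2 * np.pi * 5 * t))
clipped = transfer(0.8 * np.sin(2 * np.pi * 5 * t))

# Digital transmission: a noisy bit stream driven into the flat regions and
# then thresholded is recovered exactly - until the noise flips individual bits.
bits = np.random.randint(0, 2, 50)
noisy = np.repeat(bits * 2.0 - 1.0, 10) + np.random.normal(0, 0.3, 500)
recovered = (transfer(noisy) > 0).astype(int)
print("bit errors:", int(np.sum(recovered[::10] != bits)))
```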

2) The programmable logic gate

We learned digital logic circuits in A-level (Nuffield) physics in the mid-70s - getting as far as a half- and full-adder. We were also taught the rudiments of Fortran programming (unusual then), and I knew quite a bit about programmable calculators. But I couldn't relate the two ideas: how could you get a fixed logic circuit to do something different at different points in time? And then I learned about a simple circuit that I would now call a 'programmable logic gate' - and it still intrigues me. Using just a handful of standard gates, you can design a circuit that can emulate any 2-input logic gate, according to the control signals you feed it. I can vividly recall that I suddenly felt that the two worlds were brought together.
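(While the promised resource below is on its way, here is a minimal sketch of the idea in Python - not the exact circuit I learned, but the same principle expressed as a lookup table / multiplexer, with a 4-bit control word acting as the gate's truth table.)

```python
def programmable_gate(a, b, control):
    # A 2-input 'programmable logic gate': the 4-bit control word is the
    # gate's truth table, and the inputs a and b select which entry to output.
    # control = (output for a=0 b=0, a=0 b=1, a=1 b=0, a=1 b=1)
    return control[a * 2 + b]

AND = (0, 0, 0, 1)
OR  = (0, 1, 1, 1)
XOR = (0, 1, 1, 0)

for name, ctrl in [("AND", AND), ("OR", OR), ("XOR", XOR)]:
    print(name, [programmable_gate(a, b, ctrl) for a in (0, 1) for b in (0, 1)])
```

The same fixed 'hardware' behaves as AND, OR or XOR purely according to the control signals fed to it - which is exactly the bridge between a fixed logic circuit and a programmable one.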

(I promise that I will create a resource for this to make this powerful idea accessible. Watch this space, or email me if you want to be notified.)

3) Transforming the representation of the problem

At university I read Engineering Science - a tough but very enjoyable three years. Some of the practical knowledge and skills have stayed with me (my hobby is restoring a vintage car), but of the more theoretical work, one idea has stayed with me all my career. Years later I would read Herb Simon's elegant summary of it: the key to solving a problem is to transform the representation of the problem until the solution becomes obvious. In engineering we learned that a steel girder bridge could be represented as an electrical circuit of resistors, or that a problem in electronics could be mapped onto a problem in fluid dynamics. We also learned how to apply mathematical transforms such as Laplace to transform a daunting-looking 2nd order differential equation into a simple quadratic algebraic equation. As a computer scientist I would eventually relate that to the importance of finding the right data structures, or object model. This learning would also engender a later interest in analogue computing (see No. 10).
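(To make that concrete, here is a small worked example of my own - not one from the course. With zero initial conditions and a unit-step input, the Laplace transform turns a second-order differential equation into simple algebra in s:

\[
\ddot{x} + 3\dot{x} + 2x = 1
\quad\xrightarrow{\;\mathcal{L}\;}\quad
(s^2 + 3s + 2)\,X(s) = \frac{1}{s}
\]

so \(X(s) = \frac{1}{s(s+1)(s+2)}\), and partial fractions plus a table of standard transforms give \(x(t) = \tfrac{1}{2} - e^{-t} + \tfrac{1}{2}e^{-2t}\).)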

4) The Turing Machine

When I first learned (in a computer science module within my engineering degree) about Turing's proof that his simple, hypothetical machine could solve any problem capable of being solved, it blew my mind. I didn't understand his proof, and don't to this day, but the ramifications are enormous. Now the whole thing is inverted: if your device, or programming language, can emulate a TM (i.e. is 'Turing complete') then it can do anything that any other computational device / language can do. I love all the recondite stuff about programming languages that have just a single instruction (e.g. decrement X and branch if it goes negative) - and you can prove that in such a language you can write any program that you could write in, say, Python or C#.
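(For anyone who wants to play with that idea, here is a tiny sketch of a single-instruction machine in Python. I've used SUBLEQ - 'subtract and branch if the result is less than or equal to zero' - which is the best-known example, rather than exactly the decrement-and-branch instruction mentioned above.)

```python
def run_subleq(mem, pc=0):
    # A minimal SUBLEQ machine. Each instruction is three cells a, b, c meaning:
    #   mem[b] -= mem[a]; if mem[b] <= 0 then jump to c, else fall through.
    # A negative branch target halts the machine.
    while pc >= 0:
        a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
        mem[b] -= mem[a]
        pc = c if mem[b] <= 0 else pc + 3
    return mem

# Program: add X into Y using three SUBLEQ instructions and a scratch cell Z.
X, Y, Z = 9, 10, 11          # addresses of the data cells below
mem = [
    X, Z, 3,                 # Z -= X   (Z now holds -X)
    Z, Y, 6,                 # Y -= Z   (i.e. Y += X)
    Z, Z, -1,                # Z -= Z   (clear Z) and halt
    7, 35, 0,                # the data: X = 7, Y = 35, Z = 0
]
run_subleq(mem)
print(mem[Y])                # 42
```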

5) VisiCalc

By a combination of good fortune and a little precociousness I was in at the start of the personal computing revolution (working for Commodore before university). But the first app (as we would now call it) to blow my mind was VisiCalc - the first spreadsheet - which I was fortunate to see soon after it was released in 1979. It was a very effective tool, and an extraordinary technical achievement (written directly in 6502 assembly code), but it was other things that captivated me. I think it was the first example I had seen of a 'direct manipulation' UI - you didn't need to type the formula for cell C1 as 'A1 + B1' - you referenced those two cells by moving the cursor to them (there was no mouse available at that time). Furthermore, though you were clearly writing a 'program' of sorts in a high level language, there was no 'modality' - you did not switch between writing code and 'running' the program! It's taken decades for that idea to take root in, for example, what we now call REPLs.
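(A toy sketch of my own of what made that feel so different: below, cells hold either plain values or formulas, and reading a cell evaluates it on the spot - there is no separate 'compile and run' step, just a sheet that is always live.)

```python
class Sheet:
    # A toy spreadsheet: a cell holds either a value or a formula
    # (a function of the sheet); reading a cell evaluates it on demand.
    def __init__(self):
        self.cells = {}

    def __setitem__(self, name, value):
        self.cells[name] = value

    def __getitem__(self, name):
        cell = self.cells[name]
        return cell(self) if callable(cell) else cell

sheet = Sheet()
sheet["A1"] = 3
sheet["B1"] = 4
sheet["C1"] = lambda s: s["A1"] + s["B1"]   # a 'program' written as a formula
print(sheet["C1"])   # 7 - no separate run step
sheet["A1"] = 10
print(sheet["C1"])   # 17 - the result is always up to date
```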

6) The Pinball Construction Set and The Incredible Machine

The next piece of software that fundamentally changed my understanding of computing was The Pinball Construction Set (PCS). The idea was that instead of just emulating a pinball machine on screen, you could design and build your own pinball machine, dragging and dropping (more direct manipulation) components onto a blank table and then immediately playing it, with working bumpers, flippers and scoring. The spiritual successor to PCS, if you like, was The Incredible Machine (TIM), which many more teachers will know. Apart from being great fun, these two apps really changed my idea of how we should be thinking about computers, which I captured in the phrase 'treating the user as a problem-solver, not a process-follower'. Most software does the latter, and AI is making it much worse - disempowering the user. (Despite being a naïve enthusiast for AI in my younger days, I now keep as far away as possible!) Another way in which PCS and TIM changed my understanding of computing was that they gave me a much deeper understanding, I believe, of the idea of a 'virtual machine'. One way of looking at computing is that once you have got to something Turing complete, subsequent progress is by building a hierarchy of virtual machines...

7) Life and Mandelbrot

Conway's Game of Life has fascinated me since I first saw it. It's hard to imagine that some people played it on paper - I first saw it as a program (written in BASIC!) on my Commodore PET. It opened up the whole idea of cellular automata (yet another idea first conceived by the genius John von Neumann), and today you can find the most extraordinary things done with it, including a working implementation of a Turing machine! (Another example of the hierarchy of virtual machines.) But in a deeper way what fascinated me was the idea that such a simple set of rules could generate such complexity. I felt the same thing when I first learned about the Mandelbrot set: here was a mathematical algorithm that you could implement today in a dozen lines of code, yet which could produce deterministic (non-random), non-recurring patterns of infinite complexity and strange beauty. Philosophically, it led me to one of the most fundamental questions: are those patterns 'out there' and we are discovering them by running the algorithm, or are we creating them? (If there was ever a geekiness competition, I might mention that in the 1980s I subscribed to a newsletter just about exploring the Mandelbrot set!)
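(To back up the 'dozen lines of code' claim, here is a rough ASCII-art sketch - my own, with arbitrary choices of window size and iteration limit.)

```python
# The Mandelbrot set in roughly a dozen lines: c is in the set if the iteration
# z -> z*z + c, starting from z = 0, never escapes beyond |z| = 2.
WIDTH, HEIGHT, MAX_ITER = 80, 32, 50

for row in range(HEIGHT):
    line = ""
    for col in range(WIDTH):
        c = complex(-2.0 + 2.8 * col / WIDTH, -1.2 + 2.4 * row / HEIGHT)
        z = 0
        for _ in range(MAX_ITER):
            z = z * z + c
            if abs(z) > 2:       # escaped - definitely not in the set
                break
        line += "*" if abs(z) <= 2 else " "
    print(line)
```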

8) Object-orientation

I started to learn about OOP in the very late '80s, using Smalltalk. The whole idea of OOP appealed very strongly to me (and I started to realise the connection back to PCS and TIM, above), but I didn't really get a big aha moment for several more years, when I suddenly realised that the most important point in OOP isn't encapsulation, association, or inheritance. It's polymorphism. Polymorphism means that you can have two different entities which implement the same interface (e.g. method signature) in different ways, and with polymorphism (there are several different ways in which this is implemented) you no longer need to know what type of thing you are dealing with.
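(A minimal illustration in Python, using duck typing - one of the 'several different ways' in which polymorphism can be implemented.)

```python
# Two unrelated classes that implement the same interface (a speak() method)
# in different ways. The calling code never needs to know which type it has.
class Dog:
    def speak(self):
        return "Woof"

class Robot:
    def speak(self):
        return "Beep"

def greet(speaker):
    # Works for anything with a speak() method, whatever its type.
    print(speaker.speak())

for thing in [Dog(), Robot()]:
    greet(thing)
```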

(For a resource, please see my Object Oriented Programming book with a foreword by Alan Kay).

9) Functional programming

Functional programming techniques started to creep into mainstream languages about 15 years ago, and I learned to use them. But I didn't really think deeply about FP as a concept in its own right until I (at very short notice!) decided to become a computer science teacher in 2016, and found myself having to teach it to pupils. Only then did I start to realise the fundamental importance of writing side-effect-free functions, and of being able to treat a function 'as a first class object', i.e. to be passed into another function as a parameter, or returned as a result from another function, just like an item of data. I now see FP as the future of programming, and look forward to the day (I hope in my lifetime) when we teach it from the outset. Incidentally, in my professional career, this is now the third paradigm shift I've been through. The first was the introduction of structured programming: I can remember saying 'but how can you write real programs without a GOTO statement (familiar to me from Fortran, BASIC, and Assembler)?' long before I read Dijkstra's famous paper on the subject. And it was probably the OOP shift that introduced me to the whole idea of 'paradigm shift' and Thomas Kuhn's deeply influential book The Structure of Scientific Revolutions.
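(A small sketch of both ideas - side-effect-free functions, and functions passed in and returned as values, just like data.)

```python
# compose takes two functions and returns a new one:
# functions being treated as first-class values.
def compose(f, g):
    return lambda x: f(g(x))

def double(x):       # side-effect free: same input, same output, nothing changed
    return 2 * x

def increment(x):
    return x + 1

double_then_add_one = compose(increment, double)
print(double_then_add_one(5))                        # 11
print(list(map(double_then_add_one, [1, 2, 3])))     # [3, 5, 7]
```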

(For a resource please see my Functional Programming book with a foreword by Simon Peyton Jones.)

10) The differential analyzer

I've put this one last, even though it is a very old idea, just because it was the most recent 'threshold moment' for me. When I was a young boy I was a fan of Meccano and I can remember reading - in the Meccano Magazine, I think - about Douglas Hartree's Differential Analyser (DA) - a small-scale emulation of Vannevar Bush's invention - that Hartree made from Meccano for the Cambridge Computing Laboratory (a fragment of his machine is preserved in The Science Museum, along with a full-scale DA). The article described this contraption as a computer, but I had no idea how it worked, or what it could do. I assumed the word 'differential' referred to the same mechanism that you find in a car - and which had fascinated me since I first learned how to build one with the Meccano Mechanisms set. In fact a DA does use multiple such differentials, but that's not where the name comes from. Years later I learned that it was a machine to solve differential equations (a concept I hadn't heard of until A-level Applied Maths). But I still couldn't see how a DA worked. Just a couple of years ago I decided I would get to grips with it and read Bush's original documentation. I now understand it well enough that I am confident I could 'program' one, and have written a very visual resource showing how it works.
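(As a very rough numerical stand-in for the mechanical integrators - my own example and step size, not Bush's - a DA solves an equation by wiring integrators together in a loop; below, two 'integrators' are chained to solve simple harmonic motion, d²y/dt² = -y.)

```python
# Emulating a differential analyser: the 'wiring' feeds -y back in as the
# second derivative, and two chained integrators recover y' and then y.
dt = 0.001
y, dy = 1.0, 0.0            # initial conditions: y(0) = 1, y'(0) = 0

t = 0.0
while t < 6.283:            # roughly one full period (2*pi)
    d2y = -y                # the interconnection: y'' = -y
    dy += d2y * dt          # integrator 1: integrate y'' to get y'
    y += dy * dt            # integrator 2: integrate y' to get y
    t += dt

print(y)                    # close to 1.0: the solution y = cos(t) has come full circle
```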

The big aha was reading that Bush believed the DA was important not just because it could solve complex non-linear differential equations (that would not be solvable using calculus), but because using a DA would give someone a much more powerful understanding of just what differential equations are and how they work. I'm not a fan of 'computational thinking' - at least as it is currently conceived and taught in schools - but this, to me, is a real example of computational thinking, and, if you read back, you'll see that the same could be said of several of the other personal threshold moments that I have chosen to capture.

(I've now added the Differential Analyzer slides. This is not really intended for school-level use, but some of your most able A-level pupils might just understand it - as long as they have done basic differential equations in Maths - and especially those wanting to study Engineering at Uni - which is all differential equations for the first year!)