
13 March 2024

What's it like to work in AI? An interview with Andrew Lea

Becci Peters, our Computing Subject Lead, recently spoke with Andrew Lea about his career across various roles in AI. They discussed his views on the future of AI and how we can best prepare students. See the full interview below.

I work for a company called PXP, writing programs that use AI to produce what we think is highly effective marketing copy. It's quite interesting because it actually uses people's different personalities when writing the content, and I write the AI components that it uses.
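As a rough illustration only (the interview doesn't detail PXP's actual system), persona-conditioned copywriting is often done by folding a personality profile into the prompt sent to a language model. The function and persona below are hypothetical:

```python
# Hypothetical sketch: build a persona-conditioned prompt for a
# language model. The persona fields and wording are assumptions,
# not PXP's real implementation.

def build_copy_prompt(product: str, persona: dict) -> str:
    """Ask the model to write copy in a given personality's voice."""
    traits = ", ".join(persona["traits"])
    return (
        f"You are {persona['name']}, a copywriter whose style is {traits}.\n"
        f"Write three short marketing lines for: {product}.\n"
        f"Stay in character and keep each line under 15 words."
    )

persona = {"name": "Alex", "traits": ["warm", "witty", "plain-spoken"]}
print(build_copy_prompt("a reusable coffee cup", persona))
```

The resulting string would then be sent to whichever model the system uses; varying the persona varies the voice of the copy.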

Forty years ago I read Biology as part of Natural Sciences at Cambridge, and that really got me interested: just how do we do this thing called thinking? That's where I first got into AI. All of my roles since have involved AI and data analytics.

I've worked with AI throughout my career but it's a horizontal subject, isn't it? It's a thing that gets used in all sorts of different spaces, so I've worked in lots of different spaces, but they've all had that common theme.

Firstly, AI is fascinating. It's probably the most interesting subject in the world, even if that's a bit of bias on my part.

Secondly, I find that people who are interested in AI also tend to be fascinating people, because it touches on so many different subjects.

The final thing, which can sometimes be a bit of a pain, is that, as everyone's noticed, it's a really fast-moving field. It's getting faster and faster, and it feels to me like we're going through a second industrial revolution as we speak.

  1. The first was back in the 1980s, when I wrote what I think was the first ever natural language summariser. At that point, it was almost a novel idea that you could take in natural language text (the words that we use) and get a computer to do something with it.
  2. The most interesting would be running an aerospace company with my brother. We applied AI to space exploration, planetary landers and things like that.
  3. And then the most hilarious thing was back in 2005, when we wrote what was, even back then, a generative large language model. We had it mimicking politicians, which was hilarious. You can think of some obvious candidates, I'm sure!

It's hard to say what impact it's had so far, but I can make some predictions about the future - where AI is going, and at what speed.

I think it can become very much more adaptable to individual children or individual people learning. This allows the teaching it delivers to be very, very specific to where you are, to what you understand, to what you don't understand, to what you're interested in, to what your aptitude is. You can imagine it teaching you the piano. It could say, "You don't seem to know those notes, so we'll put those up as the ones you need to learn next."

I wrote a little system to do something like that for teaching Morse code. It was a very, very fast way of teaching Morse code because it knows what you understand, what you don't understand, and how fast you can go. I think that adaptability means it should be possible to learn complicated subjects with less effort, or to learn more with the same effort - that would be my guess.
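A minimal sketch of the kind of adaptive drill Andrew describes might track per-character accuracy and keep presenting the weakest characters. The class, data, and scoring rule here are assumptions for illustration, not his actual system:

```python
import random

# Hypothetical adaptive Morse trainer: estimate how well each
# character is known and always drill the weakest ones next.

MORSE = {"E": ".", "T": "-", "A": ".-", "N": "-.", "I": "..", "M": "--"}

class AdaptiveTrainer:
    def __init__(self, chars):
        # Start each character at 0 correct out of 1 attempt,
        # so unseen characters are drilled first.
        self.stats = {c: {"right": 0, "seen": 1} for c in chars}

    def accuracy(self, c):
        s = self.stats[c]
        return s["right"] / s["seen"]

    def next_char(self):
        # Pick among the lowest-accuracy characters, breaking ties
        # randomly so the drill doesn't repeat in a fixed order.
        worst = min(self.accuracy(c) for c in self.stats)
        candidates = [c for c in self.stats if self.accuracy(c) == worst]
        return random.choice(candidates)

    def record(self, char, correct):
        self.stats[char]["seen"] += 1
        if correct:
            self.stats[char]["right"] += 1

trainer = AdaptiveTrainer(MORSE)
for _ in range(10):
    c = trainer.next_char()
    # A real trainer would play the code and check the learner's answer;
    # here we simulate a learner who is right 70% of the time.
    trainer.record(c, correct=random.random() < 0.7)
    print(c, MORSE[c], trainer.stats[c])
```

Characters the simulated learner keeps missing stay at low accuracy and so keep coming back, which is the adaptive behaviour described above.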

Mostly they don't tend to have a background in AI, because whenever I've used AI, it's always been as part of a bigger team trying to achieve something else. For example, at PXP it's marketing, but I've also worked in sports companies where it's all about training, so the people there are experts in exercise and fitness.

Because it's horizontal, you tend to wind up in teams where some people don't know anything about AI. You work alongside experts in other subjects and learn about things you don't know.

Really interesting question - I think it's important to encourage anyone who finds AI fascinating to work at it, regardless of their background. And by the same token, I wouldn't encourage anyone who doesn't find it fascinating to work in it - they'd find it hard and frustrating. I would always encourage people to do the things they find interesting, because that's where they're going to find life satisfaction - regardless of their background.

Yes. Firstly, learn maths - underneath these AI engines there is a lot of mathematics. Program computers for fun - they are terrific fun - but it's also important to have other interests too.

As we said earlier, AI is applied to other subjects, so learn other subjects too. For example, AI in medicine requires knowledge of both AI and medicine, so learn them both. I suppose it's generally good advice anyway, isn't it? To have more than one string to your bow. Make AI one of them, but only one.

I think AI is a very egalitarian subject. It's a new subject, isn't it? It's cutting new ground - it's not yet well defined. I mean, we now even have a field called prompt engineering for these large language models. A year ago, that field didn't even exist. The fact that it's such a fast-moving subject means, I think, that there's space for anyone who wants to make their mark in it. I would say the qualification is being interested.

Firstly, I don't think AI quite divides up in that way. It doesn't make sense to say there's a technical part of AI, an ethical part of AI, and a commercial part of AI - they're all intertwined. When I'm writing AI I might think, "What data can I use to do this? I'm just a technical person, and that's fine." But in reality I need to consider the ethics of which data we use, and that might mean choosing a different algorithm that doesn't need to know people's personal data. The fields are all intertwined.

So in terms of what people should learn: learn the maths. As I said, read the classics; read the Bible to gain a firm foundation in ethics. Learn to write and present well, because one thing you're going to have to do all the time is present complex ideas to others in a simple way - which is very hard, because deep down AI is hard. And that's an incredibly valuable skill whatever field you wind up in, because so often you'll be an expert in a field dealing with people who aren't experts, and you've still got to get through to them.

First of all, it's going to be absolutely pervasive in the general workplace. It will be everywhere. Secondly, much of what we call AI at the moment will be seen just as an appliance, even as it is now with these large language models.

In terms of the skills, would I concentrate on the hard skills or the soft skills? Something I think we need more of in society is people with both. Policies are often made by people who don't really understand that society is a system with lots of complex interacting parts - and these are engineering concepts. I think soft skills are most useful when they're accompanied by hard skills, not when they stand in isolation.

For a young person coming up, I would say build the foundation of hard skills - the maths, the physics or whatever it might be, the computing - and build up the soft skills in parallel, because you're going to need both. I think the most valuable people will be those who straddle that full spectrum. So often today you see politicians who don't have that full understanding, and yet they're passing laws that will affect all of those things. So develop both sets of skills. It's all about being a rounded person, isn't it? Which I guess is one of the objectives of education anyway.