Alex More: Preserving Humanity in an AI-Enhanced Education

Transcription
Daniel Emmerson 00:02
Welcome to Foundational Impact, a podcast series that focuses on education and artificial intelligence from a nonprofit perspective. My name is Daniel Emmerson and I'm the Executive Director of Good Future Foundation, a nonprofit whose mission is to equip educators to confidently prepare all students, regardless of their background, to benefit from and succeed in an AI-infused world.
Okay, Alex, thank you ever so much indeed for being with us today for Foundational Impact. For those of you who are listening, Foundational Impact is a podcast where we're exploring the role of artificial intelligence in education in numerous different guises. A real honour to have Alex with us today. We've been doing a good amount of work with the STEM Learning team, and I think that's a really good place for us to start, Alex, if that's all right. If you could give our audience a bit of an introduction as to who you are and what you're doing, particularly on the STEM Learning side, that'd be fantastic.
Alex More 01:01
Yeah, sure. So my name's Alex. Thanks for having me here. I am a teacher two days a week and a senior leader in a school in Dorset. I teach computer science and STEM subjects, so mostly science and computer science. I also look after transition, so I do a lot of work in a primary school setting, bringing younger children up to our school, that kind of transition between the two. And then I consult for the rest of the time when I'm not teaching, and one of those organizations I consult for is STEM Learning, particularly looking at AI CPD for teachers: how we can help teachers reduce workload, and how we can tackle some of the more difficult aspects of artificial intelligence, which range from safeguarding to ethics, all the way down to practical classroom activities. So that's my role, that's what I do.
Daniel Emmerson 01:46
And as far as artificial intelligence is concerned, I'm imagining that this hasn't always been a part of your job, even on the computer science side of things. What sort of drew you into that? And how did you become so actively involved?
Alex More 01:59
Great question. I've always been fascinated by edtech, educational technology. And back in 2021, just after the pandemic hit, I created a space called the Future Classroom in an old art room. Essentially, I took over this art room because it wasn't being used; it was just kind of abandoned. And with no budget, I managed to turn it into a pretty transformational space. In 2023 that won a HundrED.org award as one of the top 100 education innovations in the world. I think it's one of the only classrooms ever to get an award. Normally people get awards, not classrooms. But that was a really amazing project, and it gave me an insight into how powerful both technology and people can be: when you bring teachers and technology together, it can have a transformational impact. That's what inspired me to go on this path. Then, of course, ChatGPT launched, and I'd always had a bit of an interest in AI before; I'd read a lot about it. I was a big fan of Ray Kurzweil's work and The Singularity Is Near, and really interested in things like the AI winters, and even all the way back to Ada Lovelace and Alan Turing's impact with the Turing Test. So I've always had a bit of a fascination with AI. So when ChatGPT launched and the world was like, whoa, I could see the potential transformational impact for teaching at that point.
Daniel Emmerson 03:05
Can you tell us a bit more about that classroom? Let's go back to that moment when you were imagining the Future Classroom. What did you have there? What did it look like as an experience? And then maybe we can look at what differences there might be if you were to do that today.
Alex More 03:18
Yeah, great question. So I wanted to disrupt from the inside out, because we know that state schools in particular are quite traditional; we work towards an endpoint, which tends to be the exam. I'm a bit of a disruptor in education, always have been. I've been in the classroom for 22 years now, and one of the things I like to do is do things differently. I visited School 21 in Stratford and I was really inspired by the project-based learning they were doing there. They were using oracy as a real vehicle to get kids from quite difficult backgrounds to speak about lots of issues. I was inspired by that, and I thought, I can see potential to do something a little bit different. So I first got rid of the desks completely. It has no desks in there, at least not permanent ones, and the students move around and use the furniture. There are whiteboards around the space and a big motherboard at the front, and the kids can create and collaborate on ideas and projects on the whiteboards and then project it to what we call the motherboard, where six kids can work at the same time. That was kind of the embryonic stage. What I wanted to do was make kids brilliant owners of knowledge, not consumers of it, because I'm really conscious that sometimes students are just consumers of knowledge, and I wanted to break away from that. Particularly if we look at the post-pandemic classroom, where there was three-metre tape on the ground and, you know, teachers weren't allowed to go beyond it and kids were socially distanced, we've almost gone back to that default position. So the Future Classroom really disrupts that dynamic and it brings knowledge and skills together. They coexist rather than compete, which I think is really important, because traditionally they tend to compete a bit in education for bandwidth. And then it also brings the teacher and technology together. Technology doesn't replace the teacher; the teacher uses the technology, but only if it's useful to them. That's the elevator pitch, I guess, of what the Future Classroom does. But then it became much more than that. I saw the potential to connect worlds. So every Monday at lunchtime, our students at Shaftesbury School connect with a school in Accra, Ghana, which we also have a Future Classroom in now, which is really cool, and the kids learn together. And it's so fascinating because of the cultural differences, the financial differences: the kids in Ghana can't believe that all the kids in Shaftesbury have a phone, for example. It's just bringing the world closer. So the world's our classroom. And technology has that ability to bring the world closer and closer and closer.
Daniel Emmerson 05:27
And how does artificial intelligence fit into that at the moment? Has it become part of the Future Classroom? Is it a collaborator? Is it a contributor? What does that experience look like for the students?
Alex More 05:39
All of the above. So the teachers that use it use a lot of wrapper apps, things like Perplexity, the Gamma app, TeachMate AI, to create the resources for project-based learning, because that can be quite time-consuming. So that's a real help, and the quality of those resources, particularly on platforms like TeachMate AI, is really rich; teachers are very complimentary about that. But equally, we do a lot with large language models with the students. We do a lot around looking at biases, particularly between Ghana and the UK, because one of the things that's fascinating about that dynamic is how the children in Ghana don't necessarily see themselves in the outputs, because of where the data's trained. And they're fascinated. What's great is the kids in Africa, not just Ghana, we do stuff with Botswana, Nigeria, are really interested in artificial intelligence. But for them it's a little bit out of reach because of the way the country's set up with energy supply, and there are some real challenges there that we've tried to get under the surface of. That's really good for the kids in the UK, because some of them are using these technologies at home. But we've also got some students from very deprived households who don't necessarily have access to devices and technology; they can't afford wifi. So there are all these interesting dynamics. But to answer your question, we essentially use it at two levels, the teacher level and the student level, and we use it very, very differently at each.
Daniel Emmerson 06:59
And what sort of work are the students then doing around understanding and grappling with those biases that are almost baked into a lot of this technology? Is that something you're doing work around as far as debate or project work is concerned? It'd be really good to know what that looks like.
Alex More 07:14
Yeah, predominantly it looks like an oracy-based task, so using student voice to extend their opinions and views. We try to do a lot of work around oracy in the school generally, but it fits the Future Classroom model beautifully. And when we come to talk about artificial intelligence, one of the most powerful things to do is to give students a voice. I guess one of my criticisms of the way education is at the moment, and it's because of the pandemic and external pressures, is that the kids don't get a voice. I sit in the back of a lot of lessons and you wouldn't believe how little they speak. Sometimes kids speak three times a lesson, and only two of those occasions are to the teacher. It's a shame, really. So I see that we need to be speaking about artificial intelligence more with children. And what was fascinating is that the first AI sprint we did was with Darren Coxon and Priya Lakhani, and student voice and agency came out as a real theme from them. And I agree. I think that even at the primary level, the younger the better, we need to be having conversations with these kids about what this technology is. And I actually did some research myself into this. I interviewed a bunch of kids, then I ran a thematic analysis, and three themes emerged. The first was that students do not want AI to replace the human teacher; they very much see that we need human teachers. They also called AI “they”, whereas the teachers called it “it”, and I'm fascinated by that vernacular, because what do they mean by that term, “they”? Are they pointing towards almost a symbiosis between technology and humans, almost on a post-human level? We're going to put a research category around it. And the teachers' “it” is very categorical, isn't it? There was that language. I didn't notice it when I was interviewing the students; I only noticed it when I was going through the transcripts, and it was so obvious. And then the other thing they said, and I love this, this is my favourite part of the research, is that the children themselves feel that AI is just going to enhance humanity, but it shouldn't compete against our most beautiful human qualities, like love, empathy, consciousness, those types of things. They really felt that that was a human domain, and they didn't want to see AI infiltrating that space.
Daniel Emmerson 09:15
What about, then, examples of how students are using AI tools beyond academic work, thinking through challenges in their social lives or with their peer groups or relationships and family matters, where we know, for example, that they're going to AI tools to vent, to express themselves, to try and find solutions? And by doing so, I mean, there are positives and negatives to that, right? The positive is that they are expressing themselves in some way. The negative is that they could be reducing the amount of time they actually spend talking to another human about issues and matters that concern them. I'm wondering what your views are on this, particularly on the social side of how AI is being used in schools at the moment.
Alex More 10:04
Yeah, it's a great question, and I get asked this a lot, actually, particularly by CEOs and people in senior leadership who are trying to think about the place of artificial intelligence within primary, secondary and further education. In a nutshell, I think the positives are that it offers companionship, and it can be used to take away some of the emotions that humans get attached to. So I'll give you an example. I know a lot of teachers that use it to debug passive-aggressive emails from parents, because at the end of a busy teaching day you can be pretty exhausted, right? And you get this email, and it's probably not meant to be personal to you, but you read it as such. AI is brilliant at stripping that out and just saying, actually, this is what they're saying and this is what you should do. Kids use it in the same way, I think, and it's useful for things like debugging essays. It's kind of used as this creative starting point. My daughter is doing the IB at the moment, and I know she uses it just as a creation tool: how do I get started with this project? But equally, there's this downside that Laura Knight speaks quite a lot about, which is intellectual offloading. We could get into the danger of kids relying on it academically, an over-reliance. But there's also this synthetic intimacy, quite a mouthful, that, whereby children are getting into this idea that devices are almost on a human level. And that research I spoke about a minute ago points to that, doesn't it? The “they”, the nod to that word. And I guess there's a danger there that we really need to be aware of as parents, as educators, and as politicians making policy as well. How do we safeguard against that? Because that's happening in the unsupervised environment away from school; it's not happening under the supervision of teachers and responsible adults. So there's a bit of work to do there, I think, with parents, educating about the dangers, particularly on social media apps like Snapchat and TikTok that are not very strictly regulated, particularly if they're not from EU domains. And I think there's some work to do around how we teach kids to be responsible digital citizens in school, so that when they leave, in the unsupervised setting, they've got a good foundation.
Daniel Emmerson 12:08
Is that something that you think is going to increase in need, and if so, what could that look like? I'm thinking of examples that I've seen in terms of real-world practice. When you look at something like NotebookLM, you've got two characters that you can now interact with and engage with while they're hosting a podcast on your, I don't know, values homework or your geography assignment. And it feels, particularly to younger age groups, as though it is a conscious being that they're engaging with, as opposed to a machine. What might best practice look like on the digital citizenship piece that you mentioned there? And how might schools be able to prioritise the time they need to get to it?
Alex More 12:50
The time piece, I get asked about that a lot, and I think it's something that should be baked into the fabric of every lesson. I think AI has a space in every lesson. And I think it's really interesting that in the UAE they've just made it a compulsory subject from the age of four. That's a really bold move, and I think it makes a statement: technology is not going away, we need to embrace it. And more and more, to your point, I'm seeing educators saying, right, okay, we finally accept that AI is not going to go away, and we do need to do stuff about it; we do need to write policy, we do need to train teachers. Whereas I would say up until about three months ago, there was still a feeling in a lot of schools that this was just another fad, like VR or AR, and it was just going to disappear. But it isn't. And I think the work we need to do as teachers is to create frameworks in our schools that are context-based, because the thing is, a school in Dubai is very different to a school in India, which is very different to a school in the UK. So it needs to be context and domain specific. But essentially, what we need to look at here is digital literacy. There's a suite of skills that we really need to invest time in very young, as early as five, six, seven years of age, starting with just basic computer skills. Because it's not just AI: if we look at the future direction of travel for assessment in the next decade, all the exams bar maths are going to go online, so kids are going to do their GCSEs online. Now, the AI that sits within that is that there are going to be AI scribes, AI readers, voice-to-text. It's going to transform how kids do exams. But at the same time, there's always that risk of plagiarism, of non-authentic learner work. We've got these real challenges to grapple with in education. I think it's quite exciting, but I can see why people might be cautious of it. So my advice would be: it's about digital literacy, and it's about what we do in that piece to prepare students to go on and make good decisions when they're not with us.
Daniel Emmerson 14:40
And have you seen any good examples of digital literacy, particularly at the primary level?
Alex More 14:45
Yeah. And it can be anything from getting the kids to type 28 words per minute, by the way, and that has got nothing to do with AI. It's just that a lot of kids now interact on touchscreens, so their interaction is very much born from a touchscreen. Whereas when they do exams online or write letters, they're going to need to use a keyboard, and many of them can't. You wouldn't believe how many kids come to me at 11 years of age from primary school and they can only type eight words a minute. And for these exams I'm speaking about, they're going to have to type at least 28 to be able to keep up. So that's one aspect of it. And then as we progress up through the AI spheres, the biggest thing, I think, with AI is: can the kids use it not to do their work, but to help them create ideas around their work? So it's a creative medium rather than an over-reliance, an “it will do my work for me”. And if there are any teachers listening to this, which I'm sure there are, you'll know that if a kid turns in an AI assignment, there are so many giveaways, and it's almost educating the kids about that too. Saying, you know, the extended hyphen right off the bat, that's an AI giveaway. Using words like “leveraging” and “empowering”: AI loves those words. So educating them to be a bit more, I guess, organic with this technology, and to use it to create rather than just do their tasks, I think is a very useful thing.
Daniel Emmerson 16:00
And what about knowledge and concepts around things like truth, and what is a fact, and how do I know that what I'm seeing is grounded in evidence, or even reality in some cases? Are these things, from your experience, that you think should be investigated at a primary level, even without access to the technology?
Alex More 16:24
Yeah, I think, in fairness, some primary educators do explore this. I know I'd get in trouble if I said they didn't, because I know some guys and girls who are really pushing this technology. So I think that, yeah, there's a place for it, the younger the better in my opinion, as soon as they can synthesize and understand the concepts. It's a complex issue, though, because it all comes down to time in schools and where it fits within the curriculum. If we look at secondary, most schools now will deliver a six-week unit on AI ethics or computer technology ethics, and that involves deepfakes, digital manipulation, fake news, which sits at the core of that truth question you were talking about. One of the tasks they do is to debunk: they have to look at an AI-generated piece of work and a human one and say which is which. They have to look at deepfake images, which we can now create using lots of AI apps, and say which one's real and why. So they're getting that at the age of 11. What I think is inconsistent is how much they're getting beyond that, and before that. And that really depends on the teacher, whether they're a specialist or non-specialist, because in the UK, obviously, primary teachers don't tend to be specialists. I take my hat off to them, because they teach everything; they're not so domain-specific. Whereas when we get to 11 and secondary school, it's more domain-specific and subjects are siloed. And at the moment AI definitely sits within the domain of computer science. It doesn't really get spoken about outside of that, unless you've got a really passionate English teacher or French teacher, if that makes sense.
Daniel Emmerson 17:54
And as far as those concepts are concerned, there's also this argument that AI right now is at the worst it's ever going to be, right? And I hear this a lot, certainly when we're speaking to teachers: that it's only going to continue to improve at a massively rapid rate. And if that's the case, in terms of the quality of output, what we're teaching now about how to spot things isn't going to be relevant in three, four, five months' time. Is there something underlying there that we can focus on? I'm really interested in your thoughts on this one.
Alex More 18:29
Yeah, I think it goes back to what you said about truth, because there are going to be some things that endure. And if you look at the history and the research of computer science, this has always been a problem, right? You know, how true is it? And ultimately, companies like OpenAI are working quite aggressively towards something called AGI, which is going to be quite scary, really, I think, for its implications for society, education and humanity as a whole. But they've always been working towards AGI. It's just looking a little bit more realistic now.
Daniel Emmerson 18:57
Can you unpack that a little for our audience?
Alex More 18:59
Yeah, so, artificial general intelligence. There's this movement among some quite progressive people in the field, particularly over in Silicon Valley, where a lot of these technologies are born within startups. And I say startups because even Anthropic's Claude splintered from OpenAI. So OpenAI own ChatGPT, which is one of the world's most widely used large language models. They are quite progressive, OpenAI; they want to move towards AGI as soon as possible because they see the benefits. And in many ways it's a little bit like coming up with a cure for cancer: who can get there first? Who can be the person in history that gets AGI first and gets their name in the history books? So there's that race going on. And what happened about a year ago in Silicon Valley was fascinating, because they…
Daniel Emmerson 19:45
Sorry to jump in. With AGI, we're talking pretty much as close to human consciousness as we can get?
Alex More 19:50
Yeah. So let's unpack that quickly. Sorry. So Alan Turing, who was an amazing individual, very under-celebrated, came up with a test called the Turing Test, whereby the AI can essentially convince humans that it is a human itself. We're not quite there. It might look on the surface like we are in some cases, and there have been some claims that the Turing Test has been passed, but not with any real validity at the moment. But the predictions kind of place it around 2030. I see this a lot: 2030, 2032, the Turing Test will be passed. And what that means, then, Daniel, is that we're in an age where it's really difficult to decipher AI from human. And in some cases it already is, right? Essays, images, videos, voices. I know Hollywood's really grappling with this at the moment, and so are musicians, because AI is doing a really good job of ripping them off and taking their, you know, their IP. But the race is split, because not everyone's racing towards this goal. So, for example, when Anthropic were formed, they came from OpenAI and they weren't happy with the direction of travel, so they said, you know what, we're going to break away, we're going to do a startup. They're called Anthropic, and Claude is their product. Claude does not train on your data. It's much safer, much more cautious, and it's not racing towards AGI. It's got more of a sort of altruistic aim, really; it wants to do good for humanity. So not all AI is bad. But what we have to appreciate is that it's all human-generated, so whatever the intentions of the company behind the technology are, that's what you're going to see in the product. And something we haven't really touched on, though I think it's really interesting: have you heard this phrase, glazing? Is this on your radar?
Daniel Emmerson 21:28
It's not. It's not. Please enlighten me.
Alex More 21:31
So glazing is when the AI tells you lots of good stuff about your work, or about you personally, to make you feel better about yourself, so you're more likely to use the model more and more, because we all like to be told that we look good and that our essay is potentially a Booker Prize winner. And ChatGPT, for example, is really guilty of glazing. The CEO, a guy called Sam Altman, confessed this about two weeks ago in a press conference: it glazes us and makes everything sound a lot better than it actually is. Now, that's a danger with kids, right? Because if it's always bigging them up, how are they ever going to take any sort of criticism? So there's an element of digital resilience there as well that we need to unpick, along with digital literacy. But I thought that was an interesting one to throw into the fray, because not many people know it. If you think about your large language models, NotebookLM is a classic. There are two podcast hosts, basically, taking your book chapter, and I'm guilty of using this too, and glazing it. It reads it back to you like, ah, this is a seriously decent piece of work, when actually it might not be, right?
Daniel Emmerson 22:28
I hadn't heard the term glazing before, but I'm definitely familiar with what it is and what it can do. There are also ways the technology is adapting, particularly in terms of keeping you locked in, right? There never used to be questions at the end of a prompt: you'd prompt, you'd get a response, and that would be it, more or less. Whereas now you're asked, oh, would you like me to do this? Would you prefer me to do this for you on top of that? So it's locking you into that engagement. I'm keen to note, just as we wrap up, Alex, there's still quite a lot of fear and trepidation around use of AI from teachers in the classroom. Particularly now that the conversation is moving towards things like data privacy and intellectual property rights, people are becoming slightly more mindful as to what best practice looks like. Do you have any words of encouragement or advice for teachers who are waiting for their moment, or perhaps adamant that this isn't going to be something they're using in the future?
Alex More 23:34
Yeah, I think, first of all, a bit of a provocation, really, for the listeners. I always ask this question of teachers and leaders and anyone I'm speaking to about AI: what does this technology want? And that might sound like quite a curious question, but essentially this technology wants something. Your inbox wants to be emptied, and your emails want to fill up other people's inboxes. Your smartphone wants to be useful to you and keep your appointments and whatnot. So this technology does want something. And I think we need to view AI in particular as a branch of technology in terms of what it gives us, but also what it takes away from us as educators and as people. What does it give us, and what does it give our students and young people? And it gives us a lot, actually. For me, it gives me my time back to spend with my wife and kids, which is invaluable. That's a really good thing that it gives me. It gives me the ability to create an email or a document really, really quickly, and in a way that I might not have done it myself. It gives me an opportunity to interact and ask questions and find more information, like we used to use Google and the Internet for. But what it can take away, if you're not careful, is individual opinions, ideas, creativity, your intellectual property as a human being. It can strip that out if you're not careful and you lean on it too heavily. And as we work towards AGI, it can take jobs and a lot more. And I think this is a really scary thing that teachers hear in the press: oh, you know, AI is going to replace teachers. No, it's not. It definitely won't. And I think my message to teachers is always the same: it's not going away, we need to embrace it. But as Laura Knight said, we need to think with care.
Daniel Emmerson 25:09
Absolutely. And what a wonderful, wonderful way to wrap up. For listeners who haven't heard our episode with Laura Knight, it'd be worth going back and picking up on that to hear some of these concepts talked through. Really wonderful stuff. Alex, we're big fans of your work with STEM Learning, and I'm sure our audience will be as well. Really appreciate your time today, and I look forward to catching up again soon.
Alex More 25:31
That was great. Thanks, Daniel.
Voiceover 25:34
That's it for this episode. Don't forget, the next episode is coming out soon, so make sure you click that option to follow or subscribe. It just means you won't miss it. But in the meantime, thank you for being here and we'll see you next time.
