Alex More: Preserving Humanity in an AI-Enhanced Education

September 1, 2025

Video Snippets

Transcription

Daniel Emmerson 00:02

Welcome to Foundational Impact, a podcast series that focuses on education and artificial intelligence from a nonprofit perspective. My name is Daniel Emmerson and I'm the Executive Director of Good Future Foundation, a nonprofit whose mission is to equip educators to confidently prepare all students, regardless of their background, to benefit from and succeed in an AI-infused world.

Okay, Alex, thank you ever so much indeed for being with us today for Foundational Impact. For those of you who are listening, of course, Foundational Impact is a podcast where we're exploring the role of artificial intelligence in education in numerous different guises. A real honour to have Alex with us today. We've been doing a good amount of work with the STEM Learning team, and I think that's a really good place for us to start, Alex, if that's all right. If you could give our audience a bit of an introduction as to who you are and what you're doing, particularly on the STEM Learning side, that'd be fantastic.

Alex More 01:01

Yeah, sure. So my name's Alex. Thanks for having me here. I am a teacher two days a week and a senior leader in a school in Dorset. I teach computer science and STEM subjects, so mostly science and computer science. I also look after transition, so I do a lot of work in a primary school setting, bringing younger children up to our school, that kind of transition between the two. And then I consult for the rest of the time when I'm not teaching. One of those organizations I consult for is STEM Learning, particularly looking at AI CPD for teachers: looking at how we can help teachers reduce workload, and at some of the more difficult aspects of artificial intelligence, which range from safeguarding to ethics, all the way down to practical classroom activities. So that's my role, that's what I do.

Daniel Emmerson 01:46

And as far as artificial intelligence is concerned, I'm imagining that this hasn't always been a part of your job, even on the computer science side of things. What sort of drew you into that? And how did you become so actively involved?

Alex More 01:59

Great question. I've always been fascinated by edtech, educational technology. And in 2021, just before the pandemic hit, I created a space called the Future Classroom in an old art room. Essentially what I did is I took over this art room because it wasn't being used, it was just kind of abandoned. And with no budget, I managed to turn it into a pretty transformational space. In 2023 that won a HundrED.org global innovation award as one of the top 100 innovations in the world. I think it's one of the only classrooms ever to get an award. Normally people get awards, not classrooms. But that was a really amazing project, and it gave me the insight into how powerful both technology and people can be: when you bring teachers and technology together, it can have a transformational impact. That's what inspired me to go on this path. Then, of course, ChatGPT launched. I'd always had a bit of an interest in AI before and had read a lot about it. I was a big fan of Ray Kurzweil's work and The Singularity Is Near, and really interested in things like the AI winters, even all the way back to Ada Lovelace and Alan Turing's impact with the Turing Test. So I've always had a bit of a fascination with AI. So when ChatGPT launched and the world was like, whoa, I could see the potential transformational impact for teaching at that point.

Daniel Emmerson 03:05

Can you tell us a bit more about that classroom? Let's go back to that moment when you were imagining the future classroom. What did you have there? What did it look like as an experience? And then maybe we can look at what differences there might be if you were to do that today.

Alex More 03:18

Yeah, great question. So I wanted to disrupt from the inside out, because we know that state schools in particular are quite traditional. We kind of work towards an endpoint, which tends to be the exam. I'm a bit of a disruptor in education, always have been. I've been in the classroom for 22 years now, and one of the things I like to do is do things differently. I visited School 21 in Stratford and I was really inspired by the project based learning they were doing there. They were using oracy as a real vehicle to get kids from quite difficult backgrounds to speak about lots of issues. And I was inspired by that, and I thought, I can see potential to do something a little bit different. So I first just got rid of the desks completely. It has no desks in there, at least not permanent ones, and the students move around and use the furniture. There's whiteboards around the space. There's a big motherboard at the front. And the kids can create and collaborate on ideas and projects on the whiteboards and then project them to what we call the motherboard, where six kids can work at the same time. And this was the embryonic stage. What I guess I wanted to do is make kids brilliant owners of knowledge, not consumers of it, because I'm really conscious that sometimes students are just consumers of knowledge, and I wanted to break away from that. Particularly if we look at the post-pandemic classroom: there was three-metre tape on the ground and, you know, teachers weren't allowed to go beyond it, kids were socially distanced, and we've almost gone back to that default position. So the future classroom really disrupts that dynamic, and it brings knowledge and skills together. They coexist rather than compete, which I think is really important, because traditionally they tend to compete a bit in education for bandwidth. And then it also brings the teacher and technology together. Technology doesn't replace the teacher; the teacher uses the technology, but only if it's useful to them. That's the elevator pitch, I guess, of what the future classroom does. But then it became much more than that. I saw the potential to connect worlds. So every Monday at lunchtime, our students at Shaftesbury School connect with a school in Accra, Ghana, which we also have a future classroom in now, which is really cool. And the kids learn together. It's so fascinating because of the cultural differences, the financial differences; the kids in Ghana can't believe that all the kids in Shaftesbury have a cell phone, for example. And it's just bringing the world closer. So the world's our classroom. And technology has that ability to bring the world closer and closer and closer.

Daniel Emmerson 05:27

And how does artificial intelligence fit into that at the moment? Has it become part of the future classroom? Is it a collaborator? Is it a contributor? What does that experience look like for the students?

Alex More 05:39

All of the above. So the teachers that use it use a lot of wrapper apps, things like Perplexity, Gamma app, TeachMate AI, to create the resources for project based learning, because that can be quite time consuming. So that's a real help. And the quality, particularly on platforms like TeachMate AI, the quality of those resources is really rich, and teachers are very complimentary about that. But equally we do a lot with large language models with the students. So we do a lot around looking at biases, particularly between Ghana and the UK, because one of the things that's fascinating about that dynamic is how the children in Ghana don't necessarily see themselves in the outputs, because of where the data's trained. And they're fascinated. What's great is the kids in Africa, not just Ghana, we do stuff with Botswana, Nigeria, they're really interested in artificial intelligence. But for them it's a little bit out of reach because of the way the country's set up with energy supply. There are some real challenges there that we try to get under the surface of. And that's really good for the kids in the UK, because some of them are using these technologies at home. But also we've got some students from very deprived households who don't necessarily have access to devices and technology; they can't afford wifi. So there are all these interesting dynamics. But essentially, to answer your question, we use it at two levels, the teacher level and the student level. And we use it very, very differently.

Daniel Emmerson 06:59

And what sort of work are the students then doing around understanding and grappling with those biases that are almost baked into a lot of this technology? Is that something you're doing work around as far as debate or project work is concerned? It'd be really good to know what that looks like.

Alex More 07:14

Yeah, predominantly it looks like an oracy-based task, so using student voice to extend their opinions and views. We try to do a lot of work around oracy in the school generally, but it fits the future classroom model beautifully. And when we come to talk about artificial intelligence, one of the most powerful things to do is to give students a voice. I guess one of my criticisms of the way education is at the moment, and it's because of the pandemic and external pressures, is that the kids don't get a voice. I sit in the back of a lot of lessons and you wouldn't believe how little they speak. Sometimes kids speak three times a lesson, and only two of those occasions are to the teacher. It's a shame. So I see that we need to be speaking about artificial intelligence more with children. And what was fascinating is that the first AI sprint we did was with Darren Coxon and Pri Lakhani, and student voice and agency came out as a real theme from them. And I agree. I think that even at the primary level, the younger the better, we need to be having conversations with these kids about what this technology is. And I actually did some research myself into this. I interviewed a bunch of kids, then I ran a thematic analysis, and three themes emerged. The first is that students do not want AI to replace the human teacher; they very much see that we need human teachers. They also called AI "they", whereas the teachers called it "it". And I'm fascinated by that vernacular, because what do they mean by that term, "they"? Are they pointing towards almost a symbiosis between technology and humans, almost on a post-human level? We're going to put a research category around it. The teachers' "it" is very categorical, isn't it? And there was that language. I didn't notice it when I was interviewing the students; I only noticed it when I was going through the transcripts, and it was so obvious. And then the other thing they said, and I love this, this is my favourite part of the research, is that the children themselves feel that AI is just going to enhance humanity, but it shouldn't compete against our most beautiful human qualities, like love, empathy, consciousness, those types of things. They really felt that that was a human domain, and they didn't want to see AI infiltrating that space.

Daniel Emmerson 09:15

What about, then, examples of how students are using AI tools beyond academic work, thinking through challenges in their social lives or with their peer groups or relationships and family matters, where we know, for example, that they're going to AI tools to vent, to express themselves, to try and find solutions. And by doing so, I mean, there are positives and negatives to that, right? The positive is that they are expressing themselves in some way. The negative is that they could be reducing the amount of time they actually spend talking to another human about issues and matters that concern them. I'm wondering what your views are on this, particularly on the social side of how AI is being used in schools at the moment.

Alex More 10:04

Yeah, it's a great question. And I get asked this a lot, actually, particularly by CEOs and people in senior leadership who are trying to think about the place of artificial intelligence within primary, secondary and further education. In a nutshell, I think the positives are that it offers companionship, and it can be used to take away some of the emotions that humans get attached to. So I'll give you an example. I know a lot of teachers that use it to debug passive-aggressive emails from parents, because at the end of a busy teaching day you can be pretty exhausted, right? And you get this email, and it's probably not meant to be personal to you, but you read it as such. AI is brilliant at stripping that out and just saying, actually, this is what they're saying and this is what you should do. Kids kind of use it in the same way, I think, and it's useful for things like debugging essays. And it's used as this creative starting point. So my daughter is doing the IB at the moment, and I know she uses it just as a creation tool: how do I get started with this project? But equally, there's this downside that Laura Knight speaks quite a lot about, which is intellectual offloading. There's a danger of them relying on it academically, of over-reliance. But also this synthetic intimacy, and that's quite a mouthful, whereby children are getting into this idea that devices are almost on a human level. And that research I spoke about a minute ago points to that, doesn't it? The "they", the nod to that word. I guess there's a danger there that we really need to be aware of as parents, as educators, and as politicians making policy as well. How do we safeguard against that? Because that's happening in the unsupervised environment away from school; it's not happening under the supervision of teachers and responsible adults. So there's a bit of work there to do, I think, with parents, educating them about the dangers, particularly on social media apps like Snapchat and TikTok that are not very strictly regulated, particularly if they're not from EU domains. And I think there's some work to do around how we teach kids to be responsible digital citizens in school, so that when they're in the unsupervised setting, they've got a good foundation.

Daniel Emmerson 12:08

Is that something that you think there's going to be an increasing need for, and if so, what could that look like? And I'm thinking of examples that I've seen in terms of real-world practice. When you look at something like NotebookLM, you've got two characters that you can now interact with and engage with while they're hosting a podcast on your, I don't know, values homework or your geography assignment. And it feels, particularly to younger age groups, as though it is a conscious being that they're engaging with, as opposed to a machine. That best practice on the digital citizenship piece that you mentioned there, what could it look like? And how might schools be able to prioritise the time they need to get to it?

Alex More 12:50

The time question is one I get asked a lot, and I endorse it being baked into the fabric of every lesson. I think AI has a space in every lesson. And I think it's really interesting that in the UAE they've just made AI a compulsory subject from the age of four. That's a really bold move, and I think it makes a statement: technology is not going away, we need to embrace it. And more and more, to your point, I'm seeing educators saying, right, okay, we finally accept that AI is not going to go away, and we do need to do stuff about it, we do need to write policy, we do need to train teachers. Whereas I would say up until about three months ago, there was still a feeling in a lot of schools that this was just another fad, like VR or AR, and it was just going to disappear. But it isn't. And I think the work that we need to do as teachers is to create frameworks in our schools that are context based. Because the thing is, a school in Dubai is very different to a school in India, very different to a school in the UK. So it needs to be context and domain specific. But essentially what we need to look at here is digital literacy. There's a suite of skills that we really need to invest time in very young, as early as five, six, seven years of age, starting with just basic computer skills. Because it's not just AI: if we look at the future direction of travel for assessment in the next decade, all the exams, bar maths, are going to go online, so kids are going to do GCSEs online. Now, the AI that sits within that: there are going to be AI scribes, AI readers, voice to text. It's going to transform how kids do exams. But at the same time there's always that risk of plagiarism, of non-authentic learner work. We've got these real challenges to grapple with in education. I think it's quite exciting, but I can see why people might be cautious of it. So my advice would be: it's about digital literacy, and it's about what we do in that piece to prepare students to go on and make good decisions when they're not with us.

Daniel Emmerson 14:40

And have you seen any good examples of digital literacy, particularly at the primary level?

Alex More 14:45

Yeah. And it can be anything from getting the kids to type 28 words per minute, by the way, and that has got nothing to do with AI. A lot of kids now interact on touchscreens, so their habits are very much born from touchscreen interaction. Whereas when they do exams online or write letters, they're going to need to use a keyboard, and many of them can't. You wouldn't believe how many kids come to me at 11 years of age from primary school and they can only type eight words a minute. For these exams that I'm speaking about, they're going to have to type 28 at least to be able to keep up. So that's one aspect of it. And then as we progress up through the AI spheres, the biggest thing, I think, with AI is: can the kids use it not to do their work, but to help them create ideas around their work? So it's a creative medium rather than over-reliance, "it will do my work for me". And if there are any teachers listening to this, which I'm sure there are, you'll know if a kid turns in an AI assignment; there are so many giveaways. It's almost about educating the kids on that too, saying, you know, the extended hyphen, right off the bat, that's an AI giveaway. Using words like "leveraging" and "empowering", AI loves those words. So how we can educate them to be a bit more, I guess, organic with this technology, and use it to create rather than just do their tasks, I think is a very useful thing.

Daniel Emmerson 16:00

And what about knowledge and concepts around things like truth, and what is a fact, and how do I know that what I'm seeing is grounded in evidence, or even reality in some cases? Are these things, from your experience, that you think should be investigated at a primary level, even without access to the technology?

Alex More 16:24

Yeah, in fairness, some primary educators do explore this. I know I'd get in trouble if I said they didn't, because I know some guys and girls who are really pushing this technology. So I think that, yeah, there's a place for it, and the younger the better, in my opinion, as soon as they can synthesize and understand the concepts. It's a complex issue, though, because it all comes down to time in schools and where it fits within the curriculum. If we look at secondary, most schools now will deliver a six-week unit on AI ethics or computer technology ethics. That involves deepfakes, digital manipulation, fake news, which sits at the core of that truth question you were talking about. One of the tasks they do is they have to debunk: they have to look at an AI-generated piece of work and a human one, and they have to say which one's which. They need to look at deepfake images, which we can now create using lots of AI apps, and they have to say which one's human and why. So they're getting that at the age of 11. What I think is inconsistent is how much they're getting before and beyond that, and that really depends on the teacher, whether they're a specialist or a non-specialist, because in the UK, obviously, primary teachers don't tend to be specialists. I take my hat off to them, because they teach everything; they're not so domain specific. Whereas when we get to 11 and secondary school, it's more domain specific and subjects are siloed. And at the moment AI definitely sits within the domain of computer science. It doesn't really get spoken about outside of that unless you've got a really passionate English teacher or French teacher, if that makes sense.

Daniel Emmerson 17:54

And as far as those concepts are concerned, there's also this argument that AI is at the worst level we're ever going to see it, right? And I hear this a lot, certainly when we're speaking to teachers: that it's only going to continue to improve at a massively rapid rate. And if that's the case, in terms of the quality of output, what we're teaching now about how to spot things isn't going to be relevant in three, four, five months' time. Is there something underlying there that we can focus on? I'm really interested in your thoughts on this one.

Alex More 18:29

Yeah, I think it goes back to what you said about truth, because there are going to be some things that endure. And if you look at the history and the research of computer science, this has always been a problem, right? You know, how true is it? And ultimately, companies like OpenAI are working quite aggressively towards something called AGI, which I think is going to be quite scary, really, in its implications for society, education and humanity as a whole. But they've always been working towards AGI. It's just looking a little bit more realistic now.

Daniel Emmerson 18:57

Can you unpack that a little for our audience?

Alex More 18:59

Yeah. So with Artificial General Intelligence, there's this movement among quite progressive people in the field, particularly over in Silicon Valley, where a lot of these technologies are born with startups. And I say startups because even Anthropic's Claude splintered from OpenAI. So OpenAI owns ChatGPT, which is one of the world's most widely used large language models. They are quite progressive, OpenAI; they want to move towards AGI as soon as possible because they see the benefits. And in many ways it's a little bit like coming up with a cure for cancer: who can get there first? Who can be the person in history to get AGI first and get their name in the history book? So there's that race going on, and what happened about a year ago in Silicon Valley was fascinating, because they...

Daniel Emmerson 19:45

Sorry to jump in. With AGI, we're talking pretty much as close to human consciousness as we can get.

Alex More 19:50

Yeah. So let's unpack that quickly. Sorry. So Alan Turing, who was an amazing individual, very under-celebrated, came up with a test called the Turing Test, whereby the AI can essentially convince humans that it is a human itself. We're not quite there. It might look on the surface like we are in some cases, and there have been some claims that the Turing Test has been passed, but not with any sort of validity at the moment. But the predictions are kind of placing it around 2030; I see this a lot, 2030, 2032, the Turing Test will be passed. And what that means, then, Daniel, is that we're in an age where it's really difficult to decipher AI from human. And in some cases it already is, right? Essays, images, videos, voices. I know that Hollywood's really grappling with this at the moment, and so are musicians, because AI is doing a really good job of ripping them off and taking their IP. But the race is split, because not everyone's racing towards this goal. So, for example, when Anthropic were formed, they came from OpenAI and they weren't happy with the direction of travel, so they said, you know what, we're going to break away, we're going to do a startup. They called themselves Anthropic, and Claude is their product. Claude does not train on your data; it's much safer, much more cautious. And it's not racing towards AGI. It's got more of an altruistic aim, really; it kind of wants to do good for humanity. So not all AI is bad. But what we have to appreciate is that it's all human generated, so whatever the intentions of the company behind the technology are, that's what you're going to see in the product. And something we haven't really touched on, though I think it's really interesting: have you heard this phrase, glazing? Is this on your radar?

Daniel Emmerson 21:28

It's not. It's not. Please enlighten me.

Alex More 21:31

So glazing is when the AI tells you lots of good stuff about your work, or about you personally, to make you feel better about yourself, so you're more likely to use the model more and more, because we all like to be told that we look good and that our essay is potentially a Booker Prize winner. And ChatGPT, for example, is really guilty of glazing. Its CEO, a guy called Sam Altman, confessed this about two weeks ago in a press conference: it glazes us and makes everything sound a lot better than it actually is. Now, that's a danger with kids, right? Because if it's always bigging them up, how are they ever going to take any sort of criticism? So there's an element of digital resilience there as well that we need to unpick, along with digital literacy. But I thought that was an interesting one to throw into the fray, because not many people know it. If you think about your large language models, NotebookLM is a classic: there are two podcast hosts, basically, taking your book chapter, and I'm guilty of using this too, and glazing it. It reads it back to you like, ah, this is a seriously decent piece of work, when actually it might not be, right?

Daniel Emmerson 22:28

I hadn't heard the term glazing before, but I'm definitely familiar with what it is and what it can do. There are also ways that the technology is adapting, particularly in terms of keeping you locked in, right? There never used to be questions at the end of a response: you'd prompt, you'd get a response, and that would be it, more or less. Whereas now you're asked, oh, would you like me to do this? Would you prefer me to do this for you on top of that? So it's locking you into that engagement. I'm keen to note, just as we wrap up, Alex, that there's still quite a lot of fear and trepidation around the use of AI from teachers in the classroom. Particularly now that the conversation is moving towards things like data privacy and intellectual property rights, people are becoming slightly more mindful as to what best practice looks like. Do you have any words of encouragement or advice for teachers who are waiting for their moment, or perhaps adamant that this isn't going to be something they're using in the future?

Alex More 23:34

Yeah. First of all, a bit of a provocation, really, for the listeners. I always ask teachers and leaders and anyone I'm speaking to about AI: what does this technology want? And that might sound like quite a curious question, but essentially this technology wants something. Your emails want to be emptied, and they want to fill up other people's inboxes. Your smartphone wants to be useful to you and keep your appointments and whatnot. So this technology does want something. And I think we need to view AI in particular, as a branch of technology, in terms of what it gives us, but also what it takes away from us as educators and as people. What does it give us, and what does it give our students and young people? And it gives us a lot, actually. For me, it gives me my time back to spend with my wife and kids, which is invaluable. That's a really good thing that it gives me. It gives me the ability to create an email or a document really, really quickly, and in a way that I might not have done myself. It gives me an opportunity to interact and ask questions and find more information, like we used to use Google and the Internet for. But what it can take away, if you're not careful, is individual opinions, ideas, creativity, your intellectual property as a human being. It can strip that away if you're not careful and you lean on it too heavily. And also, as we work towards AGI, it can take jobs and a lot more. And I think this is a really scary thing that teachers hear in the press: oh, you know, AI is going to replace teachers. No, it's not. It definitely won't. And I think my message to teachers is always the same. It's not going away. We need to embrace it. But as Laura Knight said, we need to think with care.

Daniel Emmerson 25:09

Absolutely. And what a wonderful, wonderful way to wrap up. For listeners who haven't heard our episode with Laura Knight, it'd be worth going back and picking up on that to hear some of these concepts talked through. Really wonderful stuff. Alex, we're big fans of your work with STEM Learning, and I'm sure our audience will be as well. Really appreciate your time today, and I look forward to catching up again soon.

Alex More 25:31

That was great. Thanks, Daniel.

Voiceover 25:34

That's it for this episode. Don't forget, the next episode is coming out soon, so make sure you click that option to follow or subscribe. It just means you won't miss it. But in the meantime, thank you for being here and we'll see you next time.

About this Episode

Alex More: Preserving Humanity in an AI-Enhanced Education

Alex was genuinely fascinated when reviewing transcripts from his research interviews and noticed that students consistently referred to AI as "they," while adults, including teachers, used "it." This small but meaningful linguistic difference revealed a fundamental variation in how different generations perceive artificial intelligence. As a teacher, senior leader, and STEM Learning consultant, Alex developed his passion for educational technology through creating the award-winning "Future Classroom", a space designed to make students owners rather than consumers of knowledge. In this episode, he shares insights from his research on student voice, explores the race toward Artificial General Intelligence (AGI), and unpacks the concept of AI "glazing". While he touches on various topics around AI during his conversation with Daniel, the key theme that shines through is the importance of approaching AI thoughtfully and deliberately, balancing technological progress with human connection.

Alex More

Education Consultant

Related Episodes

September 29, 2025

Matthew Pullen: Purposeful Technology and AI Deployment in Education

This episode features Matthew Pullen from Jamf, who talks about what thoughtful integration of technology and AI looks like in educational settings. Drawing from his experience working in the education division of a company that serves more than 40,000 schools globally, Mat has seen numerous use cases. He distinguishes between the purposeful application of technology to dismantle learning barriers and the less effective approach of adopting technology for its own sake. He also asserts that finding the correct balance between IT needs and pedagogical objectives is crucial for successful implementation.
September 15, 2025

Matt King: Creating a Culture of AI Literacy Through Conversation at Brentwood School

Many schools begin their AI journey by formulating AI policies. However, Matt King, Director of Innovative Learning at Brentwood School, reveals their preference for establishing guiding principles over rigid policies considering AI’s rapidly evolving nature.
June 16, 2025

David Leonard, Steve Lancaster: Approaching AI with cautious optimism at Watergrove Trust

This podcast episode was recorded during the Watergrove Trust AI professional development workshop, delivered by Good Future Foundation and Educate Ventures. Dave Leonard, the Strategic IT Director, and Steve Lancaster, a member of their AI Steering Group, shared how they led the Trust's exploration and discussion of AI with a thoughtful, cautious optimism. With strong support from leadership and voluntary participation from staff across the Trust forming the AI working group, they've been able to foster a trust-wide commitment to responsible AI use and harness AI to support their priority of staff wellbeing.
June 2, 2025

Thomas Sparrow: Navigating AI and the disinformation landscape

This episode features Thomas Sparrow, a correspondent and fact checker, who helps us differentiate misinformation and disinformation, and understand the evolving landscape of information dissemination, particularly through social media and the challenges posed by generative AI. He is also very passionate about equipping teachers and students with practical fact checking techniques and encourages educators to incorporate discussions about disinformation into their curricula.
May 19, 2025

Bukky Yusuf: Responsible technology integration in educational settings

With her extensive teaching experience in both mainstream and special schools, Bukky Yusuf shares how purposeful and strategic use of technology can unlock learning opportunities for students. She also equally emphasises the ethical dimensions of AI adoption, raising important concerns about data representation, societal inequalities, and the risks of widening digital divides and unequal access.
May 6, 2025

Dr Lulu Shi: A Sociological Lens on Educational Technology

In this enlightening episode, Dr Lulu Shi from the University of Oxford shares technology's role in education and society through a sociological lens. She examines how edtech companies shape learning environments and policy, while challenging the notion that technological progress is predetermined. Instead, Dr Shi argues that our collective choices and actions actively shape technology's future and emphasises the importance of democratic participation in technological development.
April 26, 2025

George Barlow and Ricky Bridge: AI Implementation at Belgrave St Bartholomew’s Academy

In this podcast episode, Daniel, George, and Ricky discuss the integration of AI and technology in education, particularly at Belgrave St Bartholomew's Academy. They explore the local context of the school, the impact of technology on teaching and learning, and how AI is being utilised to enhance student engagement and learning outcomes. The conversation also touches on the importance of community involvement, parent engagement, and the challenges and opportunities presented by AI in the classroom. They emphasise the need for effective professional development for staff and the importance of understanding the purpose behind using technology in education.
April 2, 2025

Becci Peters and Ben Davies: AI Teaching Support from Computing at School

In this episode, Becci Peters and Ben Davies discuss their work with Computing at School (CAS), an initiative backed by BCS, The Chartered Institute for IT, which boasts 27,000 dedicated members who support computing teachers. Through their efforts with CAS, they've noticed that many teachers still feel uncomfortable about AI technology, and many schools are grappling with uncertainty around AI policies and how to implement them. There's also a noticeable digital divide based on differing school budgets for AI tools. Keeping these challenges in mind, their efforts don’t just focus on technical skills; they aim to help more teachers grasp AI principles and understand important ethical considerations like data bias and the limitations of training models. They also work to equip educators with a critical mindset, enabling them to make informed decisions about AI usage.
March 17, 2025

Student Council: Students Perspectives on AI and the Future of Learning

In this episode, four members of our Student Council, Conrado, Kerem, Felicitas and Victoria, who are between 17 and 20 years old, share their personal experiences and observations about using generative AI, both for themselves and their peers. They also talk about why it’s so crucial for teachers to confront and familiarize themselves with this new technology.
March 3, 2025

Suzy Madigan: AI and Civil Society in the Global South

AI’s impact spans globally across sectors, yet attention and voices aren’t equally distributed across impacted communities. This week, Foundational Impact presents a humanitarian perspective as Daniel Emmerson speaks with Suzy Madigan, Responsible AI Lead at CARE International, to shine a light on those often left out of the AI narrative. The heart of their discussion centers on “AI and the Global South, Exploring the Role of Civil Society in AI Decision-Making”, a recent report that Suzy co-authored with Accenture, a multinational tech company. They discuss how critical challenges, including digital infrastructure gaps, data representation, and ethical frameworks, perpetuate existing inequalities. Increasing civil society participation in AI governance has become more important than ever to ensure inclusive and ethical AI development.
February 17, 2025

Liz Robinson: Leading Through the AI Unknown for Students

In this episode, Liz opens up about her path and reflects on her own "conscious incompetence" with AI - that pivotal moment when she understood that if she, as a leader of a forward-thinking trust, feels overwhelmed by AI's implications, many other school leaders must feel the same. Rather than shying away from this challenge, she chose to lean in, launching an exciting new initiative to help school leaders navigate the AI landscape.
February 3, 2025

Lori van Dam: Nurturing Students into Social Entrepreneurs

In this episode, Hult Prize CEO Lori van Dam pulls back the curtain on the global competition empowering student innovators into social entrepreneurs across 100+ countries. She believes in sustainable models that combine social good with financial viability. Lori also explores how AI is becoming a powerful ally in this space, while stressing that human creativity and cross-cultural collaboration remain at the heart of meaningful innovation.
January 20, 2025

Laura Knight: A Teacher’s Journey into AI Education

From decoding languages to decoding the future of education: Laura Knight takes us on her fascinating journey from a linguist to a computer science teacher, then Director of Digital Learning, and now a consultant specialising in digital strategy in education. With two decades of classroom wisdom under her belt, Laura has witnessed firsthand how AI is reshaping education and she’s here to help make sense of it all.
January 6, 2025

Richard Culatta: Understand AI's Capabilities and Limitations

Richard Culatta, former Government advisor, speaks about flying planes as an analogy to explain the perils of taking a haphazard approach to AI in education. Using aviation as an illustration, he highlights the most critical tech skills that teachers need today. The CEO of ISTE and ASCD draws a clear parallel: just as planes don't fly by magic, educators must deeply understand AI's capabilities and limitations.
December 16, 2024

Prof Anselmo Reyes: AI in Legal Education and Justice

Professor Anselmo Reyes, an international arbitrator and legal expert, discusses the potential of AI in making legal services more accessible to underserved communities. He notes that while AI works well for standardised legal matters, it faces limitations in areas requiring emotional intelligence or complex human judgment. Prof Reyes advocates for teaching law students to use AI critically as an assistive tool, emphasising that human oversight remains essential in legal decision making.
December 2, 2024

Esen Tümer: AI’s Role from Classrooms to Operating Rooms

Healthcare and technology leader Esen Tümer discusses how AI and emerging trends in technology are transforming medical settings and doctor-patient interactions. She encourages teachers not to shy away from technology, but rather understand how it’s reshaping society and prepare their students for this tech-enabled future.
November 19, 2024

Julie Carson: AI Integration Journey of Woodland Academy Trust

A forward-thinking educational trust shows what's possible when AI meets strategic implementation. From personalised learning platforms to innovative administrative solutions, Julie Carson, Director of Education at Woodland Academy Trust, reveals how they're enhancing teaching and learning across five primary schools through technology and AI to serve both classroom and operational needs.
November 4, 2024

Joseph Lin: AI Use Cases in Hong Kong Classrooms

In this conversation, Joseph Lin, an education technology consultant, discusses how some Hong Kong schools are exploring artificial intelligence and their implementation challenges. He emphasises the importance of data ownership, responsible use of AI, and the need for schools to adapt slowly to these technologies. Joseph also shares some successful AI implementation cases and how some of the AI tools may enhance creative learning experiences.
October 21, 2024

Sarah Brook: Rethinking Charitable Approaches to Tech and Sustainability

In our latest episode, we speak with Sarah Brook, Founder and CEO of the Sparkle Foundation, currently supporting 20,000 lives in Malawi. Sarah shares how education is evolving in Malawi and the role AI plays for young people and international NGOs. She also provides a candid look at the challenges facing the charity sector, drawing from her daily work at Sparkle.
October 7, 2024

Rohan Light: Assurance and Oversight in the Age of AI

Join Rohan Light, Principal Analyst of Data Governance at Health New Zealand, as he discusses the critical need for accountability, transparency, and clear explanations of system behaviour. Discover the government's role in regulation and the crucial importance of strong data privacy practices.
September 23, 2024

Yom Fox: Leading Schools in an AI-infused World

With the rapid pace of technological change, Yom Fox, the high school principal at Georgetown Day School, shares her insights on the importance of creating collaborative spaces where students and faculty learn together, and on teaching digital citizenship.
September 5, 2024

Debra Wilson: NAIS Perspectives on AI Professional Development

Join Debra Wilson, President of National Association of Independent Schools (NAIS) as she shares her insights on taking an incremental approach to exploring AI. Discover how to find the best solutions for your school, ensure responsible adoption at every stage, and learn about the ways AI can help tackle teacher burnout.
April 18, 2024

Steven Chan and Minh Tran: Preparing Students for AI and New Technologies

Steven Chan and Minh Tran discuss the importance of preparing students for AI and new technologies, the role of the Good Future Foundation in bridging the gap between technology and education, and the potential impact of AI on the future of work.

Alex More: Preserving Humanity in an AI-Enhanced Education

Published on
September 1, 2025

Alex is an award-winning educator and leader with 22 years’ experience driving innovation and equity in education. He founded the Future Classroom and GhanaProject, delivering transformative learning to over 22,000 children across 12 countries. Alex’s work focuses on harnessing technology to enhance, not replace, great teaching, creating meaningful global collaborations. As a consultant for STEM Learning, he works at the forefront of AI innovation, bridging education and industry. His PhD research explores AI, equity, and student voice, positioning him as a trusted thought leader shaping the future of education, curriculum design, and professional development worldwide.

Video Snippets

Transcription

Daniel Emmerson 00:02

Welcome to Foundational Impact, a podcast series that focuses on education and artificial intelligence from a non profit perspective. My name is Daniel Emmerson and I'm the Executive Director of Good Future Foundation, a nonprofit whose mission is to equip educators to confidently prepare all students, regardless of their background, to benefit from and succeed in an AI infused world. 

Okay, Alex, thank you ever so much indeed for being with us today for Foundational Impact. For those of you who are listening, of course. Foundational Impact is a podcast where we're exploring the role of artificial intelligence in education in numerous different guises. A real honour to have Alex with us today. We've been doing a good amount of work with the STEM Learning team and I think that's a really good place for us to start, Alex, if that's all right. If you could give our audience a bit of a. An introduction as to who you are and what you're doing, particularly on the STEM Learning side, that'd be fantastic.

Alex More 01:01

Yeah, sure. So my name's Alex. Thanks for having me here. I am a teacher two days a week and a senior leader in a school in Dorset. I teach computer science and STEM subjects. So mostly science, computer science. I also look after transition, so I do a lot of work in a primary school setting, bringing younger children up to our school. So that kind of transition between the two. And then I consult for the rest of the time when I'm not in teaching. And one of those people I consult for one of those organizations is STEM Learning. Particularly looking at AI for CPD for teachers, looking at how we can help teachers reduce workload, and look at some of the more difficult aspects of artificial intelligence, which ranges from safeguarding to ethics, all the way down to practical classroom activities. So that's my role, that's what I do.

Daniel Emmerson 01:46

And as far as artificial intelligence is concerned, I'm imagining that this hasn't always been a part of your job, even in the computer science side of things. What. What sort of drew you into that? And how did you become so actively involved?

Alex More 01:59

Great question. I've always been fascinated in edtech, educational technology. And in 2021, just before the pandemic hit, I created a space called the Future Classroom, where it's an old art room. And essentially what I did is I took over because it wasn't being used, this art room, it was just kind of abandoned. And with no budget, I managed to turn it into a pretty transformational space. In 2023 that won the 100.org global innovation for one of the top 100 innovations in the world. I think it's one of the only classrooms ever to get an award. Normally people get awards, not classrooms, but that was a really amazing project and that kind of gave me the insight into how powerful both technology and people. When you bring teachers and technology together, it can have a transformational impact. That's what inspired me to kind of go on this path. Then, of course, ChatGPT launched and I'd always had a bit of an interest in AI before I read a lot about it. I was a big fan of Ray Kurzweil's work and Singularity is Near and really interested in things like AI, Dark Winters and even all the way back to Ada Lovelace and Alan Turing's impact for the Turing Test. So I've always had a bit of a fascination in AI. So when ChatGPT launched and the world was like, whoa, I was like, I could see the potential transformation impact for teaching at that point.

Daniel Emmerson 03:05

Can you tell us a bit more about that classroom? Let's go back to that moment when you were imagining future classroom. What did you have there? What did it look like as an experience? And then maybe we can look at what differences there might be if you were to do that today.

Alex More 03:18

Yeah, great question. So I wanted to just shrug from the inside out because we know that particularly state schools are quite traditional. We kind of work towards an endpoint which tends to be the exam. I'm a bit of a disruptor in education, always have been. I've been in the classroom for 22 years now, and one of the things I like to do is do things differently. And I visited School 21 at Stratford and I was really inspired by the project based learning they were doing there. And they were using Oracy as a real vehicle to get kids to speak from quite difficult backgrounds about lots of issues. And I was inspired by that. And I thought I can see potential to do something a little bit different. So I first just got rid of the desks completely. So it has no desks in there, at least not permanent ones. And the students move around and use the furniture. There's whiteboards around the space. There's a big motherboard at the front. And the kids can create and collaborate ideas and projects on the whiteboards and then project it to what we call the motherboard, where six kids at the same time can work. And this was the kind of the embryonic stage. And what I guess I wanted to do is I wanted to make kids brilliant owners of knowledge, not consumers of it. Because I'm really conscious that sometimes students are just consumers of knowledge. And I wanted to break away from that. And particularly if we look at the post pandemic classroom, because there was three meter tape on the ground and, you know, teachers weren't allowed to go beyond it. Kids were socially distant, we've almost gone back to that default position. So the future classroom really disrupts that dynamic and it brings knowledge and skills together. They coexist rather than compete, which I think is really important because traditionally they tend to compete a bit in education for bandwidth. And then also it brings a teacher and technology together. Technology doesn't replace the teacher, the teacher uses the technology, but only if it's useful to them. That's the elevator pitch, I guess, of what the future classroom does. But then it became much more than that. I saw the potential to connect worlds. So every Monday at lunchtime, our students at Shaftesbury School connect with Ghana, a school in Accra, which we also have a future classroom in now, which is really cool. And the kids learn together. And it's so fascinating because the cultural differences, the sort of, the financial differences, like the kids in Ghana can't believe that all the kids in Shaftesbury have a cell phone, for example. And it's just bringing the world closer. So the world's our classroom. And technology has that ability to bring the world closer and closer and closer.

Daniel Emmerson 05:27

And how does artificial intelligence fit into that at the moment? Has it become part of the future classroom? Is it a collaborator? Is it a contributor? What does that experience look like for the students?

Alex More 05:39

All of the above. So the teachers that use it use a lot of wrapper apps, things like Perplexity, Gamma app, TeachMate AI, to create the resources for project based learning, because that can be quite time consuming. So that's a real help. And the quality, particularly on platforms like TeachMate AI, the quality of those resources are really rich. And teachers are very sort of complementary of that. But equally we do a lot with large language models with the students. So we do a lot around looking at biases, particularly between Ghana and the UK, because one of the things that's fascinating about that dynamic is how the children in Ghana don't necessarily see themselves in the outputs because of where the data's trained. And they're fascinating. What's great is the kids in Africa, not just Ghana, we do stuff with Botswana, Nigeria, they're really interested in artificial intelligence. But for them it's a little bit out of reach because of the way that the country's set up with energy supply. And there's some real challenges there that we tried to get under the surface of it. And that's really good for the kids in the UK because some of them are using these technologies at home. But also we've got some students from very deprived households where they don't necessarily have the access to devices and technology. They can't afford wifi. So there's all these interesting dynamics, but we essentially use it at two levels to answer your question, the teacher level and the student level. But we use it very, very differently.

Daniel Emmerson 06:59

And what sort of work are the students then doing around understanding and grappling with those biases that are almost baked into a lot of this technology? Is that something you're doing work around as far as debate or project work is concerned? It'd be really good to know what that looks like.

Alex More 07:14

Yeah, predominantly it looks like an oracy based task. So using student voice to kind of extend their opinions and views. And we try to do a lot of work around oracy in the school generally, but it fits the future classroom model beautifully. And when we come to talk about artificial intelligence, one of the most powerful things to do is to give students a voice. And I guess one of my criticisms of the way education is at the moment, and it's because of the pandemic and external pressures, is that the kids don't get a voice. I sit in the back of a lot of lessons and you wouldn't believe how, how little they speak. Sometimes kids speak three times a lesson and only two of those occasions are to the teacher. It's really, it's a shame. So I, I see like we need to be speaking about artificial intelligence more to children. And what was fascinating is the first AI sprint we did was with Darren Coxon and Pri Lakhani and student voice and agency came out as a real theme from them. And I agree, I think that even at the primary level, the younger the better. We need to be having conversations with these kids about what this technology is. And I actually did some research myself into this. I interviewed a bunch of kids, then I ran thematic analysis and the three themes that emerged were that students do not want AI to replace the human teacher. They very much see that we need human teachers. They also called AI “they”. Whereas the teachers called “it”. And I'm fascinated by that sort of vernacular because what do they mean by that term? They, you know, are they pointing towards almost a symbiosis of between technology and humans? Almost like on a post human level we're going to put a research category around it and the teachers are very category isn't it? And there was that language. I didn't notice it when I was interviewing the students. I only noticed it when I was going through the transcripts and it was so obvious. And then the other thing they say is they feel that. And I love this, this is my favorite part of the research, is that the children themselves feel that AI is just going to enhance humanity, but it shouldn't compete against our most beautiful human qualities, which is like love, empathy, consciousness, those types of things. They really felt that that was a human domain and they didn't want to see AI infiltrating that space.

Daniel Emmerson 09:15

What about then, examples of how students are using AI tools for beyond academic work and thinking through challenges in their social lives or with their peer groups or relationships and family matters, where we know, for example, that they're going to AI tools to vent, to express themselves, to try and find solutions. And by doing so, I mean, there are positives and negatives to that, right? The positive is that they are expressing themselves in, in some way. The negative is that they could be reducing the amount of time they actually spend talking to another human about issues and matters that concern them. I'm wondering what your views are on this, particularly on the social side of how AI is being used in schools at the moment.

Alex More 10:04

Yeah, it's a great question. And I get asked this a lot, actually, particularly from CEOs, people who are in senior leadership that are trying to think about the place of artificial intelligence within both primary and secondary and further education. I think in a nutshell, the positives are that it is companionship and it can be used to kind of take away sometimes some emotions that humans get attached to. So I'll give you an example. I know a lot of teachers that use it to debug passive aggressive emails from parents because at the end of a busy teaching day, you can be pretty exhausted, right? And like you get this email and it's probably not meant to be personal to you, but you read it as such. And AI is brilliant at stripping that out and just saying, actually, this is what they say and this is what you should do. But kids kind of use it in the same way, I think, and are useful for like debugging essays. And it's kind of used as this creative. This creative starting point. So my daughter is doing the IB at the moment. I know she uses it just as a creation tool. How do I get started with this project? But equally, there's this downside that Laura Knight speaks quite a lot about, which is this intellectual offloading. We could get into a danger of them academically relying on it and over reliance. But also this synthetic intimacy, it's quite a mouthful that, whereby we kind of children are getting into this idea that they believe that devices are almost on a human level. And that research that I spoke about a minute ago, that kind of points to that, doesn't it? The they, the they, the nod to that word. And I guess there's a danger there that we really need to be aware of as parents and as educators and as people sort of politicians in policy as well. How do we safeguard against that? Because that's happening in the unsupervised environment away from school. It's not happening in the supervised, under supervision of teachers and responsible adults. So there's a bit of work there to do, I think, with parents educating about the dangers, particularly on social media apps like Snapchat and TikTok that are not very strictly regulated, particularly if they're, they're not from EU domains. So I think there's some work there to do around how we teach kids to be responsible digital citizens in school. So when they leave in the unsupervised setting, they've got a good foundation.

Daniel Emmerson 12:08

Is that something that you think is going to increase in need, and if so, what could that look like? I'm thinking of examples that I've seen in terms of real-world practice. When you look at something like NotebookLM, you've got two characters that you can now interact with and engage with while they're hosting a podcast on your, I don't know, values homework or your geography assignment. And it feels, particularly to younger age groups, as though it's a conscious being they're engaging with as opposed to a machine. What might best practice look like on the digital citizenship piece that you mentioned there? And how might schools be able to prioritise the time they need to get to it?

Alex More 12:50

The time question is one I get asked a lot, and I think it's something that needs to be baked into the fabric of every lesson. I think AI has a space in every lesson. And I think it's really interesting that in the UAE they've just made it a compulsory subject from the age of four. That's a really bold move, and I think it makes a statement: this technology is not going away, we need to embrace it. And more and more, to your point, I'm seeing educators saying, right, okay, we finally accept that AI is not going to go away and we do need to do stuff about it; we do need to write policy, we do need to train teachers. Whereas I would say up until about three months ago there was still a feeling in a lot of schools that this was just another fad, like VR or AR, and it was just going to disappear. But it isn't. And I think the work we need to do as teachers is to create frameworks in our schools that are context-based. Because the thing is, a school in Dubai is very different to a school in India, which is very different to a school in the UK. So it needs to be context and domain specific. But essentially what we need to look at here is digital literacy. There's a suite of skills that we really need to invest time in very young, as early as five, six, seven years of age, starting with basic computer skills. Because it's not just AI: if we look at the future direction of travel for assessment in the next decade, all the exams bar maths are going to go online, so kids are going to do GCSEs online. Now, the AI that sits within that is that there are going to be AI scribes, AI readers, voice-to-text. It's going to transform how kids do exams. But at the same time there's always that risk of plagiarism, of non-authentic learner work. We've got these real challenges to grapple with in education. I think it's quite exciting, but I can see why people might be cautious of it. So my advice would be: it's about digital literacy, and it's about what we do in that piece to prepare students to go on and make good decisions when they're not with us.

Daniel Emmerson 14:40

And have you seen any good examples of digital literacy, particularly at the primary level?

Alex More 14:45

Yeah. And it can be anything from getting the kids to type 28 words per minute, by the way, which has got nothing to do with AI. It's just that a lot of kids now interact on touchscreens, so their habits are very much born from touchscreen interaction. Whereas when they do exams online or write letters, they're going to need to use a keyboard, and many of them can't. You wouldn't believe how many kids come to me at 11 years of age from primary school and can only type eight words a minute. And for these exams I'm speaking about, they're going to have to type at least 28 to be able to keep up. So that's one aspect of it. And then as we progress up through the AI spheres, the biggest thing with AI, I think, is whether the kids can use it not to do their work, but to help them create ideas around their work. So it's a creative medium rather than an over-reliance, an attitude of "it will do my work for me". And if there are any teachers listening to this, which I'm sure there are, you'll know that if a kid turns in an AI assignment there are so many giveaways, and it's almost about educating the kids about that too. Saying, you know, that extended hyphen right off the bat, that's an AI giveaway. Words like "leveraging" and "empowering": AI loves those words. So educating them to be a bit more, I guess, organic with this technology, and to use it to create rather than just do their tasks, I think is a very useful thing.

Daniel Emmerson 16:00

And what about knowledge and concepts around things like truth, and what is a fact, and how do I know that what I'm seeing is grounded in evidence, or even reality in some cases? From your experience, are these things that should be investigated at a primary level, even without access to the technology?

Alex More 16:24

Yeah, and in fairness, some primary educators do explore this. I know I'd get in trouble if I said they didn't, because I know some guys and girls who are really pushing this technology. So I think there's a place for it, the younger the better in my opinion, as soon as they can synthesise and understand the concepts. It's a complex issue, though, because it all comes down to time in schools and where it fits within the curriculum. If we look at secondary, most schools now will deliver a six-week unit on AI ethics or computer technology ethics. That involves deepfakes, digital manipulation, fake news, which sits at the core of that truth question you were talking about. And one of the tasks they do is to debunk: they have to look at an AI-generated piece of work and a human one and say which is which. They have to look at deepfake images, which we can now create using lots of AI apps, and say which one is human and why. So they're getting that at the age of 11. What I think is inconsistent is how much they're getting beyond that, and before it, because that really depends on the teacher, whether they're a specialist or non-specialist. In the UK, obviously, primary teachers don't tend to be specialists. I take my hat off to them because they teach everything; they're not so domain specific. Whereas when we get to 11 and secondary school, it's more domain specific and subjects are siloed. And at the moment AI definitely sits within the domain of computer science. It doesn't really get spoken about outside of that unless you've got a really passionate English teacher or French teacher, if that makes sense.

Daniel Emmerson 17:54

And as far as those concepts are concerned, there's also this argument that AI right now is the worst it's ever going to be, right? And I hear this a lot, certainly when we're speaking to teachers: that it's only going to keep improving at a rapid rate. And if that's the case in terms of the quality of output, what we're teaching now about how to spot things isn't going to be relevant in three, four, five months' time. Is there something underlying there that we can focus on? I'm really interested in your thoughts on this one.

Alex More 18:29

Yeah, I think it goes back to what you said about truth, because there are going to be some things that endure. If you look at the history and research of computer science, this has always been a problem, right? How true is it? And ultimately, companies like OpenAI are working quite aggressively towards something called AGI, which I think is going to be quite scary, really, in its implications for society, education and humanity as a whole. But they've always been working towards AGI. It's just looking a little bit more realistic now.

Daniel Emmerson 18:57

Can you unpack that a little for our audience?

Alex More 18:59

Yeah, so, Artificial General Intelligence. There's this movement among the more progressive players in the field, particularly over in Silicon Valley, where a lot of these technologies are born in startups. And I say startups because even Anthropic's Claude splintered from OpenAI. OpenAI own ChatGPT, which is one of the world's most widely used large language models, and they are quite progressive: they want to move towards AGI as soon as possible because they see the benefits. In many ways it's a little bit like coming up with a cure for cancer. Who can get there first? Who can be the person in history who gets AGI first and gets their name in the history book? So there's that race going on. And what happened about a year ago in Silicon Valley was fascinating, because they...

Daniel Emmerson 19:45

Sorry to jump in. With AGI, we're talking pretty much as close to human consciousness as we can get?

Alex More 19:50

Yeah. So let's unpack that quickly. Sorry. Alan Turing, who was an amazing and rather under-celebrated individual, came up with a test called the Turing Test, whereby the AI has to essentially convince humans that it is a human itself. We're not quite there. It might look on the surface like we are in some cases, and there have been some claims that the Turing Test has been passed, but not with any real validity at the moment. But the predictions tend to place it around 2030; I see this a lot, 2030, 2032, the Turing Test will be passed. And what that means, Daniel, is that we'll be in an age where it's really difficult to decipher AI from human. And in some cases it already is, right? Essays, images, videos, voices. I know Hollywood is really grappling with this at the moment, and so are musicians, because AI is doing a really good job of ripping them off and taking their IP. But the race is split, because not everyone is racing towards this goal. So, for example, when Anthropic were formed, they came out of OpenAI and they weren't happy with the direction of travel, so they said, you know what, we're going to break away and do a startup. They called themselves Anthropic, and Claude is their product. Claude does not train on your data. It's much safer, much more cautious, and it's not racing towards AGI. It's got more of a sort of altruistic aim, really; it wants to do good for humanity. So not all AI is bad. But what we have to appreciate is that it's all human-generated, so whatever the intentions of the company behind the technology are, that's what you're going to see in the product. And something we haven't really touched on, though I think it's really interesting: have you heard this phrase, glazing? Is this on your radar?

Daniel Emmerson 21:28

It's not. It's not. Please enlighten me.

Alex More 21:31

So glazing is when the AI tells you lots of good stuff about your work, or about you personally, to make you feel better about yourself, so you're more likely to use the model more and more, because we all like to be told that we look good and that our essay is potentially a Booker Prize winner. ChatGPT, for example, is really guilty of glazing, and the CEO, a guy called Sam Altman, confessed this about two weeks ago in a press conference: it glazes us and makes everything sound a lot better than it actually is. Now, that's a danger with kids, right? Because if it's always bigging them up, how are they ever going to take any sort of criticism? So there's an element of digital resilience there as well that we need to unpick along with digital literacy. But I thought that was an interesting one to throw into the fray, because not many people know it. And if you think about your large language models, NotebookLM is a classic: two podcast hosts basically taking your book chapter, and I'm guilty of using this too, and glazing it. It reads it back to you like, ah, this is a seriously decent piece of work, when actually it might not be.

Daniel Emmerson 22:28

I hadn't heard the term glazing before, but I'm definitely familiar with what it is and what it can do. There are also ways the technology is adapting, particularly in terms of keeping you locked in, right? There never used to be questions at the end of a response: you'd prompt, you'd get a response, and that would be it, more or less. Whereas now you're asked, oh, would you like me to do this? Would you prefer me to do this for you on top of that? So it's locking you into that engagement. I'm keen to note, just as we wrap up, Alex, that there's still quite a lot of fear and trepidation around the use of AI from teachers in the classroom. Particularly now that the conversation is moving towards things like data privacy and intellectual property rights, people are becoming slightly more mindful as to what best practice looks like. Do you have any words of encouragement or advice for teachers who are waiting for their moment, or perhaps adamant that this isn't going to be something they'll be using in the future?

Alex More 23:34

Yeah. First of all, a bit of a provocation, really, for the listeners. I always ask this question of teachers and leaders and anyone I'm speaking to about AI: what does this technology want? And that might sound like quite a curious question, but essentially this technology wants something. Your email wants to be emptied and to fill up other people's inboxes. Your smartphone wants to be useful to you and keep your appointments and whatnot. So this technology does want something. And I think we need to view AI in particular, as a branch of technology, in terms of what it gives us but also what it takes away from us, as educators and as people. What does it give us, and what does it give our students and young people? And it gives us a lot, actually. For me, it gives me my time back to spend with my wife and kids, which is invaluable. That's a really good thing that it gives me. It gives me the ability to create an email or a document really quickly, and in a way that I might not manage on my own. It gives me an opportunity to interact and ask questions and find more information, like we used to use Google and the Internet for. But what it can take away, if you're not careful, is individual opinions, ideas, creativity, your intellectual property as a human being. It can strip that away if you lean on it too heavily. And as we work towards AGI, it can take jobs and a lot more. I think this is a really scary thing that teachers hear in the press: oh, AI is going to replace teachers. No, it's not. It definitely won't. And my message to teachers is always the same: it's not going away, we need to embrace it. But as Laura Knight said, we need to think with care.

Daniel Emmerson 25:09

Absolutely. And what a wonderful way to wrap up. For listeners who haven't heard our episode with Laura Knight, it'd be worth going back and picking up on that to hear some of these concepts talked through. Really wonderful stuff. Alex, we're big fans of your work with STEM Learning, and I'm sure our audience will be as well. Really appreciate your time today, and I look forward to catching up again soon.

Alex More 25:31

That was great. Thanks, Daniel.

Voiceover 25:34

That's it for this episode. Don't forget, the next episode is coming out soon, so make sure you click that option to follow or subscribe. It just means you won't miss it. But in the meantime, thank you for being here and we'll see you next time.