Matt King: Creating a Culture of AI Literacy Through Conversation at Brentwood School

September 15, 2025

Video Recaps

Summary

Many schools begin their AI journey by formulating AI policies. However, Matt King, Director of Innovative Learning at Brentwood School, explains the school's preference for guiding principles over rigid policies, given AI's rapidly evolving nature.

Matt shares his approach to fostering dialogue with diverse school stakeholders. He describes engaging primary students in AI ethics and literacy, obtaining parental consent for use of AI in students’ learning, exploring AI applications with operational staff, and addressing teacher resistance through supportive one-on-one conversations. 

Throughout Brentwood's AI journey, human connection remains central to cultivating a school-wide culture of AI literacy and integration. It is the power of conversation, not rulebooks, that creates the foundation for responsible and effective AI adoption.

Transcription

Daniel Emmerson 00:02

Welcome to Foundational Impact, a podcast series that focuses on education and artificial intelligence from a non-profit perspective. My name is Daniel Emmerson and I'm the Executive Director of Good Future Foundation, a nonprofit whose mission is to equip educators to confidently prepare all students, regardless of their background, to benefit from and succeed in an AI-infused world.

Welcome to Foundational Impact. This is a podcast series where we're exploring artificial intelligence in education through lots of different lenses. And it's wonderful to have you with us as an educator who has just embarked on the AI Quality Mark with Good Future Foundation. Matt, it would be wonderful at the outset, if it's okay, please, for you to just give us a bit of insight as to who you are and what you do and a bit about your school as well, if you don't mind.

Matthew King 00:56

Okay. So my name is Matt King and I've just started actually in September in the position that I am currently in. So my title is Director of Innovative Learning at a school which is called Brentwood School in Essex. We are an independent school. We're an all-through school about to have quite a large expansion in our boarding provision. And with that, as I'm sure lots of you are all experiencing, comes the challenge of digital security, digital safety. Part of my role is to lead on AI across the school, but also look at all of the digital tools that our students are using and identify where they add to, and where in some cases they detract from, the quality of the students' learning.

Daniel Emmerson 01:41

When you approached that initial role, as you said, you were relatively new to the position and you learned that you'd be responsible for investigating AI and where it does and where it doesn't belong in the school. What was your immediate response to that, and perhaps how has that changed over the course of the last few months?

Matthew King 01:59

So I took over from someone called Greg Justam, who's now the Deputy Head, and he has done an absolutely brilliant job already in the journey towards AI as a school. He's a computer scientist and I am a biologist. So we have quite different takes on it, but in some ways that's useful, because with my kind of biologist hat on, the strategic approach is a bit like an experiment with a hypothesis: me wanting to audit our current provision, see essentially where we are at, and then use that as a starting point. And for me that's exactly where the AI Quality Mark has been useful, because it's given me exactly that. Where are we at right now? Where are the areas that we could try and add further improvement in? Also, I think there was a little bit of a feeling at the beginning, when I started, that I was going to need to hit the ground running and have this all correct and signed off and policies written, etc. But very, very quickly you realise, especially with AI, that having a policy is a little bit daft, because you need to rewrite it every few weeks. It's not something that is going to be stationary and you've got to be comfortable with that. You've just got to remain informed. And again, Good Future Foundation has been a really good way of remaining informed about what other schools are doing. So I personally feel that we're in quite a good position now as a school and I know clearly what I need to do, what we need to do, in order to get even better with our AI provision.

Daniel Emmerson 03:29

The policy approach is an interesting one for sure. I mean, we speak to schools up and down the country around whether or not policy is the right way to go. I think the key is golden rules and best practice, whether that takes the form of a policy document that sits on the website, or whether that's part of professional development, or whether there's a set of guidelines, or even, in some cases, schools who create a handbook around how to approach AI tools. Can you give us a bit of insight, Matt, as to what you do around that space? Just so that teachers know what they can and can't do, and if they've got questions, where do they go, that sort of thing.

Matthew King 04:10

So when I took over the role from Greg, he'd already established a team of around 25 teachers who had a real interest in AI. We refer to them as our AI associates. So that had already developed, like you do with any kind of change management process, a set of staff who had some influence in each of their departments to be able to springboard new initiatives and gauge ideas off of one another. So having that in place already for me was very useful, because I then had that audience that I could work with. You use the phrase golden rules. We refer to ours as guiding principles, and that's exactly what we've got. It's actually in the shape of a temple, to kind of give the idea that it's foundational. And we've got four guiding principles that we as an entire entity, and this is not just teachers, this is students, this is operational staff, have all agreed to adhere to. And that's been partly developed through doing the AI Quality Mark process and realising where our risk areas are. For us, it's particularly around outward communication, because we are an independent school and we have to manage our reputation. There are now tools that claim to auto-respond for you, and things like that we feel would be a reputational risk for us. So we've built that into our four guiding principles. Another really big underpinning one for us is the importance of human interaction. I've been in education for 16 years, and I think that is the gold dust, I suppose, of education: we're living in a society with more and more social media and students are spending less and less time with each other. And actually schools are perhaps one of the last places now where they are having those social interactions with each other. And AI, we see that as a potential threat there if we're not careful with what AI tools we're going to allow our students to begin using. So that's another one of our guiding principles. I was listening to a presentation being given by Ben Whitaker, and he had, right at the very beginning, the phrase that AI is not a policy. There's absolutely no merit in having it as a standalone area on your school improvement plan. It needs to be woven into multiple aspects and we've certainly been doing that here. When we've had our strategic planning days, we have middle leader away days and we have senior leader away days at Brentwood. And that gives us a lot of opportunity to see, for example, where AI can have some benefits within teaching and learning. But also AI, we think, can start to have some benefits even within our catering provision, and potentially within our site and grounds team provision. So that was a definite second way that we've been trying to build in some principles. And obviously CPD: I am very lucky that Greg, who is deputy head in charge of staff and CPD, is a computer scientist and therefore is quite willing to give time to staff to have these CPD sessions. So we've done this a few different ways now. We've had some online workshops, we've had some in-person carouselled workshops during INSET. We have an online hub where lots of different tools are added and people are recorded using the tools to share with others. Because we feel that peer training is much more useful than certainly what I would term as guru-led training or educeleb-led training. Because, yeah, just having people within our school that are using this has much more power, we think, for our CPD.

Daniel Emmerson 08:10

Well, you've said some really interesting things, Matt, and I want to come back to you on, on a few of them because they're, they're sort of recurring themes, I suppose, within the field and also within the podcast series. The first is on the social piece that you mentioned around the amount of time that students have to spend with each other as far as social and emotional development is concerned, as far as social engagement is concerned, and how AI is potentially a threat to what is ultimately one of the most valuable experiences, one might argue, of a school setting. Can you tell us a bit more about that in terms of the sort of things that you're seeing, things you might be hearing from students, and how the school is addressing this?

Matthew King 08:51

There is a shift already happening in which AI tools can be used from age 13 and above and which can't. For example, I know we're not a Microsoft school, but Copilot is rumoured to be going below the age of 13. So we do, or have, asked our parents for consent on whether they are happy for AI tools to be enabled on student devices, and we do that from Year 9 upwards. And that's the point at which they would start to have some specific AI lessons, as in how to use the AI tools. But for us, it's definitely about AI literacy and the ethics surrounding AI, which can and does begin from early years. I'm just sort of new into the role, but we've just started having conversations in our prep school about where it would be appropriate to actually introduce the concept of what an AI is. And there's some fantastic books out there now that can do just that. Canva, as I'm sure lots of people are aware, can be used by students under the age of 13. So showing image bias: I've seen this done quite well in a Year 5 lesson where they generate an image of a scientist and then critique the image that it produces and the bias that it has. And also, as you are very much aware, Daniel, because you are coming to it as Good Future Foundation, we're hosting a STEM primary conference this year and part of that is going to really have an AI focus to it, because primary level is perhaps the area that needs to really be considered for bringing in this idea of AI ethics and AI literacy.

Daniel Emmerson 10:35

100% agree with that, Matt. I would emphasise, though, typically our approach to this, as you've suggested here, is conversation around what that looks like and activities around thinking through responsible AI use even from a very, very young age, as long as that content is age-appropriate. Young learners don't need access to AI tools in order to start thinking about what responsible use looks like. And that can start from early years, as you suggested. I'm really interested as well in this parental consent, because you're the first school that I've learned about that has actually rolled this out. And often we have conversations with schools around, okay, there's parental consent that AI tools might be used, but different AI tools have different policies and do parents, or will all parents, really understand what they're consenting to if they sign off on this one? How have you addressed those different things with the consent form?

Matthew King 11:39

I think you're absolutely right. To be able to produce almost a list of the exact AI tools that their student would be using would be nigh on impossible. But where we're asking for consent is quite specific, within our Year 9 digital skills programme, where we can actually give some quite specific guidance on what we're doing within that programme. And the point that we as a school have reflected on from the Good Future Foundation AI Quality Mark is exactly that: engagement with parents is an area that, and I've been discussing this at various other network group meetings, we all need to do a lot more with, because some parents I speak to on the phone have quite a terrified response to AI and therefore they don't want to go near it. Some parents are specialists in the field, and might be a specialist within, say, a financial position, which has got quite different uses of AI and ethics surrounding it than we have as a school. So to answer your question, I suppose, succinctly: it was a challenge and is a challenge to get parental consent. And ultimately, if a parent does not want their child to use AI tools, we as a school are still, I feel, responsible for discussing that use of AI. So it's more the direct use, the direct enabling on their child's device, that we're consenting for, not the entire AI conversation.

Daniel Emmerson 13:10

Great stuff, Matt. Thank you. Another piece that you mentioned earlier was around operations. I think you mentioned catering and grounds as departments where you were thinking through what AI use might look like. Could you unpack that a little bit and just sort of talk us through what you have in mind?

Matthew King 13:27

Yeah, so we've just very recently done our first operational staff workshop on AI. So that was led by somebody called Joe Scotland, who I think has actually been on one of our previous meetings, and she just introduced the basics of ChatGPT and Google Gemini and got the various members of staff to think about a task that they're repeating and almost want the same response to each time. So, for example, that could be an order of food items in response to, I don't know, the monthly calendar, and if there's, say, Chinese New Year in that calendar, how that could be identified and inform the food order. So we are absolutely not at the level where I could give you an example of where that is being used right now, but we've just started to engage our operational staff with it, and they are very, very time poor. So it's a challenge to try and find a suitable time for them to undertake that sort of CPD. And we were really lucky, as I said, to have that in our last INSET day. But it's then even more of a challenge to keep that momentum going. So what we're doing at the moment is, after lots of our different CPD sessions, we try to do the do more, do less, stop doing, start doing kind of approach. So all of our academic departments have produced a do more, do less grid for AI, and our next move is to now look into our operational departments. What one thing could we do that might be transformational for you? And then just focus on that, because you can get very quickly overwhelmed with the new shiny factor, like unwrapping a new present, amazing, it's a bit like that with AI, but there's just too much. So to just focus on one thing, we feel, is a good way forward.

Daniel Emmerson 15:18

Excellent. So it's almost an audit of roles and responsibilities, and then thinking through collectively, okay, where is an instance there that you might be able to automate a task or an initiative? Really, really great to hear about that, Matt. I think the third piece that I wanted to come back to you on, though, was the online hub that you mentioned and the peer training, because again, it sounds as though you have quite a number of staff who were engaged with this. I mean, 20-plus folks who are interested in AI at the get-go and who are keen to engage on that, I think, is quite exceptional in terms of the schools that we're working with, and getting people on board is a really big challenge for a lot of schools. But with this group, I suppose you're able to focus a bit more on the peer training. Can you tell us a bit more about what that looks like, what's involved, and perhaps some of the challenges around it, particularly if we're talking to a school or a community that doesn't have as many teachers taking the initiative?

Matthew King 16:22

Yeah, so we are an Apple one-to-one school. So that brings down a barrier immediately, because it means every single member of the teaching staff has an Apple MacBook and most teaching staff have an iPad. So the ability to actually be there and then try out these tools, without having to book a computer suite and all of the logistical nightmare that can come with organising CPD for a large number of people with computers, is there. Because we have that, that has really helped. I think having a member who drives forward your AI provision sitting on SLT is very useful. So my role does sit on what we call the local SLT, and that then means that you can be involved in CPD conversations and ask for time in X, Y and Z area. We have, I believe, around about 450 staff across the school. So that gives, I suppose, some sense of the fraction when we say 20 teaching staff out of quite a huge number. We did also pitch that we wanted one from each department; that was the kind of expectation that we needed. We keep in touch with everybody via Google Classroom, which has been really useful. In fact, the Good Future Foundation AI Quality Mark audit process that we went through was done digitally through a Google Form. We have calendared meetings, three per year, and that's very helpful. I think if you have that in there from September, people are aware of exactly when they're going to be. But there are still challenges. There are challenges when we've got a new tool that's coming out, or a problem with the GDPR process: how do we very quickly get that information out? We have, like most schools, a briefing each week. We have obviously emails, but, you know, like I'm sure at most schools, there's a huge number of emails that can easily be missed. We have digital signage around the school that's been very useful when we've wanted to show something that's both student and teacher facing. But yeah, I think the challenge that we've certainly got now is, and I'm going to use the word convincing, our kind of stuck-fast teachers that absolutely don't want to know about AI: don't talk to me about AI, nope, not interested, don't like it. We've got to unpick that now and move that conversation into digital literacy and the conversation that we are all responsible for numeracy, we are all responsible for literacy, and we are now all responsible for AI literacy. And it's getting that culture around the school, I think, that's the next challenge.

Daniel Emmerson 19:13

I suppose it is fear, right, that people have: I'm not going to look at this, I'm not going to start using this. And this is something that we, I suppose, saw back in the early days of ChatGPT becoming more mainstream in education conversations. It does still very much exist within, I would say, the majority of schools that we're working in. People have probably got to saturation point with how much they can hear about AI and what they want to know about it. Have you had any conversations recently with some of those folks that are really not keen to get on board? And I'm wondering if you've got any tips or advice you might have for schools that find themselves in a similar position.

Matthew King 19:55

It's the power of conversation, it's the power of face-to-face, regular drop-ins with those sorts of people: so, what is it that you don't like? For example, I'm sure lots of people are having conversations around academic integrity and where students are using AI in areas that they shouldn't be. So when we have a problem with that, there is then a conversation that is had with the student and with their teacher. And quite often, if I'm honest, it starts with the task being set with AI in mind and with an acceptable use of AI in mind. So we've developed an AI barometer that's really helped teachers to have conversations in the classroom. It's literally a scale of 1 to 5. But it's about repeating that, asking every single week, reminding people when we set a task: is it AI resistant? There's a fantastic tool, if anybody has seen it, on MagicSchool, which allows you to set an AI-resistant task. It also just trains you; actually, you only need to do one and then you realise, ah, I see how that has made it an AI-resistant task, so that I can reuse it. So it's often, yeah, about sitting down and listening on a one-to-one basis with that person about what it is that they are scared about. More often than not, it's about doing something wrong for the student and causing the student to become discredited for whatever reason. And more often than not, it's a conversation around, well, actually, this is where it can be used and help, and then they go away and try it. Showcasing it as well: I've quite often offered examples, come and have a look at me, come and have a look at one of the 20 other members of staff that are using it. That helps a lot.

Daniel Emmerson 21:47

So that's something I know will continue to be ongoing, and I understand the value of dialogue and sharing in that space. And I should say as well that although yours is an independent school, you've done a great deal of dialoguing with our community and sharing best practice. What might you say as a final thought, Matt, to schools who are doing a lot of work with AI but are unsure about how to share what they're doing and whether or not they should be sharing more broadly, perhaps with schools that don't have the resource capability and that are coming up against many of these resistances that we've talked about today? Where's a good place to start?

Matthew King 22:27

The first place to start is probably within your school and within the SLT team, establishing that sharing best practice, especially for this, is needed. There is a little bit, particularly in my sector, of a feeling of sharing state secrets, that we need to be very careful. However, I don't think AI is one of those things where we can say, let's keep that to ourselves, at all, because it's so rapidly growing, it's so fast paced, it's got to be outwardly discussed. So it's about getting that agreement there, getting people to realise that this is a collective responsibility that we all have as an education sector. I have been and continue to be quite active on X and on LinkedIn, and you learn very, very quickly about the different groups that are out there where this sort of thing is being discussed an awful lot. There are lots of network groups for both the state and the independent sector that people can get involved in. The Good Future Foundation themselves have got some webinars that you can check into and hear about best practice. There's a newsletter as well that we've all signed up to that we regularly have a little read through. I'm sure you've probably heard of some of the edu celebs, who I won't name, but there are some newsletters that you can sign up to from some of the big names there. I've named one earlier actually, haven't I? But it's useful to have a couple of newsletters. Don't sign up for more than that, because you'll just constantly be bombarded. And actually, I think for me, my background, as I said, is biology and STEM: science, technology, engineering and mathematics. So STEM Learning have a really good online community as well, where you can log in and pitch in, a bit like an online Facebook but just for educators, and there's a section, I believe, growing there for AI.

Daniel Emmerson 24:16

Fantastic stuff, Matt. Of course we're big advocates of STEM Learning and the incredible work that they're doing there as well, and indeed of the work that you continue to do, both at school and also in an outward-facing capacity. Matt, I know that many schools benefited from the presentation you gave on AI recently. Thank you so very much for that, and also for your time today. It's always a pleasure catching up with you and learning about the work you're doing, Matt. A real pleasure. Thanks once again.

Matthew King 24:47

Thank you for having me.

Voiceover 24:48

That's it for this episode. Don't forget, the next episode is coming out soon, so make sure you click that option to follow or subscribe. It just means you won't miss it. But in the meantime, thank you for being here and we'll see you next time.

About this Episode

Matt King: Creating a Culture of AI Literacy Through Conversation at Brentwood School

Many schools begin their AI journey by formulating AI policies. However, Matt King, Director of Innovative Learning at Brentwood School, explains the school's preference for guiding principles over rigid policies, given AI's rapidly evolving nature.

Matthew King

Director of Innovative Learning at Brentwood School

Related Episodes

September 29, 2025

Matthew Pullen: Purposeful Technology and AI Deployment in Education

This episode features Matthew Pullen from Jamf, who talks about what thoughtful integration of technology and AI looks like in educational settings. Drawing from his experience working in the education division of a company that serves more than 40,000 schools globally, Mat has seen numerous use cases. He distinguishes between the purposeful application of technology to dismantle learning barriers and the less effective approach of adopting technology for its own sake. He also asserts that finding the correct balance between IT needs and pedagogical objectives is crucial for successful implementation.
September 1, 2025

Alex More: Preserving Humanity in an AI-Enhanced Education

Alex was genuinely fascinated when reviewing transcripts from his research interviews and noticed that students consistently referred to AI as "they," while adults, including teachers, used "it." This small but meaningful linguistic difference revealed a fundamental variation in how different generations perceive artificial intelligence. As a teacher, senior leader, and STEM Learning consultant, Alex developed his passion for educational technology through creating the award-winning "Future Classroom", a space designed to make students owners rather than consumers of knowledge. In this episode, he shares insights from his research on student voice, explores the race toward Artificial General Intelligence (AGI), and unpacks the concept of AI "glazing". While he touches on various topics around AI during his conversation with Daniel, the key theme that shines through is the importance of approaching AI thoughtfully and deliberately balancing technological progress with human connection.
June 16, 2025

David Leonard, Steve Lancaster: Approaching AI with cautious optimism at Watergrove Trust

This podcast episode was recorded during the Watergrove Trust AI professional development workshop, delivered by Good Future Foundation and Educate Ventures. Dave Leonard, the Strategic IT Director, and Steve Lancaster, a member of their AI Steering Group, shared how they led the Trust's exploration and discussion of AI with a thoughtful, cautious optimism. With strong support from leadership and voluntary participation from staff across the Trust forming the AI working group, they've been able to foster a trust-wide commitment to responsible AI use and harness AI to support their priority of staff wellbeing.
June 2, 2025

Thomas Sparrow: Navigating AI and the disinformation landscape

This episode features Thomas Sparrow, a correspondent and fact checker, who helps us differentiate misinformation and disinformation, and understand the evolving landscape of information dissemination, particularly through social media and the challenges posed by generative AI. He is also very passionate about equipping teachers and students with practical fact checking techniques and encourages educators to incorporate discussions about disinformation into their curricula.
May 19, 2025

Bukky Yusuf: Responsible technology integration in educational settings

With her extensive teaching experience in both mainstream and special schools, Bukky Yusuf shares how purposeful and strategic use of technology can unlock learning opportunities for students. She also equally emphasises the ethical dimensions of AI adoption, raising important concerns about data representation, societal inequalities, and the risks of widening digital divides and unequal access.
May 6, 2025

Dr Lulu Shi: A Sociological Lens on Educational Technology

In this enlightening episode, Dr Lulu Shi from the University of Oxford explores technology's role in education and society through a sociological lens. She examines how edtech companies shape learning environments and policy, while challenging the notion that technological progress is predetermined. Instead, Dr Shi argues that our collective choices and actions actively shape technology's future, and emphasises the importance of democratic participation in technological development.
April 26, 2025

George Barlow and Ricky Bridge: AI Implementation at Belgrave St Bartholomew’s Academy

In this podcast episode, Daniel, George, and Ricky discuss the integration of AI and technology in education, particularly at Belgrave St Bartholomew's Academy. They explore the local context of the school, the impact of technology on teaching and learning, and how AI is being utilised to enhance student engagement and learning outcomes. The conversation also touches on the importance of community involvement, parent engagement, and the challenges and opportunities presented by AI in the classroom. They emphasise the need for effective professional development for staff and the importance of understanding the purpose behind using technology in education.
April 2, 2025

Becci Peters and Ben Davies: AI Teaching Support from Computing at School

In this episode, Becci Peters and Ben Davies discuss their work with Computing at School (CAS), an initiative backed by BCS, The Chartered Institute for IT, which boasts 27,000 dedicated members who support computing teachers. Through their efforts with CAS, they've noticed that many teachers still feel uncomfortable about AI technology, and many schools are grappling with uncertainty around AI policies and how to implement them. There's also a noticeable digital divide based on differing school budgets for AI tools. Keeping these challenges in mind, their efforts don’t just focus on technical skills; they aim to help more teachers grasp AI principles and understand important ethical considerations like data bias and the limitations of training models. They also work to equip educators with a critical mindset, enabling them to make informed decisions about AI usage.
March 17, 2025

Student Council: Students' Perspectives on AI and the Future of Learning

In this episode, four members of our Student Council, Conrado, Kerem, Felicitas and Victoria, who are between 17 and 20 years old, share their personal experiences and observations about using generative AI, both for themselves and their peers. They also talk about why it’s so crucial for teachers to confront and familiarize themselves with this new technology.
March 3, 2025

Suzy Madigan: AI and Civil Society in the Global South

AI's impact spans globally across sectors, yet attention and voices aren't equally distributed across impacted communities. This week, Foundational Impact presents a humanitarian perspective as Daniel Emmerson speaks with Suzy Madigan, Responsible AI Lead at CARE International, to shine a light on those often left out of the AI narrative. The heart of their discussion centres on "AI and the Global South: Exploring the Role of Civil Society in AI Decision-Making", a recent report that Suzy co-authored with Accenture, a multinational tech company. They discuss how critical challenges, including digital infrastructure gaps, data representation, and ethical frameworks, perpetuate existing inequalities. Increasing civil society participation in AI governance has become more important than ever to ensure inclusive and ethical AI development.
February 17, 2025

Liz Robinson: Leading Through the AI Unknown for Students

In this episode, Liz opens up about her path and reflects on her own "conscious incompetence" with AI - that pivotal moment when she understood that if she, as a leader of a forward-thinking trust, feels overwhelmed by AI's implications, many other school leaders must feel the same. Rather than shying away from this challenge, she chose to lean in, launching an exciting new initiative to help school leaders navigate the AI landscape.
February 3, 2025

Lori van Dam: Nurturing Students into Social Entrepreneurs

In this episode, Hult Prize CEO Lori van Dam pulls back the curtain on the global competition empowering student innovators to become social entrepreneurs across 100+ countries. She believes in sustainable models that combine social good with financial viability. Lori also explores how AI is becoming a powerful ally in this space, while stressing that human creativity and cross-cultural collaboration remain at the heart of meaningful innovation.
January 20, 2025

Laura Knight: A Teacher’s Journey into AI Education

From decoding languages to decoding the future of education: Laura Knight takes us on her fascinating journey from a linguist to a computer science teacher, then Director of Digital Learning, and now a consultant specialising in digital strategy in education. With two decades of classroom wisdom under her belt, Laura has witnessed firsthand how AI is reshaping education and she’s here to help make sense of it all.
January 6, 2025

Richard Culatta: Understand AI's Capabilities and Limitations

Richard Culatta, former Government advisor, speaks about flying planes as an analogy to explain the perils of taking a haphazard approach to AI in education. Using aviation as an illustration, he highlights the most critical tech skills that teachers need today. The CEO of ISTE and ASCD draws a clear parallel: just as planes don't fly by magic, educators must deeply understand AI's capabilities and limitations.
December 16, 2024

Prof Anselmo Reyes: AI in Legal Education and Justice

Professor Anselmo Reyes, an international arbitrator and legal expert, discusses the potential of AI in making legal services more accessible to underserved communities. He notes that while AI works well for standardised legal matters, it faces limitations in areas requiring emotional intelligence or complex human judgment. Prof Reyes advocates for teaching law students to use AI critically as an assistive tool, emphasising that human oversight remains essential in legal decision making.
December 2, 2024

Esen Tümer: AI’s Role from Classrooms to Operating Rooms

Healthcare and technology leader Esen Tümer discusses how AI and emerging trends in technology are transforming medical settings and doctor-patient interactions. She encourages teachers not to shy away from technology, but rather understand how it’s reshaping society and prepare their students for this tech-enabled future.
November 19, 2024

Julie Carson: AI Integration Journey of Woodland Academy Trust

A forward-thinking educational trust shows what's possible when AI meets strategic implementation. From personalised learning platforms to innovative administrative solutions, Julie Carson, Director of Education at Woodland Academy Trust, reveals how they're enhancing teaching and learning across five primary schools through technology and AI to serve both classroom and operational needs.
November 4, 2024

Joseph Lin: AI Use Cases in Hong Kong Classrooms

In this conversation, Joseph Lin, an education technology consultant, discusses how some Hong Kong schools are exploring artificial intelligence and their implementation challenges. He emphasises the importance of data ownership, responsible use of AI, and the need for schools to adapt slowly to these technologies. Joseph also shares some successful AI implementation cases and how some of the AI tools may enhance creative learning experiences.
October 21, 2024

Sarah Brook: Rethinking Charitable Approaches to Tech and Sustainability

In our latest episode, we speak with Sarah Brook, Founder and CEO of the Sparkle Foundation, currently supporting 20,000 lives in Malawi. Sarah shares how education is evolving in Malawi and the role AI plays for young people and international NGOs. She also provides a candid look at the challenges facing the charity sector, drawing from her daily work at Sparkle.
October 7, 2024

Rohan Light: Assurance and Oversight in the Age of AI

Join Rohan Light, Principal Analyst of Data Governance at Health New Zealand, as he discusses the critical need for accountability, transparency, and clear explanations of system behaviour. Discover the government's role in regulation and the crucial importance of strong data privacy practices.
September 23, 2024

Yom Fox: Leading Schools in an AI-infused World

With the rapid pace of technological change, Yom Fox, the high school principal at Georgetown Day School, shares her insights on the importance of creating collaborative spaces where students and faculty learn together, and on teaching digital citizenship.
September 5, 2024

Debra Wilson: NAIS Perspectives on AI Professional Development

Join Debra Wilson, President of the National Association of Independent Schools (NAIS), as she shares her insights on taking an incremental approach to exploring AI. Discover how to find the best solutions for your school, ensure responsible adoption at every stage, and learn about the ways AI can help tackle teacher burnout.
April 18, 2024

Steven Chan and Minh Tran: Preparing Students for AI and New Technologies

Steven Chan and Minh Tran discuss the importance of preparing students for AI and new technologies, the role of the Good Future Foundation in bridging the gap between technology and education, and the potential impact of AI on the future of work.

Matt King: Creating a Culture of AI Literacy Through Conversation at Brentwood School

Published on
September 15, 2025

Matt King is an experienced education leader and STEM lead, with 16 years of teaching experience, a National Professional Qualification in Senior Leadership (NPQSL), a Postgraduate Certificate in Education (PGCE), and a Bachelor of Science (BSc) in Oceanography.

My passion for STEM and EdTech has been recognized by multiple awards and honors, such as the STEM National Teaching Award for Outstanding Engagement with the STEM Community in 2022, the Royal Society of Biology Teacher of the Year Finalist in 2021, and the Apple Teacher Certification in 2019. I am also a qualified STEM Level 3/Senior Professional Development Lead and ENTHUSE Partnership Coach, and I have led the only specialist Biology Science Learning Partnership in the UK for STEM Learning since 2021.

Video Recaps

Summary

Many schools begin their AI journey by formulating AI policies. However, Matt King, Director of Innovative Learning at Brentwood School, reveals their preference for establishing guiding principles over rigid policies considering AI’s rapidly evolving nature. 

Matt shares his approach to fostering dialogue with diverse school stakeholders. He describes engaging primary students in AI ethics and literacy, obtaining parental consent for use of AI in students’ learning, exploring AI applications with operational staff, and addressing teacher resistance through supportive one-on-one conversations. 

Throughout Brentwood’s AI journey, human connection remains central to cultivating a school-wide culture of AI literacy and integration.  It is the power of conversation, not rulebooks that create the foundation for responsible and effective AI adoption. 

Transcription

Daniel Emmerson 00:02

Welcome to Foundational Impact, a podcast series that focuses on education and artificial intelligence from a non profit perspective. My name is Daniel Emmerson and I'm the Executive Director of Good Future Foundation, a nonprofit whose mission is to equip educators to confidently prepare all students, regardless of their background, to benefit from and succeed in an AI infused world. 

Welcome to Foundational Impact. This is a podcast series where we're exploring artificial intelligence in education through lots of different lenses. And it's wonderful to have you with us as an educator who has just embarked on the AI quality mark with Good Future Foundation. Matt, it would be wonderful at the outset, if it's okay, please, for you to just give us a bit of insight as to who you are and what you do and a bit about your school as well, if you don't mind.

Matthew King 00:56

Okay. So my name is Matt King and I've just started actually in September in the position that I am currently in. So my title is Director of Innovative Learning at a school which is called Brentwood School in Essex. We are an independent school. We're an all through school about to have quite a large expansion in our boarding provision. And with that, as I'm sure lots of you are all experiencing, comes the challenge of digital security, digital safety. Part of my role is to lead on AI across the school, but also look at all of our digital tools that our students are using and identifying where they add, where in some cases they detract from the quality of the students learning.

Daniel Emmerson 01:41

When you approach that initial role, as you said, you're relatively new to the position and you learned that you'd be responsible for investigating AI and where it does and where it doesn't belong in the school. What was your immediate response to that and perhaps how has that changed over the course of the last few months?

Matthew King 01:59

So I took over from someone that is called Greg Justam, he's now the Deputy Head and he has done an absolutely brilliant job already in the journey towards AI as a school. He's a computer scientist and I am a biologist. So we have quite different takes on it, but in some ways useful because with my kind of biologist hat on, the strategic approach of it being a bit like an experiment with a hypothesis, me wanting to audit our current provision and see essentially where we are at and then use that as a starting point. And for me that's exactly where the AI quality mark has been useful because it's given me exactly that. Where are we at right now? Where are our areas that we could try and add further improvement in? Also I think there was a little bit of a feeling at the beginning when I started that I was going to need to hit the ground running and have this all correct and signed off and policies written, etc, but very, very quickly. You realize, especially with AI, having a policy a little bit daft because you need to rewrite that every few weeks. It's not something that is going to be stationary and you've got to be comfortable with that. You've just got to remain informed. And again, Good Future foundation has been a really good way of remaining informed about what other schools are doing. So I personally feel that we're in quite a good position now as a school and I know clearly what I need to do, what we need to do in order to get even better with our AI provision.

Daniel Emmerson 03:29

The policy approach is an interesting one for sure. I mean, we speak to schools up and down the country around whether or not policy is the right way to go. I think the key is golden rules and best practice, whether that takes the form of a policy document that sits on the website, or whether that's part of professional development, or whether there's a set of guidelines or even in some cases, schools who create a handbook around how to, how to approach AI tools. Can you give us a bit of insight, Matt, as to what you do around that space? Just so that teachers know what they can and can't do and if they've got questions, where do they go, that sort of thing.

Matthew King 04:10

So when I took over the role from Greg, he'd already established a team of around 25 teachers who had a real interest in AI. We refer to them as our AI associates. So that had already developed, like you do with any kind of change management process, a set of staff who had some influence in each of their departments to be able to springboard new initiatives and gauge ideas off of one another. So that, having that in place already for me was very useful because I then had that audience that I could work with. You use the phrase golden rules. We refer to ours as guiding principles. And that's exactly what we've got. We have. It's actually in the shape of a temple to kind of give it the idea that it's foundational. And we've got four guiding principles that we as an entire entity. And this is not just teachers, this is students, this is operational staff that, that we have all agreed to adhere to. And that's been partly developed through doing the AI quality mark process and realising where are our risk areas. For us, it's particularly around outward communication because we are an independent school. We have to manage our reputation. There are now tools that will claim to auto-respond for you and things like that we feel would be a reputational risk for us. So we've paired that into our four guiding principles. Another really big underpinning one for us is the importance of human interaction. I think. I've been in education for 16 years. I think that is what the, and the gold dust, I suppose, is of education is that we're living in a society with more and more and more and more social media and students are spending less and less and less and less time with each other. And actually schools are perhaps one of the last places now where they are having those social interactions with each other. And AI, we see that as a potential threat there if we're not careful with what AI tools we're going to allow our students to begin using. So that's another one of our guiding principles. I was listening to, actually a presentation that was being given by Ben Whitaker and he had, right at the very beginning, the phrase of AI is not a policy. There's absolutely no merit in having it as a standalone area on your school improvement plan. It needs to be woven into multiple aspects and we've certainly been doing that here. When we've had our strategic planning days, we have middle leader away days and we have senior leader away days at Brentwood. And that gives us a lot of opportunity to see, for example, where AI can have some benefits within teaching and learning. But also AI, we think, can start to have some benefits even within our catering provision, potentially in our site grounds team provision, potentially. So that was a definite second way that we've been trying to build in some principles and obviously CPD, and I am very lucky that Greg, who is deputy head in charge of start and CPD, is a computer scientist and therefore is quite willing to give time to staff to have these CPD sessions. So we've, we've done a few different ways now. We've had some online workshops, we've had some in person carouselled workshops during INSET. We have an online hub where lots of different tools are added and people are recorded using the tools to share with others. Because we feel that peer training is much more useful than certainly what I would term as guru-led training or educeleb-led training. 
Because, yeah, just having people on, you know, within, within our school that are using this has much more power, we think, to our CPD.

Daniel Emmerson 08:10

Well, you've said some really interesting things, Matt, and I want to come back to you on, on a few of them because they're, they're sort of recurring themes, I suppose, within the field and also within the podcast series. The first is on the social piece that you mentioned around the amount of time that students have to spend with each other as far as social and emotional development is concerned, as far as social engagement is concerned, and how AI is potentially a threat to what is ultimately one of the most valuable experiences, one might argue, of a school setting. Can you tell us a bit more about that in terms of the sort of things that you're seeing, things you might be hearing from students, and how the school is addressing this?

Matthew King 08:51

There is a shift already happening in what AI tools can be used from age 13 and above and those that can't. For example, I know we're not a Microsoft school, but Copilot is rumoured to be going below the age of 13. So we do or have asked our parents for consent on are they happy for AI tools to be enabled on student devices? And we do that from Year 9 upwards. And that's the point at which they would start to have some specific AI lessons, as in how to use the AI tools. But for us, it's definitely about AI literacy and the ethics surrounding AI, which can and does begin from early years. I'm just sort of new into the role, but we've just started having conversations in our prep school about where it would be appropriate to actually introduce the concept of what an AI is. And there's some fantastic books out there now that can do just that. Canva, as I'm sure lots of people are aware, can be used by students under the age of 13. So showing image bias. I've seen this done quite well in a Year 5 lesson where they generate an image of a scientist and then critique the image that it then produces and the bias that it has. And also, as you are very much aware, Daniel, because you are coming as Good Future foundation to it, we're hosting a STEM primary conference this year and part of that is going to really have an AI focus to it, because primary level is perhaps the area that needs to really be considered for bringing in this idea of AI ethics and AI literacy.

Daniel Emmerson 10:35

100% agree with that, Matt. I would emphasise, though, typically our approach to this, as you've, as you've suggested here, is conversation around what that looks like and activities around thinking through responsible AI use even from a very, very young age, as long as that content is age appropriate. Young learners don't need access to AI tools in order to start thinking about what responsible use looks like. And that can start from early years, as you suggested. I'm really interested as well in this parental consent, because you're the first school that I've learned about that has actually rolled this out. And often we have conversations with schools around, okay, there's parental consent that AI tools might be used, but different AI tools have different policies and do parents, or will all parents really understand what they're consenting to if they sign off on this one? How have you addressed those different things with the consent form?

Matthew King 11:39

I think you're absolutely right. To be able to produce almost a list of the exact AI tools that their student would be using would be nigh on impossible. But where we're asking for consent is quite specific within our Year 9 digital skills program, where we can actually give some quite specific guidance on what we're doing within that digital skills programme. And the point that we have as a school reflected on from the Good Futures Foundation AI quality Mark, is exactly that engagement with parents is an area that, and I've been discussing this at various other network group meetings, we all need to do a lot more contact with because some parents I speak to on the phone have quite a terrified response to AI and therefore they don't want to go near it. Some parents are specialists in the field and might be a specialist within, say, a financial position which has got quite different uses of AI and ethics surrounding it than we have as a school. So to sort of answer your question, I suppose succinctly it was a challenge and is a challenge to get parental consent. And ultimately if a parent does not want their child to use AI tools, we as a school are still, I feel responsible for discussing that use of AI. So it's more the direct use, direct enabling on their child's device that we're consenting for, not the entire AI conversation.

Daniel Emmerson 13:10

Great stuff, Matt. Thank you. Another piece that you mentioned earlier was around operations. I think you mentioned catering and grounds as departments where you were thinking through what AI use might look like. Could you unpack that a little bit and just sort of talk us through what you have in mind?

Matthew King 13:27

Yeah, so we've just very recently done our first operational staff workshop on AI. So that was led by somebody called Joe Scotland, who I think has actually been on one of our previous meetings and she just introduced the basics of ChatGPT and Google Gemini and got the various members of staff to think about a task that they're repeating and almost want the same response to each time. So, for example, that could be an order of food items in response to, I don't know, the monthly calendar and if there's a particularly Chinese New Year, for example, in that calendar, how that could be identified and informed, the food order. So we are absolutely not at the level where I could give you an example of where that is using right now, but we've just started to engage our operational staff into it who are very, very time poor. So it's a challenge to try and find a suitable time for them to undertake that sort of CPD. And we were really lucky, as I said, to have that in our last INSET day. But it's then even more of a challenge to keep that momentum going. So what we're doing at the moment is after lots of our different CPD sessions, we try to do the do more, do less, stop doing, start doing kind of approach. So we've got all of our academic departments have produced a do more, do less grid for AI and that would be our next move, is to now look into our operational departments. What one thing could we do that is might be transformational for you and then just focus on that because you can get very quickly overwhelmed and the new shiny factor, you unwrap a new present. Amazing bit like that with AI, but there's just, it's too much. So to just focus on one we feel is a good way forward.

Daniel Emmerson 15:18

Excellent. So it's almost an audit of roles and responsibilities and then thinking through collectively. Okay, where is an instance there that you might be able to automate a task or, or an initiative? Really, really great to hear about that, Matt. I think the third piece that I wanted to come back to you on though was the online hub that you mentioned and the peer training, because again, this is something that. It sounds as though you have quite a number of staff who were engaged with this. I mean, 20 plus folks who are interested in AI at the get go and who are keen to engage on that, I think is quite exceptional in terms of the schools that we're working with and getting people on board is a really big challenge for a lot of schools. But with this group, I suppose you're able to focus a bit more on the peer training. Can you tell us a bit more about what that looks like, what's involved and perhaps some of the challenges around it, particularly if we're talking to a school or a community that doesn't have as many teachers that are taking the initiative.

Matthew King 16:22

Yeah, so we are an Apple one-to-one school. So that brings down a barrier immediately because it means every single member of the teaching staff has an Apple MacBook and most teaching staff have an iPad. So therefore the ability to actually be there and then try out these tools without having to book a computer suite and all of the logistical nightmare that can come with organising CPD with a large number of people with computers is gone. Because we have that that has really helped. I think having a member that drives forward your AI provision that is sitting on SLT is very useful. So my role does sit on what we call the local SLT and that then means that you can be involved in CPD conversations and ask for time in X, Y and Z area. We have I believe around about 450 staff across the school. So that gives some I suppose fraction to when we say 20 teaching staff out of quite a huge number. We did also pitch that we wanted one from each department. It was the kind of an expectation that we needed that we keep in touch with everybody via Google Classroom which has been really useful. Lots of in fact the, the Good Future Foundation AI Quality Mark the, the audit process that we went through that was done digitally through a Google form. We have calendared meetings. There are three per year and that's very helpful. I think if you have that in there from September people are aware of exactly when they're going to be. But there are still challenges. There are challenges when we've got a new tool that's coming out or a problem with the GDPR process that how do we very quickly get that information out? We have like most schools briefing each week. We have obviously emails but you know, like I'm sure most schools there's a huge number of emails that can easily be missed. We have digital signage around the school that's been very useful when we've wanted to show something that's both student and teacher facing. But yeah, I think the, the challenge that we've certainly got now is I'm going to use the word convincing. Our kind of stuck fast teachers that are absolutely don't want to know about AI. Don't talk to me about AI. Nope, not interested. Don't like it that we've got to unpick that now and move that conversation into digital literacy and the conversation around. We are all responsible for numeracy, we are all responsible for literacy, we are now all responsible for AI literacy and it's getting that culture I think around the school that's the next challenge.

Daniel Emmerson 19:13

I suppose it is fear, right, that people have: I'm not going to look at this, I'm not going to start using this. And this is something that we saw, I suppose, back in the early days of ChatGPT becoming more mainstream in education conversations. It does still very much exist within, I would say, the majority of schools that we're working in. People have probably got to saturation point with how much they can hear about AI and what they want to know about it. Have you had any conversations recently with some of those folks who are really not keen to get on board? And I'm wondering if you've got any tips or advice for schools that find themselves in a similar position.

Matthew King 19:55

It's the power of conversation, the power of face-to-face, regular drop-ins with those sorts of people: so, what is it that you don't like? For example, I'm sure lots of people are having conversations around academic integrity and students using AI in areas where they shouldn't. When we have a problem with that, there is then a conversation that is had with the student and with their teacher. And quite often, if I'm honest, it starts with the task: was it set with AI in mind, with an acceptable use of AI in mind? So we've developed an AI barometer, which has really helped teachers to have conversations in the classroom. It's literally a scale of 1 to 5. But it's about repeating that, asking every single week, reminding people when we set a task: is it AI resistant? There's a fantastic tool, if anybody has seen it, on Magic School, which allows you to set an AI-resistant task. It also trains you: actually, you only need to do one and then you realise, ah, I see how that has made it an AI-resistant task, so I can reuse the approach. So it's often about sitting down and listening on a one-to-one basis to that person about what it is they are scared of. More often than not, it's about doing something wrong for the student and causing the student to become discredited for whatever reason. And more often than not there's a conversation around, well, actually this is where it can be used and be helpful, and then they go away and try it. Showcasing it helps as well. I've quite often offered examples: come and have a look at me, come and have a look at one of the 20 other members of staff that are using it. That helps a lot.

Daniel Emmerson 21:47

So that's something I know will continue to be ongoing, and I understand the value of dialogue and sharing in that space. And I should say as well that although yours is an independent school, you've done a great deal of dialoguing with our community and sharing best practice. What might you say as a final thought, Matt, to schools who are doing a lot of work with AI but are unsure about how to share what they're doing, and whether or not they should be sharing more broadly, perhaps with schools that don't have the same resource capability and are coming up against many of the points of resistance that we've talked about today? Where's a good place to start?

Matthew King 22:27

The first place to start is probably within your school and within the SLT team, establishing that sharing best practice, especially for this, is needed. There is a little bit of a feeling, particularly in my sector, that sharing best practice is like sharing state secrets and we need to be very careful. However, I don't think AI is one of those things where we can say, let's keep that to ourselves, because it's growing so rapidly, it's so fast paced, it's got to be outwardly discussed. So it's about getting that agreement there, getting people to realise that this is a collective responsibility we all have as an education sector. I have been and continue to be quite active on X and on LinkedIn, and you learn very quickly about the different groups that are out there where this sort of thing is being discussed an awful lot. There are lots of network groups for both the state and the independent sector that people can get involved in. The Good Future Foundation themselves have got some webinars that you can check in to and hear about best practice. There's a newsletter as well that we've all signed up to and regularly have a little read through. I'm sure you've probably heard of some of the edu celebs I won't name, but there are some newsletters you can sign up to from some of the big names there. I've named one earlier, actually, haven't I? It's useful to have a couple of newsletters; don't sign up for more than that because you'll just constantly be bombarded. And actually, for me, my background, as I said, is biology and STEM: science, technology, engineering and mathematics. STEM Learning have a really good online community as well, where you can log in and post, a bit like an online Facebook but just for educators, and there's a section, I believe, growing there for AI.

Daniel Emmerson 24:16

Fantastic stuff, Matt. Of course we're big advocates of STEM Learning and the incredible work they're doing there, and indeed of the work that you continue to do both at school and in an outward-facing capacity. Matt, I know that many schools benefited from the presentation you gave on AI recently. Thank you so very much for that, and also for your time today. It's always a pleasure catching up with you and learning about the work you're doing. A real pleasure. Thanks once again.

Matthew King 24:47

Thank you for having me.

Voiceover 24:48

That's it for this episode. Don't forget, the next episode is coming out soon, so make sure you click that option to follow or subscribe. It just means you won't miss it. But in the meantime, thank you for being here and we'll see you next time.
