Claire Archibald: Creating Effective AI Governance Structures in Schools

February 6, 2026

Video Recap

Summary

Is having an AI policy enough to protect your school? In this episode, Daniel Emmerson speaks with Claire Archibald, Legal Director at Browne Jacobson and former Data Protection Officer, about what effective AI governance in schools looks like.

Their conversation covers essential topics including what makes a good Data Protection Impact Assessment (DPIA), the importance of vendor due diligence, and why schools need robust governance structures beyond just having an AI policy. Claire emphasises the critical role of incident reporting, creating transparent cultures around AI use, and the need for collaborative approaches involving all stakeholders. She also shares a six-step governance framework and practical advice for schools starting their AI journey.

Transcript

Daniel 00:02

Welcome to Foundational Impact, a podcast series that focuses on education and artificial intelligence from a non-profit perspective. My name is Daniel Emmerson and I'm the executive director of Good Future Foundation, a non-profit whose mission is to equip educators to confidently prepare all students, regardless of their background, to benefit from and succeed in an AI-infused world.

Daniel 00:27

Welcome everybody, once again, to Foundational Impact. It's an absolute delight and privilege to have Claire Archibald with us today. Over the course of the previous episodes we've spoken about lots of different aspects of responsible AI use and best practice and what that means, and I'm super excited to have Claire with us today to talk about that, particularly from a governance perspective. Claire, a very, very warm welcome to you. Thank you so much for being with us today. I'm wondering if we could kick off by just learning a little bit about who you are and what you do in this space.

Claire 01:06

Okay. Hi. Thanks ever so much for having me on. I'm really pleased to be here. So I'm primarily a data protection, freedom of information and AI lawyer. I'm a legal director at Browne Jacobson; I've been there for about a year. Prior to that, I was a data protection officer in a local authority, providing DPO support for about 400 schools and academies across the country. But prior to that again, I was a school business manager. So how does a school business manager end up being a lawyer in a really big law firm? Well, my career started off as a lawyer in my early 20s. I spent a really long time trying not to be a lawyer. But I found myself back here and very much in my happy place, bringing together all the amazing experiences I've had throughout my career: counselling, mediation, all the school business management I did, running a DPO service. And it's all come together in a really happy place at Browne Jacobson.

Daniel 02:07

Can we unpack that DPO role for a second, just for folks who might not know what that is? What is a DPO, and what does that mean in a school sense?

Claire 02:18

DPO stands for Data Protection Officer. The GDPR, back in 2018, established a requirement for certain organisations to have a data protection officer as a statutory role. And that would include schools; they're public authorities. The personal data that they process is what would be considered high risk: very sensitive, very personal data, I think second only to health, really, in terms of the sector that has the most confidential and sensitive data in society. So schools had to have a statutory data protection officer. And so I went into the local authority to be part of a traded service providing that statutory DPO service to schools. And it grew and grew. It was just before the pandemic, and the pandemic fundamentally changed the nature of the way personal data is processed in schools. And so what we thought initially was going to be a small project, just to help schools to, you know, get onto their own feet in terms of managing that, actually became something really big. It grew, and obviously we were there with schools throughout the pandemic, supporting them with all of their digital challenges and cybersecurity challenges. And then the more work schools did to improve their data protection compliance programme, actually, the more they realised they needed to do. So it was a real privilege to work with schools during what was a really, really difficult time for them. You know, I won't lie to you: with some schools it was like skipping through a meadow hand in hand as we skipped towards data protection compliance, and for other schools it was a bit like pushing a broken-down car up a hill in mud. But it didn't matter, we got there anyway. And you know, I tried to make sure that it was a really positive experience. So whether you loved it or loathed it, I tried to make it a joyful and happy experience. So it became happy work for hard-working staff in schools, not just something that had to be tolerated or endured. I think we achieved it.
People do say they have a good time working with me.

Daniel 04:28

I'm sure that's very much the case, Claire. I'm really interested to know, though, particularly for the international audience as well, who might not have exactly the same setups in their schools, particularly around data: how did this give you the grounding that you needed in order to be able to pivot, if that's the right word, towards AI?

Claire 04:55

Yeah, yeah. So, I mean, you know, obviously with ChatGPT in 2022, people started talking about ChatGPT. As a data protection officer, I was helping a lot of schools doing their data protection impact assessments and really thinking about the data protection considerations for the projects that they wanted to do. And I could see that ChatGPT was going to become a big thing in edtech. Using AI tools was going to become a big thing. So primarily it was about the data protection challenge. And I thought, gosh, as DPO, I need to be ahead of this, because schools are going to want a bit of this; this is going to fundamentally change how they work. So I need to understand what's going on here. And so I did a lot of work in terms of increasing literacy and awareness of AI, not just within schools, but actually within the local authority that I was working in as well. And so, you know, kind of the first, most obvious thought for me was: well, we need to have some kind of a policy in place. And so I was supported again by the local authority and by the traded service, which was fantastic, and I helped to produce a template that schools could use (there were a couple of people doing this at the same time), just so that, at the outset of their AI governance journey, they had something; they didn't just have a blank piece of paper. And so they had a policy, and we were able to release that under a Creative Commons licence and make it available for anybody who wanted it. So I felt that was a really great contribution to the sector. Then I moved to Browne Jacobson, and actually, you know, obviously during this time use of AI is increasing exponentially, challenges are increasing. And you know, I said to the leaders there, I said, I think we need to be doing something around AI and giving schools some really proactive information on AI. You know, this policy, I think we could build on that.
And to be fair, the leaders within Browne Jacobson, fantastic intellectual minds, gave me a really hard time over it. They know schools really well; they know that a policy does not make a really good governance structure. They had great in-depth experience of safeguarding and establishing safeguarding structures within schools. We know that in safeguarding you don't just have a child protection policy and appoint a designated safeguarding lead and that's your safeguarding done. And so they challenged me to say, come on, you know, we need to do something much more meaningful in terms of AI governance. You can't just have a DPO and an AI policy; it needs something so much more. And then of course the firm doesn't just do education law; they do all sorts of law, advising corporates. And I went and spoke to one of the corporate partners and said, you know, what are you doing for your corporate clients? They said, oh, we've built this framework, a six-step governance framework. I thought, this is fantastic, the six steps. And then I went away and did the IAPP AIGP course, so I got the qualification in that and really learned much more about how AI works, because fundamentally, if you're going to govern it, you do need to know some of the terminology. One of the things I do notice in the sector, actually, is that there are lots of people who will have big opinions about AI, and I read them and I'm sort of slightly cringing, because I think, I really wish you understood how the AI works a little bit, especially when they're talking about, you know, inputs training AI models like it's some kind of active, ongoing process, that ChatGPT is continually learning from every input that you give it. In fact, I wish it would, because I have to tell my AI tool every single time I use it that I prefer to use sentence case and not capitalise every letter in a heading. Anyway, I digress.
So I went away, really deeply understood it, and then worked within the education team, and then we really built that six-step governance framework out for schools as well. And it's been a real privilege then, over the last sort of seven to eight months or so, to accompany some of those schools and academy trusts on that governance journey as they implement those six steps. And they come back to me and say, you know, this bit's going well. Or, actually, we could do with a different kind of AI literacy framework for our support staff compared to our teaching staff; how do we create that? And so everything's been around accompanying schools. You know, we're all learning together at the end of the day on this, aren't we? And so, you know, having all of those concepts and saying, guys, I think you should do it like this, and then actually accompanying them through the journey and seeing them put together things like the web pages for stakeholders, the terms of reference for their steering groups, seeing those first DPIAs coming through; seeing those concepts, you know, this kind of big plan I had, seeing them then implement it, has been a real privilege.

Daniel 09:41

A couple of things, Claire, if I might just go back for a moment. So you talked about working with the local authority. Could you just give us a clearer understanding of what that work looked like on a day-to-day basis versus what you're doing now in the law firm, just to help the audience think through, okay, this is what that meant for Claire at that time?

Claire 10:04

So, not hugely different, because from a data protection point of view and from a kind of governance point of view, schools were always able to make their own decisions about what they wanted to do. They were their own data controllers, so they always had that autonomy. I suppose from the perspective I'm at now, I'm able to build, I suppose, more innovative services and products, because I'm not within that local authority structure, but not too different. I mean, again, just looking back to the local authority days, it seems like a long time ago now, but we all had a job to do in terms of understanding what AI actually was. And so I think the first time I presented internally to the local authority, I set up three mock accounts on the three big tools and kind of put fake council data into the tools and said, look, I could be doing this as a staff member, I could be doing that, I could be putting this in. This is the warning that pops up when I use Gemini. This is the warning that pops up when I use Claude. What are we doing to kind of govern this? What are we doing to kind of monitor how your staff are using AI tools just through their web browsers? So it was a great moment, I think. And it was really nice, some of those messages: you know, don't be an ostrich with your head in the sand. I remember doing a particular slide trying to encourage everybody to think about, you know, we cannot turn a blind eye to this. We need to front it out. And actually, what we really need is that concept of a bowling lane with bumpers. We've got to enable our staff to use these fantastic tools, but we've got to be there as the bumpers to stop the ball going into the gutters. That kind of terminology. So, yeah, really similar. And again, things have really changed. And I've obviously been advising and doing AI as part of data protection courses for data protection officers in schools for about a year now.
And I would say the training courses that I was delivering a year ago are now really different to what I'm delivering this year. There's a much greater awareness now in schools of where AI functionality is, how it's coming into the organisation. And now for me, the next challenge is, well, what are you going to do beyond a policy? How are you going to make this just part of normal school business?

Daniel 12:32

Well, there are some interesting mechanisms in place to help schools with that. And you mentioned DPIAs, so we should perhaps give a bit of an overview, Claire, if that's all right, as to what a good DPIA looks like. But then moving on from that, of course, as you've alluded to already, just having the DPIA in place doesn't necessarily mean that staff are adhering to best practice. We've found, in a lot of the conversations we've had with schools, and indeed in research, that AI use is often far from transparent in terms of how it's being used by different stakeholders across the organisation. So let's maybe tackle that one first. What is a DPIA, and what does a good DPIA look like?

Claire 13:21

So a DPIA, you know, is a data risk assessment, very simply. And schools are really used to writing risk assessments. They do it all the time: every time they want to take their children off the premises, or they want to do a PE lesson, or they're climbing up and down the gym equipment, they do risk assessments, health and safety risk assessments and so on. So I always try to demystify it and say, you know, it's primarily a risk assessment, just with a data focus. It's really important to focus the subject of that risk assessment on the data subjects, you know, the risk to those individuals. I've seen lots of DPIAs, and I could tell you so many mistakes I've seen with them. I think, you know, one of the big mistakes is that it really focuses on the risk to the organisation: you know, we might get a fine if we are responsible for a data breach. That is the wrong way to go about a DPIA. A DPIA is: what is the impact on the data subject, and how are you going to mitigate against that? Not: how is the organisation going to mitigate against getting a reprimand or a fine from the regulator? So I think a really good risk assessment does lots of things. For a lot of education settings, I think one of the things that they really could improve on is having really good project plans. And so actually the DPIA, for me, becomes in some ways a bit of a hook for an entire project plan. And I think it's really important at the outset of your DPIA to articulate what your intended aim is, what success looks like, what good looks like. So how do we know if the thing that we want to do has been achieved, that the success criteria have been met, in a way that makes the additional risk worth it? So as I say, one of the big things I've seen is: oh, well, we need to do a DPIA into this thing. Well, why do you want to do the thing? We don't know, we just want it; the school down the road does it; we saw it at the BETT roadshow; everybody's got it; it's the hottest new thing.
And so actually really articulating what success looks like is the first, the big part for me. Then there are lots of different things in really going through a good analysis of the risks, the risk assessment element of it. Again, I see the sector getting more sophisticated in terms of the write-up. In my mind's eye, I'm looking specifically at the ICO template here, and what you have then is the actual risk assessment, and it's a table that you fill out. And then very often what I see is organisations, particularly schools, kind of running aground when they get to that risk assessment bit. And then they're like, oh, what's the risk? Oh, I don't know, there could be a cyber attack, so we'll have really strong passwords. And then, you know, I'm really disappointed by that risk assessment. So, for me, a really good risk assessment for data is going to look at all of the data protection principles. Is it fair, lawful and transparent? Are you minimising the data? How are you going to store it for the minimum amount of time possible? How are you securing it? All of those principles, and then looking at each of the risks against each of the principles so that you really flush it out. So rather than looking at a blank table, you're going: well, these are all the risks, and how might they surface? And then for AI, I think you could really adapt that DPIA template to further embed some AI risks. So, you know, if you're looking at lawfulness, fairness and transparency, perhaps then, in that kind of category, you could look at intellectual property risk, you know, and call your DPIA a DPIA and an AI risk assessment, to make it bigger and have a look at all of those risks as well. So again, you can build out and really expand that. The next thing, I think, is something that is really often overlooked as part of the DPIA process.
And I think the DPIA, in an annex or an appendix, is a really good place to kind of show the audit trail of this: the vendor due diligence. You know, so really having a detailed conversation, really looking at the vendor's terms in detail, making sure that you read them and understand, you know, any ambiguous terms or terms that are really in the vendor's favour, and then making sure that you've got some evidence that you've gone and done some due diligence on that vendor as well, in terms of, you know, how do they store data, what are their security protocols, and are they looking at their downstream chain as well, in terms of their suppliers, really examining that. And then again, the security piece, the data protection security piece: you can expand that to do due diligence on maybe an AI model. So really looking at how the app is built, how maybe system prompts are working, what safeguards are in the system, what protection from jailbreaking is in the system. In England or Wales, then, there are the DfE standards for AI, and for students, you know, if it's a student-facing AI, asking those particular questions around monitoring and filtering and how that's built into the system. So again, the DPIA can be a vehicle for a lot of evidence of what success looks like and what absolutely all of the risks are, not just from a data point of view; as you say, you can build it out. There's no rule that says DPIAs need to just do one thing and you need to go and do AI risk assessments somewhere else. It can all be part of one big project plan, and then really that due diligence, both in terms of the vendor and what they're doing and, if it's an AI, in terms of how their AI model, tool, app, interface, etc. is all built as well. There's loads you can do with the DPIA. I love them; I could do them all day.

Daniel 19:03

I mean, some of what you were talking through there, Claire, sounds like a pretty substantial piece of work when you think through the number of applications and solutions that you might find across a multi-academy trust. We've certainly had conversations with digital leads where you're looking at hundreds of solutions that may not have been acquired or deployed with AI modules in them, but have since evolved, particularly in a Trojan-horse kind of way, over the last 18 months or so. But there also needs to be a level of ownership, right, with tech companies that want to play in this space and that want to work with schools, in terms of the amount of information they can provide so that a school can complete a DPIA effectively. Where does that burden sit? And should DPOs have a mandate to be able to ask questions? And what if they don't get the answers that they're looking for?

Claire 20:13

Oh gosh, there's so much here. And again, you know, I'm hoping that people from the edtech space will be listening and watching this, and I implore you: if you want better sales and if you want an easier purchase process, get all of this stuff out proactively. Don't wait for these questions to be asked of you. Dedicate an area of your website to a trust centre. Make all of this stuff really transparent and really workable. It's been interesting watching Google, particularly, come to a better understanding of this. And I think, you know, 18 months ago, what Google offered in terms of their transparency around some of their AI models was not as good as it is now. It's better now. So if you're an edtech vendor, look at proactively providing that information. Make sure all your sales team really understand these questions so that they can answer them proactively and quickly. It's really frustrating as well for schools. Schools will do all of this work, and they maybe don't think to ask their DPO until they've decided on the thing that they want to do. And that's a real shame and a real missed opportunity, because actually, you could get your DPO involved as part of your procurement process: you know, we're looking for X solution, we're looking at A, B and C vendors; DPO, come and give us your opinion. You know, C is a bit more expensive, and A looks great because it's free, and then the DPO will say, actually, you know, it's free because of X, Y and Z; I'm really concerned about student data protection or security. And then that organisation can make a really good choice: okay, well, option C might be more expensive, but we have much better guarantees around the security of our students' data. So yeah, vendors, I would really encourage them to make sure they're being much more proactive and getting that information out there. Don't be surprised if you get these questions asked.
Be proactive and be prepared to do it, because ultimately there's nothing worse. As for the DPO, then, what should they ultimately do? Well, the DPO can't stop a project going through, but a really good DPO should be advising those that own the risk within an organisation, within a school. If it's a high risk or, you know, concerning, ultimately it's a board of trustees issue. So the DPO should be, according to the law, reporting to the highest level of management within their organisation. So ultimately, if the DPO has got outstanding concerns about a project, they should be going to the board of trustees and saying, you know, the trust or school wants to do this particular thing; I have concerns that it's risky and the risk hasn't been mitigated. Of course, if it's high risk, actually the DPO would be advising the organisation that they have a statutory obligation to inform the information regulator, the ICO in England and Wales. So the DPO can't stop it, but I think the DPO has a really important role to advise. And ultimately, I've seen lots of projects where teachers or senior leaders in a school are like, oh yeah, yeah, I really want to do the thing, and then the DPO has to advise the governors or the trustees: you know, they're really excited about the project, but I have to warn you that I've got these concerns about it. And then the board of trustees have pulled the project. You know, that's just not the way you want to do things. We'd much rather everybody was very collaborative from the beginning, so that the trustees then aren't in conflict with the other functions of the school.

Daniel 23:52

So when we're looking at governance, then, and processes: the school has their appointed DPO, they're interested in subscribing to or acquiring an AI solution for their school or for their organisation, and they run their sort of checks and balances; they have everything in place from a paperwork perspective or policy perspective. When it comes to implementing that, then, and making sure that this is standard operating procedure for the educators and for the senior leaders that this will have an impact on: are you able to give us some examples of working with schools and how they're able to bring that to life, and how they're able to embed that within their culture?

Claire 24:45

So I mentioned the framework that we put in place, and one of the big parts of the framework is the theme of control and really making sure that you've got ongoing risk assessments. So for me, I think what's currently really being overlooked is the idea or the concept of incident reporting, AI incident reporting, and making sure that you are monitoring: oh, you know, there's a biased output, there's a harmful output, and not just seeing that through a safeguarding lens, but actually seeing it through a specific AI reporting function. I'd really like to see the same standards there. You know, we have data breach reporting procedures well established within schools now; I'd be really worried if schools didn't at this stage, actually. So, as I say, we have those procedures. So making sure that AI incidents, even, you know, those minor near misses, are all being picked up and being tracked, I think that's really important in terms of the data that you get then. And I think it's because, you know, again, to draw on the safeguarding analogy, we know that we're going to have safeguarding incidents in a school regardless of the safeguarding structures that we have. And so schools are really good at making sure those incidents are reported and tracked. Again, as a trustee or a governor, you get metrics on that: this is how many safeguarding incidents there were, how we dealt with them, these are the live cases, etc. So actually having that kind of reporting function on AI as well, I think, is really important. You don't just set it up, let it go and then forget about it. It's a continuing process. And again, that constant reporting helps you to adjust course as you go on as well. So you think: we're having lots of incidents in relation to this particular tool, or this particular use case. And I must say, you know, it's not about risk-assessing tools, it's about risk-assessing the use. You know, so that tracking.
And again, I think schools that have that information are in a really good place then in terms of regulator oversight, whether that would be Ofsted or whether that would be the Information Commissioner. Actually having some data around those little accidents, the accident book for AI, if you like, provides really good evidence that you are watching constantly, and that you are looking for things to go wrong and adjusting course accordingly.

Daniel 27:17

In order for that to happen, though, Claire, wouldn't the organisation need to have sort of destigmatised AI use, particularly through the lens of academic integrity, but also, you know, focused on this culture of transparency around what responsible use looks like? Because without that, it's almost impossible to regulate and to know what's going on, isn't it?

Claire 27:42

I love that. And Daniel, we've been having some conversations, haven't we, where people might want to keep their AI use very private and don't necessarily want leadership to know that they're heavily reliant on AI. And again, that becomes a real leadership piece then, doesn't it? So leaders actively using AI in appropriate ways, sharing those lessons within their organisation. Just this morning, in fact: we have an AI tool within work, and I've been using it so that, rather than sit and write an email from scratch, there's a dictation function, so I can just kind of garble away into my AI tool and it will help me to turn my garblings into a well-structured email. So I'm not just sat there with a blank email, like, okay, dear so-and-so. So actually really sharing that. And anyway, I shared it, and, you know, it could have been embarrassing: here's a dictation of all my random garblings. But I didn't mind; I screen-shared it with the people I work with, as if to say, I'm finding this great new use case, and this is how it's really revolutionising the way I'm working. And so actually, leaders, anecdote aside, have a real responsibility there to show their own journey and to make it a point of pride, not shame, when they're using AI to make their work more effective. So yeah, as I say, I said it before, didn't I: that we're-all-learning-together kind of mindset on this is really important. There's nobody out there who's done it all. And I feel sad when people go, I've cracked AI, I'm so advanced, look at me, aren't I wonderful? Actually, no, be humble about it. We're all going to get things wrong. Things are going to not quite work out the way we want them to.
We might occasionally get so enamoured with our AI that we forget to be the human in the loop. You know, that confirmation bias, automation bias I should say, we can all be guilty of that, letting the AI, you know, do our thinking for us and taking away our human endeavour. But yeah, as I say, I think sharing, and being really honest, and then being honest about what goes wrong, takes away any stigma. Again, we did a lot of work when I was a data protection officer around taking away the stigma if you were responsible for a data breach. And you know, the first thing I would always do if somebody had to report a data breach was reassure them that it was okay, and really thank them for bringing it to my attention. Brilliant that you've observed this, thank you so much. You know, you would never say, I can't believe you did this, blah, blah, blah. You've got to make it a safe space for people to share their learnings, the good stuff as well as the bad stuff.

Daniel 30:29

And what about, then, Claire, when things go wrong, or if things were to go wrong? And I'm thinking of a scenario where, let's say, we forget the regulation and we forget the laws around this, and teachers are free to upload personal information from students into a free version of an AI tool. What's the worst that could happen in that situation?

Claire 31:01

Well, obviously the problem is if you were to upload somebody's private information. Again, we talk about personal data a lot as well, don't we? We talk about students and individuals. Let's not forget that there's a real risk to an organisation in terms of their confidential, commercially sensitive data. You know, the stuff that, if there was a Freedom of Information Act request for it, the organisation really wouldn't want to be going into the public domain; they'd be looking for a reason, an exemption, to use the terminology, to withhold that. So it's about this: if you ultimately upload information and you lose control of it, there's a real risk not only to individuals but to your organisation. So I'll give you an example. Say you're a multi-academy trust, you're considering restructuring the organisation, actually really losing some headcount, and you use ChatGPT or whatever to help you to consider your restructure. Is there a real risk that your employees and your stakeholders, your parents, your community, could find out about your restructure plans by virtue of a leak from the AI tool? That's the risk. I'm sure it's probably already an issue. I should imagine there's a substantial amount of people's personal data already in these tools, and we've seen some alarming stories, haven't we, where individuals have been able to jailbreak the tool and it then discloses some of the inputs from other users or some of its training data.

Daniel 32:51

So, yeah, for a school that's looking at this from a fresh perspective: they can see the possible benefits of AI, they've thought about this from a pedagogy-first approach, and they're looking to draw up a policy or some guidelines and think about which tools they might deploy. What are the best first steps for a school or a trust that's at the very beginning of this journey, from a governance perspective?

Claire 33:22

I love this question. So first of all, don't try to do all of the AI all at once. That's my first thing: you don't have to conquer AI this year. You can just do one tiny thing and do it really well. The first thing you've got to do is think fundamentally about the why, the purpose. Schools and trusts are great at really understanding their fundamental values. Everybody has a value statement and a core mission. So really, the very first thing you need to do is think back to your core mission, your value statement, your school improvement or trust improvement plan: what are we trying to achieve here? Then align some AI principles, how you want to use AI, how you see it as part of your organisation, to those core values and missions that already exist. Make sure it's consistent with the mission and the school improvement journey. And again, you see this from the OECD, and you've seen the government then adopting the OECD principles: principles around fairness, around environmental sustainability, et cetera. So have those kinds of core messages, benchmark everything you're doing against them, and keep going back to that why question. Then the second thing, after you've worked out who you are, what you want to do and what your purpose is, is to start really communicating with your stakeholders, and that's communicating with your staff. I used to talk about building a boat and making sure that you build a boat big enough for everybody, so the boat doesn't set sail without anyone. But in governance terms, we talk about building a cathedral and making sure that cathedral is big enough and encompassing enough for everybody to get into. So make sure that you're bringing all of your staff with you.
You know, don't just be led by those who are really excited by AI and accidentally leave all of your nervous staff out of the communication, or the cathedral, or the boat, whatever analogy you want. And at this point it's really, really important to talk to your parents. I'm already seeing this: don't do AI to your parents. Make sure you are doing it with your parents. Make sure that you are investing in them. Don't let the first time a parent knows you are using AI be when you send them a really obviously AI-crafted letter that suddenly doesn't look or sound at all like the head teacher, that suddenly looks like ChatGPT has written it. So bring your parents along with you. Lots of literacy, lots of explanation, lots and lots of reassurance. And the same with your students. I've seen some schools really not do their students a favour and tell their students all about the risks and how scary and unsafe it is without telling them the amazing benefits. You've got to bring your students along with you as well. You've got to talk to them about things like academic integrity too, making sure they're not getting the tool to do the project they're going to be assessed on; you could lose your accreditation as an assessment centre if you do that. So comms, comms, comms, talking. I would say have a dedicated space on your website as well that you keep up to date with all of the news; it's an ongoing process. And then the third phase of those initial steps is around setting up those governance structures, again leaning on what you already have in place and expanding that.
So make sure that you've got the right terms of reference for the particular governance committee that's going to oversee this. Make sure they understand that they're responsible for overseeing AI risk, and then have things like a steering group. I've seen loads of schools do this now, and I think it's brilliant actually: having a committee of people who are going to be working on this. It's not just a data protection officer and an IT director function; there's a pedagogical voice, perhaps some student representation on that as well, maybe some trustee representation, a real working group. Put the most nervous person about AI in your organisation on that steering group, because they will have some really good critical questions to ask before you all get carried away with yourselves. So that's the first stage, to put those things in place. Who are you, what do you want to do? Are you talking to people and telling them what's going on? Are you setting off on a journey together? And then make sure that you've built those structures in place. Look at your existing structures: data protection, cybersecurity. AI is a natural bedfellow within those risks. So how are you overseeing data protection and cybersecurity risk? Do the same with AI. And actually, if you go back and think, oh gosh, we've not really got the right kind of oversight for cybersecurity and data protection, now's the time to do that as well. Go back and fix those bits too, and bring them through as a kind of triad of risk.

Daniel 38:44

Claire, some fantastic practical examples of how schools might get started in this space. As always, it's fascinating listening to what you have to say and learning from you. We're very, very grateful for your time and for your energy. Please do keep us posted at Good Future Foundation with what you're up to next. We'd love to stay connected, and thank you ever so much once again for being here today.

Claire 39:11

Thank you ever so much. Thanks for having me.

About this Episode

Claire Archibald: Creating Effective AI Governance Structures in Schools

Is having an AI policy enough to protect your school? In this episode, Daniel Emmerson speaks with Claire Archibald, Legal Director at Brown Jacobson and former Data Protection Officer, about what effective AI governance in schools looks like. Their conversation covers essential topics including what makes a good Data Protection Impact Assessment (DPIA), the importance of vendor due diligence, and why schools need robust governance structures beyond just having an AI policy. Claire emphasises the critical role of incident reporting, creating transparent cultures around AI use, and the need for collaborative approaches involving all stakeholders. She also shares a six-step governance framework and practical advice for schools starting their AI journey.

Claire Archibald

Legal Director at Brown Jacobson

Related Episodes

January 14, 2026

Setting Visible Boundaries to Safeguard our Students in an AI-infused World

Daniel's conversation with Gemma Gwilliam, Portsmouth's Head of Digital Learning, Education and Innovation, explores transparency, privacy and safeguarding in AI education. The discussion takes a dramatic turn when, right in the middle of the recording, Gemma puts on a pair of AI-enabled glasses she purchased easily for under £10, bringing theoretical concerns into stark reality. This jaw-dropping demonstration underscores the urgent challenges teachers face as sophisticated AI wearables become increasingly accessible to students. While we may debate whether AI belongs in classrooms, we cannot ignore the significant risks these technologies present to young people. This episode reveals how Portsmouth supports its schools and teachers in approaching AI responsibly to strike a balance between innovation and essential safeguarding measures.
December 9, 2025

Hult Prize Accelerator Startups: How the Next Generation is Solving Global Problems with AI

What skills will our students genuinely need to thrive in a future driven by AI? To find the answer, Daniel Emmerson goes straight to the source and sits down with brilliant young minds behind seven teams from the Hult Prize Global Accelerator, one of the final stages of the world’s largest student startup competition.
November 11, 2025

Muireann Hendriksen: Adapting AI Tools Based on Learning Science

In this episode, Daniel speaks with Muireann Hendriksen, the Principal Research Scientist at Pearson, about her team's recent research study called "Asking to Learn". The study analysed 128,000 AI queries from 9,000 student users to gain deeper insights into how students learn when they interact with AI study tools. Their key finding revealed that approximately one-third of student queries demonstrated higher-order thinking skills. Their conversation also explores important themes around trust, student engagement, accessibility, and inclusivity, as well as how AI tools can promote active learning behaviours.
October 13, 2025

Leena, Alicia and Swati: Embracing AI in GEMS Winchester School Dubai

Leena, Alicia and Swati from GEMS Winchester School Dubai, share their remarkable journey to achieving AI Quality Mark gold status. Over 12 months, they developed a school-wide AI strategy by establishing an AI core team, working party, and champions across both primary and secondary divisions. Their systematic approach also included AI tool evaluation through detailed risk assessments, and the creation of a bespoke AI literacy programme for their teachers. Their conversation reveals how they engage all stakeholders, including teachers, students, and parents, to cope with the challenges of this rapidly evolving technology and prepare students for an AI-infused world.
September 29, 2025

Matthew Pullen: Purposeful Technology and AI Deployment in Education

This episode features Matthew Pullen from Jamf, who talks about what thoughtful integration of technology and AI looks like in educational settings. Drawing from his experience working in the education division of a company that serves more than 40,000 schools globally, Mat has seen numerous use cases. He distinguishes between the purposeful application of technology to dismantle learning barriers and the less effective approach of adopting technology for its own sake. He also asserts that finding the correct balance between IT needs and pedagogical objectives is crucial for successful implementation.
September 15, 2025

Matt King: Creating a Culture of AI Literacy Through Conversation at Brentwood School

Many schools begin their AI journey by formulating AI policies. However, Matt King, Director of Innovative Learning at Brentwood School, reveals their preference for establishing guiding principles over rigid policies considering AI’s rapidly evolving nature.
September 1, 2025

Alex More: Preserving Humanity in an AI-Enhanced Education

Alex was genuinely fascinated when reviewing transcripts from his research interviews and noticed that students consistently referred to AI as "they," while adults, including teachers, used "it." This small but meaningful linguistic difference revealed a fundamental variation in how different generations perceive artificial intelligence. As a teacher, senior leader, and STEM Learning consultant, Alex developed his passion for educational technology through creating the award-winning "Future Classroom", a space designed to make students owners rather than consumers of knowledge. In this episode, he shares insights from his research on student voice, explores the race toward Artificial General Intelligence (AGI), and unpacks the concept of AI "glazing". While he touches on various topics around AI during his conversation with Daniel, the key theme that shines through is the importance of approaching AI thoughtfully and deliberately balancing technological progress with human connection.
June 16, 2025

David Leonard, Steve Lancaster: Approaching AI with cautious optimism at Watergrove Trust

This podcast episode was recorded during the Watergrove Trust AI professional development workshop, delivered by Good Future Foundation and Educate Ventures. Dave Leonard, the Strategic IT Director, and Steve Lancaster, a member of their AI Steering Group, shared how they led the Trust's exploration and discussion of AI with a thoughtful, cautious optimism. With strong support from leadership and voluntary participation from staff across the Trust forming the AI working group, they've been able to foster a trust-wide commitment to responsible AI use and harness AI to support their priority of staff wellbeing.
June 2, 2025

Thomas Sparrow: Navigating AI and the disinformation landscape

This episode features Thomas Sparrow, a correspondent and fact checker, who helps us differentiate misinformation and disinformation, and understand the evolving landscape of information dissemination, particularly through social media and the challenges posed by generative AI. He is also very passionate about equipping teachers and students with practical fact checking techniques and encourages educators to incorporate discussions about disinformation into their curricula.
May 19, 2025

Bukky Yusuf: Responsible technology integration in educational settings

With her extensive teaching experience in both mainstream and special schools, Bukky Yusuf shares how purposeful and strategic use of technology can unlock learning opportunities for students. She also equally emphasises the ethical dimensions of AI adoption, raising important concerns about data representation, societal inequalities, and the risks of widening digital divides and unequal access.
May 6, 2025

Dr Lulu Shi: A Sociological Lens on Educational Technology

In this enlightening episode, Dr Lulu Shi from the University of Oxford, shares technology’s role in education and society through a sociological lens. She examines how edtech companies shape learning environments and policy, while challenging the notion that technological progress is predetermined. Instead, Dr. Shi argues that our collective choices and actions actively shape technology's future and emphasises the importance of democratic participation in technological development.
April 26, 2025

George Barlow and Ricky Bridge: AI Implementation at Belgrave St Bartholomew’s Academy

In this podcast episode, Daniel, George, and Ricky discuss the integration of AI and technology in education, particularly at Belgrave St Bartholomew's Academy. They explore the local context of the school, the impact of technology on teaching and learning, and how AI is being utilised to enhance student engagement and learning outcomes. The conversation also touches on the importance of community involvement, parent engagement, and the challenges and opportunities presented by AI in the classroom. They emphasise the need for effective professional development for staff and the importance of understanding the purpose behind using technology in education.
April 2, 2025

Becci Peters and Ben Davies: AI Teaching Support from Computing at School

In this episode, Becci Peters and Ben Davies discuss their work with Computing at School (CAS), an initiative backed by BCS, The Chartered Institute for IT, which boasts 27,000 dedicated members who support computing teachers. Through their efforts with CAS, they've noticed that many teachers still feel uncomfortable about AI technology, and many schools are grappling with uncertainty around AI policies and how to implement them. There's also a noticeable digital divide based on differing school budgets for AI tools. Keeping these challenges in mind, their efforts don’t just focus on technical skills; they aim to help more teachers grasp AI principles and understand important ethical considerations like data bias and the limitations of training models. They also work to equip educators with a critical mindset, enabling them to make informed decisions about AI usage.
March 17, 2025

Student Council: Students Perspectives on AI and the Future of Learning

In this episode, four members of our Student Council, Conrado, Kerem, Felicitas and Victoria, who are between 17 and 20 years old, share their personal experiences and observations about using generative AI, both for themselves and their peers. They also talk about why it’s so crucial for teachers to confront and familiarize themselves with this new technology.
March 3, 2025

Suzy Madigan: AI and Civil Society in the Global South

AI’s impact spans globally across sectors, yet attention and voices aren’t equally distributed across impacted communities. This week, Foundational Impact presents a humanitarian perspective as Daniel Emmerson speaks with Suzy Madigan, Responsible AI Lead at CARE International, to shine a light on those often left out of the AI narrative. The heart of their discussion centers on “AI and the Global South, Exploring the Role of Civil Society in AI Decision-Making”, a recent report that Suzy co-authored with Accenture, a multinational tech company. They discuss how critical challenges, including digital infrastructure gaps, data representation, and ethical frameworks, perpetuate existing inequalities. Increasing civil society participation in AI governance has become more important than ever to ensure inclusive and ethical AI development.
February 17, 2025

Liz Robinson: Leading Through the AI Unknown for Students

In this episode, Liz opens up about her path and reflects on her own "conscious incompetence" with AI - that pivotal moment when she understood that if she, as a leader of a forward-thinking trust, feels overwhelmed by AI's implications, many other school leaders must feel the same. Rather than shying away from this challenge, she chose to lean in, launching an exciting new initiative to help school leaders navigate the AI landscape.
February 3, 2025

Lori van Dam: Nurturing Students into Social Entrepreneurs

In this episode, Hult Prize CEO Lori van Dam pulls back the curtain on the global competition empowering student innovators into social entrepreneurs across 100+ countries. She believes in sustainable models that combine social good with financial viability. Lori also explores how AI is becoming a powerful ally in this space, while stressing that human creativity and cross-cultural collaboration remain at the heart of meaningful innovation.
January 20, 2025

Laura Knight: A Teacher’s Journey into AI Education

From decoding languages to decoding the future of education: Laura Knight takes us on her fascinating journey from a linguist to a computer science teacher, then Director of Digital Learning, and now a consultant specialising in digital strategy in education. With two decades of classroom wisdom under her belt, Laura has witnessed firsthand how AI is reshaping education and she’s here to help make sense of it all.
January 6, 2025

Richard Culatta: Understand AI's Capabilities and Limitations

Richard Culatta, former Government advisor, speaks about flying planes as an analogy to explain the perils of taking a haphazard approach to AI in education. Using aviation as an illustration, he highlights the most critical tech skills that teachers need today. The CEO of ISTE and ASCD draws a clear parallel: just as planes don't fly by magic, educators must deeply understand AI's capabilities and limitations.
December 16, 2024

Prof Anselmo Reyes: AI in Legal Education and Justice

Professor Anselmo Reyes, an international arbitrator and legal expert, discusses the potential of AI in making legal services more accessible to underserved communities. He notes that while AI works well for standardised legal matters, it faces limitations in areas requiring emotional intelligence or complex human judgment. Prof Reyes advocates for teaching law students to use AI critically as an assistive tool, emphasising that human oversight remains essential in legal decision making.
December 2, 2024

Esen Tümer: AI’s Role from Classrooms to Operating Rooms

Healthcare and technology leader Esen Tümer discusses how AI and emerging trends in technology are transforming medical settings and doctor-patient interactions. She encourages teachers not to shy away from technology, but rather understand how it’s reshaping society and prepare their students for this tech-enabled future.
November 19, 2024

Julie Carson: AI Integration Journey of Woodland Academy Trust

A forward-thinking educational trust shows what's possible when AI meets strategic implementation. From personalised learning platforms to innovative administrative solutions, Julie Carson, Director of Education at Woodland Academy Trust, reveals how they're enhancing teaching and learning across five primary schools through technology and AI to serve both classroom and operational needs.
November 4, 2024

Joseph Lin: AI Use Cases in Hong Kong Classrooms

In this conversation, Joseph Lin, an education technology consultant, discusses how some Hong Kong schools are exploring artificial intelligence and their implementation challenges. He emphasises the importance of data ownership, responsible use of AI, and the need for schools to adapt slowly to these technologies. Joseph also shares some successful AI implementation cases and how some of the AI tools may enhance creative learning experiences.
October 21, 2024

Sarah Brook: Rethinking Charitable Approaches to Tech and Sustainability

In our latest episode, we speak with Sarah Brook, Founder and CEO of the Sparkle Foundation, currently supporting 20,000 lives in Malawi. Sarah shares how education is evolving in Malawi and the role AI plays for young people and international NGOs. She also provides a candid look at the challenges facing the charity sector, drawing from her daily work at Sparkle.
October 7, 2024

Rohan Light: Assurance and Oversight in the Age of AI

Join Rohan Light, Principal Analyst of Data Governance at Health New Zealand, as he discusses the critical need for accountability, transparency, and clear explanations of system behaviour. Discover the government's role in regulation, and the crucial importance of strong data privacy practices.
September 23, 2024

Yom Fox: Leading Schools in an AI-infused World

With the rapid pace of technological change, Yom Fox, the high school principal at Georgetown Day School shares her insights on the importance of creating collaborative spaces where students and faculty learn together and teaching digital citizenship.
September 5, 2024

Debra Wilson: NAIS Perspectives on AI Professional Development

Join Debra Wilson, President of National Association of Independent Schools (NAIS) as she shares her insights on taking an incremental approach to exploring AI. Discover how to find the best solutions for your school, ensure responsible adoption at every stage, and learn about the ways AI can help tackle teacher burnout.
April 18, 2024

Steven Chan and Minh Tran: Preparing Students for AI and New Technologies

Steven and Minh discuss the importance of preparing students for AI and new technologies, the role of the Good Future Foundation in bridging the gap between technology and education, and the potential impact of AI on the future of work.


Transcript


Daniel 02:07

Can we unpack that DPO role for a second, just for folks who might not know what that is? What is a DPO, and what does that mean in a school context?

Claire 02:18

DPO stands for Data Protection Officer. The GDPR back in 2018 established a requirement for certain organisations to have a data protection officer as a statutory role, and that would include schools; they're public authorities. The personal data that they process is high risk, very sensitive, very personal data, second only to health, really, in terms of the sector that has the most confidential and sensitive data in society. So schools had to have a statutory data protection officer. And I went into the local authority to be part of a traded service providing that statutory DPO service to schools. And it grew and grew. It was just before the pandemic, and the pandemic fundamentally changed the nature of the way personal data is processed in schools. So what we thought initially was going to be a small project, just to help schools get onto their own feet in terms of managing that, actually became something really big. It grew, and obviously we were there with schools throughout the pandemic, supporting them with all of their digital challenges and cybersecurity challenges. And the more work schools did to improve their data protection compliance programme, the more they realised they needed to do. So it was a real privilege to work with schools during what was a really, really difficult time for them. I won't lie to you: with some schools it was like skipping through a meadow hand in hand towards data protection compliance, and with other schools it was a bit like pushing a broken-down car up a hill in mud. But it didn't matter, we got there anyway. And I tried to make sure that it was a really positive experience. Whether you loved it or loathed it, I tried to make it a joyful and happy experience, so it became happy work for hard-working staff in schools, not just something that had to be tolerated or endured. I think we achieved it.
People do say they have a good time working with me.

Daniel 04:28

I'm sure that's very much the case, Claire. I'm really interested to know, though, particularly for the international audience, who might not have exactly the same setups in their schools, particularly around data: how did this give you the grounding that you needed in order to be able to pivot, if that's the right word, towards AI?

Claire 04:55

Yeah, yeah. So, I mean, obviously in 2022 people started talking about ChatGPT. As a data protection officer, I was helping a lot of schools with their data protection impact assessments, really thinking through the data protection considerations for the projects that they wanted to do. And I could see that ChatGPT was going to become a big thing in edtech; using AI tools was going to become a big thing. So primarily it was about the data protection challenge. And I thought, gosh, as DPO, I need to be ahead of this, because schools are going to want a bit of this; it's going to fundamentally change how they work. So I need to understand what's going on here. And so I did a lot of work in terms of increasing literacy and awareness of AI, not just within schools, but actually within the local authority that I was working in as well. The first, most obvious thought for me was, well, we need to have some kind of policy in place. I was supported in that by the local authority and by the traded service, which was fantastic, and I helped to produce, and there were a couple of people doing this at the same time, a template that schools could use at the outset of their AI governance journey, so that they had something, not just a blank piece of paper. So they had a policy, and we were able to release it under a Creative Commons licence and make it available for anybody who wanted it. I felt that was a really great contribution to the sector. Then I moved to Brown Jacobson, and obviously during this time use of AI was increasing exponentially and the challenges were increasing. And I said to the leaders there, I think we need to be doing something around AI and giving schools some really proactive information on it. This policy, I think we could build on that.
And to be fair, the leaders within Brown Jacobson, fantastic intellectual minds, gave me a really hard time over it. They know schools really well; they know that a policy does not make a really good governance structure. They had great in-depth experience of safeguarding and establishing safeguarding structures within schools. We know that in safeguarding you don't just have a child protection policy, appoint a designated safeguarding lead, and call your safeguarding done. So they challenged me: come on, we need to do something much more meaningful in terms of AI governance. You can't just have a DPO and an AI policy; it needs something so much more. And of course the firm doesn't just do education law; they do all sorts of law, advising corporates. So I went and spoke to one of the corporate partners and said, what are you doing for your corporate clients? They said, oh, we've built this six-step governance framework. I thought, this is fantastic, the six steps. And then I went away and did the IAPP AIGP course, so I got the qualification and really learned much more about how AI works, because fundamentally, if you're going to govern it, you do need to know some of the terminology. One of the things I do notice in the sector, actually, is that there are lots of people who have big opinions about AI, and I read them and I'm slightly cringing, because I really wish they understood how the AI works a little bit, especially when they're talking about inputs and training AI models like it's some kind of active, ongoing process, as if ChatGPT is continually learning from every input that you make. In fact, I wish it would, because I have to tell my AI tool every single time I use it that I prefer to use sentence case and not capitalise every letter in a heading. Anyway, I digress.
So I went away, really deeply understood it, and then worked within the education team, and we really built that six-step governance framework out for schools as well. And it's been a real privilege over the last seven to eight months or so to accompany some of those schools and academy trusts on that governance journey as they implement those six steps. And they come back to me and say, you know, this bit's going well. Or, actually, we could do with a different kind of AI literacy framework for our support staff versus our teaching staff, how do we create that? And so everything's been around accompanying schools; we're all learning together at the end of the day on this, aren't we? And so, having all of those concepts and saying, guys, I think you should do it like this, and then actually accompanying them through the journey, seeing them put together things like the web pages for stakeholders, the terms of reference for their steering groups, seeing those first DPIAs coming through, seeing those concepts, this kind of big plan I had, seeing them implement it, has been a real privilege.

Daniel 09:41

A couple of things, Claire, if I might just go back for a moment. You talked about working with the local authority. Just to have a clearer understanding, what did that work look like on a day-to-day basis versus what you're doing now in the law firm? Just to help the audience think through, okay, this is what that meant for Claire at that time.

Claire 10:04

So, not hugely different, because from a data protection point of view and from a kind of governance point of view, schools were always able to make their own decisions about what they wanted to do. They were their own data controllers, so they always had that autonomy. I suppose, from the perspective I'm at now, I'm able to build more innovative services and products because I'm not within that local authority structure, but it's not too different. I mean, looking back to the local authority days, it seems like a long time ago now, but we all had a job to do in terms of understanding what AI actually was. And so the first time I presented internally to the local authority, I set up three mock accounts on the three big tools and put fake council data into the tools and said, look, I could be doing this as a staff member, I could be doing that, I could be putting this in. This is the warning that pops up when I use Gemini. This is the warning that pops up when I use Claude. What are we doing to govern this? What are we doing to monitor how your staff are using AI tools just through their web browsers? So it was a great moment. I remember doing a particular slide trying to encourage everybody to think, you know, don't be an ostrich with your head in the sand. We cannot turn a blind eye to this. We need to front it out. And actually, what we really need is that concept of a bowling lane with bumpers. We've got to encourage our staff to use these fantastic tools, but we've got to be there as the bumpers to stop the ball going into the gutters. That was the terminology. So, yeah, really similar. And again, things have really changed. I've obviously been advising and delivering AI as part of data protection courses for data protection officers in schools for about a year now.
And I would say the training courses that I was delivering a year ago are really different to what I'm delivering this year. There's a much greater awareness in schools of where AI functionality is, how it's coming into the organisation. And now, for me, the next challenge is, well, what are you going to do beyond a policy? How are you going to make this part of normal school business?

Daniel 12:32

Well, there are some interesting mechanisms in place to help schools with that. And you mentioned DPIAs, so we should perhaps give a bit of an overview, Claire, if that's all right, as to what a good DPIA looks like. But then, moving on from that, as you've alluded to already, just having the DPIA in place doesn't necessarily mean that staff are adhering to best practice. We've found in a lot of the conversations we've had with schools, and indeed in research, that AI use is often far from transparent in terms of how it's being used by different stakeholders across the organisation. So let's maybe tackle that one first. What is a DPIA, and what does a good DPIA look like?

Claire 13:21

So a DPIA, you know, is a data risk assessment, very simple. And schools are really used to writing risk assessments. They do it all the time: every time they want to take their children off the premises, or they want to do a PE lesson, or they're climbing up and down the gym equipment, they do risk assessments, health and safety risk assessments and so on. So I always try to demystify it and say it's primarily a risk assessment, just with a data focus. It's really important to focus the subject of that risk assessment on the data subjects, the risk to those individuals. I've seen lots of DPIAs, and I could tell you so many mistakes I've seen with them. I think one of the big mistakes is a DPIA that really focuses on the risk to the organisation: you know, we might get a fine if we are responsible for a data breach. That is the wrong way to go about a DPIA. A DPIA is about what the impact on the data subject is and how you are going to mitigate against that, not how the organisation is going to mitigate against getting a reprimand or a fine from the regulator. So I think a really good risk assessment does lots of things. For a lot of education settings, one of the things that they really could improve on is having really good project plans. And so the DPIA, for me, becomes in some ways a bit of a hook for an entire project plan. And I think it's really important at the outset of your DPIA to articulate what your intended aim is, what success looks like, what good looks like. How do we know if the thing that we want to do has been achieved, that the success criteria have been met, that makes the additional risk worth it? Because one of the big things I've seen is: oh, well, we need to do a DPIA on this thing. Well, why do you want to do the thing? We don't know, we just want it; the school down the road does it. We saw it at the BETT roadshow, everybody's got it, it's the hottest new thing.
And so actually really articulating what success looks like is the big first part for me. Then there are lots of different things in really going through a good analysis of the risks, the risk assessment element of it. Again, I see the sector getting more sophisticated in terms of the write-up. In my mind's eye, I'm looking specifically at the ICO template here, and what you have then is the actual risk assessment, a table that you fill out. And very often what I see is organisations, particularly schools, kind of running aground when they get to that risk assessment bit. They're like, oh, what's the risk? Oh, I don't know, there could be a cyber attack, so we'll have really strong passwords. And I'm really disappointed then by that risk assessment. So, for me, a really good risk assessment for data is going to look at all of the data protection principles. Is it fair, lawful and transparent? Are you minimising the data? How are you going to store it for the minimum amount of time possible? How are you securing it? All of those principles. And then you look at each of the risks against each of the principles, so that you really flush them out. So rather than looking at a blank table, you're going: well, these are all the risks, and how might they surface? And then for AI, I think you could really adapt that DPIA template to further embed some AI risks. So, if you're looking at lawfulness, fairness and transparency, perhaps in that category you could look at intellectual property risk, you know, and call your DPIA a DPIA and an AI risk assessment, to make it bigger and have a look at all of those risks as well. So again, you can build out and really expand that. The next thing, I think, is really often overlooked as part of the DPIA process.
It's around the vendor due diligence, and I think an annex or an appendix to the DPIA is a really good place to show the audit trail of that: really having a detailed conversation, really looking at the vendor's terms in detail, making sure that you read them and understand any ambiguous terms or terms that are really in the vendor's favour, and then making sure that you've got some evidence that you've gone and done some due diligence on that vendor as well. In terms of, you know, how do they store data, what are their security protocols, are they looking down their chain as well, in terms of their own suppliers? Really examining that. And then the security piece, the data protection security piece: again, you can expand that to do due diligence on maybe an AI model. So really looking at how the app is built, how maybe system prompts are working, what safeguards are in the system, what protection from jailbreaking is in the system. In England or Wales, then, there are the DfE standards for AI, and if it's a student-facing AI, asking those particular questions around monitoring and filtering and how that's built into the system. So again, the DPIA can be a vehicle for a lot of evidence: what does success look like, what are absolutely all of the risks, not just from a data point of view, but, as you say, you can build it out. There's no rule that says DPIAs need to just do one thing and you need to go and do AI risk assessments somewhere else. It can all be part of one big project plan, and then really that due diligence, both in terms of the vendor and what they're doing and, if it's an AI, in terms of how the AI model, tool, app, interface, etc. is all built as well. There's loads you can do with the DPIA. I love them, I could do them all day.

Daniel 19:03

I mean, some of what you were talking through there, Claire, sounds like a pretty substantial piece of work when you think through the number of applications and solutions that you might find across a multi academy trust. We've certainly had conversations with digital leads where you're looking at hundreds of solutions that may not have been acquired or deployed with AI modules in them, but have since evolved, Trojan horse-style, over the last 18 months or so. But there also needs to be a level of ownership, right, with tech companies that want to play in this space and that want to work with schools, in terms of the amount of information they can provide so that a school can complete a DPIA effectively. Where does that burden sit? Should DPOs have a mandate to be able to ask questions, and what if they don't get the answers that they're looking for?

Claire 20:13

Oh gosh, there's so much here. And again, I'm hoping that people from the edtech space will be listening and watching this, and I implore you: if you want better sales and if you want an easier purchase process, get all of this stuff out proactively. Don't wait for these questions to be asked of you. Dedicate an area of your website to a trust centre. Make all of this stuff really transparent. It's been interesting watching Google particularly come to a better understanding of this. I think 18 months ago what Google offered in terms of transparency around some of their AI models was not as good as it is now; it's better now. So if you're an edtech vendor, look at proactively providing that information. Make sure all your sales team really understand these questions so that they can answer them proactively and quickly. It's really frustrating for schools as well. Schools will do all of this work, and they maybe don't think to ask their DPO until they've decided on the thing that they want to do. And that's a real shame and a real missed opportunity, because actually, if you get your DPO involved as part of your procurement process (we're looking for X solution, we're looking at A, B and C vendors; DPO, come and give us your opinion; C is a bit more expensive, and A looks great because it's free), then the DPO will say, actually, it's free because of X, Y and Z, and I'm really concerned about student data protection or security. And then that organisation can make a really good choice: okay, option C might be more expensive, but we have much better guarantees around the security of our students' data. So yeah, vendors, I would really encourage them to make sure they're being much more proactive and getting that information out there. Don't be surprised if you get asked these questions.
Be proactive and be prepared for it, because ultimately there's nothing worse. As for the DPO, then, what should they ultimately do? Well, the DPO can't stop a project going through, but a really good DPO should be advising those that own the risk within an organisation, within a school, if something is high risk or concerning. Ultimately it's a board of trustees issue. The DPO should be, according to the law, reporting to the highest level of management within their organisation. So ultimately, if the DPO has got outstanding concerns about a project, they should be going to the board of trustees and saying, you know, the trust or school wants to do this particular thing; I have concerns that it's risky and the risk hasn't been mitigated. Of course, if it's high risk, the DPO would actually be advising the organisation that they have a statutory obligation to inform the information regulator, the ICO in England and Wales. So the DPO can't stop it, but I think the DPO has a really important role to advise. And ultimately I've seen lots of projects where teachers or senior leaders in a school are like, oh yeah, I really want to do the thing, and then the DPO has to advise the governors or the trustees: you know, they're really excited about the project, but I have to warn you that I've got these concerns about it. And then the board of trustees have pulled the project. That's just not the way you want to do things. We'd much rather everybody was collaborative from the beginning, so that the trustees aren't in conflict with the other functions of the school.

Daniel 23:52

So when we're looking at governance, then, and processes: the school has their appointed DPO, they're interested in subscribing to or acquiring an AI solution for their school or for their organisation, and they run their sort of checks and balances. They have everything in place from a paperwork perspective or policy perspective. When it comes to implementing that, then, and making sure that this is standard operating procedure for the educators, for the senior leaders that this will have an impact on: are you able to give us some examples of working with schools, and how they're able to bring that to life and embed it within their culture?

Claire 24:45

So I mentioned the framework that we put in place, and one of the big parts of the framework is the theme of control, really making sure that you've got ongoing risk assessments. For me, I think what's currently really being overlooked is the concept of incident reporting, AI incident reporting, and making sure that you are monitoring: oh, you know, there's a biased output, there's a harmful output. And not just seeing that through a safeguarding lens, but actually seeing it through a specific AI reporting function. I'd really like to see the same standards there. We have data breach reporting procedures well established within schools now; I'd be really worried if schools didn't at this stage, actually. But as I say, we have those procedures. So making sure that AI incidents, even those minor near misses, are all being picked up and tracked, I think that's really important in terms of the data that you get. And again, to draw on the safeguarding analogy, we know that we're going to have safeguarding incidents in a school regardless of the safeguarding structures that we have. And so schools are really good at making sure those incidents are reported and tracked. As a trustee or a governor, you get metrics on that: this is how many safeguarding incidents we've had, how we dealt with them, these are the live cases, etc. So actually having that kind of reporting function for AI as well, I think, is really important. You don't just set it up, let it go and then forget about it. It's a continuing process. And that constant reporting helps you to adjust course as you go along as well. So you think: we're having lots of incidents in relation to this particular tool, or this particular use case. And I must say, you know, it's not about risk-assessing tools, it's about risk-assessing the use. So that tracking.
And again, I think schools that have that information are in a really good place then in terms of regulator oversight, whether that would be Ofsted or the Information Commissioner. Actually having some data around those little accidents, the accident book for AI, if you like, provides really good evidence that you are watching constantly, that you are looking for things to go wrong, and that you're adjusting course accordingly.

Daniel 27:17

In order for that to happen, though, Claire, wouldn't the organisation need to have sort of destigmatised AI use, particularly through the lens of academic integrity, but also focused on this culture of transparency around what responsible use looks like? Because without that, it's almost impossible to regulate and to know what's going on, isn't it?

Claire 27:42

I love that. And Daniel, we've had some conversations, haven't we, where people might want to keep their AI use very private and don't necessarily want leadership to know that they're heavily reliant on AI. And that becomes a real leadership piece then, doesn't it? So, leaders actively using AI in appropriate ways and sharing those lessons within their organisation. Just this morning, we have an AI tool within work, and I've been using it, rather than sitting and writing an email from scratch; there's a dictation function, so I can just kind of garble away into my AI tool and it will help me to turn my garblings into a well-structured email. So I'm not just sat there with a blank email, like, okay, dear so-and-so. So actually really sharing that. And I did share it, and it could have been embarrassing (here's a dictation of all my random garblings), but I screen-shared it with the people I work with: I'm finding this great new use case, and this is how it's really revolutionising the way I'm working. So leaders, anecdote aside, have a real responsibility there to show their own journey and to make it a point of pride, not shame, when they're using AI to make their work more effective. So yeah, as I said before, that we're-all-learning-together kind of mindset on this is really important. And I feel sad when people go: I've cracked AI, I'm so advanced, look at me, aren't I wonderful? Actually, no, be humble about it. We're all going to get things wrong. Things are going to not quite work out the way we want them to.
We might occasionally get so enamoured with our AI that we forget to be the human in the loop. That confirmation bias (automation bias, I should say) is something we can all be guilty of, letting the AI do our thinking for us and taking away our human endeavours. But as I say, I think it's about sharing and being really honest, and then being honest about what goes wrong, to take away any stigma. When I was a data protection officer, we did a lot of work around taking away the stigma if you were responsible for a data breach. The first thing I would always do if somebody had to report a data breach was reassure them that it was okay, and really thank them for bringing it to my attention. Brilliant that you've observed this, thank you so much. You would never say, I can't believe you did this, blah blah blah. You've got to make it a safe space for people to share their learnings, the good stuff as well as the bad stuff.

Daniel 30:29

And what about, then, Claire, when things go wrong, or if things were to go wrong? I'm thinking of a scenario where, let's say, we forget the regulation and we forget the laws around this, and teachers are free to upload personal information from students into a free version of an AI tool. What's the worst that could happen in that situation?

Claire 31:01

Well, obviously the problem is what happens if you were to upload somebody's private information. We talk about personal data a lot, don't we? We talk about students and individuals. But let's not forget that there's a real risk to an organisation in terms of its confidential, commercially sensitive data: the stuff that, if there was a Freedom of Information Act request for it, the organisation really wouldn't want going into the public domain. They'd be looking for a reason, an exemption, to use the terminology, to withhold that. So ultimately, if you upload information and you lose control of it, there's a real risk not only to individuals but to your organisation. I'll give you an example. You're a multi academy trust, you're considering restructuring the organisation, and you're actually considering losing some headcount. So you use ChatGPT or whatever to help you to consider your restructure options. Is there a real risk that your employees and your stakeholders, your parents, your community, could find out about your restructure plans by virtue of a leak from the AI tool? So that's the risk. I'm sure it's probably already an issue. I should imagine there's a substantial amount of people's personal data already in these tools, and we've seen some alarming stories, haven't we, where individuals have been able to jailbreak into a tool and it then discloses some of the inputs from other users or some of its training data.

Daniel 32:51

So, yeah, for a school that's looking at this from a fresh perspective: they can see the possible benefits of AI, they've thought about this from a pedagogy-first approach, and they're looking to draw up a policy or some guidelines and think about which tools they're looking at deploying. What are the best sort of first steps for a school, or a trust, that's at the very beginning of this journey, from a governance perspective?

Claire 33:22

I love this question. So, first of all, don't try to do all of the AI all at once. That's my first thing: you don't have to conquer AI this year. You can just do one tiny thing and do it really well. So the first thing you've got to do is think fundamentally about the why, the purpose. And schools and trusts are great at really understanding their fundamental values. Everybody has a value statement and a core mission. So really, I think the very, very first thing you need to do is think back to your core mission, your value statement, your school improvement or trust improvement plan: what are we trying to achieve here? And then align some AI principles, how you want to use AI, how you see it as part of your organisation, to those core values and missions that already exist. So make sure it's consistent with the mission and the school improvement journey. So, setting out some kind of principles. And again, you see this from the OECD, and you've seen the government then adopting the OECD principles: those principles around fairness, around environmental sustainability, et cetera. So have those kinds of core messages, benchmark everything you're doing against them, and keep going back to that why question. Then really, I think the second thing, after you've worked out who you are, what you want to do and what your purpose is, is to start really communicating with your stakeholders, and that's communicating with your staff. I talk a lot about this: I used to talk about building a boat, making sure that you build a boat big enough for everybody, and that the boat doesn't set sail without everybody in it. But in governance terms, we talk about building a cathedral and making sure that cathedral is big enough and encompassing enough for everybody to get into. So make sure that you're bringing all of your staff with you.
You know, don't just be led by those who are really excited by AI and accidentally leave all of your nervous staff out of the communication, or the cathedral, or the boat, whatever analogy you want. And I think as well, at this point, it's really, really important to talk to your parents. And again, I'm already seeing this: don't do AI to your parents. Make sure you are doing it with your parents. Make sure that you are investing in them. Don't let the first time a parent knows that you are using AI be when you send them a really obviously AI-crafted letter that suddenly doesn't look or sound at all like the head teacher, and suddenly looks like, you know, ChatGPT has written it. So yeah, bring your parents along with you. Lots of literacy, lots of explanation, reassurance, lots and lots. And the same with your students as well. I've seen some schools really not do their students a favour, and they've told their students all about the risks and how scary it is and how unsafe it is without telling them the amazing benefits. You've got to bring your students along with you as well. You've got to explain to them around things like academic integrity too, making sure that they're not getting their tool to do the project they're going to be assessed on; you could lose your accreditation as an assessment centre if you do that. So comms, comms, comms. Talking. And I would say make sure you have a dedicated space on your website as well, and that you're keeping it up to date with all of the news. It's an ongoing process. And then, after that, the third phase of those initial steps is around setting up those governance structures, again leaning on what you already have in place and expanding that.
So, making sure that you've got the right terms of reference for the particular governance committee that's going to oversee this. Make sure they understand that they're responsible for overseeing AI risk. And then having things like a steering group. I've seen loads of schools do this now, and I think it's brilliant, actually: having a committee of people who are going to be working on this. It's not just a data protection officer and an IT director function; there's a pedagogical function, perhaps some student representation on that as well, maybe some trustee representation, a real working group. Put the most nervous person about AI in your organisation on that steering group, because they will have some really good critical questions to ask you before you all get carried away with yourselves. And so, yes, that's the first stage: to put those things in place. Who are you? What do you want to do? Are you talking to people and telling them what's going on? Are you setting off on a journey together? And then again, making sure that you've built those structures in place. Look at your existing structures, data protection, cybersecurity; AI is a natural kind of bedfellow within those risks. So, how are you overseeing data protection and cybersecurity risk? Do the same with AI. And actually, if you go back and you think, oh gosh, we've not really got the right kind of oversight for cybersecurity and data protection, now's the time to do that as well. Go back and fix those bits, and bring them through as a kind of triad of risk.

Daniel 38:44

Claire, some fantastic practical examples there of how schools might get started in this space. As always, it's fascinating listening to what you have to say and learning from you. Very, very grateful for your time and for your energy. Please do keep us posted at Good Future Foundation with what you're up to next. We'd love to stay connected, and thank you ever so much once again for being here today.

Claire 39:11

Thank you ever so much. Thanks for having me.
