Claire Archibald: Creating Effective AI Governance Structures in Schools

Video Recap
Summary
Is having an AI policy enough to protect your school? In this episode, Daniel Emmerson speaks with Claire Archibald, Legal Director at Browne Jacobson and former Data Protection Officer, about what effective AI governance in schools looks like.
Their conversation covers essential topics including what makes a good Data Protection Impact Assessment (DPIA), the importance of vendor due diligence, and why schools need robust governance structures beyond just having an AI policy. Claire emphasises the critical role of incident reporting, creating transparent cultures around AI use, and the need for collaborative approaches involving all stakeholders. She also shares a six-step governance framework and practical advice for schools starting their AI journey.
Transcript
Daniel 00:02
Welcome to Foundational Impact, a podcast series that focuses on education and artificial intelligence from a non-profit perspective. My name is Daniel Emmerson and I'm the executive director of Good Future Foundation, a non-profit whose mission is to equip educators to confidently prepare all students, regardless of their background, to benefit from and succeed in an AI-infused world.
Daniel 00:27
Welcome, everybody, once again to Foundational Impact. It's an absolute delight and privilege to have Claire Archibald with us today. Over the course of the previous episodes we've spoken about lots of different aspects of responsible AI use and best practice and what that means, and I'm super excited to have Claire with us today to talk about that, particularly from a governance perspective. Claire, a very, very warm welcome to you. Thank you so much for being with us today. I'm wondering if we could kick off by just learning a little bit about who you are and what you do in this space.
Claire 01:06
Okay. Hi. Thanks ever so much for having me on. I'm really pleased to be here. So I'm primarily a data protection, freedom of information and AI lawyer. I'm a legal director at Browne Jacobson; I've been there for about a year. Prior to that, I was a data protection officer in a local authority, providing DPO support for about 400 schools and academies across the country. And before that, I was a school business manager. So how does a school business manager end up being a lawyer in a really big law firm? Well, my career started off as a lawyer in my early 20s. I spent a really long time trying not to be a lawyer, but I found myself back here, very much in my happy place, bringing together all the amazing experiences I've had throughout my career: counselling, mediation, all the school business management work, running a DPO service. And it's all come together in a really happy place at Browne Jacobson.
Daniel 02:07
Can we unpack that DPO role for a second, just for folks who might not know what that is? What is a DPO, and what does that mean in a school sense?
Claire 02:18
DPO stands for Data Protection Officer. The GDPR, back in 2018, established a requirement for certain organisations to have a data protection officer as a statutory role, and that would include schools; they're public authorities. The personal data that they process is high risk: very sensitive, very personal data, second only to health, really, in terms of the sector that has the most confidential and sensitive data in society. So schools had to have a statutory data protection officer, and I went into the local authority to be part of a traded service providing that statutory DPO service to schools. And it grew and grew. It was just before the pandemic, and the pandemic fundamentally changed the nature of the way personal data is processed in schools. So what we thought initially was going to be a small project, just to help schools get onto their own feet in terms of managing that, actually became something really big. We were there with schools throughout the pandemic, supporting them with all of their digital challenges and cybersecurity challenges. And then the more work schools did to improve their data protection compliance programme, the more they realised they needed to do. So it was a real privilege to work with schools during what was a really, really difficult time for them. I won't lie to you: for some schools it was like skipping through a meadow hand in hand as we skipped towards data protection compliance, and for other schools it was a bit like pushing a broken-down car up a hill in mud. But it didn't matter, we got there anyway. And I tried to make sure that it was a really positive experience. So whether you loved it or loathed it, I tried to make it a joyful and happy experience, so it became happy work for hard-working staff in schools, not just something that had to be tolerated or endured. I think we achieved it.
People do say they have a good time working with me.
Daniel 04:28
I'm sure that's very much the case, Claire. I'm really interested to know, though, particularly for the international audience, who might not have exactly the same setups in their schools, particularly around data: how did this give you the grounding that you needed in order to be able to pivot, if that's the right word, towards AI?
Claire 04:55
Yeah, yeah. So, I mean, obviously when ChatGPT arrived in 2022, people started talking about it. As a data protection officer, I was helping a lot of schools with their data protection impact assessments and really thinking about the data protection considerations for the projects that they wanted to do. And I could see that ChatGPT was going to become a big thing in edtech; using AI tools was going to become a big thing. So primarily it was about the data protection challenge. And I thought, gosh, as DPO, I need to be ahead of this, because schools are going to want a bit of this; this is going to fundamentally change how they work. So I need to understand what's going on here. And so I did a lot of work in terms of increasing literacy and awareness of AI, not just within schools, but actually within the local authority that I was working in as well. The first, most obvious thought for me was: well, we need to have some kind of a policy in place. I was supported again by the local authority and by the traded service, which was fantastic, and I helped to produce a template that schools could use (there were a couple of people doing this at the same time), just so that at the outset of their AI governance journey they had something, not just a blank piece of paper. So they had a policy, and we were able to release that under a Creative Commons licence and make it available for anybody who wanted it. I felt that was a really great contribution to the sector. I then moved to Browne Jacobson, and obviously during this time use of AI was increasing exponentially and the challenges were increasing. And I said to the leaders there: I think we need to be doing something around AI and giving schools some really proactive information on AI. This policy, I think we could build on that.
And to be fair, the leaders within Browne Jacobson, fantastic intellectual minds, gave me a really hard time over it. They know schools really well; they know that a policy does not make a really good governance structure. They had great in-depth experience of safeguarding and establishing safeguarding structures within schools. We know that in safeguarding you don't just have a child protection policy, appoint a designated safeguarding lead, and that's your safeguarding done. So they challenged me: come on, we need to do something much more meaningful in terms of AI governance. You can't just have a DPO and an AI policy; it needs so much more. And then of course the firm doesn't just do education law; they do all sorts of law, advising corporates. I went and spoke to one of the corporate partners and said: what are you doing for your corporate clients? They said, oh, we've built this six-step governance framework. I thought, this is fantastic, the six steps. And then I went away and did the IAPP AIGP course, so I got the qualification in that and really learned much more about how AI works, because fundamentally, if you're going to govern it, you do need to know some of the terminology. One of the things I do notice in the sector, actually, is that there are lots of people with big opinions about AI, and I read them and I'm slightly cringing, because I think: I really wish you understood how the AI works a little bit, especially when they're talking about inputs training AI models, as if it's some kind of active, ongoing process where ChatGPT is continually learning from every input that you make. In fact, I wish it would, because I have to tell my AI tool every single time I use it that I prefer to use sentence case and not capitalise every letter in a heading. Anyway, I digress.
So I went away, really deeply understood it, and then worked within the education team, and we built that six-step governance framework out for schools as well. And it's been a real privilege over the last seven to eight months or so to accompany some of those schools and academy trusts on that governance journey as they implement those six steps. They come back to me and say, you know, this bit's going well. Or: actually, we could do with a different kind of AI literacy framework for our support staff versus our teaching staff; how do we create that? And so everything has been about accompanying schools. We're all learning together at the end of the day on this, aren't we? Having all of those concepts and saying, guys, I think you should do it like this, and then actually accompanying them through the journey, seeing them put together things like the web pages for stakeholders and the terms of reference for their steering groups, seeing those first DPIAs coming through, seeing those concepts, this kind of big plan I had, then implemented, has been a real privilege.
Daniel 09:41
A couple of things, Claire, if I might just go back for a moment. You talked about working with the local authority. Could you give us a clearer understanding of what that work looked like on a day-to-day basis, versus what you're doing now in the law firm, just to help the audience think through: okay, this is what that meant for Claire at that time?
Claire 10:04
So, not hugely different, because from a data protection point of view and from a governance point of view, schools were always able to make their own decisions about what they wanted to do. They were their own data controllers, so they always had that autonomy. I suppose from where I am now, I'm able to build more innovative services and products, because I'm not within that local authority structure, but it's not too different. Looking back to the local authority days, it seems like a long time ago now, but we all had a job to do in terms of understanding what AI actually was. The first time I presented internally to the local authority, I set up three mock accounts on the three big tools and put fake council data into them, and said: look, I could be doing this as a staff member, I could be doing that, I could be putting this in. This is the warning that pops up when I use Gemini; this is the warning that pops up when I use Claude. What are we doing to govern this? What are we doing to monitor how your staff are using AI tools just through their web browsers? So it was a great moment. I remember doing a particular slide, don't be an ostrich with your head in the sand, trying to encourage everybody to think: we cannot turn a blind eye to this, we need to front it out. And actually, what we really need is that concept of a bowling lane with bumpers. We've got to enable our staff to use these fantastic tools, but we've got to be there as the bumpers to stop the ball going into the gutters. That was the terminology. So yeah, really similar. And again, things have really changed. I've been advising on AI as part of data protection courses for data protection officers in schools for about a year now.
And I would say the training courses that I was delivering a year ago are really different to what I'm delivering this year. There's a much greater awareness in schools of where AI functionality is and how it's coming into the organisation. And now, for me, the next challenge is: what are you going to do beyond a policy? How are you going to make this part of normal school business?
Daniel 12:32
Well, there are some interesting mechanisms in place to help schools with that. And you mentioned DPIAs, so we should perhaps give a bit of an overview, Claire, if that's all right, as to what a good DPIA looks like. But then, moving on from that, as you've alluded to already, just having the DPIA in place doesn't necessarily mean that staff are adhering to best practice. We've found, in a lot of the conversations we've had with schools, and indeed in research, that AI use is often far from transparent in terms of how it's being used by different stakeholders across the organisation. So let's maybe tackle that one first. What is a DPIA and what does a good DPIA look like?
Claire 13:21
So a DPIA, you know, is a data risk assessment, very simple. And schools are really used to writing risk assessments. They do it all the time: every time they want to take their children off the premises, or they want to do a PE lesson, or they're climbing up and down the gym equipment, they do health and safety risk assessments and so on. So I always try to demystify it and say it's primarily a risk assessment, just with a data focus. It's really important to focus that risk assessment on the data subjects, the risk to those individuals. I've seen lots of DPIAs, and I could tell you so many mistakes I've seen with them. One of the big mistakes is a DPIA that really focuses on the risk to the organisation: you know, we might get a fine if we are responsible for a data breach. That is the wrong way to go about a DPIA. A DPIA is about what the impact is on the data subject and how you are going to mitigate against that, not how the organisation is going to mitigate against getting a reprimand or a fine from the regulator. A really good risk assessment does lots of things. For a lot of education settings, one of the things that they really could improve on is having really good project plans, and so the DPIA for me becomes, in some ways, a bit of a hook for an entire project plan. I think it's really important at the outset of your DPIA to articulate what your intended aim is, what success looks like, what good looks like. How do we know if the thing that we want to do has been achieved, that the success criteria have been met, that makes the additional risk worth it? As I say, one of the big things I've seen is: oh, well, we need to do a DPIA into this thing. Well, why do you want to do the thing? We don't know; the school down the road does it, we saw it at the BETT roadshow, everybody's got it, it's the hottest new thing.
And so actually, really articulating what success looks like is the first big part for me. Then lots of different things follow, really going through a good analysis of the risks in the risk assessment element of it. Again, I see the sector getting more sophisticated in terms of the write-up. In my mind's eye, I'm looking specifically at the ICO template here; what you have then is the actual risk assessment, and it's a table that you fill out. Very often what I see is organisations, particularly schools, running aground when they get to that risk assessment bit. They're like: oh, what's the risk? Oh, I don't know, there could be a cyber attack, so we'll have really strong passwords. And I'm really disappointed then by that risk assessment. So, for me, a really good risk assessment for data is going to look at all of the data protection principles. Is it fair, lawful and transparent? Are you minimising the data? How are you going to store it for the minimum amount of time possible? How are you securing it? All of those principles, and then looking at each of the risks against each of the principles, so that you really flesh it out. Rather than looking at a blank table, you're going: well, these are all the risks, and how might they surface? And then, for AI, I think you could really adapt that DPIA template to further embed some AI risks. If you're looking at lawfulness, fairness and transparency, perhaps in that category you could look at intellectual property risk, and call your DPIA a DPIA and AI risk assessment, to make it bigger and have a look at all of those risks as well. So you can build out and really expand that. The next thing, I think, is really often overlooked as part of the DPIA process.
And I think the DPIA, in an annex or an appendix, is a really good place to show the audit trail of the vendor due diligence: really having a detailed conversation, really looking at the vendor's terms in detail, making sure that you read them and understand any ambiguous terms or terms that are really in the vendor's favour, and then making sure that you've got some evidence that you've gone and done some due diligence on that vendor as well. How do they store data? What are their security protocols? Are they looking down their supply chain as well, at their own suppliers? Really examining that. And then the data protection security piece: again, you can expand that to do due diligence on maybe an AI model. So really looking at how the app is built, how maybe system prompts are working, what safeguards are in the system, what protection from jailbreaking is in the system. In England and Wales, there are the DfE standards for AI, and for a student-facing AI, asking those particular questions around monitoring and filtering and how that's built into the system. So again, the DPIA can be a vehicle for a lot of evidence: what does success look like, what are absolutely all of the risks, not just from a data point of view, but, as you say, you can build it out. There's no rule that says DPIAs need to just do one thing and you need to go and do AI risk assessments somewhere else. It can all be part of one big project plan, and then really that due diligence, both in terms of the vendor and what they're doing and, if it's an AI, how the AI model, tool, app, interface and so on is built as well. There's loads you can do with a DPIA. I love them; I could do them all day.
Daniel 19:03
I mean, some of what you were talking through there, Claire, sounds like a pretty substantial piece of work when you think through the number of applications and solutions that you might find across a multi academy trust. We've certainly had conversations with digital leads where you're looking at hundreds of solutions that may not have been acquired or deployed with AI modules in them, but have since evolved, a bit like Trojan horses, over the last 18 months or so. But there also needs to be a level of ownership, right, with tech companies that want to play in this space and that want to work with schools, in terms of the amount of information they can provide so that a school can complete a DPIA effectively. Where does that burden sit? Should DPOs have a mandate to be able to ask questions, and what if they don't get the answers that they're looking for?
Claire 20:13
Oh gosh, there's so much here. I'm hoping that people from the edtech space will be listening and watching this, and I implore you: if you want better sales and an easier purchase process, get all of this stuff out proactively. Don't wait for these questions to be asked of you. Dedicate an area of your website to a trust centre. Make all of this stuff really transparent and easy to find. It's been interesting watching Google, particularly, come to a better understanding of this. Eighteen months ago, what Google offered in terms of transparency around some of their AI models was not as good as it is now; it's better now. So if you're an edtech vendor, look at proactively providing that information. Make sure all your sales team really understand these questions so that they can answer them proactively and quickly. It's really frustrating for schools as well. Schools will do all of this work and maybe don't think to ask their DPO until they've decided on the thing that they want to do. That's a real shame and a real missed opportunity, because actually, if you get your DPO involved as part of your procurement process: we're looking for X solution, we're looking at A, B and C vendors; DPO, come and give us your opinion. You know, C is a bit more expensive, and A looks great because it's free, and then the DPO will say: actually, it's free because of X, Y and Z, and I'm really concerned about student data protection or security. Then that organisation can make a really good choice: okay, option C might be more expensive, but we have much better guarantees around the security of our students' data. So yeah, I would really encourage vendors to make sure they're being much more proactive and getting that information out there. Don't be surprised if you get asked these questions.
Be proactive and be prepared for it. As for the DPO, then, ultimately what should they do? Well, the DPO can't stop a project going through, but a really good DPO should be advising those that own the risk within an organisation, within a school, if something is high risk or concerning. Ultimately it's a board of trustees issue. The DPO should, according to the law, be reporting to the highest level of management within their organisation. So if the DPO has outstanding concerns about a project, they should be going to the board of trustees and saying: the trust or school wants to do this particular thing, and I have concerns that it's risky and the risk hasn't been mitigated. Of course, if it's high risk, the DPO would be advising the organisation that they have a statutory obligation to consult the information regulator, the ICO, in England and Wales. So the DPO can't stop it, but the DPO has a really important role to advise. And I've seen lots of projects where teachers or senior leaders in a school are like: oh yeah, I really want to do the thing, and then the DPO has to advise the governors or the trustees: they're really excited about the project, but I have to warn you that I've got these concerns about it, and then the board of trustees have pulled the project. That's just not the way you want to do things. We'd much rather everybody was collaborative from the beginning, so that the trustees aren't in conflict with the other functions of the school.
Daniel 23:52
So when we're looking at governance, then, and processes: the school has their appointed DPO, they're interested in subscribing to or acquiring an AI solution for their school or their organisation, and they run their checks and balances. They have everything in place from a paperwork or policy perspective. When it comes to implementing that, then, and making sure that this is standard operating procedure for the educators and the senior leaders that it will have an impact on: are you able to give us some examples of working with schools, and how they're able to bring that to life and embed it within their culture?
Claire 24:45
So I mentioned the framework that we put in place, and one of the big parts of the framework is the theme of control, really making sure that you've got ongoing risk assessments. For me, what's currently really being overlooked is the concept of AI incident reporting, and making sure that you are monitoring: oh, there's a biased output, there's a harmful output, and not just seeing that through a safeguarding lens, but actually seeing it through a specific AI reporting function. I'd really like to see that become standard. We have data breach reporting procedures well established within schools now; I'd be really worried if schools didn't at this stage, actually. But as I say, we have those procedures. So making sure that AI incidents, even those minor near misses, are all being picked up and tracked, I think that's really important in terms of the data that you get. And again, to draw on the safeguarding analogy, we know that we're going to have safeguarding incidents in a school regardless of the safeguarding structures that we have, and schools are really good at making sure those incidents are reported and tracked. As a trustee or a governor, you get metrics on that: there have been so many safeguarding incidents, this is how we dealt with them, these are the live cases, et cetera. Having that kind of reporting function for AI as well, I think, is really important. You don't just set it up, let it go and then forget about it; it's a continuing process. And that constant reporting helps you to adjust course as you go along as well. So you think: we're having lots of incidents in relation to this particular tool, or this particular use case. I must say, it's not about risk assessing tools; it's about risk assessing the use. So that tracking matters.
And again, I think schools that have that information are in a really good place in terms of regulator oversight, whether that would be Ofsted or the Information Commissioner. Actually having some data around those little accidents, the accident book for AI, if you like, provides really good evidence that you are watching constantly, that you are looking for things to go wrong and adjusting course accordingly.
Daniel 27:17
In order for that to happen, though, Claire, wouldn't the organisation need to have destigmatised AI use, particularly through the lens of academic integrity, but also focused on this culture of transparency around what responsible use looks like? Because without that, it's almost impossible to regulate and to know what's going on, isn't it?
Claire 27:42
I love that. And Daniel, we've had some conversations, haven't we, where people might want to keep their AI use very private and don't necessarily want leadership to know that they're heavily reliant on AI. And again, that becomes a real leadership piece, doesn't it? So: leaders actively using AI in appropriate ways and sharing those lessons within their organisation. Just this morning, I've been using the AI tool we have at work; rather than sit and write an email from scratch, there's a dictation function, so I can just garble away into my AI tool and it will help me to turn my garblings into a well structured email, so I'm not just sat there with a blank email going: okay, dear so-and-so. So actually, really sharing that. And I did share it; it could have been embarrassing, here's a dictation of all my random garblings, but I screen-shared it with the people I work with: I'm finding this great new use case, and this is how it's really revolutionising the way I'm working. So leaders, anecdote aside, have a real responsibility there to show their own journey and to make it a point of pride, not shame, when they're using AI to make their work more effective. As I said before, that we're-all-learning-together kind of mindset on this is really important. There's nobody out there who's cracked it, and I feel sad when people go: I've cracked AI, I'm so advanced, look at me, aren't I wonderful? Actually, no, be humble about it. We're all going to get things wrong. Things are going to not quite work out the way we want them to.
We might occasionally get so enamoured with our AI that we forget to be the human in the loop. That confirmation bias, automation bias I should say, we can all be guilty of that: letting the AI do our thinking for us and taking away our human endeavour. But yeah, as I say, I think sharing, being really honest, and then being honest about what goes wrong, to take away any stigma. We did a lot of work when I was a data protection officer around taking away the stigma if you were responsible for a data breach. The first thing I would always do if somebody had to report a data breach was reassure them that it was okay and really thank them for bringing it to my attention: brilliant that you've observed this, thank you so much. You would never say, I can't believe you did this, blah blah blah. You've got to make it a safe space for people to share their learnings, the good stuff as well as the bad stuff.
Daniel 30:29
And what about, then, Claire, when things go wrong, or if things were to go wrong? I'm thinking of a scenario where, let's say, we forget the regulation and we forget the laws around this, and teachers are free to upload personal information from students into a free version of an AI tool. What's the worst that could happen in that situation?
Claire 31:01
Well, obviously the problem is if you were to upload somebody's private information. We talk about personal data a lot, don't we; we talk about students and individuals. Let's not forget that there's also a real risk to an organisation in terms of its confidential, commercially sensitive data: the stuff that, if there were a Freedom of Information Act request for it, the organisation really wouldn't want going into the public domain. They'd be looking for a reason, an exemption, to use the terminology, to withhold it. So if you ultimately upload information and you lose control of it, there's a real risk not only to individuals but to your organisation. I'll give you an example. Say you're a multi academy trust, you're considering restructuring the organisation, and you're considering really losing some headcount, so you use ChatGPT or whatever to help you consider your restructuring options. Is there a real risk that your employees and your stakeholders, your parents, your community, could find out about your restructure plans by virtue of a leak from the AI tool? That's the risk. I'm sure it's probably already an issue; I should imagine there's a substantial amount of people's personal data already in these tools, and we've seen some alarming stories, haven't we, where individuals have been able to jailbreak the tool and it then discloses some of the inputs from other users or some of its training data.
Daniel 32:51
So, yeah, for a school that's looking at this from a fresh perspective: they can see the possible benefits of AI, they've thought about this from a pedagogy-first approach, and they're looking to draw up a policy or some guidelines and think about which tools they might deploy. What are the best first steps for a school, or a trust, that's at the very beginning of this journey from a governance perspective?
Claire 33:22
I love this question. So, first of all, don't try to do all of the AI all at once. That's my first thing: you don't have to conquer AI this year. You can just do one tiny thing and do it really well. The first thing you've got to do is think fundamentally about the why, the purpose. And schools and trusts are great at really understanding their fundamental values. Everybody has a value statement and a core mission. So really, the very first thing you need to do is think back to your core mission, your value statement, your school improvement or trust improvement plan: what are we trying to achieve here? Then align some AI principles, how you want to use AI and how you see it as part of your organisation, to those core values and missions that already exist, so that it's all consistent with the mission and the school improvement journey. And again, you see this from the OECD, and you've seen the government then adopting the OECD principles: principles around fairness, around environmental sustainability, et cetera. So have those kinds of core messages, benchmark everything you're doing against them, and keep going back to that why question.

Then really, I think the second thing, after you've worked out who you are, what you want to do and what your purpose is, is to start really communicating with your stakeholders, and that's communicating with your staff. I used to talk about building a boat and making sure that you build a boat big enough for everybody, and that the boat doesn't set sail without them. But in governance terms, we talk about building a cathedral and making sure that cathedral is big enough and encompassing enough for everybody to get into. So make sure that you're bringing all of your staff with you.
You know, don't just be led by those who are really excited by AI and accidentally leave all of your nervous staff out of the communication, or the cathedral, or the boat, whatever analogy you want. And I think as well, at this point it's really, really important to talk to your parents. You cannot, and again I'm already seeing this, you cannot do AI to your parents. Make sure you are doing it with your parents. Make sure that you are investing in them. Don't let the first time a parent knows that you are using AI be when you send them an obviously AI-crafted letter that suddenly doesn't look or sound at all like the head teacher and instead looks like ChatGPT has written it. So yeah, bring your parents along with you. Lots of literacy, lots of explanation, lots and lots of reassurance. And the same with your students. I've seen some schools really not do their students a favour and tell them all about the risks, how scary it is and how unsafe it is, without telling them the amazing benefits. You've got to bring your students along with you as well. You've got to explain to them things like academic integrity too, making sure they're not getting a tool to do a project they're going to be assessed on; you could lose your accreditation as an assessment centre if you do that. So comms, comms, comms, talking. I would say have a dedicated space on your website as well that you keep up to date with all of the news; it's an ongoing process. So comms. And then after that, the third phase of those initial steps is around setting up the governance structures, leaning on what you already have in place and expanding it.
So make sure that you've got the right terms of reference for the particular governance committee that's going to oversee this. Make sure they understand that they're responsible for overseeing AI risk, and then have things like a steering group. I've seen loads of schools do this now, and I think it's brilliant, actually: having a committee of people who are going to be working on this. It's not just a data protection officer or an IT director function; there's a pedagogical voice, perhaps some student representation on that as well, maybe some trustee representation, a real working group. And put the most nervous person about AI in your organisation on that steering group, because they will have some really good critical questions to ask before you all get carried away with yourselves. So yes, that's the first stage: put those things in place. Who are you? What do you want to do? Are you talking to people and telling them what's going on? Are you setting off on a journey together? And then again, make sure that you've built those structures in place. Look at your existing structures for data protection and cybersecurity; AI is a natural kind of bedfellow within those risks. So how are you overseeing data protection and cybersecurity risk? Do the same with AI. And actually, if you go back and you think, oh gosh, we've not really got the right kind of oversight for cybersecurity and data protection, now's the time to do that as well. Go back and fix those bits too, and bring them through as a kind of triad of risk.
Daniel 38:44
Claire, some fantastic practical examples of how schools might get started in this space. As always, it's fascinating listening to what you have to say, speaking with you and learning from you. Very, very grateful for your time and for your energy. Please do keep us posted at Good Future Foundation with what you're up to next. We'd love to stay connected, and thank you ever so much once again for being here today.
Claire 39:11
Thank you ever so much. Thanks for having me.