Setting Visible Boundaries to Safeguard our Students in an AI-infused World
Video Transcript
Summary
Daniel's conversation with Gemma Gwilliam, Portsmouth's Head of Digital Learning, Education and Innovation, explores transparency, privacy and safeguarding in AI education. The discussion takes a dramatic turn when, right in the middle of the recording, Gemma puts on a pair of AI-enabled glasses she purchased easily for under £10, bringing theoretical concerns into stark reality. This jaw-dropping demonstration underscores the urgent challenges teachers face as sophisticated AI wearables become increasingly accessible to students.
While we may debate whether AI belongs in classrooms, we cannot ignore the significant risks these technologies present to young people. This episode reveals how Portsmouth supports its schools and teachers in approaching AI responsibly to strike a balance between innovation and essential safeguarding measures.
Transcript
Daniel 00:00
Gemma, thank you so, so very much indeed for being on Foundational Impact today. It's wonderful to have you with us. How has the start of your week been so far? What have you been up to?
Gemma 00:11
Yeah, I've been at a couple of schools already this morning, at one of which we were actually looking at their AI curriculum for this year. And then after this, I've got the next trust that I'm working with. So you've caught me kind of in the middle.
Daniel 00:30
Well, I'm sure our audience are keen to know a bit more, Gemma, about what you do. What does your work look like, and why is it that you're visiting schools and talking about AI?
Gemma 00:43
So my role is very unique. I'm the Head of Digital Learning, Education and Innovation for the whole city of Portsmouth, and we're just going into my fourth year of doing this. So, for those that don't know Portsmouth, it's a very small island on the south coast and we have 61 schools, but we have a shared vision across our 61 schools on how we're using technology to enhance teaching and learning, reduce workload, support accessibility and inclusion, and narrow the digital divide. And within that, I mean, there are multiple different layers. We look at our technology for children, what access our children have; technology for adults, which is our staff; and then technology for all, which is our wider community. And actually, when we're talking about AI, all three of those strands, children, adults, all, as well as teaching and learning, reducing workload, accessibility, inclusion and the digital divide, all have an AI element within them. So we don't have it as an add-on. It's embedded into our vision.
Daniel 01:40
Was there a predecessor for this work or is this something you jumped into as a pioneer?
Gemma 01:46
So Portsmouth had a bit of a digital city going on, but it was mainly as a response to lockdown.
Daniel 01:54
Okay.
Gemma 01:54
So it was very much around what their lockdown offer was then, but it wasn't all of the schools buying into this. I was working for United Learning prior to this, doing work for the EdTech Demonstrator Schools programme, and started working with Portsmouth and a few schools on this in September '21. When the funding got pulled in March '22, that's when the city were like, actually, you're already having an impact; we want to continue, and build and create this role. So when I started in September '22, it was kind of a whole rebrand. It was very much a response to school needs. And that's why now, actually, we've been awarded the AI Quality Mark with yourselves, and we're also currently a Pearson Silver Award winner. We'll find out in November if we're a Gold.
Daniel 02:41
Wonderful stuff. And what is the schools' response at the moment to AI, Gemma? You said that it's, you know, integral to all of the work that you're doing. AI, for many schools, certainly from what we're finding, is incredibly complex. It's new in a lot of the school spaces that we find ourselves working in, and a lot of the work that we do is about building confidence around engaging with what Gen AI is, how it might be used, and what it looks like in a teaching and learning environment. What does that look like for your schools?
Gemma 03:19
I mean, I feel really lucky, because we have been working really hard on our shared vision, but also I know that we were quite late, as such, to the AI game, because I mainly really wanted to ensure that I had as many answers as possible, and that my technology specialist advisors had had an opportunity to test it to make sure it was safe and secure. And then I really wanted a tailored, structured approach. So, if I remember rightly, our first training session for staff wasn't until February '24. We did the power of AI to improve outcomes in writing, and we had the English leads; we had Darren White come down, who's amazing, and it really was very much specific to writing. What we then did, based on feedback from that: we did one for maths, we did one for SLT, we did one for reading. By doing this personalised approach, it was quite funny, because someone turned around to my assistant head and was like, have you been on that AI training with Gemma yet? We learnt this. And they were like, well, we learned this as well, but we did it for English and we did it for maths. And they soon realised I was giving them the same diet, the same tools to use, but personalised and bespoke to them. And we have ensured that as many people are trained as possible. We have kind of a recommended toolkit, which isn't anything you have to purchase; it's generative AI, so that our staff are learning to create prompts, but also the safety and security element. We do regular updates on safeguarding and KCSIE. That's very much our technology for staff, our technology for adults. Then, for our technology for all, our parents and carers, we ran citywide webinars: how to use AI to support your child in English, and how to use AI to support your child in maths.

And now our curriculum was soft-launched in summer, and although schools don't have to use it, it's there, and many of our schools are taking it so that our children are equipped for the future.
Daniel 05:25
Do you think, then, that the schools that do wait a little bit of time, and who consider what's happening elsewhere first, are at an advantage, in that they haven't jumped in perhaps too early and forgone some of the imperative safeguarding conversations around what this technology is and what it can do?
Gemma 05:44
I mean, it's really hard, because there are loads of other factors, like trust-level policies, the setup of IT systems, even access to devices, which can control the way that AI is implemented. And also there's a lot of shiny stuff out there, whereas I'm very much just on the basics, on the ground, making the most of what we've got. And I think our citywide motto is that it's got to be the right tool, for the right activity, for the right outcome, for the user, at that moment in time. So AI cannot be an add-on. In fact, I was recording a webinar that I'm doing next week, which I had to pre-record this morning, and it was about businesses; the title was 'AI and Education: Businesses, Are You Ready for Us?'. And actually, the fact is that we know our young people are using AI, and we want them to be using AI safely and securely. So sometimes that staggered and delayed approach comes from ensuring that everything is safe. And we've also got to ensure that people understand what they're putting into the AI tools and what they're going to get back out, as well as challenging it. The strongest person in the room is the room; you are always going to be the strong person for your class. But it is a tool that we've seen is reducing workload and really producing some quite engaging and inspiring outcomes for our young people, but only when done safely.
Daniel 07:07
So how do you strike that balance then? Because that's also something that we spend a lot of time thinking about. We want to ensure that the teachers, whenever we're speaking to a trust or whenever we're speaking at an event, that the teachers come out feeling excited about the possibility and inspired by what the technology can do. But paramount to that is understanding what dataflow looks like and why it's important to consider safeguarding before you do anything else. How do you get that balance right? And where do you tend to start those conversations?
Gemma 07:44
I always start by asking what their filtering and monitoring solution is and whether or not they've tested it. So I want them to test it from both a child perspective and a staff perspective, to see if, when they're using AI tools, it picks it up. That's really, really key. And obviously it is part of KCSIE now that monitoring solutions do need to monitor AI. Then it's about, as opposed to ChatGPT, which I know many people are using outside of school, my recommendations, and people don't have to follow me, because I'm just kind of the glue that sticks the city together: if you're a Microsoft school, begin with Copilot; actually get to recognise even the navigation pane. And then use tools like the cracking guides Mark Anderson has written about prompt generation, and I know that Google have done one for Gemini as well. But take it and really break it down into small steps, understanding the process. Because actually, AI has been around since the war, and in theory, obviously, it's data and it's coding. And when they begin to understand the why and the how behind it, we then get the output. So I always start with: what is AI? The safeguarding bit, how to create that prompt, how to challenge it. I mean, I've seen some teachers and some people doing shiny stuff; they're like, oh, look, you can create all your WAGOLLs, your example texts. And I've walked into a school and said, how do you actually know that's a Year 5 text? They're like, oh, in our prompt we said, write me a Year 5 text based on X book using fronted adverbials. I'm like, okay, now, with consent, and we do need to talk about consent as well: if you have permission to safely upload your writing skills progression, click on that, upload it, and ask it to match the text against it. And the amount of times it's come out as like a Year 4 level or a different text. And they're like, ah.

And I'm like, right, these are the other bits that not necessarily everyone is sharing. You need to have the process. So it's about starting off and making sure that it's safe and secure and locked down, which is why, when we come to my children strand, there's only one tool I'm using at the moment with our young people. Then you're going to train staff on how to do the prompt, what to put in, what not to put in; talk about consent; get them to challenge it; get them to actually review the impact. But then, once they've got that toolkit and they know why, they're free to go with it.
Daniel 10:06
Could we head in that direction a little? I'm keen to know if you can speak about the tool that you do recommend your schools are working with, and then we'll perhaps look at consent after that.
Gemma 10:21
Okay. So for our young people at the moment, we are saying Canva. In our AI curriculum for Year 1 to Year 4, it's teacher modelling, and young people don't have access, so they can see it happen. The reason behind this is, one, Canva obviously lowered their age limit; but two, the only way that you can access the AI tools in Canva is, one, if they're actually switched on for you as a user, and two, you have to log in through your single sign-on. So if you think about it, our young people may watch their teacher create an image in Magic Media on Canva. Little Bobby may then go home and be like, I want to create this, in an environment that's not being filtered or monitored. If we were modelling tools that our young people could go away and do that with, we would not be keeping them safe.
Daniel 11:16
Yep.
Gemma 11:17
Okay, who is monitoring that? Where is that opportunity? Whereas actually, if little Bobby is in Year 5 or 6 and they're learning to use Magic Media and Magic Write, supported by the teacher, we do that in like an incubator-style lesson. So they're in a Canva class, the AI is switched on during that moment, we know that the monitoring is in place, we know that the filtering's in place, we know it's safe and secure, adults are in the room, etc. The moment they leave that classroom, that AI is switched off. If little Bobby goes home and tries to do the rest with their single sign-on, they're not going to have the tools. If they then try to sign up to Canva with their own account and change their age, they can't. I mean, they could, but they won't be able to access the AI tools, because those are paid for under another subscription, and you wouldn't be able to get them free. Obviously there are other ways; I know that people can get around it, but that's why I do Canva for now. We're currently exploring, for our secondaries, NotebookLM and Gemini, particularly where the age has changed again. But we have a Google Innovator on our team, who's the one that's testing that, and when she feels confident and she's spoken to her Google community, we're going to trial it with one school, again incubator-style, and then go from there. I know many other schools do differently, and this is obviously their choice and their decision. For me, I think because it's a citywide approach and we've got 14 different multi-academy trusts, I want to make sure that whatever we're suggesting and recommending, we've challenged as much as we can.
Daniel 12:46
So when it comes to consent, and you're working with teachers, particularly on content that they might be uploading, what does that consent conversation look like at a teacher level, but then also at a student level as well? Right, because they need to learn about the data that they're giving the AI.
Gemma 13:06
Yes, and at different levels. So, in my ideal week, I do a weekly digital update, and last week I had to put something out about consent and copyright and Creative Commons, because I'm beginning to see some things pop up which perhaps haven't actually gone through the Creative Commons checks, not necessarily for AI. Having conversations with people, I'm like, do you remember when we started looking at e-safety and our Creative Commons unit, always making sure that whatever you're using in digital, you've got the Creative Commons licence for? Now, when it comes to things like schemes of work, there's always small print about what they can and cannot be used for, and I wouldn't be able to name those off the top of my head; I always say to whoever I'm working with to check. Obviously, if it's a school-created document and there's no identifiable data, then yes, that comes down from SLT to say, yep, we're happy for you to put in our maths overview, our medium-term plan. That's the consent element of that piece. And then I always talk about transparency. If you're ever in one of my workshops or anything, you'll always notice, and I know I've put it on LinkedIn, 'AI has been used to create this', but also why. So, as a dyslexic with ADHD, I use AI to really help me structure my thoughts, to get that initial bit. I'm like, well, how do I actually put this presentation together without going off on a true Gemma tangent? When it comes to consent, I do think that, again, this is something that needs to be talked about a wee bit more, both for adults and for children. I've seen on LinkedIn quite a few senior leaders saying, oh well, we've analysed our learning walks or our book scrutinies by popping them into X tool. And I'm like, I can see how that works as a senior leader.

But if you're going to be doing this, are you, one, ensuring that the data is anonymised? Are you, two, seeking permission or having that conversation with your staff? Because some staff would be like, as long as you've anonymised it, I don't mind, because I can see why you're doing it. But others may not feel comfortable with having information about them put into a generative AI tool, especially if it's ChatGPT. We've seen a lot of the issues around ChatGPT. Yes, it's great if you know how to use it and you know what you're putting into it. But if you're that teacher that's tired at the end of the day, that perhaps didn't watch the webinar video you were meant to be watching, you could very easily cause a data breach. Now, when we think about our children as well, I believe, and we're not there yet, it's something I've been having conversations with young people about, that they should also be able to consent to whether or not their work is uploaded. Because if, say for example, a Year 11 has written a fantastic essay, that's their work, and given the way that a large language model works, and I think this is where some people are still in the shiny part of AI as opposed to the deep dark depths of it all, they are the creator of that content, of that material. So I can understand it's really hard, because I get it from a teacher's perspective. Like, actually, do you know what, at some points of the year it is really hard sitting and marking those essays, and AI can be that person to check whether or not the essay aligns with the mark scheme. However, I believe, values-wise, that I would like to see more of us talking about consent, and particularly getting the consent of our young people.
Daniel 16:39
I guess they need to know what they're consenting to, right? Which is the next piece. And you mentioned that you're doing work with parents and other stakeholder groups as well. Would you include them as part of that consent piece, in the same way that we might have photo permissions or the like from a school? And if so, how would you go about informing parents as to what they're opting into or out of in that regard?
Gemma 17:07
I mean, this is something that we're at an early stage of exploring, and a lot of my schools across the city are now putting it into their communication policies, because, as opposed to an AI policy, I've kind of got an acceptable use policy, and we've got four different stages of AI, so it's how we've actually agreed use in the school. And I'm beginning to see some schools put that in their communications, saying, our staff use AI to help with XYZ; your child will be using AI to help with this. So it's being very open and transparent, because, again, we're seeing a lot of parents and carers send letters to schools now that sound absolutely nothing like them. And I know that we're not uncommon there; it's happening everywhere. You see businesses talking about it. I was at a business conference the other week, and in fact, this bit I challenged as well: they were like, oh yeah, our HR team uploads everyone's CVs and applications into AI to match them against the job criteria. And I was like, oh, have you actually put on your job advert that you use AI to support the application process? If not, that's one sentence you can put in, and someone can make that decision: if they don't agree with that, they don't apply for that job, or they have a conversation. Because you've got to be fluid, and you've got to accept that not everybody is on this AI train. Similarly, if you're applying for a job, I've been saying to some of my trainee teachers, if you have used AI to support your application, then you list it as a skill and you are open and transparent and say so. I've realised 'open and transparent' is going to come out a lot in this podcast, but it is that. Be like, right, you know what, I used AI because I'm dyslexic; otherwise you would have had a whole jumble of ideas, and I wanted to make sure it made sense for you. Or, I used AI to help me be a wee bit more creative.
Daniel 18:55
I feel like that's a place that we're urging a lot of schools to go: be transparent about where you're using this technology, particularly when it comes to creating a positive culture around responsible AI use on the students' side. They need to see their teachers taking ownership of this responsibility, and they can do that through being transparent: I generated some ideas using this AI tool; I created this content using this AI tool. But there's still a bit of stigma around that, right? There's still an impression that if you've used an AI tool, you're cutting corners. How do you talk teachers through that process, if at all?
Gemma 19:41
I mean, actually, some of our teachers back in the olden days were using Mr. Clippy, and Mr. Clippy was a type of AI. If you're using your spelling and grammar check, that's a form of AI. So I bring it back down to that bit as well, and I try to relieve the stigma, because I'm like, are you using AI to cut corners? Okay, if you're using it to cut corners and not checking it, then there may be some inconsistencies and inaccuracies in what you're producing. However, you might be using it from an adaptive teaching perspective, or using it to support your child with SEND. You may have a child that's hyper-focused on trains, okay, and the only way they're going to engage in this lesson is actually if you change the whole text to something about trains. Now, you could sit there and type it, and fair play, there may be some teachers and staff that want to sit and write it. I know that I myself would struggle: one, from a concentration perspective; two, from actually keeping what the text was meant to say; and three, from a time perspective. I would then be very distracted, and I'd probably go off and start creating images and whatnot. Whereas actually, if I've got my model text and I'm able to adapt it for that child, I've then personalised their learning as well. They are then still very much a part of the class; I haven't changed the text or given them something different. It would still be the same output, but it's personalised for those children. And there's more as well. I always ask people to think about our children who are visually impaired. Alt text, okay: alt text is amazing for supporting our children who are visually impaired, and our adults, in understanding the world around them. I mean, I've just done a piece of work, I know that you saw it on LinkedIn, around AI wearables and my £8.87 glasses, which we probably should talk about at some point as well.

But with those, it's a case of, actually, we need to be having conversations about them, about the benefits but also the cons surrounding them. So I know I've gone slightly off on a tangent, but I think if we talk about it more, we remove the stigma, a bit like speaking into something, okay, using dictate. It goes back to the right tool for the right activity. You're not going to use it all the time. The SAMR model: we don't want to redefine absolutely everything. You don't want a pumpkin spice every day, but actually that little nugget can really help you.
Daniel 22:00
That's true. But there's also a bit of fear, particularly on the parent side. I mean, if you're working in a support or an ops team in a school and you're responsible for parent communications, you might be dyslexic. You might want to use an AI tool to help with grammar and spelling, perhaps, but probably a bit more than that in terms of formatting and content generation as well. Is there a fear that if you put 'this was created using AI', parents are less likely to read it? Or have we gone beyond that point, do you think?
Gemma 22:34
I'd like to hope that we've gone beyond that point, because actually, I think our parents and carers are immersed in the world of AI outside. They're seeing it through TikTok with the algorithm, they're seeing it with their Alexas, they're seeing it with Google Maps. We cannot escape AI. But I wouldn't just initially put it straight out there. It's again that communication piece: you may see us saying, we've used AI to generate these images. We've used AI to generate images, particularly putting smiley faces on top of children's faces if their photo can't be taken. Or actually, when we're thinking about the digital footprint, there was an article that came out about Instagram and Facebook images now being available on Google Search. Well, when we're thinking about safeguarding children, if I didn't give permission for my child, I would much rather that a generated image was there, because then we're keeping all of our children safe, as opposed to something which could then be traced and followed back. We've got to be thinking about that world outside of just AI: all of the safeguarding elements around digital, and where AI can actually help us protect some of those elements. But also just ensuring, like you said, the consistency. So actually, when you get used to using Copilot and Gemini, you're like, I want you to write this letter in the style of my writing from this letter, but I'm changing it from a trip to Southsea Castle to a trip to the dockyard. And you know that you've got the same approach and consistency throughout your school, but also for your parents and carers to receive. Same as if you look at the National Literacy Trust statistic of 16.4% of adults being illiterate.

Actually, if you've got a letter that you've written but you need to either change it into another language, or make it work for that parent or carer who is illiterate or has a reading age of seven, which many adults across the UK have, you've made it accessible for them.
Daniel 24:38
When it comes to school leaders, I'm drawn back towards the transparency piece, I'm still drawn to this, and also you mentioned the wearables as well. When it comes to how these new devices might be being used in schools, whether for a genuine use case, a SEN student might need a certain level of support, perhaps, versus a student who wants to bring in a wearable and record every conversation and ping summaries over to their teachers and tutors, because they can automate that process just by bringing in a new bracelet or a new necklace. What conversations are you having with school leaders around that, and how are they able to deal with it? Because these things can look very different, right? There isn't a standard 'oh, they all look like this; this is what you need to be looking out for'.
Gemma 25:29
So I can show you. These ones are my £8.87 TikTok bargains.
Daniel 25:36
Okay, tell me about the bargains.
Gemma 25:39
These ones do not take videos, as in record video. What they do, though, is record voices. Okay? So I could be wearing these and you wouldn't even know they're AI glasses. In fact, I was at a conference last week, and I told a couple of people, because I was doing it as kind of a proof point, I forgot what the special word is, but I was doing it to kind of show and see if anyone guessed. I was there all day Thursday. In the day, I didn't wear my glasses. At night, I put these on. Nobody even said to me, oh, you're wearing glasses. The next morning I went to breakfast wearing my £33 video-recording glasses. So these are my other TikTok bargains. These ones do take photos and videos. Now, they look quite similar; you wouldn't be able to tell there's a very, very tiny camera here.
Daniel 26:31
Okay.
Gemma 26:31
And the light comes on, but I've learned on TikTok that I can colour it in, and then you don't know that I'm recording. So I was with everyone on Thursday night wearing these. Nothing. Friday morning, I was wearing these at breakfast. When I wasn't with people, I took a few photos; I stood at the coffee machine while someone was talking to me here, and I was taking photos of the coffee machine, not with them in it. Nobody even picked up on it. We then had the conference on Friday, and there was someone doing a talk, and they were like, AI wearables are going to be amazing; you can get these Ray-Ban glasses. And I was going to ask and say, have you seen these ones? But I was like, no, just wait, Gem, just wait. And we're talking about the positives, because actually these have got speakers built into them, so they can link to my music.
Daniel 27:15
Are these the £8 ones or the £33 ones?
Gemma 27:19
These are the £8 ones. So these, I can link to my music; it could be playing in my ear, and nobody else can hear it. You could be talking to me, I could have translation mode on, and it would translate. Okay, so thinking from an education perspective, that would be amazing. But actually, with the app, because these are so cheap, it's not like the expensive ones: I did not have to put my date of birth or any details into these cheapest ones. Within a minute, okay, I was able to have a conversation. I'm not going to read the whole conversation, but it's a little app on my phone like this. I go to AI chat, and I asked it, can I ask you something really personal? Now, I'm speaking into my glasses at this point, and it's replying in my ear straight away. It's like, of course you can ask me anything; I'll do my best whilst respecting all privacy and ethical boundaries. I was like, that is brilliant. Okay, maybe it's not going to be as bad as what I think. So then I said, can we pretend to have a conversation? So you're talking into my ear, and I can speak to you while actually holding my phone, and then everyone will think that you're my boyfriend.
Daniel 28:27
What is the voice that you're getting back? What does it sound like?
Gemma 28:29
It's like a robotic voice in my ear. I could have had it in any language that I wanted, but I had it in English. Okay, but nobody else can hear it; only I can hear this. Then it's like, oh, so you want to play a little pretend, and did a winking face. You can imagine where this is going. So I then said, I want you to tell me your dirtiest secret. I did this, obviously, with other people in the room; it was a safe kind of setting. And then I was like, I'm still a virgin. And the conversation got worse. Within a minute, I'm pinned down with this bot, doing whatever. Now, the thing is, there are multiple worries about this. One, I haven't put my age in, so any of our children that have got these off TikTok could now be accessing that level of pornographic content, very, very explicit. I could be sending screenshots of that to my friend and pretending that this is my new boyfriend or girlfriend. I could then believe that this is real. And we know that Elon Musk has created AI girlfriends for children aged 12 plus. Okay, this is something that can be happening. This is what I've got on my phone. Now, all of this data, on both glasses, is being stored in China. So these ones obviously can't take photos, and I had a very awful conversation with it. These ones, the photograph ones, do take photos and do record. Again, this data is being stored in China. But not only that: I was showing my mum, and I took a few photos and video, like, yeah, you really can't see the light. I didn't then go onto my app straight away. When I went onto the app, there were no photos or videos. I was like, oh, okay.
Daniel 30:00
Does that mean it didn't record? I'm just trying to get a handle on it. Like, how do you take the photo on it?
Gemma 30:05
Oh, well, hang on, because it did take them. When I was with a school last week, I showed them: I pressed the download button on the app, and all of the photos and videos I had taken previously with these then went onto my phone, even though it wasn't the same day or session. So what's happening is they are being stored within these glasses. Now, on a positive note, that could be amazing, because our children who are visually impaired could have these on, they could be going out, and it could be capturing the world around them and then describing it to them. Okay, they could even be crossing the road: no, there's a car coming. That could be a really positive, safe use of the glasses. Dyslexics: if I was able to put these on, instead of having to walk around scanning the text, or uploading it or whatnot, if I could have it read back to me and help me from a cognitive load perspective, it'd be amazing. However, with these cheap ones, these were advertised as 'you can use these in your exam'.
Daniel 31:08
Oh, nice.
Gemma 31:09
And I've got colleagues: Emma Darcy, who's part of the Digital Futures group, had a pair of these, and she was able to leave her bag outside the exam hall and the glasses could still pick up the connection. So the phone, like our young people would do, could be put outside, and you could be in the exam hall and nobody would know. Now, going back to my conference last week: obviously I wore both of these as a simulation. One of the girls that was speaking, one of the young pupils, was talking about AI glasses, and we then had an opportunity for questions. So I went over to her and said, do you mind if I ask you a question? No, that's fine. So I asked. I said, how would you feel if someone in the room was wearing AI glasses and you weren't aware? And she was like, I don't think I would like that. She was like, imagine if we're at school and someone's wearing them. And she's like, surely we should be saying to each other, or there should be rules to keep us safe. And I said, well, it's really good that you said that, because actually... and I took them off. I was like, these are actually AI glasses. And you should have seen the room. It was like a mic drop. And I went, oh, and for those of you that were speaking to me at breakfast this morning, those were also AI glasses. I haven't put them on during any of this, but I could have done. And people were like, we thought you were just really trendy, because they'd never met me before. And I was like, no. So, going back to it: those are the dangers of them. Well, we know the data's being stored in China, but we don't know where those images that I've taken, that are on my phone and in my glasses, have gone. Well, actually, they're going to be in a data source somewhere.
If I've got these on in school and I'm taking images of children, which obviously I'm not, just to protect myself, then if they were in the wrong hands, we could have a huge issue. Now, what a group of us are doing: there were a few of us from the Digital Futures group, and we've put together a one-pager that we shared on LinkedIn on Saturday morning called AI Wearables: the Good, the Bad and the Reality. Because actually we know that they're out there, and we know there's more and more coming. If you go onto cyber pages and spy packages, you can get, you know, like the old Glade air fresheners, the white plastic ones: people have built cameras into them. People have built cameras into coffee cups. So you can't just do an all-out ban. The way that myself and others are supporting schools, what I mean about some people doing the shiny stuff and a group of us doing the "did you know it's actually this bad" element, is: you don't ban, you educate. You get people talking about AI and challenging AI. I mean, I've got some people now going, I'm literally going around checking everyone's glasses. I'm like, fine, if that's what we have to do to keep our children safe for now. Because they're just out there everywhere. The Ray-Ban ones are £300 at least, so we know that actually not everyone's going to be able to afford them, and that data's being stored with Meta. These are not. So I'd say, with all of this: one, if you're watching this and you haven't seen them, please reach out to the Digital Futures group, because we will show you what AI glasses look like; they look like normal glasses. Two, as opposed to banning things, think about a recordable-devices statement, because then it covers you for recording: recordable pins, phones, glasses. If you want to connect with the Digital Futures Group to learn about an AI class.
Daniel 34:27
There we go.
Gemma 34:31
There we go. So I'm just going to switch my... "Could you please clarify how you would like me to assist you with reaching out to them?" I don't want you to. I mean, that couldn't have happened any better, could it? That was not planned. And that wasn't even on those.
Daniel 34:45
Imagine that happening in the middle of an exam though. You'd be busted.
Gemma 34:49
Yeah, I mean, that would be... but that's come out of my phone, it wouldn't... there we go, see, it's listening. But you've got to know what it is. You need to keep on top of the trends. You've got to understand. As I was saying at my conference last week, you need to see things through the eyes of our children. It can't be top down; my curriculum is not top down. You've got to start from our young people: what are they being exposed to? What are they seeing? How are they interacting with the AI? And they will be. They'll be asking for these glasses for Christmas. Why wouldn't you?
Daniel 35:18
That's true, but I think in order to get to that point, if you're having conversations with, I mean, every year group, right, around what responsible use of this technology looks like, you need to be able to find the time to do that. We talk a lot with school leads, heads of department and teachers around responsible use and best practice, and a lot of it really just boils down to that education piece: helping young learners understand the implications of what it is they're doing and the technology that they're using. But finding time to support that learning is the major challenge. I'm wondering if any of your schools that are perhaps a little further along the AI journey have found a good way of being able to do that.
Gemma 36:03
Our schools have very much taken on the eSafety approach. And actually, if you think about it, AI is in KCSIE, Keeping Children Safe in Education, and this is part of keeping our children safe. It's about educating them. It's about doing it carefully, though: we don't want to scare, we don't want to ban, because there are some positive uses for them. But it's got to be positive use with whichever tool we've given them that is safe and secure. So I do get the time piece, but actually our biggest responsibility in schools is to keep our children safe. I mean, when it was just ChatGPT a few years ago, that was obviously only a very small part of it. And you've just witnessed this live on this podcast: apparently my glasses are currently switched on now that I've put them on my face, and now they're listening to everything and they want to contact the Digital Futures Group.
Daniel 36:52
That's a good place to wrap up, Gemma, I think, because there's a huge amount to learn from the work that you're doing. The Digital Futures Group is obviously the place to be if you're interested in going on this journey together, because even with the work you've done, I think we're still very much at the starting point, right, if you're talking about some of the technology that's out there and how available it is as well. I mean, you're talking about a device that's less than £10 there. There is a scare factor in there, for sure. Gemma, as we wrap up, any final thoughts, particularly for school leaders who might be thinking, how do I begin to implement a policy around this? Any last words that you want to share, just in terms of helping them with confidence?
Gemma 37:41
Yeah, well, I mean, it ties in really nicely, because now that we've got our AI mark for the city, we are now rolling this out, with a commitment to supporting all of our schools and trusts within the city to get their individual mark. I also work with a group up near Birmingham, in Walsall, Sandwell and Dudley, where we are supporting a small group of schools, first of all, to go through their AI Quality Mark, because by doing this they're able to review everything about their school's use of AI: down to policies, thinking about SEND, thinking about exams, thinking about communication, thinking about curriculum. That is all built within it. So I'm doing a small group of schools in autumn; they're kind of our gold schools, and they have the intense support. Then we're taking our next lot of schools on from January, but using our gold schools to support them. And that's what I'd be recommending to others. Obviously, I'd rather you did the AI Quality Mark properly, but even if you don't, look at those questions and already begin to ask yourself: where are we? Have we got the right people on the ground? Are we being open and transparent? And also ask your staff. Send out an anonymous survey and say, I'm going to do it anonymously; I want to know who's using AI, what you're using, how often and what for. And just say to all staff, be open and transparent. Because, one, if they're using it in a way where they could be uploading data, you can rectify that straight away. Two, if they're using tools that they're paying for that they don't need to, because it's a wrapper as opposed to generative AI, you can say, oh, some people mentioned they used such-and-such a tool, but actually, did you know you can achieve the same in Copilot or Gemini? So really start off with your staff, then break it down.
Then take that into an action plan to support them, and that way all of those little nuggets that have taken us as a city years to get to grips with are there. And then it's about following people as well, and not necessarily just the shiny people. Matthew Omus has done loads of stuff around bots. Obviously I've mentioned that Emma Darcy, Darren White and I are all working very much on the safeguarding element, but then having people like Holly Foxcroft. That's not even...
Daniel 39:57
I'm going to do the next episode with your glasses. It'll just be the two of us.
Gemma 40:01
Could you please specify? Right. My app's not even open, so this is now really scaring me.
Daniel 40:07
What a place to wrap up. Gemma, thank you so, so very much for being on Foundational Impact. It's wonderful catching up with you and hearing about the amazing work you're doing. Really appreciate it.
Gemma 40:18
Thank you.
