Erin Mote: The AI Research to Classroom Gap No One is Talking About
Transcript
Daniel Emmerson 00:01
Welcome to Foundational Impact, a podcast series that focuses on education and artificial intelligence from a nonprofit perspective. My name is Daniel Emmerson and I'm the Executive Director of Good Future Foundation, a nonprofit whose mission is to equip educators to confidently prepare all students, regardless of their background, to benefit from and succeed in an AI infused world.
Daniel Emmerson 00:28
Welcome everybody, to another edition of Foundational Impact. Thank you so much for being here and thank you for joining us. And also a huge thank you to our guest for today. We have with us Erin Mote, who is or seems to belong to so many different projects and organisations, it's hard to pick out which one is going to be the key for us to address here. Although InnovateEDU, I think, is the primary one. Erin, thank you so much for being with us on the show today. I'd love it if we could start by just sort of unpacking some of the roles and the projects that you're responsible for. I think that a number of our listeners will be familiar with Project Unicorn in particular, but maybe if we take a step back and look at InnovateEDU and the work that you do there, can you tell us a bit about what it is and how it relates to AI in schools?
Uncommon alliances for system change
Erin Mote 01:23
Yeah, absolutely. And thanks for having me. I'm so excited to be in conversation with you today, and thanks so much to the listeners for joining as well. So at InnovateEDU, I really talk about it as a house of brands, not a branded house, because the fundamental architecture for action at InnovateEDU is this idea of uncommon alliances: how do we bring people together across policy, practice, and technology, including industry, to work at system-level change in education? And for your listeners, I'm happy to share in the show notes our manifesto as an organisation around the types of change we look at and our architecture for action. So when we choose a product or project to take on, why do we do it? It's really focused on how we bring folks together to find common ground to advance systemic change in education.
Explanation of data interoperability challenges in education
Erin Mote
Our oldest project is Project Unicorn. It's celebrating its 10th birthday this year, if you can imagine: a 10-year alliance working on data interoperability. Don't worry, we're going to get a nice unicorn cake for those steering committee members. We'll have a whole party. But that project really started because, as someone who built and led schools in New York City, I was experiencing what it was like to have a ton of different technology applications that my teachers and the school were using, from multi-tiered systems of support, to the student information system we were using to take attendance, to the formative assessment tool my science teacher was using for hinge questions to check for understanding. And these tools never worked together. What was challenging about that is I was watching so many of my educators spend their Sundays rostering tools. I was watching them bring printouts of data from these platforms to their team meetings, their grade-level meetings, our whole-faculty meetings. And as an enterprise architect, I just thought, this is stupid. So, with the Data Wiz Crew, a community of practice that we also founded and shepherd, we did some mapping of what tools people were using and what that looked like. From there we really discovered a couple of things. One, so many of us were developing and building interoperability ourselves. We were the glue holding these tools together, maybe with spreadsheets, maybe with some Airtable code. By and large, we were the interoperability engineers, and we were each doing it ourselves, which also felt really stupid. And some of us were paying for it: in some cases, school districts and school networks were paying six figures to get their data out of curriculum tools.
Daniel Emmerson 04:32
This is the connectivity of data between different platforms and systems.
Erin Mote 04:39
Yeah. So think about being able to move data safely and securely between systems, or a teacher having the insight to see what's happening in a science class when they're working in a math class: is this student maybe struggling with some of the same conceptual understanding in science as they might be in math? Really giving that whole picture of a student. And so field-based insight has always driven our sense of when something is ripe for action. That's how Project Unicorn was born. That's how the EDSAFE Alliance was born.
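The DIY "glue" Erin describes, educators joining exports from tools that never talk to each other, can be sketched in a few lines. This is a hedged illustration only: the file contents, the column names (`student_id`, `sid`, `math_mastery_pct`), and the join logic are all hypothetical, not any real vendor's schema or interoperability standard.

```python
# A minimal sketch of hand-built interoperability: joining a simulated
# student information system (SIS) export with a simulated formative
# assessment export by student ID. All fields and records are made up.

import csv
import io

# Simulated SIS export (attendance data)
sis_csv = """student_id,name,days_absent
1001,Ana,2
1002,Ben,0
"""

# Simulated formative assessment export; note the different ID column name,
# the kind of mismatch educators were bridging in spreadsheets.
assess_csv = """sid,math_mastery_pct
1001,64
1002,91
"""

def load(text, key):
    """Parse a CSV export into a dict keyed by the given ID column."""
    return {row[key]: row for row in csv.DictReader(io.StringIO(text))}

def join_by_student(sis_text, assess_text):
    """Merge the two exports into one whole-picture record per student."""
    sis = load(sis_text, "student_id")
    assess = load(assess_text, "sid")
    merged = []
    for sid, row in sis.items():
        extra = assess.get(sid, {})  # tolerate students missing from one tool
        merged.append({
            "student_id": sid,
            "name": row["name"],
            "days_absent": int(row["days_absent"]),
            "math_mastery_pct": int(extra.get("math_mastery_pct", -1)),
        })
    return merged

if __name__ == "__main__":
    for record in join_by_student(sis_csv, assess_csv):
        print(record)
```

The point of the sketch is the pain, not the code: every school doing this by hand, per pair of tools, is exactly the duplicated effort Project Unicorn and shared interoperability standards aim to eliminate.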
EDSAFE AI Alliance formation
Erin Mote
In 2020, before the consumer breakthrough of generative AI and ChatGPT, we formed EDSAFE because we were very concerned about the things we were seeing around machine learning and algorithms in education tools: things that were, frankly, tracking kids and really restricting access and opportunity. And we thought it was important that we come up with a framework. The SAFE framework, now used globally, focuses on the safety, accountability, fairness, transparency and efficacy of AI tools. So everything at InnovateEDU, even though the projects feel distinct, is A, ripe for systems change, and B, something that really came from educators in the field saying we need to move in order to achieve this vision, which is access and opportunity for all learners to engage in education in a way that moves the needle for them.
Learning Personalization vs. Tracking
Daniel Emmerson 06:14
Can we unpack that a little bit? The tracking kids in particular that you mentioned, and restricting opportunity. I want to pick up that thread, if I may, because when we look at a number of the AI-focused solutions that are out there today and how they're promoted, those two things don't quite go together, right? The tracking is seen as something that increases potential or opportunity for learners. Can you help me explore a little more what that restrictive element looks like and why that's important?
Erin Mote 06:50
Well, I think the opportunity for personalisation in education with AI tools is extremely high. But we have to understand that what we know from the science of learning and development is that learning is jagged: not all kids are proficient at the same time, in the same way, in the same skill strand. And in fact, we've built an education system, a schooling system, that's focused on the average. What we're understanding from the learning sciences is that there actually is no average when it comes to how we learn. So we have a system that is asymmetric to the research and evidence base around learning that's emerged, when you think about how much we've learned about the brain and the learning sciences over the last 20 years. We used to have this idea in the United States that if kids weren't on track to read by third grade, it was sort of all over when it came to reading and literacy. What we now understand about the brain is that there are really two significant windows for neuroplasticity in a child's life, and neither is at third grade. The first is zero to two: think about all the evolution a baby does to get from what they can do at zero to what they can do by two, walking, talking, relating, emoting, all those things. The second is as a child goes through puberty, so generally around 11, 12, 13. These are times of enormous neuroplasticity. So we used to say, well, if you're not on track to read by third grade, you're sort of lost in our learning architecture. And the reality is that's actually not true: there is a way to use data and personalisation to backfill literacy gaps and math gaps in order to keep kids on grade level, and even accelerate them beyond grade level to proficiency. I experienced that all the time at Brooklyn Lab.
But there's also a dark side to this personalisation, which is that some educational technology software doesn't give young people the ability to prove what they know beyond a really stratified grade band. Because learning is jagged, you might not know fractions in sixth grade because you didn't get your third grade number line, but you're able to do some other sixth grade skill strands. What can be really challenging about the way some education technology is built, in terms of tracking, is that it doesn't have allowances for that jaggedness. One of the things we see in the best types of edtech tools is that they allow students both to do what is classically known as skill remediation and what is known as skill expansion, zooming ahead. The vast majority of tools don't allow for that type of differentiation and that student-driven agency. And so one of the things we were seeing in 2020, when we formed the EDSAFE AI Alliance, was that the vast majority of tools were really tracking students. Breaking through those barriers to content access and opportunity, the ability to demonstrate what you know, maybe above grade level, the ability to have agency in the way you're choosing content and curriculum, whether you're an educator or a student, was incredibly restrictive. We also saw, frankly, student profiling happening in a number of tools. In the behavior space, we were seeing things like race and ethnicity, homeless status, and disability status triggering a different type of response when it came to student discipline, because of how some of these systems were using that demographic profiling, versus a student who might not have those characteristics.
And so data can be so powerful, but we also have to be critical and interrogate the outputs that data is giving us, because so much of how data is served up depends on the underlying data source you're considering, and also on some of those data fields. We want to make sure that all students have the opportunity to punch above their weight. That, for me, is really, really important. And so we have to be deliberate in the design of our tools in order to get there.
Establishing standards for AI tools using the SAFE framework
Daniel Emmerson 11:53
So when you were investigating that, going back to 2020, of course, this is way before the mainstream gen AI tools that everyone is very familiar with today. Was that research initially conducted in the US, or did it span internationally? What sort of geographies were you operating in at that point?
Erin Mote 12:13
Yeah, so there was some global research happening, but mostly in the US. And there were some pretty landmark RCTs, randomised control trials, happening in Florida and Texas and New York that were starting to show these disturbing patterns emerging from machine learning and from algorithmic bias. So it's really important for folks to know that while generative AI feels like this whole brand new amazing thing, AI has existed since World War II, and the use of AI in education, around machine learning and algorithms, is something that's been happening for decades. The arrival of generative AI, particularly in the consumer space, took me, and I know so many others, by surprise. And really the game has changed now with the ubiquity of these tools in classrooms. But the standards around safety, accountability, fairness, transparency and efficacy remain, I think, a constant and steadfast way for us to think about whether we should be using AI in education and under what circumstances. And for me, safety is table stakes in this conversation.
Daniel Emmerson 13:33
That's your acronym, right? That's where SAFE comes from. And I'd love to talk to you more about that in a moment. But you mentioned your manifesto earlier, and the reason I was asking about where the research is happening, I think this is where I picked this up. You were talking about scarcity as a problem, and you referred to a fixed pie, a percentage of the pie that education or systems might be looking to take, and a limit on what people can learn from and develop themselves. In fact, we need to be thinking about these problems, particularly around AI, from a very different perspective when it comes to sharing what we know and understand, and collaborating more. I was wondering if we could investigate whether that still holds true from the time you wrote the manifesto, or if it looks a little different today.
Rejection of scarcity mindset in favor of collaboration
Erin Mote 14:26
Yeah, I mean, this idea of a fixed pie is really an economic concept. I did not invent the fixed pie. It's the idea that in order for me to get something, you have to lose something; that we're in a situation, whether that's organisations competing for resources, or students in two different districts or systems or schools or networks, where someone has to lose for me to gain something. I just reject that as a premise. I think the type of systems change and movement building we need to be doing in education means we need to be thinking about how we collaborate together, how we share resources, how we think about best practices. And I think there's a bit of a clarion call in this moment around AI, because I talk about research and evidence all the time, and I think evidence needs to be the undergirding of every decision we make in education. And yet it takes time to generate randomised control trials or quasi-experimental designs, the type of really high-level research that can help us make these decisions over the long term. And if you're a teacher, you need to know if something's not working in November, not five years from now. So I think we need to be thinking about the evidence enterprise in ways that preserve those things. Quasi-experimental designs and RCTs definitely have their place; they're really, really important. And we need to be asking: does every tool we're bringing into our classroom at least have a logic model, an educated hypothesis about what it's trying to do, and is it working up an evidence chain? Are we, as organisations and practitioners, confident that if we're sharing best practices and collaborating together, we're moving that evidence chain?
I think the other thing that's really important is that we have far more in common than we do that's different. A lot of times I'll be engaged in conversations across international lines or state lines or district lines, and folks will say, well, that worked for that school, but it's not going to work for my school because of this, this and this. That, I think, is really challenging, because while I love that every community is unique and has a special context, I think there are things we can learn from each other if we're in a learning posture. And it's always ironic to me that in education, which is basically the business of learning, I see so many organisations and folks who aren't in a learning posture, who aren't thinking about curiosity and challenge, discernment and pushing on each other. So much of the culture at InnovateEDU is to not think we have it all figured out, but to ask: how do we build a table that brings together diverse perspectives who oftentimes don't work together, and who, frankly, in an increasingly polarised political situation here in the United States, might not have ever spoken to each other before? How do you build trust? How do you find common ground? How do you remove this idea that for you to get this, I have to lose something? And how do we create the right incentives to build movements and push people to action? I think there's kind of no greater time for us to be in a learning posture than right now. When we look at how much AI is in our classrooms, no matter what country you're in, usage is in the high 70s or 80s percent. And we are not going to be able to wait for an evidence base that takes five years to generate an educated hypothesis about what we should be doing. So this is really, I think, a clarion call for all of us to come together and think about how we share those best practices, how we find inspiration in each other.
And I'm certainly inspired by some of the work that's happening in the UK right now around test beds, around shared infrastructure. I've always been such a huge fan, I'm such a fan girl of the Education Endowment Foundation and how they really try to think about making research more discernible to practitioners. And those are the things that I think we need to be looking at and thinking about. How do we rapidly build those public infrastructures for understanding, but also for collaboration?
Daniel Emmerson 19:18
Have recent genAI tools pushed that level of eagerness to share? Because I suppose everyone was at a similar starting point. Whether we're thinking about schools or districts, everyone was thinking, crikey, you know, in November 2022, when ChatGPT launched on GPT-3.5. Even though, of course, AI existed long before that, this was the main mainstreamification of the technology. Did that encourage people to share more? Did it encourage school districts to reframe what they were thinking, to say maybe we could collaborate more, maybe we could share more around our practice? I'm wondering if that was a catalyst at all in the US. In the UK, I think it probably was.
Building national collaboration infrastructure through policy labs
Erin Mote 20:05
I think in some ways it was, and in some ways it wasn't. One of the things the UK has that the US doesn't is a set of regulatory undergirding around child privacy and data privacy that sits at the national level. We don't have that in the US. We have some pieces of the puzzle, but not with the level of sophistication that, frankly, other parts of the world do. And what that often means is that collaboration here happens in states rather than nationally. But this is a place where, at EDSAFE, we built structures right away to try to build that national collaboration. We have a group of policy labs, and it actually started with the biggest school district in the United States: New York City Public Schools was our first policy lab. We started working with them the day after they banned generative AI. They banned generative AI and said, we're not ready. And then they said, but we're going to get there. And so we've worked with them over the last three years to really put together policy and guidance, which they actually just released into the world two weeks ago. I'm happy to share it for the show notes. But it took a really deliberative process and a lot of internal work around their procurement systems, making sure they could look parents in the face and say, we're not putting any tools in front of kids that haven't met our privacy, security and interoperability standards. They have a funny acronym for it: ERMA. Not a person, but that's literally the name of the process, the ERMA process. And they did a huge amount of refinement of the ERMA process to really be able to make that promise to parents, teachers, communities, and students. So we built this national network of policy labs. Cohort one, I would say, were the fast runners.
Gwinnett County in Georgia; Cañon City, Colorado; El Segundo and Santa Ana in California; and so on. We did it with only 12, and they were their own cohort, and they did a ton of sharing. One of the essential elements was that whatever they developed became open source and open science: we would publish it, and then other school districts could take it as starter dough. We don't want them to copy it wholesale, but we want them to have a starting point. And then EDSAFE built a series of resources, from a glossary to planning resources to things you could print out as worksheets for your staff meeting around generative AI. That network has now grown to 29 districts and 10 states here in the US. So over the last three years, I think we have really built a national collaboration infrastructure that's not just about policy, and not just about having a unified policy stack and practice; it's really about building human connections with people so they can share expertise and best practices, so they can pick up the phone and call each other. The very beautiful thing that I love so much is when I'm talking to our policy lab leaders, state leaders or district leaders, and they say, oh yeah, I just talked to XYZ. It makes me so happy that you picked up the phone and called somebody that's a state away, or a neighboring district, and had a conversation about a challenge you were having or an opportunity, and that you problem-solved together. And, not to get too philosophical, but that's actually the origin of learning. That is the Socratic method that existed in ancient Greece long before any of us: this idea of folks coming together collaboratively, in person or virtually, and thinking through the future they want to imagine for education, for AI, for young people, and for themselves.
So it makes me feel really good about building that collaboration infrastructure, but you have to be deliberate about building it, because people aren't inclined to do it naturally. There have to be structures and systems and trust that underpin this type of network.
Daniel Emmerson 24:37
And how did you go about developing that when you started?
Erin Mote 24:40
Yeah, part of it is that we have a whole architecture for action, which I'll share. But part of it is that the pain was really, really acute, so people were willing to collaborate in different ways. Folks were facing questions and concerns; they were wrestling with hard things they knew they couldn't do themselves. They might have lacked the technical expertise and knowledge to understand AI and how it worked. So EDSAFE provides a lot of technical assistance and materials and resources. I do an Ask Me Anything every month with all of our policy lab leaders. We don't really have an agenda in these meetings; there's a loose agenda, but it really is just an hour of what is top of mind and what folks are thinking about. And then at EDSAFE we ask whether this is something at a national level where we need to build resources or materials, or lean in in terms of policy. So there are all these types of structures: what I would consider proactive infrastructures for collaboration, trust building and communication, and then reactive infrastructures, so that if someone says, this is a serious problem for me, we have capacity to sprint to meet it. And I'll name an example.
Addressing Chatbot safety and making policy impact
In February, we released a policy paper around the SAFE framework and chatbot and companion use in education. That was entirely responsive to what we were seeing as demonstrable harms happening in schools and with young people around the use of AI companions and chatbots: consumer tools not properly handing off students in crisis in line with mandated reporter practices, and the use of companions to form intimate relationships with young people. So we scaled and ran fast: we put together a task team that released this policy paper in February, and we were going to release mandated reporter guidance in February aligned to it. Frankly, our policy labs and our state ed chiefs, and even some governors, said, I need you to release this ASAP, because they were wrestling with really important legal considerations and questions about what happens if a young person ideates about suicide, or is in mental health crisis, in a chatbot or companion they're accessing through school technology. What is our responsibility? How do we put forward a process? So we released that guidance. And one of our school districts in the policy lab took on open-sourcing their chatbots and companions policy by January. So even before the policy paper was released, we had released guidance, given legal instruction, and had a model policy from a school district on the ground, so that as school districts read this paper, they're not just worried and aware; there are actual action steps for what to do and how to move forward. So again: the proactive infrastructures and the reactive infrastructures. And if you look at what has happened in the US since, state legislatures have introduced almost 90 bills prohibiting the types of behavior we call out in the policy paper.
So we know our state policymakers are picking that policy paper up, and they're writing legislation and policy to regulate foreseeable harms. And then in mid-to-late March, Congress actually got a bill through committee, called the SafeBots bill, that's aligned to the research agenda and the policy paper, and it's now up for consideration in the full House, with a companion bill in the Senate. What's really important is that where we are aligned on safety, where we have found common ground, where we have done this together, we can move fast to protect young people's learning experience, but also their foundational safety when it comes to these tools. I'm really proud of the coalition that's come together to do that work, and of all the reactive and proactive work that school districts and leaders have done to say, not on our watch.
Daniel Emmerson 29:12
It's an incredible achievement. Huge, huge congratulations. I know it's an ongoing piece of work.
Erin Mote 29:18
Oh, yeah. We'll be in Congress on 28 April for a briefing, so stay tuned, y'all. This is not the end. But we're also finding community with the DfE in the UK around standards to protect young people around chatbots and companions. How do we bring some of that really great work happening internationally to inform our work here in the US, and vice versa? We see governments all over the world wrestling with this question: where there are demonstrable, foreseeable harms, how do we move faster to protect young people?
Procurement as an expression of values and the policy stack approach
Daniel Emmerson 29:59
Well, your SAFE: safe, accountable, fair and efficacious, right? Those are the principles that drive the EDSAFE initiative. I'm wondering, are there some core takeaways, key takeaways I should say, for folks who aren't familiar with this work, particularly those overseas, here in the UK or elsewhere, to get people thinking about procurement and deployment, and what they should and shouldn't be accessing in a school environment?
Erin Mote 30:30
Procurement is like the sexiest thing, I think, in our ecosystem, because it is the way we express our values; it is the way we express what we hope for young people. And so we have a policy stack. It's really funny: when our policy labs first come to us, along with our team, which a woman named Andrea Klaber leads at EDSAFE, they want to start with procurement. And I say, I love that you want to start with procurement, but there's a whole set of things we need to do before procurement that are about understanding what our intention is. So the very bottom step of the policy stack is a board and district, school, or school network vision statement; in the case of states, it's often a state vision statement about what they hope for AI in education, so we're really clear about what we want the outcome to be. And then the top of the stack is actually procurement.
Quality indicators for EdTech
New York is a great example of how long it took to get there, so that the procurement system was aligned. But we think there are, and we have put a stake in the ground around, five quality indicators for edtech use in general. We're really focused on drawing a distinction between purpose-built education technology tools, built for use in schools, and consumer tools. The reality in the US, and I know also in the UK, is that it's a bit of a mixed bag as to what's being used in classrooms. There are consumer tools being used in classrooms that were never intended for educational purposes. So first, how do we draw that distinction between purpose-built educational tools, tools designed for learning, tools designed for Socratic thinking rather than sycophancy, and consumer tools? And then, for the procurement process, along with a consortium of organisations, we've developed five quality indicators. Just in March, with Instructure, a huge for-profit edtech company that has Canvas as one of its major offerings, we released the 2026 evidence report, which uses these quality indicators to contrast consumer tools with edtech tools across the top 150 used in the US, and tells folks where each stands on those indicators: interoperability, privacy, security, accessibility, efficacy, and inclusion, where universal design for learning is the measure. It uses market certifications to start to give people some idea of how a tool stacks up against an evidence base, and how it stacks up on privacy and security. We can't leave that to every tech director, educator and school; that is not fair. And so this is really a clarion call.
We hope for the edtech industry to prioritise evidence and to understand that we need to give clear market signals with this distinction between consumer tech and edtech. I will share the Instructure report with all of you, and in just a week we'll release that list of 150, how we classified them, and specifically which certifications they each have. But I think these are the ecosystem levers that organisations like InnovateEDU and others need to be pulling in order to make it easier and more transparent for parents, communities, students and educators to use edtech the way it should be used: in support of a human-centered learning experience.
Guidance for school leaders on addressing AI implementation
Daniel Emmerson 34:19
I suppose just to wrap up, Erin, on a similar point: of course, procurement is where a lot of decisions get left to those responsible for the DPIA; it's the final part of the process. Thinking this through a process lens is the way to approach it: what is the purpose of the technology that you're deploying? As a head of school, though, you're confronted with this seeming need to do something about AI because of how prevalent it is in the headlines, how frequently parents are talking about it, and the knowledge that your students are using it. What might you say to a head of school who's really struggling to find that first rung on the ladder at the moment?
Erin Mote 35:00
As a former head of school who led a middle and high school in New York City, I have such deep empathy for this question and for folks who are in that situation. And the first thing I want to say is: you do have to do something. You can't walk away right now and say, well, we're just not going to deal with that. So first, do something. And I think that something has to be centered around building AI literacy with your students, with your educators, and with your communities. And I'm deliberate about calling those three things out, because the set of developmentally appropriate things we're doing to build AI literacy with students can look really different. In the K-1-2-3 space it can be about sorting and categorising and the principles that underlie computational thinking and AI, and then it can get to actual tool use as you gradually go up the grades. And we have a paper, which I'll share with you, which is a whole blueprint for AI literacy anchored in the science of learning and developmentally appropriate AI literacy tools. And then the other thing I would say is that educators are another group we really need to be calling in.
Learning from Social Media Mistakes
When I see that here in the US only 30% of districts and states have provided guidance about AI use, not even policy, just guidance, while 86% of educators, almost 90%, say they're using AI at least once a week in their classroom, that gap is not acceptable. That is what happened with social media. We are on that path right now. So as an ecosystem, we need to lean in and think about how we build the capacity, knowledge and expertise of our frontline educators with AI literacy, so they can understand these tools and be critical consumers of this technology. I'm not an accelerationist. I think we need to be scouts. We need to balance the promise and peril of this technology. But we must equip frontline educators with AI literacy now. And that's not just teachers. That's the school counselor, student support, and here we have paraprofessionals, you all have student aides. It's the whole ecosystem of educators that needs to understand and be able to interrogate this technology, its inputs, its outputs, and how they're using it in their practice. And then it's parents and communities. So much of where we dropped the ball in the US on social media, and I'm part of that. I was running a school in 2014, and I remember saying to my staff, and it's one of those things I think about now and am just embarrassed by, I said: we block social media in this school; with our E-Rate funding we have Wi-Fi filters and so on and so forth; this is something that parents need to address at home. I abdicated responsibility. I should never have done that. And I think many of us did that around social media. It wasn't until a little bit later that we saw how social media was coming into our classrooms.
Behavior that was happening out of school was deeply affecting relationships in school, and that's when I said, okay, we have to do something about media and digital literacy and social media. But it was a couple of years later. In the spirit of radical candor, what I should have done differently is what I want us to learn from. I want us to say, okay, the school and home divide is not a divide at all. We need to be educating our parents. We need to be calling in our caregivers. We need to really help them understand these tools, because what happens at home will come into the school. And if our parents and our students aren't equipped with the ability to be critical consumers of this technology, then we're going to repeat the mistake we made with social media. The good news is we can learn. We can be in that learning posture. We can be curious about the mistakes we made before, and we can fix them. And we can fix them right now. So I would start with AI literacy. We in the US have National AI Literacy Day; we lead it here at InnovateEDU. I know you all have a similar activation day in the UK. There are lots of resources available out there that are free. Folks can go to ailiteracyday.org and grab free lessons, free professional development, and so on. And while it is US focused, I will say that I think the curriculum and lessons are sort of boundaryless. But they do still say math, not maths, just in full disclosure, everyone. They are not culturally responsive to the math versus maths debate.
Daniel Emmerson 40:01
I've been pulled up on that. I have used math in seminars with teachers in the UK and been chastised. So. Okay.
Erin Mote 40:11
Me too. So I'm very aware that I waded into a cultural debate there that I shouldn't have, long ago. But, you know, I think the UK, with the Big AI project, with the resources the DfE is putting out, with the stuff that I think Oak has developed, even the test bed work, that investment is going to yield some really important shared public infrastructure that I know we're going to learn from here in the US, and that I think can be a model for the world. And we hope we can share what we're learning in terms of best practices. Again, it's not a fixed pie. How do we rise to this moment together?
Daniel Emmerson 40:58
Well, I'm looking forward to sharing this episode very, very much with our listeners and indeed all of the resources that you mentioned. Erin, thank you so, so much for being with us today. It's been an absolute pleasure speaking with you.
Erin Mote 41:08
Oh, Daniel, thanks for having me.
