Erin Mote: The AI Research to Classroom Gap No One is Talking About

April 21, 2026

Transcript

Daniel Emmerson 00:01
Welcome to Foundational Impact, a podcast series that focuses on education and artificial intelligence from a nonprofit perspective. My name is Daniel Emmerson and I'm the Executive Director of Good Future Foundation, a nonprofit whose mission is to equip educators to confidently prepare all students, regardless of their background, to benefit from and succeed in an AI infused world.
Daniel Emmerson 00:28
Welcome everybody, to another edition of Foundational Impact. Thank you so much for being here and thank you for joining us. And also a huge thank you to our guest for today. We have with us Erin Mote, who belongs to, or seems to belong to, so many different projects and organisations that it's hard to pick out which one is going to be the key one for us to address here, although InnovateEDU, I think, is the primary one. Erin, thank you so much for being with us on the show today. I'd love it if we could start by just unpacking some of the roles and the projects that you're responsible for. I think that a number of our listeners will be familiar with Project Unicorn in particular, but maybe if we take a step back and look at InnovateEDU and the work that you do there, can you tell us a bit about what it is and how it relates to AI in schools?
Uncommon alliances for system change
Erin Mote 01:23
Yeah, absolutely. And thanks for having me. I'm so excited to be in conversation with you today, and thanks so much to the listeners for joining as well. So at InnovateEDU, I really talk about it as a house of brands, not a branded house, because the fundamental architecture for action at InnovateEDU is this idea of uncommon alliances: how do we bring people together across policy, practice and technology, including industry, to work at system level change in education? And for your listeners, and I'm happy to share in the show notes, we have a manifesto as an organisation around the types of change we look at and our architecture for action. So when we choose a product or project to take on, why do we do it? And it's really focused on how we bring folks together to find common ground to advance systemic change in education.
Explanation of data interoperability challenges in education 
Erin Mote
Our oldest project is Project Unicorn. It's celebrating its 10th birthday, if you could imagine: a 10 year alliance working on data interoperability this year. Don't worry, we're going to get a nice unicorn cake for those spirit committee members. We'll have a whole party. But that project really started because, as someone who built and led schools in New York City, I was experiencing what it was like to have a ton of different technology applications that my teachers were using and that the school was using: from multi tiered systems of support, to the student information system we were using to take attendance, to the formative assessment tool that my science teacher was using for hinge questions to check for understanding. And these tools never worked together. What was challenging about that is I was watching so many of my educators spend their Sundays rostering tools. I was watching them bring printouts of data from these platforms to their team meetings, their grade level meetings, our whole faculty meetings. And as an enterprise architect, I just was like, this is stupid. And so, with a community of practice that we shepherd and founded, called the Data Wiz Crew, we did some mapping of what tools people were using and what that looked like. And from there we discovered a couple of things. One, that so many of us were developing and building interoperability ourselves. We were the glue that was holding these tools together, maybe with spreadsheets, maybe with some Airtable code. But by and large, we were the interoperability engineers, and we were each doing it ourselves, which also felt really stupid. And some of us were paying for it: in some cases, school districts and school networks were paying six figures to get their data out of curriculum tools.
Daniel Emmerson 04:32
This is the connectivity of data between different platforms and systems.
Erin Mote 04:39
Yeah. So think about being able to just move data safely and securely between systems. Or a teacher having the insight of being able to see what's happening in a science class when they're working in a math class: is this student maybe struggling with some of the same conceptual understanding in science as they might be struggling with in math? Really giving that whole picture of a student. And so field based insight has always driven when we think something is ripe for action. That's how Project Unicorn was born. That's how the EDSAFE Alliance was born.
EDSAFE AI Alliance formation
Erin Mote
In 2020, before the consumer breakthrough of generative AI and ChatGPT, we formed EDSAFE because we were very concerned about the things we were seeing around machine learning and algorithms in education tools, doing things that were, frankly, like tracking kids and really restricting access and opportunity. And we thought it was important that we come up with a framework. The S.A.F.E. framework, now used globally, focuses on the safety, accountability, fairness, transparency and efficacy of AI tools. So everything at InnovateEDU, even though it feels like they're distinct projects, are things that, A, are ripe for systems change and, B, really came from educators in the field saying we need to move in order to achieve this vision, which is access and opportunity for all learners to be able to really engage in education in a way that moves the needle for them.
Learning Personalization vs. Tracking
Daniel Emmerson 06:14
Can we unpack that a little bit? The tracking kids in particular that you mentioned, and restricting opportunity. I want to pick up that thread, if I may, just because when we look at a number of the AI focused solutions that are out there today and how they're promoted, those two things don't quite go together, right? The tracking is seen as something that increases potential or opportunity for learners. Can you help me explore a little bit more what that restrictive element looks like and why that's important?
Erin Mote 06:50
Well, I think the opportunity for personalisation in education with AI tools is extremely high. But we have to understand that what we know from the science of learning and development is that learning is jagged: not all kids are proficient at the same time, in the same way, in the same skill strand. And in fact, we've built an education system, a schooling system, that's focused on the average. I think what we're understanding from the learning sciences is that there actually is no average when it comes to how we learn and what that looks like. And so we have a system that is asymmetric, actually, to the research and evidence base around learning that's emerged, when you think about how much we've learned about the brain and the learning sciences over the last 20 years. You know, in the United States we used to have this idea that if kids weren't on track to read by third grade, it was sort of all over when we thought about reading and literacy. And what we now understand about the brain is that there are really two significant times for neuroplasticity in a child's life. And it's not at third grade; it's zero to two. Think about all the evolution a baby does to get from what they are able to do at zero to what they can do by two: walking, talking, relating, emoting, all those things. And the second time is actually as a child goes through puberty, so generally around 11, 12, 13. These are times of enormous neuroplasticity. So we used to say, well, if you're not on track to read by third grade, you're sort of lost in our learning architecture. And the reality is that's actually not true: there is a way to use data and personalisation to backfill literacy gaps and math gaps in order to keep kids on grade level, and accelerate them even beyond grade level to proficiency. I experienced that all the time at Brooklyn Lab.
But there's also a dark side to this personalisation, which is that some educational technology software doesn't give young people the ability to prove what they know beyond a really stratified grade band. Because we know that learning is jagged, you might not know fractions in sixth grade because you didn't get your third grade number line, but you're able to do some other sixth grade skill strands. What can be really challenging about some ways that education technology is built, in terms of tracking, is that it doesn't have allowances for that jaggedness. And so one of the things I think we see in the best types of edtech tools is tools that allow students to do both what is classically known as skill remediation and what is known as skill expansion, or zooming ahead. The vast majority of tools don't allow for that type of differentiation and that student driven agency. And so one of the things we were seeing in 2020, when we formed the EDSAFE AI Alliance, was that the vast majority of tools were really tracking students. The ability to break through those barriers to content access and opportunity, the ability to demonstrate what you know, maybe above grade level, the ability to have agency in the way that you're choosing content and curriculum, whether you're an educator or a student, was incredibly restricted. And we also saw in a number of tools, frankly, student profiling happening. We were seeing it in the behavior space: things like race and ethnicity, homeless status and disability status triggering a different type of response when it came to student discipline, because of how some of these systems were using that demographic profiling, versus a student who might not have those characteristics.
And so data can be so powerful, but we also have to be critical and interrogate the outputs of what that data is giving us, because so much of how data is served up really matters: both the underlying data source you're considering and some of those data fields. I think we want to make sure that all students have that opportunity to punch above their weight. That for me is really, really important. And so we have to be deliberate in the design of our tools in order to get there.
Establishing standards for AI tools using the SAFE framework
Daniel Emmerson 11:53
So when you were investigating that, I mean, going back to 2020, of course, this is way, way before the mainstream gen AI tools that everyone is very familiar with today. Was that research initially conducted in the US, or did it span internationally? What sort of geographies were you operating in at that point?
Erin Mote 12:13
Yeah, so there was some global research happening, but mostly in the US. And there were some pretty landmark RCTs, randomised controlled trials, happening in Florida and Texas and New York that were starting to show these disturbing patterns emerging from machine learning and from algorithmic bias. So it's really important for folks to know that while generative AI feels like this whole brand new, amazing thing, AI has existed since World War II, and the use of AI in education, around machine learning and algorithms, is something that's been happening for decades. The arrival of generative AI, particularly in the consumer space, took me, and I know so many others, by surprise. And really the game has changed now with the ubiquity of these tools in classrooms. But those standards around safety, accountability, fairness, transparency and efficacy remain, I think, a constant and steadfast way for us to think about whether we should be using AI in education and under what circumstances. And for me, safety is incredibly important as table stakes in this conversation.
Daniel Emmerson 13:33
That's your acronym, right? That's where it comes from. And I'd love to talk to you more about that in a moment. But you mentioned your manifesto earlier, and the reason that I was asking about where the research is happening, I think this is where I picked this up. You were talking about scarcity as a problem, and you referred to a fixed pie, a percentage of the pie that education or systems might be looking to take, and that there's a limit on what people can learn from and develop themselves. And in fact, we need to be thinking about these problems, particularly around AI, from a very different perspective when it comes to sharing what we know and understand and collaborating more. I was wondering if we could investigate whether that still holds true from the time you wrote the manifesto, or if that looks a little different today.
Rejection of scarcity mindset in favor of collaboration
Erin Mote 14:26
Yeah, I mean, this idea of a fixed pie is an economic concept. I did not invent the fixed pie, but it's the idea that in order for me to get something, you have to lose something. Whether that's organisations who might be competing for resources, or students who are in two different districts or systems or schools or networks, it's this idea that someone has to lose for me to gain something. I just reject that as a premise. I think that the type of systems change and movement building we need to be doing in education means that we need to be thinking about how we collaborate together, how we share resources, how we think about best practices. And I think there's a bit of a clarion call in this moment around AI, because I talk about research and evidence all the time, and I think evidence needs to be the undergirding of every decision that we make in education. And yet it takes time to generate randomised controlled trials or quasi experimental designs, the type of really high level research that can help us make these decisions over the long term. And if you're a teacher, you need to know if something's not working in November, not five years from now. So I think we need to be thinking about the evidence enterprise in ways that preserve those types of things. Quasi experimental designs and RCTs definitely have their place; they're really, really important. And we need to be asking: does every tool that we're bringing into our classroom at least have a logic model, an educated hypothesis about what it's trying to do, and is it working up an evidence chain? Are we, as organisations and practitioners, confident that if we're sharing best practices and collaborating together, we're moving up that evidence chain?
I think the other thing that's really important is that we have far more in common than we have that's different. A lot of times I'll be engaged in conversations across international lines or state lines or district lines, and I'll talk to folks and they'll say, well, that worked for that school, but it's not going to work for my school because of this, this and this. And that I think is really challenging, because while I love that every community is unique and has a special context, I think there are things we can learn from each other if we're in a learning posture. It's always ironic to me that in education, which is basically the business of learning, I see so many organisations and folks who aren't in a learning posture, who aren't thinking about curiosity and challenge, discernment and pushing on each other. So much of the culture at InnovateEDU is to not think we have it all figured out, but to ask how we build a table that brings together diverse perspectives who oftentimes don't work together, and who, frankly, in an increasingly polarised political situation here in the United States, might not have ever spoken to each other before. So how do you build trust? How do you find common ground? How do you remove this idea that for you to get this, I have to lose something? And how do we create the right incentives to build movements and push people to action? I think there's no greater time for us to be in a learning posture than right now. When we look at how much AI is in our classrooms, no matter what country you're in, it's in the high 70s or 80s in terms of percentage use, and we are not going to be able to wait for an evidence base that takes five years to generate educated hypotheses about what we should be doing. So this is really, I think, a clarion call for all of us to come together and to really think about how we share those best practices, how we find inspiration in each other.
And I'm certainly inspired by some of the work that's happening in the UK right now around test beds, around shared infrastructure. I've always been such a huge fan, I'm such a fan girl of the Education Endowment Foundation and how they really try to think about making research more discernible to practitioners. And those are the things that I think we need to be looking at and thinking about. How do we rapidly build those public infrastructures for understanding, but also for collaboration?
Daniel Emmerson 19:18
Have recent genAI tools pushed that level of eagerness to share? Because I suppose everyone was at a similar starting point. When we're thinking about either schools or districts, everyone was thinking, crikey, you know, in November 2022, with ChatGPT 3.5, everyone was at a similar starting point. Even though, of course, AI existed long before that, this was the main mainstreamification of the technology. Did that encourage people to share more? Did that encourage school districts to maybe reframe what they were thinking, as in, maybe we could collaborate more, maybe we could share more around our practice? I'm wondering if that was a catalyst at all in the US. In the UK, I think it probably was.
Building national collaboration infrastructure through policy labs
Erin Mote 20:05
I think in some ways it was, and in some ways it wasn't. One of the things I think the UK has that the US doesn't is a set of regulatory undergirding around child privacy and data privacy that sits at the national level. We don't have that in the US. We have some pieces of the puzzle, but not with the level of sophistication that, frankly, other parts of the world have. And I think what that often means is that collaboration here in the US often happens in states rather than nationally. But this is a place where, at EDSAFE, we built structures right away to try to build that national collaboration. So we have a group of policy labs, and it actually started with the biggest school district in the United States: New York City Public Schools was our first policy lab. We started working with them the day after they banned generative AI. So they banned generative AI, and they said, we're not ready. And then they said, but we're going to get there. We've worked with them over the last three years to really put together policy and guidance, and they actually just released it into the world two weeks ago, so I'm happy to share it for the show notes. But it took a really deliberative process and a lot of internal work around their procurement systems, making sure that they could look parents in the face and say, we're not putting any tools in front of kids that haven't met our privacy, security and interoperability standards. They have a funny acronym for it. It's called ERMA. Not a person, but that's literally the name of the process, the ERMA process. And they did a huge amount of refinement in the ERMA process to really be able to make that promise to parents, teachers, communities and students. And so we built this national network of policy labs. Cohort one, I would say, were the fast runners.
Gwinnett, you know, Canon City in Colorado, El Segundo, Santa Ana, so California school districts, Georgia school districts, and so on and so forth. We did it with only 12, and they were their own cohort, and they did a ton of sharing. But one of the essential elements of it is that whatever they developed became open source and open science, so we would publish it, and then other school districts could take it as starter dough. We don't want them to copy it wholesale, but we want them to have a starting point. And then EDSAFE built a series of resources, from a glossary to planning resources, to things you could print out as worksheets if you needed to for your staff meeting around generative AI. That network has now grown to 29 districts and 10 states here in the US. And so now, over the last three years, I think we have really built this national collaboration infrastructure that's not just about policy, and not just about having a unified policy stack and practice; it's really about building human connections with people to share expertise and best practices, so they can pick up the phone and call each other. The very beautiful thing that I love so much is when I'm talking to our policy labs, so state leaders or district leaders, and they're like, oh yeah, I just talked to XYZ. It makes me so happy that you picked up the phone and called somebody that's a state away, or a neighboring district, and had a conversation about a challenge you were having or an opportunity, and that you problem solved together. And actually, not to get too philosophical, but that's the origin of learning. That is the Socratic method that existed in ancient Greece long before any of us: this idea of folks coming together collaboratively, in person or virtually, and thinking through the future they want to imagine for education, for AI, for young people and for themselves.
So it makes me feel really good about building that collaboration infrastructure, but you have to be deliberate about building it because people aren't inclined to do it naturally. There has to be structures and systems and trust that underpins this type of network.
Daniel Emmerson 24:37
And how did you go about developing that when you started?
Erin Mote 24:40
Yeah, I mean, part of it, we have a whole architecture for action, which I'll share. But part of it is that the pain was really, really acute, so people were willing to collaborate in different ways. Folks were facing questions and concerns. They were wrestling with hard things that they knew they couldn't do themselves. They might have lacked the technical expertise and knowledge to understand AI and how it worked. And so EDSAFE provides so much technical assistance, materials and resources. I do an ask me anything every month with all of our policy lab leaders. We don't really have an agenda in these meetings; there's a loose agenda, but it really is just this hour of what is top of mind and what folks are thinking about. And then we think at EDSAFE about whether this is something at a national level where we need to build resources or materials, or lean in in terms of policy. So there are all these types of structures: what I would consider proactive infrastructures for collaboration, trust building and communication, and then reactive infrastructures, so that if someone says, this is a serious problem for me, we have capacity to sprint to meet it. And I'll name an example.

Addressing Chatbot safety and making policy impact
In February, we released a policy paper around the SAFE framework and chatbot and companion use in education. And that was entirely responsive to what we were seeing as demonstrable harms happening in schools and with young people around the use of AI companions and chatbots: consumer tools not properly offloading students in line with mandated reporter practices, the use of companions to form intimate relationships with young people. And so we scaled and ran fast, put together a task team that released this policy paper in February, and we were going to release mandated reporter guidance in February aligned to it. And frankly, our policy labs and our state ed chiefs and even some governors said, I need you to release this asap, because they were wrestling with really important legal considerations and questions about what happens if a young person ideates about suicide, or is in a mental health crisis, in a chatbot or companion that they're accessing through school technology. What is our responsibility? How do we put forward a process? How do we put forward the work? So we released that guidance, and then one of our school districts in the policy lab took on open sourcing their chatbots and companions policy by January. So even before the policy paper was released, we had released guidance, given legal instruction, and we had a model policy from a school district on the ground, so that as school districts read this paper, they're not just worried about this and aware of this; there are actually action steps for what to do and how to move forward. So again, the proactive infrastructures and the reactive infrastructures. And then if you look at some of the things that have happened in the US since, state legislatures have introduced almost 90 bills prohibiting the types of behavior that we call out in the policy paper.
So we know our state policymakers are picking that policy paper up, and they're writing legislation and policy to regulate foreseeable harms. And then in mid to late March, Congress actually got a bill through committee, called the SafeBots bill, that's aligned to the research agenda and the policy paper; it is now up for consideration in the full House, and there's a companion bill in the Senate. So what's really important is that where we are aligned on safety, where we have found common ground, where we have done this together, we can move fast to protect young people's learning experience, but also their foundational safety when it comes to these tools. I'm really proud of the coalition that's come together to do that work, and of all the reactive and proactive work that school districts and leaders have done to say, not on our watch.
Daniel Emmerson 29:12
It's an incredible achievement. Huge, huge congratulations. I know it's, you know, an ongoing piece of work.
Erin Mote 29:18
Oh, yeah. We'll be in Congress on 28 April for a briefing. So stay tuned, y'all. This is not the end. But, you know, we're also finding community with the DfE in the UK around standards to protect young people around chatbots and companions. How do we bring some of that really great work that's happening internationally to inform our work here in the US, and vice versa? We see governments all over the world wrestling with this question: where there are demonstrable, foreseeable harms, how do we move faster to protect young people?
Procurement as an expression of values and the policy stack approach
Daniel Emmerson 29:59
Well, your SAFE is the safe, accountable, fair and efficacious, right? Those are the principles that drive the EDSAFE initiative. I'm wondering, are there some core takeaways, or key takeaways I should say, for folks who aren't familiar with this work, particularly those that are overseas, here in the UK or elsewhere, to get people thinking about procurement and deployment, and what they should and shouldn't be accessing in a school environment?
Erin Mote 30:30
Procurement is like the sexiest thing, I think, in our ecosystem, because it is the way we express our values; it is the way we express what we hope for young people. And so we have a policy stack. It's really funny: when our policy labs first come to us and work with our team, which a woman named Andrea Klaber leads at EDSAFE, they want to start with procurement. And I'm like, I love that you want to start with procurement, but there's a whole set of things we need to do before procurement that are about understanding what our intention is. So the very bottom step of the policy stack is a board and school district or school network vision statement, or in the case of states, often a state vision statement, about what they hope for AI in education, so we're really clear about what we want the outcome to be. And then the top of the stack is actually procurement.
Quality indicators for EdTech
And New York is a great example of how long it took to get there so that the procurement system was aligned. But we think there are, and we have put a stake in the ground around, five quality indicators for edtech use in general. We're really focused on drawing a distinction between purpose built education technology tools, built for use in schools, and consumer tools. And the reality is, I know in the UK it's a bit of a mixed bag as to what's being used in classrooms; there are consumer tools being used in classrooms that were not intended for educational purposes. And it's the same in the US. So first, how do we draw that distinction between purpose built educational tools, tools designed for learning, tools designed for Socratic thinking rather than sycophancy, versus consumer tools? And then, when we're thinking about the procurement process, along with a consortium of organisations, we've developed five quality indicators. And then just in March, with Instructure, a for profit company that has Canvas as one of its major offerings, a huge for profit edtech company, we released the 2026 evidence report that uses these quality indicators, contrasts consumer tools versus edtech tools among the top 150 used in the US, and tells folks where they stand vis a vis these quality indicators around interoperability, privacy and security, accessibility, efficacy and inclusion. Universal design for learning is the measure for inclusion, and the report uses market certifications to start to give people some idea of how a tool stacks up against an evidence base, and how it stacks up around privacy and security. We can't leave that to every tech director, educator and school; that is not fair. And so this is really a clarion call.
We hope for the edtech industry to prioritise evidence and to understand that we need to give clear market signals with this distinction around consumer tech versus edtech. So I will share the Instructure report with all of you, and in just a week we'll release that list of 150, how we classified them and specifically what certifications they each have. But I think for us, these are the ecosystem levers that organisations like InnovateEDU and others need to be pulling in order to make it easier and more transparent for parents, communities, students and educators to be using edtech in the way it should be used, which is in support of a human centered learning experience.
Guidance for school leaders on addressing AI implementation
Daniel Emmerson 34:19
I suppose just to wrap up, Erin, on a similar point: of course, procurement is where a lot of decisions get left to those that are responsible for the DPIA. It's the final part of the process, right? Thinking this through a process lens is the way to think about it: what is the purpose of the technology that you're deploying? As a head of school, though, you're confronted with this seeming need to do something about AI because of how prevalent it is in the headlines, how frequently parents are talking about it, and knowing that your students are using it. What might you say to a head of school who's really struggling to find that first rung on the ladder at the moment?
Erin Mote 35:00
As a former head of school who led a middle and high school in New York City, I have such deep empathy for this question and for folks who are in that situation. And the first thing I want to say is: you do have to do something. You can't walk away right now and say, well, we're just not going to deal with that. So first, do something. And I think that something has to be centered around building AI literacy with your students, with your educators, and with your communities. I'm deliberate about calling those three things out because the set of developmentally appropriate things we're doing to build AI literacy with students can look really different. In the K-1-2-3 space it can be about sorting and categorising and the principles that underlie computational thinking and AI, and then it can move to actual tool use as you gradually go up. We have a paper, which I'll share with you, which is a whole blueprint for AI literacy, anchored in the science of learning and in developmentally appropriate AI literacy tools. And then the other thing I would say is that educators are another place where we really need to do a calling in.

Learning from Social Media Mistakes

When I see here in the US that only 30% of districts and states have provided guidance, not even policy, just guidance, about AI use, while 86%, almost 90%, of educators say they're using AI at least once a week in their classroom, that gap is not acceptable. That is what happened with social media. We are on that path right now. So as an ecosystem, we need to lean in and think about how we build the capacity, knowledge and expertise of our frontline educators with AI literacy, so they can understand these tools and be critical consumers of this technology. I'm not an accelerationist. I think we need to be scouts. We need to balance the promise and peril of this technology. But we must equip frontline educators with AI literacy now. And that's not just teachers. That's the school counselor, student support, and here we have paraprofessionals; you all have student aides. It's the whole ecosystem of educators that needs to understand and be able to interrogate this technology, its inputs, its outputs, and how they're using it in their practice. And then it's parents and communities. So much of where we dropped the ball in the US on social media, and I'm part of that, I was running a school in 2014, and it's one of those things where I think about it now and I'm just embarrassed, I remember saying to my staff: we block social media in this school; with our E-Rate funding, we have wifi filters and so on and so forth; this is something that parents need to address at home. I abdicated responsibility. I should never have done that. And I think many of us did that around social media. It wasn't until a little bit later that we saw how social media was coming into our classrooms.
Behavior that was happening out of school was deeply affecting what was happening, and the relationships, in school. It was only a couple of years later, in the spirit of radical candor, that I said, okay, we have to do something about media and digital literacy and social media. What I should have done differently is what I want us to learn from. I want us to say: the school and home divide is not a divide at all. We need to be educating our parents. We need to be calling in our caregivers. We need to be really helping them understand these tools, because what happens at home will come into the school. And if our parents aren't equipped and our students aren't equipped with the ability to be critical consumers of this technology, then we're going to repeat the mistake we made with social media. The good news is we can learn. We can be in that learning posture. We can be curious about the mistakes we made before, and we can fix it. And we can fix it right now. So I would start with AI literacy. We in the US have National AI Literacy Day; we lead it here at InnovateEDU. I know you all have a similar sort of activation day in the UK. There are lots of free resources available out there. Folks can go to ailiteracyday.org and grab free lessons, free professional development, and so on. And while it is US focused, I will say that I think the curriculum and the lessons are sort of boundaryless. But they do still say math, not maths, just in full disclosure, everyone. They are not culturally responsive to the math versus maths debate.
Daniel Emmerson 40:01
I've been pulled up on that. I have used math in seminars with teachers in the UK and been chastised. So. Okay.
Erin Mote 40:11
Me too. So I'm very aware that I waded into a cultural debate there that I shouldn't have, long ago. But, you know, I think the UK, with the Big AI project, with the resources the DfE is putting out, with the stuff that I think Oak has developed, even the test bed work, that investment is going to yield some really important shared public infrastructure that I know we're going to learn from here in the US, and that I think can be a model for the world. And we hope we can share what we're learning in terms of best practices. Again, it's not a fixed pie. How do we create, and rise to this moment, together?
Daniel Emmerson 40:58
Well, I'm looking forward to sharing this episode very, very much with our listeners and indeed all of the resources that you mentioned. Erin, thank you so, so much for being with us today. It's been an absolute pleasure speaking with you.
Erin Mote 41:08
Oh, Daniel, thanks for having me. 

About this Episode

Erin Mote: The AI Research to Classroom Gap No One is Talking About

In this episode, Daniel sits down with Erin Mote of InnovateEDU to discuss how education systems are responding to AI and where current approaches are falling short. Erin challenges the assumption that progress in education operates within fixed limits. She argues that system-level change depends on collaboration, shared practice, and open infrastructure rather than competition between schools, organisations, or regions.

This approach underpins the work of the EDSAFE AI Alliance, which brings together policymakers, educators, and industry to define practical standards for AI use. Its SAFE framework focuses on safety, accountability, fairness, transparency and efficacy, with direct implications for procurement, policy and classroom practice.

The conversation addresses the tension between the pace of AI adoption and the slower development of traditional evidence. Schools are already using these tools at scale, while formal research remains limited. Erin outlines the need for informed, iterative decision making supported by shared insight across systems. There is also a detailed discussion of risk. AI-driven personalisation has potential, but current implementations can narrow opportunity through rigid progression models, limited student agency and the use of sensitive data in ways that affect outcomes. These issues require closer scrutiny of how tools are designed and deployed.

For school leaders, the priority is to act with intent. Building AI literacy across students, staff and parents is identified as the most immediate and practical step. Current usage levels among educators are high, while formal guidance remains inconsistent, creating a gap that needs to be addressed quickly. Erin also shares resources from InnovateEDU, including policy frameworks, planning tools and AI literacy materials designed to support schools in making informed decisions.

The discussion returns throughout to the role of shared standards and coordinated action. Where systems align on safety and implementation, progress becomes more consistent and risks are easier to manage.

Erin Mote

Chief Executive Officer @ InnovateEDU



Erin Mote is the CEO and Founder of InnovateEDU. In this role, Erin leads the organization and its major projects, including its policy and strategy portfolio.  She leads the organization’s work on creating uncommon alliances to create systems change - in special education, talent development, artificial intelligence, and data modernization.  An enterprise architect, she created, alongside her team, two of InnovateEDU’s signature technology products -  Cortex, a next-generation personalized learning platform, and Landing Zone - a cutting-edge infrastructure as a service data product.

Explanation of data interoperability challenges in education 
Erin Mote
Our oldest project is Project Unicorn. It's celebrating its 10th birthday this year; imagine, as an alliance, a 10-year alliance working on data interoperability. Don't worry, we're going to get a nice unicorn cake for those spirit committee members. We'll have a whole party. But that project really started because, as someone who built and led schools in New York City, I was experiencing what it was like to have a ton of different technology applications that my teachers and the school were using, from multi-tiered systems of support, to the student information system we were using to take attendance, to the formative assessment tool that my science teacher was using for hinge questions to check for understanding. And these tools never worked together. What was challenging about that is I was watching so many of my educators spend their Sundays rostering tools. I was watching them bring printouts of data from these platforms to their team meetings, their grade-level meetings, our whole faculty meetings. And as an enterprise architect, I just thought, this is stupid. So, with the Data Wiz Crew, a community of practice that we founded and shepherd, we did some mapping of what tools people were using and what that looked like. And from there we really discovered a couple of things. One, that so many of us were developing and building interoperability ourselves. We were the glue that was holding these tools together, maybe with spreadsheets, maybe with some Airtable code. But by and large, we were the interoperability engineers, and we were each doing it ourselves, which also felt really stupid. And in some cases, school districts and school networks were paying six figures to get their data out of curriculum tools.
Daniel Emmerson 04:32
This is the connectivity of data between different platforms and systems.
Erin Mote 04:39
Yeah. So think about being able to just move data safely and securely between systems or, you know, a teacher having the insight of being able to see what's happening in a science class when they're working in a math class. And is this student maybe struggling with some of the same conceptual understanding in science as they might be struggling with in math, and really giving that whole picture of a student. And so field-based insight has always driven when we think something is right for action. And so that's how Project Unicorn was born. That's how the EDSAFE Alliance was born. 
EDSAFE AI Alliance formation
Erin Mote
In 2020, before the consumer breakthrough of generative AI and ChatGPT, we formed EDSAFE because we were very concerned about the things we were seeing around machine learning and algorithms in education tools, doing things where, you know, it was frankly, like tracking kids and really restricting access and opportunity. And we thought it was important that we come up with a framework. The S.A.F.E. framework, now used globally, focuses on safety, accountability, fairness, transparency and the efficacy of AI tools. So everything at InnovateEDU, even though it feels like they're distinct projects, are things that are, A, right for systems change and, B, really came from educators in the field saying we need to move in order to achieve this vision, which is access and opportunity for all learners to be able to really engage in education in a way that moves the needle for them.
Learning Personalization vs. Tracking
Daniel Emmerson 06:14
Can we unpack that a little bit? The tracking kids in particular that you mentioned, and restricting opportunity. I want to pick up that thread, if I may, just because when we look at a number of the AI-focused solutions that are out there today and how they're promoted, those two things don't quite go together, right? The tracking is seen as something that increases potential or opportunity for learners. Can you help me explore a little more what that restrictive element looks like and why that's important?
Erin Mote 06:50
Well, I think the opportunity for personalisation in education with AI tools is extremely high. But we have to understand that what we know from the science of learning and development is that learning is jagged, that not all kids are proficient at the same time, in the same way, in the same skill strand. And in fact, we've built an education system, a schooling system, that's focused on the average. And I think what we're understanding from the learning sciences and development is that there actually is no average when it comes to how we learn and what that looks like. And so we have a system that is asymmetric, actually, to the research and evidence base around learning that's emerged, you know, as we think about how much we've learned about the brain and the learning sciences over the last 20 years. You know, we used to have this idea in the United States that if kids weren't on track to read by third grade, it was sort of all over when we thought about reading and literacy. And what we now understand about the brain is that there are really two significant times for neuroplasticity in a child's life. And it's not at third grade; it's zero to two. So think about all the evolution a baby does to get from what they are able to do at zero to what they can do by two: walking, talking, relating, emoting, all those things. And the second time is actually as a child goes through puberty, so generally like 11, 12, 13. These are times of enormous neuroplasticity. And so we used to say, well, if you're not on track by third grade to read, you're sort of lost in our learning architecture. And the reality is that's actually not true, that there is a way to use data and personalisation to backfill literacy gaps and math gaps in order to keep kids on grade level and accelerate them even beyond grade level to proficiency. I experienced that all the time at Brooklyn Lab. 
But there's also a dark side to this personalisation, which is that some educational technology software doesn't give young people the ability to prove what they know beyond a really stratified grade band. Because we know that learning is jagged, you might not know fractions in sixth grade because you didn't get your third grade number line, but you're able to do some other sixth grade skill strands. What can be really challenging about the way some education technology is built in terms of tracking is that it doesn't have allowances for that neuroplasticity, for that jaggedness. And so one of the things I think we see in the best types of edtech tools is tools that allow students to do both what is classically known as skill remediation and what is known as skill expansion, or zooming ahead. The vast majority of tools don't allow for that type of differentiation and that student-driven agency. And so one of the things we were seeing in 2020, when we formed the EDSAFE AI Alliance, was that the vast majority of tools were really tracking students, and breaking through those barriers to content access and opportunity, the ability to demonstrate what you know, maybe above grade level, the ability to have agency in the way that you're choosing content and curriculum, whether you're an educator or student, was incredibly restrictive. And we also saw in a number of tools, frankly, student profiling happening in the behavior space. So we were seeing things like race and ethnicity, homeless status, disability status triggering a different type of response when it came to student discipline, because of how some of these systems were using that demographic profiling, versus a student who might not have those characteristics. 
And so data can be so powerful, but we also have to be critical and interrogate the outputs of what that data is giving us, because how data is served up really matters: both the underlying data source you're considering and some of those data fields. We want to make sure that all students have that opportunity to punch above their weight. And that for me is really, really important. And so we have to be deliberate in the design of our tools in order to get there.
Establishing standards for AI tools using the SAFE framework
Daniel Emmerson 11:53
So when you were investigating that, I mean, going back to 2020, of course, this is way, way before the mainstream gen AI tools that everyone is very familiar with today. Was that research initially conducted in the US, or did that span internationally? What sort of geographies were you operating in at that point?
Erin Mote 12:13
Yeah, so there was some global research happening, but mostly in the US, and there are some pretty landmark RCTs, randomized controlled trials, that were happening in Florida and Texas and New York that were starting to show these disturbing patterns that were emerging from machine learning and from algorithmic bias. And so it's really important for folks to know that while generative AI feels like this whole brand new amazing thing, AI has existed since World War II. And the use of AI in education around machine learning and algorithms is something that's been happening for decades. And so the arrival of generative AI, particularly in the consumer space, took me by surprise, and I know so many others. And really the game has changed now with the ubiquity of these tools in classrooms. But the standards around safety, accountability, fairness, transparency and efficacy remain, I think, a constant and steadfast way for us to think about whether we should be using AI in education and under what circumstances. And for me, safety is incredibly important as table stakes in this conversation.
Daniel Emmerson 13:33
That's your acronym, right? That's where it comes from. And I'd love to talk to you more about that in a moment. But you mentioned your manifesto earlier, and the reason I was asking about where the research is happening, I think this is where I picked this up. You were talking about scarcity as a problem, and you refer to a fixed pie and a percentage of the pie that education or systems might be looking to take, and that there's a limit on what people can learn from and develop themselves. And in fact, we need to be thinking about these problems, particularly around AI, from a very different perspective when it comes to sharing what we know and understand and collaborating more. I was wondering if we could investigate whether that still holds true from the time you wrote the manifesto, or if that looks a little different today.
Rejection of scarcity mindset in favor of collaboration
Erin Mote 14:26
Yeah, I mean, I think this idea of a fixed pie is really... so this is an economic concept. I did not invent the fixed pie, but this is the idea that in order for me to get something, you have to lose something. That, you know, we are in this situation where, whether that's organisations who might be competing for resources, or whether that's students who are in two different districts or systems or schools or networks, it's this idea that someone has to lose for me to gain something. I just reject that as a premise. I think that the type of systems change and movement building we need to be doing in education means that we need to be thinking through and thinking about how we collaborate together, how we share resources, how we think about best practices. And I think there's a bit of a clarion call in this moment around AI because, for me, I talk about research and evidence all the time. And I think evidence needs to be the undergirding of every decision that we make in education. And yet it takes time to generate randomised controlled trials or quasi-experimental designs or the type of really high-level research that can help us make these decisions over the long term. And if you're a teacher, you need to know if something's not working in November, not five years from now. And so, you know, I think we need to be thinking about the evidence enterprise in ways that preserve those types of things. Quasi-experimental designs and RCTs definitely have their place. They're really, really important. And we need to be thinking about, you know, does every tool that we're bringing into our classroom at least have a logic model, an educated hypothesis about what it's trying to do, and is it working up an evidence chain? Are we as organisations and practitioners confident that if we're sharing best practices and collaborating together, we're moving that evidence chain? 
I think the other thing that's really important is that I think we have far more in common than we do that's different. And so a lot of times I'll be engaged in conversations across international lines or state lines or district lines, and I'll talk to folks and they'll say, well, that worked for that school, but it's not going to work for my school because of this, this and this. And that I think is really challenging, because while I love that every community is unique and has a special context, I think there are things that we can learn from each other if we're in a learning posture. And it's always ironic to me that in education, which is basically the business of learning, I see so many organisations and folks who aren't in a learning posture, who aren't thinking about curiosity and challenge, discernment and pushing on each other. And so that's so much of the culture at InnovateEDU: to not think we have it all figured out, but to ask how we build a table that brings together diverse perspectives who oftentimes don't work together. And frankly, in an increasingly polarised political situation here in the United States, they might not have ever spoken to each other before. And so how do you build trust? How do you find common ground? How do you remove this idea that for you to get this, I have to lose something? And how do we create the right incentives to build movements and push people to action? And I think there's kind of no greater time for us to be in a learning posture than right now. When we look at how much AI is in our classrooms, no matter what country you're in, it's in the high 70s, 80s in terms of use. And we are not going to be able to have an evidence base that takes five years to generate educated hypotheses about what we should be doing. And so this is really, I think, a clarion call for all of us to come together and to really think about how we share those best practices, how we find inspiration in each other. 
And I'm certainly inspired by some of the work that's happening in the UK right now around test beds, around shared infrastructure. I've always been such a huge fan, I'm such a fan girl of the Education Endowment Foundation and how they really try to think about making research more discernible to practitioners. And those are the things that I think we need to be looking at and thinking about. How do we rapidly build those public infrastructures for understanding, but also for collaboration?
Daniel Emmerson 19:18
Have recent genAI tools pushed that level of eagerness to share? Because I suppose everyone was at a similar starting point. When we're thinking about either schools or districts, everyone was thinking, crikey, you know, in November 2022, ChatGPT 3.5, everyone was at a similar starting point. Even though, of course, AI existed long before that, this was the main sort of mainstreamification of the technology. Did that encourage people to share more? Did that encourage school districts to maybe reframe what they were thinking about, in terms of maybe we could collaborate more, maybe we could share more around our practice? I'm wondering if that was a catalyst at all in the US. In the UK, I think it probably was, yeah.
Building national collaboration infrastructure through policy labs
Erin Mote 20:05
I think in some ways it was, in some ways it wasn't. I mean, I think one of the things the UK has that the US doesn't is a set of regulatory undergirding around child privacy and data privacy that sits at the national level. We don't have that in the US. We have some pieces of the puzzle, but not with the level of sophistication that, frankly, other parts of the world do. And I think what that often means is that collaboration often happens in states here in the US rather than nationally. But this is a place where, at EDSAFE, we built structures right away to try to build that national collaboration. And so we have a group of policy labs that actually started with the biggest school district in the United States: New York City Public Schools was our first policy lab. And we started working with them the day after they banned generative AI. So they banned generative AI, and they said, we're not ready. And then they said, but we're going to get there. And so, you know, we've worked with them over the last three years in order to really put together policy and guidance. And they actually just released it into the world two weeks ago, so I'm happy to share it for the show notes. But it took a really deliberative process and a lot of internal work around their procurement systems, making sure that they could look parents in the face and say, we're not putting any tools in front of kids that haven't met our privacy, security and interoperability standards. They have a funny acronym for it. It's called ERMA. Not a person, but that's literally the name of the process, the ERMA process. And they did a huge amount of refinement in the ERMA process to really be able to make that promise to parents, teachers, communities, and students. And so we built this national network of policy labs. Cohort one, I would say, were the fast runners. 
Gwinnett, you know, Cañon City, Colorado, El Segundo, Santa Ana, so California school districts, Georgia school districts, so on and so forth. And we did it with only 12, and they were their own cohort, and they did a ton of sharing. But one of the essential elements of it is that whatever they developed became open source and open science, so that we would publish it and then other school districts could take it as starter dough. We don't want them to copy it wholesale, but we want them to be able to have a starting point. And then EDSAFE built a series of resources, from a glossary to planning resources, to things you could print out as worksheets if you needed to for your staff meeting around generative AI. That network has now grown to 29 districts and 10 states here in the US. And so now, I think, over the last three years we have really built this national collaboration infrastructure that's not just about policy, and not just about having a unified policy stack and practice, but is really about building human connections with people to share expertise, to share best practices, so they can pick up the phone and call each other. The very beautiful thing that I love so much is when I'm talking to our policy labs, so state leaders or district leaders, and they're like, oh yeah, I just talked to XYZ. And I'm like, it makes me so happy that you picked up the phone and called somebody that's a state away, or a neighboring district, and had a conversation about a challenge you were having or an opportunity, and that you problem-solved together. And actually, not to get too philosophical, but that's the origin of learning. That is the Socratic method that existed in ancient Greece long before any of us: this idea of folks coming together collaboratively, in person or virtually, and thinking through the future they want to imagine for education, for AI, for young people and for themselves. 
So it makes me feel really good about building that collaboration infrastructure. But you have to be deliberate about building it, because people aren't inclined to do it naturally. There have to be structures and systems and trust that underpin this type of network.
Daniel Emmerson 24:37
And how did you go about developing that when you started?
Erin Mote 24:40
Yeah, I mean, part of it, we have a whole architecture for action, which I'll share. But part of it is that the pain was really, really acute. So people were willing to collaborate in different ways. Folks were facing questions and concerns. They were, you know, wrestling with hard things that they knew they couldn't do themselves. They might have lacked the technical expertise and knowledge to understand AI and how AI worked. And so, you know, EDSAFE provides so much technical assistance and materials and resources. I do an Ask Me Anything every month with all of our policy lab leaders. We don't really have an agenda in these meetings. There's sort of this loose agenda, but it really is just this hour of what is top of mind and what are the things that folks are thinking about. And then we think at EDSAFE about whether this is at a national level where we need to build resources or materials or lean in in terms of policy. And so there are all these types of structures that are both what I would consider proactive infrastructures for collaboration, trust building and communication, and then reactive infrastructures, so that if someone says, this is a serious problem for me, we have capacity to sprint to meet it. And I'll name an example.

Addressing Chatbot safety and making policy impact
In February we released a policy paper around the SAFE framework and chatbot and companion use in education. And that was entirely responsive to what we were seeing as demonstrable harms that were happening in schools and with young people around the use of AI companions and chatbots: consumer tools not properly offloading students in line with mandated reporter practices, the use of companions to form intimate relationships with young people. And so we scaled and ran fast, put together a task team that released this policy paper in February, and we were going to release mandated reporter guidance in February aligned to it. And frankly, our policy labs and our state ed chiefs and even some governors said, I need you to release this asap, because they were wrestling with really important legal considerations and questions about what happens if a young person ideates about suicide or is in mental health crisis in a chatbot or companion that they're accessing through school technology. What is our responsibility? How do we put forward a process? How do we put forward the work? And so we released that guidance, and then one of our school districts in the policy lab took on open sourcing their chatbots and companions policy by January. So even before the policy paper was released, we had released guidance, given legal instructions, and we had a model policy from a school district on the ground, so that as school districts read this paper, they're not just worried about this and aware of this; there are actually some action steps and what to do and how to move forward. So again, the proactive infrastructures and the reactive infrastructures. And then if you look at some of the things that have happened in the US since, state legislatures have introduced almost 90 bills around prohibiting the types of behavior that we call out in the policy paper. 
So we know our state policymakers are picking that policy paper up, and they're writing legislation and policy to regulate foreseeable harms. And then in mid to late March, Congress actually got a bill through committee called the SafeBots bill that's aligned to the research agenda and the policy paper, and it is now up for consideration in the full House, and there's a companion bill in the Senate. So what's really important is that where we are aligned on safety, where we have found common ground, where we have done this together, we can move fast to protect young people's learning experience, but also their foundational safety when it comes to these tools. So I'm really proud of the coalition that's come together to do that work, and all the reactive and proactive work that school districts and leaders have done to say, not on our watch.
Daniel Emmerson 29:12
It's an incredible achievement. Huge, huge congratulations. I know it's, you know, an ongoing piece of work.
Erin Mote 29:18
Oh, yeah. We'll be in Congress on 28 April for a briefing. So stay tuned, y'all. This is not the end. But, you know, we're also finding community with the DfE in the UK around standards to protect young people around chatbots and companions. So how do we bring some of that really great work that's happening internationally to inform our work here in the US, and vice versa? We see governments all over the world wrestling with this question: where there are demonstrable, foreseeable harms, how do we move faster to protect young people?
Procurement as an expression of values and the policy stack approach
Daniel Emmerson 29:59
Well, yours is SAFE: safe, accountable, fair, transparent and efficacious, right? Those are the principles that drive the EDSAFE initiative. I'm wondering, are there some core, or key, takeaways for folks who aren't familiar with this work, particularly those that are overseas, here in the UK or elsewhere, to get people thinking about procurement and deployment, and what they should and shouldn't be accessing in a school environment?
Erin Mote 30:30
Procurement is like the sexiest thing, I think, in our ecosystem, because it is the way we express our values; it is the way we express what we hope for young people. And so, you know, we have a policy stack. It's really funny: when our policy labs first come to us, along with our team (a woman named Andrea Klaber leads it at EDSAFE), they want to start with procurement. And I'm like, I love that you want to start with procurement, but there's a whole set of things we need to do before procurement that are about understanding what our intention is. So the very bottom step of the policy stack is a board and district or school network vision statement. In the case of states, it's often a state vision statement about what they hope for AI in education. So we're really clear about what we want the outcome to be. And then the top of the stack is actually procurement. 
Quality indicators for EdTech
And New York is a great example of how long it took to get there so that the procurement system was aligned. But we think there are, and have put a stake in the ground around, five quality indicators for edtech use in general. We're really focused on drawing a distinction between purpose-built education technology tools, built for use in schools, and consumer tools. And the reality is, in the US, and I know also in the UK, it's a bit of a mixed bag about what's being used in classrooms. There are consumer tools being used in classrooms that were not intended for educational purposes. So first, how do we draw that distinction between purpose-built educational tools, tools designed for learning, tools designed for Socratic thinking rather than sycophancy, and consumer tools? And then, when we're thinking about the procurement process, along with a consortium of organisations we've developed five quality indicators. And then just in March, with Instructure, a for-profit company that has Canvas as one of its major offerings, a huge for-profit edtech company, we released the 2026 evidence report that uses these quality indicators, contrasts consumer tools versus edtech tools, the top 150 used in the US, and tells folks where they are vis-a-vis these quality indicators around interoperability, privacy, security, accessibility, efficacy and inclusion. So universal design for learning is the measure for inclusion, and the report uses market certifications in order to start to give people some idea of how a tool stacks up against an evidence base, and how it stacks up around privacy and security. And we can't leave that to every tech director, educator and school. That is not fair. And so this is really a clarion call. 
We hope for the edtech industry to prioritise evidence and to understand that we need to give clear market signals with this distinction around consumer tech versus edtech. I will share the Instructure report with all of you, and in just a week we'll release that list of 150, how we classified them, and specifically what certifications they each have. But I think for us, these are the ecosystem levers that organisations like InnovateEDU and others need to be pulling in order to make it easier and more transparent for parents, communities, students and educators to be using edtech the way it should be used, which is in support of a human-centered learning experience.
Guidance for school leaders on addressing AI implementation
Daniel Emmerson 34:19
I suppose just to wrap up, Erin, on a similar point: of course, procurement is where a lot of decisions get left to those that are responsible for the DPIA. It's the final part of the process, right? Thinking this through a process lens is the way to think about it: what is the purpose of the technology that you're deploying? As a head of school, though, you're confronted with this seeming need to do something about AI because of how prevalent it is in the headlines, how frequently parents are talking about it, and because, you know, your students are using it. What might you say to a head of school who's really struggling to find that first rung on the ladder at the moment?
Erin Mote 35:00
As a former head of school who led a middle and high school in New York City, I have such deep empathy for this question and for folks who are in that situation. And, you know, you do have to do something. That's the first thing I want to say. You can't walk away right now and say, well, we're just not going to deal with that. So first, do something. And I think that something has to be centered around building AI literacy with your students, with your educators, and with your communities. And I'm deliberate about calling those three things out, because I think the set of developmentally appropriate things we're doing to build AI literacy with students can look really different. In the K-1-2-3 space it can be about sorting and categorising and the principles that underlie computational thinking and AI, and then it can get to actual tool use as you gradually go up. And we have a paper, which I'll share with you, which is a whole blueprint for AI literacy that thinks about anchoring in the science of learning and developmentally appropriate AI literacy tools. And then the other thing I would say is educators are another place where we really need to have a calling in.

Learning from Social Media Mistakes

When I see here in the US that only 30% of districts and states have provided guidance, not even policy, guidance about AI use, and 86%, almost 90%, of educators saying they're using AI at least once a week in their classroom, that gap is not acceptable. That is what happened with social media. We are on that path right now. So as an ecosystem, we need to lean in and think about how we build the capacity, knowledge and expertise of our frontline educators with AI literacy, to be able to understand these tools and to be critical consumers of this technology. I'm not an accelerationist. I think we need to be scouts. We need to balance the promise and peril of this technology. But we must equip frontline educators with AI literacy now. And that's not just a teacher. That's the, you know, school counselor, student support, or here we have paraprofessionals. You all have student aides. It's the whole ecosystem of educators that need to understand and be able to interrogate this technology, its inputs, its outputs, and how they're using it in their practice. And then it's parents and communities. So much of where I think we dropped the ball in the US on social media, and I'm part of that. I was running a school in 2014, and I remember saying to my staff, and it's one of those things where I think about it now and I'm just embarrassed about it, I said: we block social media in this school; with our E-Rate funding, we have wifi filters and so on and so forth. This is something that parents need to address at home. I abdicated responsibility. I should never have done that. And I think actually many of us did that around social media. And it wasn't until a little bit later that we saw how social media was coming into our classrooms. 
Behavior that was happening out of school was deeply affecting what was happening with relationships in school, such that I said, okay, we have to do something about media and digital literacy and social media. But it was a couple of years later. In the spirit of radical candor, what I should have done differently is what I want us to learn from. I want us to say, okay, the school and home divide is not a divide at all. We need to be educating our parents. We need to be calling in our caregivers. We need to be really helping them understand these tools, because what happens at home will come into the school. And if our parents aren't equipped, and our students aren't equipped, with the ability to be critical consumers of this technology, then we're going to repeat the mistake we made with social media. And so the good news is we can learn. We can be in that learning posture. We can be curious about what mistakes we made before, and we can fix it. And we can fix it right now. So I would start with AI literacy. We in the US have National AI Literacy Day. We lead it here at InnovateEDU. I know you all have a similar sort of activation day in the UK. There are lots of resources available out there that are free. Folks can go to ailiteracyday.org and grab free lessons, free professional development, so on and so forth. And while it is US focused, I will say that I think the curriculum, the lessons and so on are sort of boundaryless. But they do still say math, not maths, just in full disclosure, everyone. They are not culturally responsive to the math versus maths debate.
Daniel Emmerson 40:01
I've been pulled up on that. I have used math in seminars with teachers in the UK and been chastised. So. Okay.
Erin Mote 40:11
Me too. So I'm very aware that I waded into a cultural debate there that I shouldn't have. But, you know, I think the UK, with the Big AI project, with the resources the DfE is putting out, with the stuff that I think Oak has developed, even the test bed work, that investment is going to yield, I think, some really important shared public infrastructure that I know we're going to learn from here in the US, and I think it can be a model for the world. And we hope we can share what we're learning in terms of best practices. Again, not a fixed pie. How do we create and rise up to this moment together?
Daniel Emmerson 40:58
Well, I'm looking forward to sharing this episode very, very much with our listeners and indeed all of the resources that you mentioned. Erin, thank you so, so much for being with us today. It's been an absolute pleasure speaking with you.
Erin Mote 41:08
Oh, Daniel, thanks for having me. 
