Muireann Hendriksen: Adapting AI Tools Based on Learning Science

November 11, 2025

Video Recap

Summary

In this episode, Daniel speaks with Muireann Hendriksen, a Principal Research Scientist at Pearson, about her team's recent research study, "Asking to Learn". The study analysed 128,000 AI queries from 9,000 student users to gain deeper insights into how students learn when they interact with AI study tools. The key finding revealed that approximately one-third of student queries demonstrated higher-order thinking skills. Their conversation also explores important themes around trust, student engagement, accessibility, and inclusivity, as well as how AI tools can promote active learning behaviours.

You can find the full research report at https://plc.pearson.com/sites/pearson-corp/files/asking-to-learn.pdf

Transcript

Daniel Emmerson 00:02

Welcome to Foundational Impact, a podcast series that focuses on education and artificial intelligence from a non-profit perspective. My name is Daniel Emmerson and I'm the Executive Director of Good Future Foundation, a nonprofit whose mission is to equip educators to confidently prepare all students, regardless of their background, to benefit from and succeed in an AI infused world.

Thank you everybody once again for joining us for another episode of Foundational Impact. Our podcast series focuses on ways that we're looking at artificial intelligence in schools across the education sector and beyond. And I'm absolutely delighted to have Muireann with us today from Pearson. A very warm welcome to you. It's wonderful to have you with us. There are so many things that I'd like to ask you about based on your work and your research. But before I do that, would you mind just giving a quick introduction about who you are and what you do for our audience?

Muireann Hendriksen 01:03

Sure, yeah. It's a pleasure to be here. Thank you for having me. So I'm Muireann Hendriksen, I'm a qualitative researcher and I work as part of Pearson's research and development and thought leadership team, which is a bit of a mouthful. I'm a principal research scientist there, and what that means is I work with teams across our business on product development and improvement, but I also work on thought leadership. So kind of working across the big cross-cutting questions around education and learning of our time, like, for example, the impact of AI on learning, which I'm sure we'll be chatting loads about today. So yeah, I kind of have those dual product and thought leadership focuses, which keeps things interesting.

Daniel Emmerson 01:45

I'm sure it does. How were you drawn into that AI space in particular?

Muireann Hendriksen 01:49

Yeah, I think, so our team has been involved with the development of our AI study tools at Pearson from the very get go. So we started working on this, I think summer of 2022 when the kind of more mainstream AI tools started to be, you know, available to the public and people were starting to experiment with them. And we knew at Pearson that we wanted to design something that, that could be trustworthy, that could be accurate, that could be reliable. So our team was really involved, you know, kind of got in on the ground floor as part of this cross functional group. We had subject matter experts, UX product developers working to build something that would take advantage of the capabilities that generative AI brings, but would, you know, pull on trusted content and be built kind of in accordance with the principles of good learning science. So yeah, it's been quite a journey and obviously it's changing all the time. It's a lot to keep up with, but it's been a really exciting thing to be working on.

Daniel Emmerson 02:49

You mentioned trust a couple of times there in different contexts. That's obviously at the forefront of development as far as Pearson is concerned. It might seem like a strange question, but putting that front and center was a strategic decision. Was there any discussion around why that might be important, particularly through the lens of education?

Muireann Hendriksen 03:10

Yeah, of course. So when we design for learners, we want it to be, you know, helping them to achieve their learning goals or the learning outcomes they have in mind. And I think as part of that, they have to feel secure that the content they're getting from us and that they're working with is something that they can trust and something that they can rely on, whether it's to answer homework, complete an assignment, prepare for an exam. And so our design of these tools prioritises kind of expert, vetted, publisher approved content, but also we're talking to instructors and learners across the world all the time. And I was heartened to hear it in your recent episode with the members of your student council recently when they were talking about when you use these general purpose AI tools, sometimes they can serve you back like a real flood of information. And that can be super confusing for a learner because it doesn't, it maybe doesn't exactly match what your textbook is saying or what your teacher has said in class. It can maybe be too much or too little detail. You, you know when you're looking at it, that maybe it doesn't exactly match what you're expecting to receive. And so it's not just about having trusted content to pull on, but also making sure that the outputs are contextualized in that learning process. And so because we're basing everything from the textbook first, it really places the output from our AI study tools in context for the learner. It gives it to them in a very kind of scaffolded way with kind of positive encouragement, really trying to guide the learning process for them. So it was about, you know, really trying not to overwhelm anyone and just make this another useful tool that could be part of their study arsenal, if you will.

Daniel Emmerson 04:52

Because I suppose one of the implications of that mainstream introduction of generative AI across nearly every sector, and certainly education, is that we've gone from a culture of believing what we read to a culture, or one trying to become a culture, that needs to question everything we read because of where it comes from, how it's generated, what sources it's pulling from and who's behind it. That's something that we're looking at, particularly from a disinformation or a misinformation perspective. I'm interested to know if that's something that you've come across in your work or your research. I know you've spoken to tons of students about different areas of AI in education, but was this one of them? And if so, what sort of response did you get?

Muireann Hendriksen 05:39

I think when we talk to students, the responses that we hear about how they feel about AI in their learning doesn't really feel that different to how they feel about other learning tools. What they want is a positive, safe, collaborative interaction that ultimately is going to help them get to where they want to be. I think it's still very early days of mainstream AI release, if you will, and I think the research base around AI and learning is still crystallizing. You know, it does seem to be starting to point towards AI tools being most effective when they are kind of incorporated meaningfully into the learning process. And I think that's what we're always trying to get at, that kind of meaningful incorporation that is really, you know, encouraging active engagement rather than just kind of passively receiving an answer.

Daniel Emmerson 06:27

It'd be great to speak with you a little bit about Asking to Learn and the research that you conducted there. 130,000 AI queries, is that right, from students? Could you give the audience an idea about what the research was focused on, and perhaps what surprised you most about your findings?

Muireann Hendriksen 06:47

Yeah, sure. So we knew since the launch of our AI study tools that this was going to really open up new insights for us into the learning process and the kind of learner interactions with our tools, that we could see things that we'd never been able to see before. And I guess the feature that I, as a qualitative researcher, was most interested in was our Explain feature, which is basically where learners ask questions in their own words. That was interesting to me because you would get to see, you know, how are they naturally framing their questions? What are they curious about? What are they confused about? What kind of language are they using? You know, are they mirroring the textbook? Are they putting things into their own words? And as we started to have these discussions in our team, we were starting to think about, you know, what can we tell from these interactions? Are they just kind of using the tool for a very quick, like, I'm writing something, let me check the definition of this quickly? Or is it a kind of more meaningful, engaged interaction where they're trying to go deeper on something, they're trying to, you know, ask a more complex question? So to give a little more detail on the Asking to Learn research, like you said, we looked at approximately, I think it was about 128,000 inputs from about 9,000 users. So it was, yeah, no small undertaking. And we looked at inputs across one year to an intro level biology textbook. So this is the kind of textbook that would be used in a typical first year biology course.

Daniel Emmerson 08:15

This is Campbell Biology?

Muireann Hendriksen 08:17

Correct. It's one of our best selling titles, first year of university. So really kind of building that foundational understanding in intro biology level courses. What surprised me about the research? I think, you know, this was exploratory research to begin with, because we were working with these new AI tools, we really didn't know what we were going to see in there. And with such a huge data set, we needed to structure it around something, because we just had an overwhelming set of data to work with. So we structured it around Bloom's taxonomy, the revised Bloom's taxonomy, for a number of reasons. One, because it's a very recognisable structure to anyone working in teaching and learning, but also because of what it allows us to focus on. We didn't just want to look at the topic of what students were asking about. So like, you know, okay, great, they have loads of questions about the cell. That doesn't really tell us that much. You know, a good teacher would say, okay, but what do they want to know about the cell? That would be your immediate follow up, right? So we wanted to understand, at what level is this question happening? You know, how deep are they getting into it? Is it something about experimentation, is it about a law they've come across? So this revised Bloom's allows us to kind of look at the inputs in these two different dimensions. So what are they asking about, and how complex or challenging is the thinking behind that question? And what was surprising to us is we found that about 80% of their inputs were at the kind of lower levels of Bloom's. So that kind of remembering, conceptual level, you know, basically nothing that you wouldn't expect from an intro level course, because obviously the focus is, you know, introducing new topics, building foundational knowledge. What was exciting to see was that around a third of the time students were, you know, asking questions at higher levels of cognitive complexity.
So if you're familiar with Bloom's, that's working at that apply level and above. And then around 20% of the time they were really going to analyse and above. So, you know, asking questions that we know are consistent with the development of higher order thinking skills, showing evidence of critical thinking, which was super encouraging and exciting to see.

Daniel Emmerson 10:24

What does it tell us about how students naturally engage with AI, do you think? Particularly if we're looking at deeper level thinking?

Muireann Hendriksen 10:34

I think first of all, it was really surprising to me to see that kind of level of deeper thinking. And, you know, on a personal level, I loved the level of creativity that they were showing in their questions. So really trying to make topics their own, trying to have things explained to them in ways that they understood or made sense in their world. You know, can you relate this to a game of tennis? Or, you know, how would Taylor Swift write the lyrics to this biology topic? That kind of thing. So to me that shows, you know, someone who's really trying to understand something in terms that are familiar to them. That was really exciting to see.

Daniel Emmerson 11:09

Can I just ask, Muireann, were there prompts suggesting this is something that you could do, or were these things that were naturally occurring?

Muireann Hendriksen 11:16

These were naturally occurring. So when the Explain feature opens, it's kind of like a little window in their e-textbook. I think it just has a very generic prompt, like, if you would like to ask a question, or something like that. The initial interaction is completely, you know, prompted by the student. It's coming from them, which is also, yeah, good to see. It was encouraging to see that deeper thinking, and I think it shows that, you know, when the tool is built in accordance with how we know learning happens, if it's being used in an appropriate place, i.e. it's in the flow of their learning, in their interaction with the textbook, it was really encouraging to see how it can be used to kind of scaffold them towards that higher order thinking. And I think something we mentioned briefly in the paper is that actually, based on the findings from this work, we worked with the product development team on the development of a new feature, which we're calling Go Deeper. And that actually will be trying to nudge them and encourage them towards more of that higher order thinking.

Daniel Emmerson 12:19

Can you tell us a bit more about how Go Deeper was designed from the research insights?

Muireann Hendriksen 12:24

Yeah, sure. Because we had those really encouraging findings where they were going into those higher order questions some of the time, basically what we've done is added this new enhancement to the Explain feature, so that when a learner now puts something into Explain, they'll see three kind of relevant follow-ups to their original question at the bottom of the reply that they can explore. And to come back to my earlier point, when I was talking about not wanting to overwhelm people with lots of information, we have deliberately designed it so that the follow up questions that they see in Go Deeper will never go more than one or two Bloom's levels higher than what they originally asked. Because you don't want to cause confusion and overwhelm people. If you have a simple definitional question, you don't want something that's asking you about designing an experiment and confusing you even further. So the idea is that we could kind of begin to transform things: if a learner comes along with a single query, it kind of becomes a more guided pathway, trying to actively scaffold that higher order thinking in a way that isn't overwhelming. So that's been a really encouraging development from there.

Daniel Emmerson 13:34

Can we try and paint a picture of what that experience looks like?

Muireann Hendriksen 13:37

Sure.

Daniel Emmerson 13:37

From a user's perspective. So what happens? You open your platform, you're looking at Campbell Biology. I don't know, you might be looking at cell structure or whatever. What happens then? How do you get to that Go Deeper?

Muireann Hendriksen 13:49

Yeah, let me talk you through it. So I'm going to use my prepared example, because in doing this work I've been working at the absolute limits of my biology knowledge, let me tell you. So I'm not great at going too off the cuff on this. But say I'm a learner, I'm in my biology textbook and I get a little bit confused. So I see the little AI study tool chatbot, I open it up on my page, and I go to the Explain feature and I ask, can you define a polar molecule for me? So that would be something that's at the Bloom's remember and know level, because it's a very kind of basic factual definition. So my explanation will come up, and then underneath it will be three follow up questions. They might reflect, say, understanding, which would be the next highest level. So you might get a question like, describe partial charges in polar molecules. The level after that will be apply. So it could say, can you apply the polar molecule concept to a solubility scenario? So you're going to get two questions one level above and one question two levels above, to try and scaffold you and build your curiosity and see how you could expand on this topic.

Daniel Emmerson 15:01

I'm trying to think this through as well from the perspective of younger learners. Right. The majority of our exploration is in the K12 space. I suppose we're looking here specifically at university. Does there need to be a level of AI literacy at the outset in order for a learner to engage with something like this, or is that something that they can just readily fall into? What are your thoughts around that?

Muireann Hendriksen 15:28

I think that, you know, there is a basic level of AI literacy that anyone needs when they're interacting with AI tools. But I think because these tools have been designed within a very boundaried learning environment, the hope is, you know, that it's not going to take them somewhere unexpected. It's pulling on very trusted content that's already been kind of publisher approved, expert vetted, and there are only so many things they can do within our tools. It's not kind of, you know, opening up the world to a younger learner in that way.

Daniel Emmerson 15:58

What about when it comes to inclusive design considerations of a tool like this? We talked about trust at the beginning and how important that is. What about when it comes to accessibility, and ensuring that it's possible to access from lots of different student perspectives?

Muireann Hendriksen 16:16

So we always build with kind of universal principles in mind for accessibility. That's just a standard part of our product development process. I think starting from this baseline of getting something out there, we've iterated on the tools over time. So including things like building in that learners can incorporate images and videos still within that kind of protected environment and vetted content, but just to try and expand to a wider range of learning preferences or learning needs. And now that the AI study tools are being introduced to textbook for a wider kind of global audience, we'll have tools available in languages other than English where people can input things in their native language and have it translated for them and kind of work with it on that basis.

Daniel Emmerson 17:04

When you're looking at the success of the tool, then, again going back to the importance of the research behind it, what are the main signals that indicate real learning transformation? Not just being something that's convenient and easy to access, but something that's really having a positive impact there?

Muireann Hendriksen 17:23

Yeah. So we try to think about learning activities and behaviours along a continuum, from sort of passively listening to something or reading something and doing nothing else with it, just kind of ingesting content, to very active learning behaviour where you're really making the work your own. You're trying to reformulate things in your own words. You're kind of actively manipulating information. And so we look at our AI study tools in conjunction with everything else that the learner might be doing within this space. So what notes are they taking? Are they using flashcards? Are they trying practice questions? We know that those more active study behaviours are associated with better outcomes in the real world, like better grade performance, better, you know, exam scores and so on. So we try to look at it in that context. Obviously, you know, we're not in the classroom with the learner. We have access to what we have access to, so that's as far as we can go. We know that we are seeing the study behaviours that point to better outcomes in the real world. In time, we would love to develop that into more formal research where we could, you know, understand those kind of real world outcomes. But for the moment, it's more about understanding, are we encouraging and hopefully seeing more of those active study behaviours.

Daniel Emmerson 18:41

And is there anything that stemmed from the research that you think will impact perhaps future designs or future iterations of the product?

Muireann Hendriksen 18:49

I think, you know, it's an ongoing conversation, both internally from the data that we're seeing, but also, you know as well as anyone working in AI, it's changing all the time, right? So you're constantly keeping up with things that are changing in terms of weeks and months, not years. So it's balancing those two things while also, I guess, trying to protect the integrity of the learning and study process. You know, at the end of the day, it's always a consideration. There's no need to throw in, you know, 20 shiny features if they have no proven impact on learning. We need to be designing in accordance with learning science and really thinking carefully and thoughtfully about what we're putting in front of students. So that's always kind of the guiding principle. So I think where we're heading is trying to think about how we can use the capabilities that AI affords us, and the kind of insights that we're having from things like the AI study tools, to improve that study experience. So, for example, hypothetically, with the Explain inputs, can we use those internally to help us see, well, students always get confused here, a lot of inputs are at this place in the textbook, maybe we need to revisit this content or bring it to life for them in a different way? It's this wonderful feedback loop that we've never had before, that is so telling of where students are in their process, and it really kind of turns it into a dialogue. So, yeah, it's super exciting and interesting to be a part of.

Daniel Emmerson 20:17

I'm wondering also if there's anything from the research that might indicate an increase in over reliance on AI for learning. Is that something that you've seen patterns emerging in, or not so much? And what is over reliance, I suppose?

Muireann Hendriksen 20:33

Right, yeah, true. I think, when we see over reliance depicted, in, I guess, media articles and so on, the idea, I think for me, is that over reliance is where AI is used to completely bypass the learning process, and there is no real evidence that any understanding or learning has taken place. It has just been kind of straight from A to Z. So that's, I guess, how I would think about over reliance. I think if we were seeing that, we would see huge drop offs in the other kind of product features that we have to support active study practices. We're not seeing that. So that kind of suggests that this is one of a suite of tools that students are using, and not necessarily that they're putting everything into it. And also, the tool has been designed not to give them the answer, is the first thing I should have said. The idea is that it will scaffold them through the process and encourage them to return to, why are you, you know, asking this question? Why are you confused about this? Rather than immediately giving them the answer.

Daniel Emmerson 21:37

Reflecting then on asking to learn. Is there a piece of advice that you might give to educators when it comes to embedding AI tools in a way that's thoughtful and responsible?

Muireann Hendriksen 21:50

You know, we're talking to educators and learners a lot, and I've seen a real shift in the conversation around generative AI. I think when these tools first came out and first started to be discussed, a lot of the conversation with educators was about, should we use AI or not? And now that kind of discourse seems to really have moved on to more, how and where can we use AI effectively? Pearson's recent UK Schools Report shows that there's such a strong demand for more training around AI, both from teachers and students. So I think there's real appetite there, and certainly a real awareness from educators that they need to know more about this, because they're trying to prepare students for a world where a lot of things will be working with AI. And so I think that as this understanding is growing about how we can use AI to scale excellence in teaching and learning, a lot of it is about building connections between the technology and how it can be used to help achieve these goals. I think there's a real, maybe psychological factor in this, where you need to build confidence, you need to overcome fear, overcome overwhelm, because it is, you know, changing really quickly. It can be overwhelming to engage with these things. So it's just about seeing how they can be used meaningfully, and it doesn't have to be always, you know, short circuiting learning. They can be part of a meaningful incorporation that works towards better cognitive engagement.

Daniel Emmerson 23:15

Asking to Learn is a fascinating project and a really wonderful read. Muireann, I'm wondering what next you have lined up on the research side. Is there anything that you can share with us?

Muireann Hendriksen 23:27

I think our plan is just to expand on this work. I mean, this was an exploratory study. We weren't expecting to have the findings that we did. It was a super encouraging thing to see, so we hope to replicate it in other disciplines. And, you know, as I mentioned, our AI study tools are now available to a more global audience. And so it would be interesting to see how this plays out in other kinds of contexts, other ages and stages.

Daniel Emmerson 23:50

Looking forward to seeing where this goes. Muireann, thank you so, so very much for sharing your thoughts and reflections. It's been wonderful speaking with you today. Look forward to staying in touch and speaking with you again very soon.

Muireann Hendriksen 24:03

Thank you so much for having me. As a final step, I would just encourage any listeners to read our Asking to Learn report as a good example of what can happen when AI is built for learning.

Daniel Emmerson 24:13

A shining example. I'll make sure we share it in the notes, that's for sure.

Muireann Hendriksen 24:17

Brilliant. Thank you.

Voiceover 24:19

That's it for this episode. Don't forget, the next episode is coming out soon, so make sure you click that option to follow or subscribe. It just means you won't miss it. But in the meantime, thank you for being here and we'll see you next time.

About this Episode

Muireann Hendriksen: Adapting AI Tools Based on Learning Science

In this episode, Daniel speaks with Muireann Hendriksen, a Principal Research Scientist at Pearson, about her team's recent research study, "Asking to Learn". The study analysed 128,000 AI queries from 9,000 student users to gain deeper insights into how students learn when they interact with AI study tools. The key finding revealed that approximately one-third of student queries demonstrated higher-order thinking skills. Their conversation also explores important themes around trust, student engagement, accessibility, and inclusivity, as well as how AI tools can promote active learning behaviours.

Dr. Muireann Hendriksen

Principal Research Scientist at Pearson

Related Episodes

October 13, 2025

Embracing AI in GEMS Winchester School Dubai

Leena, Alicia and Swati from GEMS Winchester School Dubai share their remarkable journey to achieving AI Quality Mark gold status. Over 12 months, they developed a school-wide AI strategy by establishing an AI core team, working party, and champions across both primary and secondary divisions. Their systematic approach also included AI tool evaluation through detailed risk assessments, and the creation of a bespoke AI literacy programme for their teachers. Their conversation reveals how they engage all stakeholders, including teachers, students, and parents, to cope with the challenges of this rapidly evolving technology and prepare students for an AI-infused world.
September 29, 2025

Matthew Pullen: Purposeful Technology and AI Deployment in Education

This episode features Matthew Pullen from Jamf, who talks about what thoughtful integration of technology and AI looks like in educational settings. Drawing from his experience working in the education division of a company that serves more than 40,000 schools globally, Mat has seen numerous use cases. He distinguishes between the purposeful application of technology to dismantle learning barriers and the less effective approach of adopting technology for its own sake. He also asserts that finding the correct balance between IT needs and pedagogical objectives is crucial for successful implementation.
September 15, 2025

Matt King: Creating a Culture of AI Literacy Through Conversation at Brentwood School

Many schools begin their AI journey by formulating AI policies. However, Matt King, Director of Innovative Learning at Brentwood School, reveals their preference for establishing guiding principles over rigid policies considering AI’s rapidly evolving nature.
September 1, 2025

Alex More: Preserving Humanity in an AI-Enhanced Education

Alex was genuinely fascinated when reviewing transcripts from his research interviews and noticed that students consistently referred to AI as "they," while adults, including teachers, used "it." This small but meaningful linguistic difference revealed a fundamental variation in how different generations perceive artificial intelligence. As a teacher, senior leader, and STEM Learning consultant, Alex developed his passion for educational technology through creating the award-winning "Future Classroom", a space designed to make students owners rather than consumers of knowledge. In this episode, he shares insights from his research on student voice, explores the race toward Artificial General Intelligence (AGI), and unpacks the concept of AI "glazing". While he touches on various topics around AI during his conversation with Daniel, the key theme that shines through is the importance of approaching AI thoughtfully and deliberately balancing technological progress with human connection.
June 16, 2025

David Leonard, Steve Lancaster: Approaching AI with cautious optimism at Watergrove Trust

This podcast episode was recorded during the Watergrove Trust AI professional development workshop, delivered by Good Future Foundation and Educate Ventures. Dave Leonard, the Strategic IT Director, and Steve Lancaster, a member of their AI Steering Group, shared how they led the Trust's exploration and discussion of AI with a thoughtful, cautious optimism. With strong support from leadership and voluntary participation from staff across the Trust forming the AI working group, they've been able to foster a trust-wide commitment to responsible AI use and harness AI to support their priority of staff wellbeing.
June 2, 2025

Thomas Sparrow: Navigating AI and the disinformation landscape

This episode features Thomas Sparrow, a correspondent and fact checker, who helps us differentiate misinformation and disinformation, and understand the evolving landscape of information dissemination, particularly through social media and the challenges posed by generative AI. He is also very passionate about equipping teachers and students with practical fact checking techniques and encourages educators to incorporate discussions about disinformation into their curricula.
May 19, 2025

Bukky Yusuf: Responsible technology integration in educational settings

With her extensive teaching experience in both mainstream and special schools, Bukky Yusuf shares how purposeful and strategic use of technology can unlock learning opportunities for students. She equally emphasises the ethical dimensions of AI adoption, raising important concerns about data representation, societal inequalities, and the risks of widening digital divides and unequal access.
May 6, 2025

Dr Lulu Shi: A Sociological Lens on Educational Technology

In this enlightening episode, Dr Lulu Shi from the University of Oxford examines technology’s role in education and society through a sociological lens. She explores how edtech companies shape learning environments and policy, while challenging the notion that technological progress is predetermined. Instead, Dr Shi argues that our collective choices and actions actively shape technology's future and emphasises the importance of democratic participation in technological development.
April 26, 2025

George Barlow and Ricky Bridge: AI Implementation at Belgrave St Bartholomew’s Academy

In this podcast episode, Daniel, George, and Ricky discuss the integration of AI and technology in education, particularly at Belgrave St Bartholomew's Academy. They explore the local context of the school, the impact of technology on teaching and learning, and how AI is being utilised to enhance student engagement and learning outcomes. The conversation also touches on the importance of community involvement, parent engagement, and the challenges and opportunities presented by AI in the classroom. They emphasise the need for effective professional development for staff and the importance of understanding the purpose behind using technology in education.
April 2, 2025

Becci Peters and Ben Davies: AI Teaching Support from Computing at School

In this episode, Becci Peters and Ben Davies discuss their work with Computing at School (CAS), an initiative backed by BCS, The Chartered Institute for IT, which boasts 27,000 dedicated members who support computing teachers. Through their efforts with CAS, they've noticed that many teachers still feel uncomfortable about AI technology, and many schools are grappling with uncertainty around AI policies and how to implement them. There's also a noticeable digital divide based on differing school budgets for AI tools. Keeping these challenges in mind, their efforts don’t just focus on technical skills; they aim to help more teachers grasp AI principles and understand important ethical considerations like data bias and the limitations of training models. They also work to equip educators with a critical mindset, enabling them to make informed decisions about AI usage.
March 17, 2025

Student Council: Student Perspectives on AI and the Future of Learning

In this episode, four members of our Student Council, Conrado, Kerem, Felicitas and Victoria, who are between 17 and 20 years old, share their personal experiences and observations about using generative AI, both for themselves and their peers. They also talk about why it’s so crucial for teachers to confront and familiarise themselves with this new technology.
March 3, 2025

Suzy Madigan: AI and Civil Society in the Global South

AI’s impact spans globally across sectors, yet attention and voices aren’t equally distributed across impacted communities. This week, Foundational Impact presents a humanitarian perspective as Daniel Emmerson speaks with Suzy Madigan, Responsible AI Lead at CARE International, to shine a light on those often left out of the AI narrative. The heart of their discussion centres on “AI and the Global South: Exploring the Role of Civil Society in AI Decision-Making”, a recent report that Suzy co-authored with Accenture, a multinational tech company. They discuss how critical challenges, including digital infrastructure gaps, data representation, and ethical frameworks, perpetuate existing inequalities. Increasing civil society participation in AI governance has become more important than ever to ensure inclusive and ethical AI development.
February 17, 2025

Liz Robinson: Leading Through the AI Unknown for Students

In this episode, Liz opens up about her path and reflects on her own "conscious incompetence" with AI - that pivotal moment when she understood that if she, as a leader of a forward-thinking trust, feels overwhelmed by AI's implications, many other school leaders must feel the same. Rather than shying away from this challenge, she chose to lean in, launching an exciting new initiative to help school leaders navigate the AI landscape.
February 3, 2025

Lori van Dam: Nurturing Students into Social Entrepreneurs

In this episode, Hult Prize CEO Lori van Dam pulls back the curtain on the global competition empowering student innovators into social entrepreneurs across 100+ countries. She believes in sustainable models that combine social good with financial viability. Lori also explores how AI is becoming a powerful ally in this space, while stressing that human creativity and cross-cultural collaboration remain at the heart of meaningful innovation.
January 20, 2025

Laura Knight: A Teacher’s Journey into AI Education

From decoding languages to decoding the future of education: Laura Knight takes us on her fascinating journey from a linguist to a computer science teacher, then Director of Digital Learning, and now a consultant specialising in digital strategy in education. With two decades of classroom wisdom under her belt, Laura has witnessed firsthand how AI is reshaping education and she’s here to help make sense of it all.
January 6, 2025

Richard Culatta: Understand AI's Capabilities and Limitations

Richard Culatta, former Government advisor, uses flying planes as an analogy to explain the perils of taking a haphazard approach to AI in education, highlighting the most critical tech skills that teachers need today. The CEO of ISTE and ASCD draws a clear parallel: just as planes don't fly by magic, educators must deeply understand AI's capabilities and limitations.
December 16, 2024

Prof Anselmo Reyes: AI in Legal Education and Justice

Professor Anselmo Reyes, an international arbitrator and legal expert, discusses the potential of AI in making legal services more accessible to underserved communities. He notes that while AI works well for standardised legal matters, it faces limitations in areas requiring emotional intelligence or complex human judgment. Prof Reyes advocates for teaching law students to use AI critically as an assistive tool, emphasising that human oversight remains essential in legal decision making.
December 2, 2024

Esen Tümer: AI’s Role from Classrooms to Operating Rooms

Healthcare and technology leader Esen Tümer discusses how AI and emerging trends in technology are transforming medical settings and doctor-patient interactions. She encourages teachers not to shy away from technology, but rather understand how it’s reshaping society and prepare their students for this tech-enabled future.
November 19, 2024

Julie Carson: AI Integration Journey of Woodland Academy Trust

A forward-thinking educational trust shows what's possible when AI meets strategic implementation. From personalised learning platforms to innovative administrative solutions, Julie Carson, Director of Education at Woodland Academy Trust, reveals how they're enhancing teaching and learning across five primary schools through technology and AI to serve both classroom and operational needs.
November 4, 2024

Joseph Lin: AI Use Cases in Hong Kong Classrooms

In this conversation, Joseph Lin, an education technology consultant, discusses how some Hong Kong schools are exploring artificial intelligence and their implementation challenges. He emphasises the importance of data ownership, responsible use of AI, and the need for schools to adapt slowly to these technologies. Joseph also shares some successful AI implementation cases and how some of the AI tools may enhance creative learning experiences.
October 21, 2024

Sarah Brook: Rethinking Charitable Approaches to Tech and Sustainability

In our latest episode, we speak with Sarah Brook, Founder and CEO of the Sparkle Foundation, currently supporting 20,000 lives in Malawi. Sarah shares how education is evolving in Malawi and the role AI plays for young people and international NGOs. She also provides a candid look at the challenges facing the charity sector, drawing from her daily work at Sparkle.
October 7, 2024

Rohan Light: Assurance and Oversight in the Age of AI

Join Rohan Light, Principal Analyst of Data Governance at Health New Zealand, as he discusses the critical need for accountability, transparency, and clear explanations of system behaviour. Discover the government's role in regulation and the crucial importance of strong data privacy practices.
September 23, 2024

Yom Fox: Leading Schools in an AI-infused World

With the rapid pace of technological change, Yom Fox, the high school principal at Georgetown Day School, shares her insights on the importance of creating collaborative spaces where students and faculty learn together, and on teaching digital citizenship.
September 5, 2024

Debra Wilson: NAIS Perspectives on AI Professional Development

Join Debra Wilson, President of the National Association of Independent Schools (NAIS), as she shares her insights on taking an incremental approach to exploring AI. Discover how to find the best solutions for your school, ensure responsible adoption at every stage, and learn about the ways AI can help tackle teacher burnout.
April 18, 2024

Steven Chan and Minh Tran: Preparing Students for AI and New Technologies

Steven Chan and Minh Tran discuss the importance of preparing students for AI and new technologies, the role of the Good Future Foundation in bridging the gap between technology and education, and the potential impact of AI on the future of work.

Muireann Hendriksen: Adapting AI Tools Based on Learning Science

Published on November 11, 2025

Dr. Muireann Hendriksen is a Principal Research Scientist on the R&D and Thought Leadership team at Pearson, where she leads cross-functional qualitative research to improve learner outcomes. With a background spanning academia and the public health sector, Muireann specializes in impact evaluation and behaviour change, bringing deep expertise in qualitative methodologies and data storytelling to drive better product and business decisions.

Video Recap

Summary

In this episode, Daniel speaks with Muireann Hendriksen, a Principal Research Scientist at Pearson, about her team's recent research study called "Asking to Learn". The study analysed 128,000 AI queries from 9,000 student users to gain deeper insights into how students learn when they interact with AI study tools. Their key finding revealed that approximately one-third of student queries demonstrated higher-order thinking skills. Their conversation also explores important themes around trust, student engagement, accessibility, and inclusivity, as well as how AI tools can promote active learning behaviours.

You can find the full research report at https://plc.pearson.com/sites/pearson-corp/files/asking-to-learn.pdf

Transcript

Daniel Emmerson 00:02

Welcome to Foundational Impact, a podcast series that focuses on education and artificial intelligence from a nonprofit perspective. My name is Daniel Emmerson and I'm the Executive Director of Good Future Foundation, a nonprofit whose mission is to equip educators to confidently prepare all students, regardless of their background, to benefit from and succeed in an AI-infused world.

Thank you everybody once again for joining us for another episode of Foundational Impact. Our podcast series focuses on ways that we're looking at artificial intelligence in schools across the education sector and beyond. And I'm absolutely delighted to have Muireann with us today from Pearson. A very warm welcome to you. It's wonderful to have you with us. There are so many things that I'd like to ask you about based on your work and your research. But before I do that, would you mind just giving a quick introduction about who you are and what you do for our audience?

Muireann Hendriksen 01:03

Sure, yeah. It's a pleasure to be here. Thank you for having me. So I'm Muireann Hendriksen, I'm a qualitative researcher and I work as part of Pearson's research and development and thought leadership team, which is a bit of a mouthful. I'm a principal research scientist there and what that means is I work with teams across our business on product development and improvement, but I also work on thought leadership. So kind of working across the big cross-cutting questions around education and learning of our time, like, for example, the impact of AI on learning, which I'm sure we'll be chatting loads about today. So yeah, I kind of have those dual product focus and thought leadership focus, which keeps things interesting.

Daniel Emmerson 01:45

I'm sure it does. How were you drawn into that AI space in particular?

Muireann Hendriksen 01:49

Yeah, I think, so our team has been involved with the development of our AI study tools at Pearson from the very get-go. So we started working on this, I think summer of 2022, when the kind of more mainstream AI tools started to be, you know, available to the public and people were starting to experiment with them. And we knew at Pearson that we wanted to design something that could be trustworthy, that could be accurate, that could be reliable. So our team was really involved, you know, kind of got in on the ground floor as part of this cross functional group. We had subject matter experts, UX, product developers working to build something that would take advantage of the capabilities that generative AI brings, but would, you know, pull on trusted content and be built kind of in accordance with the principles of good learning science. So yeah, it's been quite a journey and obviously it's changing all the time. It's a lot to keep up with, but it's been a really exciting thing to be working on.

Daniel Emmerson 02:49

You mentioned trust a couple of times there in, in different contexts. That's obviously at the forefront of development as far as Pearson is concerned. It might seem like a strange question, but putting that front and center was a strategic decision. Was there any discussion around why that might be important, particularly through the lens of education?

Muireann Hendriksen 03:10

Yeah, of course. So when we design for learners, we want it to be, you know, helping them to achieve their learning goals or the learning outcomes they have in mind. And I think as part of that, they have to feel secure that the content they're getting from us and that they're working with is something that they can trust and something that they can rely on, whether it's to answer homework, complete an assignment, or prepare for an exam. And so our design of these tools prioritises kind of expert, vetted, publisher approved content, but also we're talking to instructors and learners across the world all the time. And I was heartened to hear it in your recent episode with the members of your student council, when they were talking about how, when you use these general purpose AI tools, sometimes they can serve you back like a real flood of information. And that can be super confusing for a learner because it maybe doesn't exactly match what your textbook is saying or what your teacher has said in class. It can maybe be too much or too little detail. You know, when you're looking at it, that maybe it doesn't exactly match what you're expecting to receive. And so it's not just about having trusted content to pull on, but also making sure that the outputs are contextualised in that learning process. And so because we're basing everything from the textbook first, it really places the output from our AI study tools in context for the learner. It gives it to them in a very kind of scaffolded way with kind of positive encouragement, really trying to guide the learning process for them. So it was about, you know, really trying not to overwhelm anyone and just make this another useful tool that could be part of their study arsenal, if you will.

Daniel Emmerson 04:52

Because I suppose one of the implications of that mainstream introduction of generative AI across every sector, near enough. But certainly education meant that we've gone from a culture of believing what we read to a culture of, or trying to become a culture that needs to question everything we read because of where it comes from, how it's generated, what sources it's pulling from and who's behind it. That's something that we're looking at, particularly from a disinformation or a misinformation perspective. I'm interested to know if that's something that you've come across in your work or your research. I know you've spoken to tons of students about different areas of AI in education, but was this one of them? And if so, what sort of response did you get?

Muireann Hendriksen 05:39

I think when we talk to students, the responses that we hear about how they feel about AI in their learning doesn't really feel that different to how they feel about other learning tools. What they want is a positive, safe, collaborative interaction that ultimately is going to help them get to where they want to be. I think it's still very early days of mainstream AI release, if you will, and I think the research base around AI and learning is still crystallizing. You know, it does seem to be starting to point towards AI tools being most effective when they are kind of incorporated meaningfully into the learning process. And I think that's what we're always trying to get at, that kind of meaningful incorporation that is really, you know, encouraging active engagement rather than just kind of passively receiving an answer.

Daniel Emmerson 06:27

It'd be great to speak with you a little bit about Asking to Learn and the research that you conducted there. 130,000 AI queries, is that right, from students? Could you give the audience an idea about what the research was focused on and perhaps what surprised you most about your findings?

Muireann Hendriksen 06:47

Yeah, sure. So we knew since the launch of our AI study tools that this was going to really open up new insights for us into the learning process and the kind of learner interactions with our tools, that we could see things that we'd never been able to see before. And I guess the feature that I, as a qualitative researcher, was most interested in was our Explain feature, which is basically where learners ask questions in their own words. That was interesting to me because you would get to see, you know, how are they naturally framing their questions? What are they curious about? What are they confused about? What kind of language are they using? You know, are they mirroring the textbook? Are they putting things into their own words? And as we started to have these discussions in our team, we were starting to think about, you know, what can we tell from these interactions? Are they just kind of using the tool for a very quick, like, you know, I'm writing something, let me check the definition of this quickly? Or is it a kind of more meaningful, engaged interaction where they're trying to go deeper on something, they're trying to, you know, ask a more complex question? So to give a little more detail on the Asking to Learn research, like you said, we looked at approximately, I think it was about 128,000 inputs from about 9,000 users. So it was, yeah, no small undertaking. And we looked at inputs across one year to an intro level biology textbook. So this is the kind of textbook that would be used in a typical kind of first year biology course.

Daniel Emmerson 08:15

This is Campbell Biology?

Muireann Hendriksen 08:17

Correct. It's one of our best selling titles, first year of university. So really kind of building that foundational understanding, kind of intro biology level courses. What surprised me about the research? I think, you know, this was exploratory research to begin with, because we were working with these new AI tools, we really didn't know what we were going to see in there. And with such a huge data set, we needed to structure it around something, because we just had an overwhelming set of data to work with. So we structured it around Bloom's taxonomy, the revised Bloom's taxonomy, for a number of reasons. One, because it's a very recognisable structure to anyone working in teaching and learning, but also because of what it allows us to focus on. We didn't just want to look at the topic of what students were asking about. So like, you know, okay, great, they have loads of questions about the cell. That doesn't really tell us that much. You know, a good teacher would say, okay, but what do they want to know about the cell? That would be your immediate follow up, right? So we wanted to understand, at what level is this question happening? You know, how deep are they getting into it? Is it something about experimentation, or about a law they've come across? So this revised Bloom's allows us to kind of look at the inputs in these two different dimensions. So what are they asking about, and how complex or challenging is the thinking behind that question? And what was surprising to us is we found that about 80% of their inputs were at the kind of lower levels of Bloom's. So that kind of remembering, conceptual, you know, basically nothing that you wouldn't expect from an intro level course, because obviously the focus is, you know, introducing new topics, building foundational knowledge. What was exciting to see was that around a third of the time students were, you know, asking questions at higher levels of cognitive complexity.
So if you're familiar with Bloom's, it's like working at that apply level and above. And then around 20% of the time they were really going to analyse and above. So, you know, asking questions that we know are consistent with the development of higher order thinking skills, showing evidence of critical thinking, which was super encouraging and exciting to see.
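The two-dimensional coding described here, topic plus cognitive level, lends itself to a simple aggregation once each query has been labelled. Below is a minimal sketch of how the "share of queries at apply and above" figure could be computed; the function name and toy data are illustrative assumptions, not Pearson's actual pipeline:

```python
BLOOM_LEVELS = ["remember", "understand", "apply", "analyse", "evaluate", "create"]

def share_at_or_above(labelled_queries, threshold):
    """Fraction of queries whose revised Bloom's level is `threshold` or higher."""
    rank = {level: i for i, level in enumerate(BLOOM_LEVELS)}
    cutoff = rank[threshold]
    hits = sum(1 for level in labelled_queries if rank[level] >= cutoff)
    return hits / len(labelled_queries)

# Toy sample standing in for the labelled query corpus.
sample = ["remember", "remember", "understand", "apply", "analyse", "remember"]
print(share_at_or_above(sample, "apply"))  # share at apply level and above
```

The same helper works for the "analyse and above" cut by passing a different threshold, so one labelled dataset yields all the headline proportions.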

Daniel Emmerson 10:24

What does it tell us about how students naturally engage with AI, do you think? Particularly if we're looking at deeper level thinking?

Muireann Hendriksen 10:34

I think first of all, it was really surprising to me to see that kind of level of deeper thinking. And you know, on a personal level, I loved the level of creativity that they were showing in their questions. So really trying to make topics their own, trying to have it explained to them in ways that they understood or made sense in their world. You know, can you relate this to a game of tennis? Or, you know, how would Taylor Swift write the lyrics to this biology topic? That kind of thing. So to me that shows, you know, someone who's really, really trying to understand something in terms that are familiar to them. That was really exciting to see.

Daniel Emmerson 11:09

Can I just ask, Muireann, were there prompts around this as something that you could do, or were these things that were naturally occurring?

Muireann Hendriksen 11:16

These were naturally occurring. So when the Explain feature opens, it's kind of like a little window in their e-textbook. I think it has a very generic prompt, like "if you would like to ask a question" or something like that. The initial interaction is completely, you know, prompted by the student. It's coming from them, which is also, yeah, good to see. It was encouraging to see that deeper thinking, and I think it shows that, you know, when the tool is built in accordance with how we know learning happens, if it's being used in an appropriate place, i.e. in the flow of their learning, in their interaction with the textbook, it was really encouraging to see how it can be used to kind of scaffold them towards that higher order thinking. And I think something we mentioned briefly in the paper is that actually, based on the findings from this work, we worked with the product development team on the development of a new feature which we're calling Go Deeper. And that actually will be trying to nudge them and encourage them towards more of that higher order thinking.

Daniel Emmerson 12:19

Can you tell us a bit more about how Go Deeper was designed from the research insights?

Muireann Hendriksen 12:24

Yeah, sure. Because we had those really encouraging findings where they were going into those higher order questions some of the time, basically what we've done is added this new enhancement to the Explain feature, so that when a learner now puts something into Explain, they'll see three kind of relevant follow-ups to their original question at the bottom of the reply that they can explore. And to come back to my earlier point when I was talking about not wanting to overwhelm people with lots of information, we have deliberately designed it so that the follow-up questions that they see in Go Deeper will never go more than one or two Bloom's levels higher than what they originally asked. Because you don't want to cause confusion and overwhelm people. If you have a simple definitional question, you don't want something that's asking you about designing an experiment and confusing you even further. So the idea is that we could kind of begin to transform things. If a learner comes along with a single query, it kind of becomes a more guided pathway, trying to actively scaffold that higher order thinking in a way that isn't overwhelming. So that's been a really encouraging development from there.

Daniel Emmerson 13:34

Can we try and paint a picture of what that experience looks like?

Muireann Hendriksen 13:37

Sure.

Daniel Emmerson 13:37

From a user perspective. So what happens? You open your platform, you're looking at Campbell Biology. I don't know, you might be looking at cell structure or whatever. What happens then? How do you get to that Go Deeper?

Muireann Hendriksen 13:49

Yeah, let me talk you through it. So I'm going to use my prepared example, because in doing this work I've been working at the absolute limits of my biology knowledge, let me tell you. So I'm not great at going too off the cuff on this. But say I'm a learner, I'm in my biology textbook and I get a little bit confused. So I see the little AI study tool chatbot, I open it up on my page and I go to the Explain feature, and I ask: can you define a polar molecule for me? So that would be something that's at the Bloom's remember and know level, because it's a very kind of basic factual definition. So my explanation will come up, and then underneath it will be three follow-up questions. They might reflect understanding, which would be the next highest level, so you might get a question like: describe partial charges in polar molecules. The level after that will be apply, so it could say: can you apply the polar molecule concept to a solubility scenario? So you're going to get two questions one level above and one question two levels above, to try and scaffold you and build your curiosity and see how you could expand on this topic.
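The capping rule in this walkthrough, two follow-ups pitched one Bloom's level above the query and one pitched two levels above, never beyond the top of the taxonomy, can be sketched as a small helper. This is an illustrative reconstruction of the behaviour described, not Pearson's implementation, and the function name is hypothetical:

```python
BLOOM_LEVELS = ["remember", "understand", "apply", "analyse", "evaluate", "create"]

def follow_up_plan(query_level):
    """Return the Bloom's levels at which Go Deeper-style follow-ups are pitched:
    two questions one level above the query and one question two levels above,
    clamped so suggestions never exceed the highest level, 'create'."""
    i = BLOOM_LEVELS.index(query_level)
    top = len(BLOOM_LEVELS) - 1
    one_up = BLOOM_LEVELS[min(i + 1, top)]
    two_up = BLOOM_LEVELS[min(i + 2, top)]
    return [one_up, one_up, two_up]

print(follow_up_plan("remember"))  # ['understand', 'understand', 'apply']
```

For the polar-molecule example, a remember-level query yields follow-ups at understand and apply, exactly the step sizes described, while a simple definitional question never jumps straight to experiment design.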

Daniel Emmerson 15:01

I'm trying to think this through as well from the perspective of younger learners. Right. The majority of our exploration is in the K12 space. I suppose we're looking here specifically at university. Does there need to be a level of AI literacy at the outset in order for a learner to engage with something like this, or is that something that they can just readily fall into? What are your thoughts around that?

Muireann Hendriksen 15:28

I think that, you know, there is a basic level of AI literacy that anyone needs when they're interacting with AI tools. But I think because these tools have been designed within a very boundaried learning environment, the hope is, you know, that it's not going to take them somewhere. It's pulling on very trusted content that's already been kind of publisher approved, expert vetted, and there are only so many things they can do within our tools. It's not kind of, you know, opening up the world to a younger learner in that way.

Daniel Emmerson 15:58

What about when it comes to inclusive design considerations of a tool like this? We talked about trust at the beginning and how important that is. What about when it comes to accessibility and ensuring that it's possible to access from lots of different student perspectives?

Muireann Hendriksen 16:16

So we always build with kind of universal principles in mind for accessibility. That's just a standard part of our product development process. I think starting from this baseline of getting something out there, we've iterated on the tools over time. So including things like building in the ability for learners to incorporate images and videos, still within that kind of protected environment and vetted content, but just to try and expand to a wider range of learning preferences or learning needs. And now that the AI study tools are being introduced to textbooks for a wider kind of global audience, we'll have tools available in languages other than English, where people can input things in their native language and have it translated for them and kind of work with it on that basis.

Daniel Emmerson 17:04

When you're looking at the success of the tool, then again, going back to the importance of the research behind it, what are the main signals that indicate real learning transformation and not just being something that's convenient and easy to access, but something that's really having a positive impact there.

Muireann Hendriksen 17:23

Yeah. So we try to think about learning activities and behaviours along a continuum, from sort of passively listening to something or reading something and doing nothing else with it, just kind of ingesting content, to very active learning behaviour where you're really making the work your own. You're trying to reformulate things in your own words. You're kind of actively manipulating information. And so we look at our AI study tools in conjunction with everything else that the learner might be doing within this space. So what notes are they taking? Are they using flashcards? Are they trying practice questions? We know that those more active study behaviours are associated with better outcomes in the real world, like better grade performance, better, you know, exam scores and so on. So we try to look at it in that context. Obviously, you know, we're not in the classroom with the learner. We have access to what we have access to, so that's as far as we can go. We know that we are seeing study behaviours that point to better outcomes in the real world. In time, we would love to develop that into more formal research where we could, you know, understand those kinds of real world outcomes. But for the moment, it's more about understanding: are we encouraging, and hopefully seeing, more of those active study behaviours?

Daniel Emmerson 18:41

And is there anything that stemmed from the research that you think will impact perhaps future designs or future iterations of the product?

Muireann Hendriksen 18:49

I think, you know, it's an ongoing conversation, both internally from the data that we're seeing, but also, you know as well as anyone working in AI, it's changing all the time, right? So you're constantly keeping up with things that are changing in terms of weeks and months, not years. So it's balancing those two things while also, I guess, trying to protect the integrity of the learning and study process. You know, at the end of the day, it's always a consideration. There's no need to throw in, you know, 20 shiny features if they have no proven impact on learning. We need to be designing in accordance with learning science and really thinking carefully and thoughtfully about what we're putting in front of students. So that's always kind of the guiding principle. So I think where we're heading is trying to think about how we can use the capabilities that AI affords us, and the kind of insights that we're having from things like the AI study tools, to improve that study experience. So, for example, hypothetically, take the Explain inputs: can we use those internally to help us see, well, students always get confused here, a lot of inputs are at this place in the textbook, maybe we need to revisit this content or bring it to life for them in a different way? It's this wonderful feedback loop that we've never had before, that is so telling of where students are in their process, and it really kind of turns it into a dialogue. So, yeah, it's super exciting and interesting to be a part of.
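The feedback loop sketched here, spotting textbook locations where Explain queries cluster so the content can be revisited, amounts to a frequency count over query locations. A minimal illustrative sketch, with hypothetical section identifiers and a hypothetical function name rather than anything from the actual product:

```python
from collections import Counter

def confusion_hotspots(query_locations, top_n=3):
    """Rank textbook sections by how many Explain queries they attract;
    heavily queried sections may signal content that needs revisiting."""
    return Counter(query_locations).most_common(top_n)

# Hypothetical section IDs standing in for per-query location metadata.
sample_locations = ["ch2.3", "ch2.3", "ch5.1", "ch2.3", "ch5.1", "ch7.2"]
print(confusion_hotspots(sample_locations, top_n=2))  # [('ch2.3', 3), ('ch5.1', 2)]
```

In practice the counts would presumably be normalised by how many learners reach each section, but even a raw tally turns the query stream into the kind of dialogue with the content that the interview describes.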

Daniel Emmerson 20:17

I'm wondering also if there's anything from the research that might indicate an increase in over-reliance on AI for learning. Is that something that you've seen patterns emerging in, or not so much? And what is over-reliance, I suppose?

Muireann Hendriksen 20:33

Right, yeah, true. I think, when we see over-reliance depicted in, say, media articles, for me over-reliance is where AI is used to completely bypass the learning process, and there is no real evidence that any understanding or learning has taken place. It has just been kind of straight from A to Z. So that's, I guess, how I would think about over-reliance. If we were seeing that, we would see huge drop-offs in the other product features that we have to support active study practices. We're not seeing that. So that suggests that this is one of a suite of tools that students are using, not necessarily the place they're putting in everything. And also, the first thing I should have said is that the tool has been designed not to give them the answer. The idea is that it will scaffold them through the process and encourage them to return to, why are you asking this question? Why are you confused about this? Rather than immediately giving them the answer.

Daniel Emmerson 21:37

Reflecting then on asking to learn. Is there a piece of advice that you might give to educators when it comes to embedding AI tools in a way that's thoughtful and responsible?

Muireann Hendriksen 21:50

You know, we're talking to educators and learners a lot, and I've seen a real shift in the conversation around generative AI. I think when these tools first came out and first started to be discussed, a lot of the conversation with educators was about whether we should use AI or not. Now that discourse seems to have really moved on to how and where we can use AI effectively. Pearson's recent UK Schools report shows that there's such a strong demand for more training around AI, both from teachers and students. So I think there's real appetite there, and certainly a real awareness from educators that they need to know more about this, because they're trying to prepare students for a world where a lot of work will involve AI. And as this understanding grows about how we can use AI to scale excellence in teaching and learning, a lot of it is about building connections between the technology and how it can be used to help achieve these goals. I think there's a real, maybe psychological factor in this, where you need to build confidence, overcome fear, overcome overwhelm, because it is changing really quickly and it can be overwhelming to engage with these things. So it's just about seeing how they can be used meaningfully. It doesn't have to be about short-circuiting learning; they can be part of a meaningful incorporation that works towards better cognitive engagement.

Daniel Emmerson 23:15

Asking to Learn is a fascinating project and a really wonderful read. Muireann, I'm wondering what next you have lined up on the research side. Is there anything that you can share with us?

Muireann Hendriksen 23:27

I think our plan is just to expand on this work. I mean, this was an exploratory study. We weren't expecting the findings that we got, and it was super encouraging to see, so we hope to replicate it in other disciplines. And, as I mentioned, our AI study tools are now available to a more global audience, so it would be interesting to see how this plays out in other contexts, other ages and stages.

Daniel Emmerson 23:50

Looking forward to seeing where this goes. Muireann, thank you so, so very much for sharing your thoughts and reflections. It's been wonderful speaking with you today. Look forward to staying in touch and speaking with you again very soon.

Muireann Hendriksen 24:03

Thank you so much for having me. As a final step, I would just encourage any listeners to read our Asking to Learn report as a good example of what can happen when AI is built for learning.

Daniel Emmerson 24:13

A shining example. I'll make sure we share it in the notes, that's for sure.

Muireann Hendriksen 24:17

Brilliant. Thank you.

Voiceover 24:19

That's it for this episode. Don't forget, the next episode is coming out soon, so make sure you click that option to follow or subscribe. It just means you won't miss it. But in the meantime, thank you for being here and we'll see you next time.
