Muireann Hendriksen: Adapting AI Tools Based on Learning Science

Video Recap
Summary
In this episode, Daniel speaks with Muireann Hendriksen, Principal Research Scientist at Pearson, about her team's recent research study, "Asking to Learn." The study analysed 128,000 AI queries from 9,000 student users to gain deeper insights into how students learn when they interact with AI study tools. Its key finding was that approximately one-third of student queries demonstrated higher-order thinking skills. Their conversation also explores important themes around trust, student engagement, accessibility and inclusivity, as well as how AI tools can promote active learning behaviours.
You can find the full research report at https://plc.pearson.com/sites/pearson-corp/files/asking-to-learn.pdf
Transcript
Daniel Emmerson 00:02
Welcome to Foundational Impact, a podcast series that focuses on education and artificial intelligence from a nonprofit perspective. My name is Daniel Emmerson and I'm the Executive Director of Good Future Foundation, a nonprofit whose mission is to equip educators to confidently prepare all students, regardless of their background, to benefit from and succeed in an AI infused world.
Thank you everybody once again for joining us for another episode of Foundational Impact. Our podcast series focuses on ways that we're looking at artificial intelligence in schools across the education sector and beyond. And I'm absolutely delighted to have Muireann with us today from Pearson. A very warm welcome to you. It's wonderful to have you with us. There are so many things that I'd like to ask you about based on your work and your research. But before I do that, would you mind just giving a quick introduction about who you are and what you do for our audience?
Muireann Hendriksen 01:03
Sure, yeah. It's a pleasure to be here. Thank you for having me. So I'm Muireann Hendriksen, I'm a qualitative researcher and I work as part of Pearson's research and development and thought leadership team, which is a bit of a mouthful. I'm a principal research scientist there, and what that means is I work with teams across our business on product development and improvement, but I also work on thought leadership. So kind of working across the big cross-cutting questions around education and learning of our time, like, for example, the impact of AI on learning, which I'm sure we'll be chatting loads about today. So yeah, I kind of have that dual product focus and thought leadership focus, which keeps things interesting.
Daniel Emmerson 01:45
I'm sure it does. How were you drawn into that AI space in particular?
Muireann Hendriksen 01:49
Yeah, so our team has been involved with the development of our AI study tools at Pearson from the very get go. So we started working on this, I think, in the summer of 2022, when the more mainstream AI tools started to be, you know, available to the public and people were starting to experiment with them. And we knew at Pearson that we wanted to design something that could be trustworthy, that could be accurate, that could be reliable. So our team was really involved, you know, kind of got in on the ground floor as part of this cross-functional group. We had subject matter experts and UX product developers working to build something that would take advantage of the capabilities that generative AI brings, but would, you know, pull on trusted content and be built in accordance with the principles of good learning science. So yeah, it's been quite a journey, and obviously it's changing all the time. It's a lot to keep up with, but it's been a really exciting thing to be working on.
Daniel Emmerson 02:49
You mentioned trust a couple of times there, in different contexts. That's obviously at the forefront of development as far as Pearson is concerned. It might seem like a strange question, but putting that front and center was a strategic decision. Was there any discussion around why that might be important, particularly through the lens of education?
Muireann Hendriksen 03:10
Yeah, of course. So when we design for learners, we want it to be, you know, helping them achieve their learning goals or the learning outcomes they have in mind. And I think as part of that, they have to feel secure that the content they're getting from us, and that they're working with, is something that they can trust and rely on, whether it's to answer homework, complete an assignment, or prepare for an exam. And so our design of these tools prioritises expert, vetted, publisher approved content, but also we're talking to instructors and learners across the world all the time. And I was heartened to hear it in your recent episode with the members of your student council, when they were talking about how, when you use these general purpose AI tools, sometimes they can serve you back a real flood of information. And that can be super confusing for a learner, because it maybe doesn't exactly match what your textbook is saying or what your teacher has said in class. It can maybe have too much or too little detail. You know, when you're looking at it, that maybe it doesn't exactly match what you're expecting to receive. And so it's not just about having trusted content to pull on, but also making sure that the outputs are contextualised in that learning process. And so because we're basing everything on the textbook first, it really places the output from our AI study tools in context for the learner. It gives it to them in a very scaffolded way, with positive encouragement, really trying to guide the learning process for them. So it was about, you know, really trying not to overwhelm anyone, and just make this another useful tool that could be part of their study arsenal, if you will.
Daniel Emmerson 04:52
Because I suppose one of the implications of that mainstream introduction of generative AI across every sector, near enough, but certainly education, is that we've gone from a culture of believing what we read to a culture, or trying to become a culture, that needs to question everything we read: where it comes from, how it's generated, what sources it's pulling from and who's behind it. That's something that we're looking at, particularly from a disinformation or misinformation perspective. I'm interested to know if that's something that you've come across in your work or your research. I know you've spoken to tons of students about different areas of AI in education, but was this one of them? And if so, what sort of response did you get?
Muireann Hendriksen 05:39
I think when we talk to students, the responses that we hear about how they feel about AI in their learning don't really feel that different from how they feel about other learning tools. What they want is a positive, safe, collaborative interaction that ultimately is going to help them get to where they want to be. I think it's still very early days of mainstream AI release, if you will, and I think the research base around AI and learning is still crystallising. You know, it does seem to be starting to point towards AI tools being most effective when they are incorporated meaningfully into the learning process. And I think that's what we're always trying to get at, that kind of meaningful incorporation that is really, you know, encouraging active engagement rather than just passively receiving an answer.
Daniel Emmerson 06:27
It'd be great to speak with you a little bit about Asking to Learn and the research that you conducted there. 130,000 AI queries, is that right, from students? Could you give the audience an idea of what the research was focused on, and perhaps what surprised you most about your findings?
Muireann Hendriksen 06:47
Yeah, sure. So we knew since the launch of our AI study tools that this was going to really open up new insights for us into the learning process and learner interactions with our tools, that we could see things we'd never been able to see before. And I guess the feature that I, as a qualitative researcher, was most interested in was our Explain feature, which is basically where learners ask questions in their own words. That was interesting to me because you get to see, you know, how are they naturally framing their questions? What are they curious about? What are they confused about? What kind of language are they using? You know, are they mirroring the textbook? Are they putting things into their own words? And as we started to have these discussions in our team, we were starting to think about, you know, what can we tell from these interactions? Are they just using the tool for something very quick, like, I'm writing something, let me check the definition of this quickly? Or is it a more meaningful, engaged interaction where they're trying to go deeper on something, trying to ask a more complex question? So to give a little more detail on the Asking to Learn research, like you said, we looked at approximately, I think it was about 128,000 inputs from about 9,000 users. So it was, yeah, no small undertaking. And we looked at inputs across one year to an intro level biology textbook. So this is the kind of textbook that would be used in a typical first year biology course.
Daniel Emmerson 08:15
This is Campbell Biology?
Muireann Hendriksen 08:17
Correct. It's one of our best selling titles, first year of university. So really kind of building that foundational understanding in intro biology level courses. What surprised me about the research? I think, you know, this was exploratory research to begin with, because we were working with these new AI tools; we really didn't know what we were going to see in there. And with such a huge data set, we needed to structure it around something, because we just had an overwhelming set of data to work with. So we structured it around Bloom's taxonomy, the revised Bloom's taxonomy, for a number of reasons. One, because it's a very recognisable structure to anyone working in teaching and learning, but also because of what it allows us to focus on. We didn't just want to look at the topic of what students were asking about. So, like, okay, great, they have loads of questions about the cell; that doesn't really tell us that much. You know, a good teacher would say, okay, but what do they want to know about the cell? That would be your immediate follow up, right? So we wanted to understand, at what level is this question happening? How deep are they getting into it? Is it something about experimentation, or about a law they've come across? So this revised Bloom's allows us to look at the inputs in these two different dimensions: what are they asking about, and how complex or challenging is the thinking behind that question? And what was surprising to us is we found that about 80% of their inputs were at the kind of lower levels of Bloom's, so that kind of remembering, conceptual level. Basically nothing that you wouldn't expect from an intro level course, because obviously the focus is, you know, introducing new topics, building foundational knowledge. What was exciting to see was that around a third of the time students were, you know, asking questions at higher levels of cognitive complexity. 
So if you're familiar with Bloom's, it's like working at that apply level and above. And then around 20% of the time they were really going to analyse and above. So, you know, asking questions that we know are consistent with the development of higher order thinking skills, showing evidence of critical thinking, which was super encouraging and exciting to see.
Daniel Emmerson 10:24
What does it tell us about how students naturally engage with AI, do you think? Particularly if we're looking at deeper level thinking?
Muireann Hendriksen 10:34
I think, first of all, it was really surprising to me to see that kind of level of deeper thinking. And, you know, on a personal level, I loved the level of creativity that they were showing in their questions. So really trying to make topics their own, trying to have things explained to them in ways that they understood or that made sense in their world. You know, can you relate this to a game of tennis? Or, you know, how would Taylor Swift write the lyrics to this biology topic? That kind of thing. So to me that shows, you know, someone who's really, really trying to understand something in terms that are familiar to them. That was really exciting to see.
Daniel Emmerson 11:09
Can I just ask, Muireann, were there prompts around this as something that you could do, or were these things that were naturally occurring?
Muireann Hendriksen 11:16
These were naturally occurring. So when the Explain feature opens, it's kind of like a little window in their e-textbook, and I think it has a very generic prompt, like "if you would like to ask a question" or something like that. The initial interaction is completely, you know, prompted by the student. It's coming from them, which is also, yeah, good to see. It was encouraging to see that deeper thinking, and I think it shows that when the tool is built in accordance with how we know learning happens, and if it's being used in an appropriate place, i.e. in the flow of their learning, in their interaction with the textbook, it can be used to scaffold them towards that higher order thinking. And I think something we mentioned briefly in the paper is that actually, based on the findings from this work, we worked with the product development team on the development of a new feature which we're calling Go Deeper. And that will actually be trying to nudge them and encourage them towards more of that higher order thinking.
Daniel Emmerson 12:19
Can you tell us a bit more about how Go Deeper was designed from the research insights?
Muireann Hendriksen 12:24
Yeah, sure. Because we had those really encouraging findings, where they were going into those higher order questions some of the time, basically what we've done is added this new enhancement to the Explain feature, so that when a learner now puts something into Explain, they'll see three relevant follow-ups to their original question at the bottom of the reply that they can explore. And to come back to my earlier point about not wanting to overwhelm people with lots of information, we have deliberately designed it so that the follow-up questions they see in Go Deeper will never go more than one or two Bloom's levels higher than what they originally asked. Because you don't want to cause confusion and overwhelm people: if you have a simple definitional question, you don't want something that's asking you about designing an experiment and confusing you even further. So the idea is that we can begin to transform things: if a learner comes along with a single query, it becomes a more guided pathway, trying to actively scaffold that higher order thinking in a way that isn't overwhelming. So that's been a really encouraging development from there.
Daniel Emmerson 13:34
Can we try and paint a picture of what that experience looks like?
Muireann Hendriksen 13:37
Sure.
Daniel Emmerson 13:37
From a user. So what happens? You open your platform, you're looking at Campbell Biology. I don't know, you might be looking at cell structure or whatever. What happens then? How do you get to that Go Deeper?
Muireann Hendriksen 13:49
Yeah, let me talk you through it. So I'm going to use my prepared example, because in doing this work I've been working at the absolute limits of my biology knowledge, let me tell you. So I'm not great at going too off the cuff on this. But say I'm a learner, I'm in my biology textbook and I get a little bit confused. So I see the little AI study tool chatbot, I open it up on my page, and I go to the Explain feature and I ask: can you define a polar molecule for me? So that would be at Bloom's remember level, because it's a very basic factual definition. So my explanation will come up, and then underneath it will be three follow up questions. These might reflect understanding, which would be the next highest level, so you might get a question like: describe the partial charges in polar molecules. The level after that will be apply, so it could say: can you apply the polar molecule concept to a solubility scenario? So you're going to get two questions one level above and one question two levels above, to try and scaffold you and build your curiosity and see how you could expand on this topic.
Daniel Emmerson 15:01
I'm trying to think this through as well from the perspective of younger learners. Right. The majority of our exploration is in the K12 space. I suppose we're looking here specifically at university. Does there need to be a level of AI literacy at the outset in order for a learner to engage with something like this, or is that something that they can just readily fall into? What are your thoughts around that?
Muireann Hendriksen 15:28
I think that, you know, there is a basic level of AI literacy that anyone needs when they're interacting with AI tools. But I think because these tools have been designed within a very boundaried learning environment, the hope is that it's not going to take them somewhere unexpected. It's pulling on very trusted content that's already been publisher approved and expert vetted, and there are only so many things they can do within our tools. It's not, you know, opening up the world to a younger learner in that way.
Daniel Emmerson 15:58
What about when it comes to inclusive design considerations for a tool like this? We talked about trust at the beginning and how important that is. What about when it comes to accessibility, and ensuring that it's possible to access from lots of different student perspectives?
Muireann Hendriksen 16:16
So we always build with universal design principles in mind for accessibility. That's just a standard part of our product development process. I think starting from this baseline of getting something out there, we've iterated on the tools over time, including things like building in the ability for learners to incorporate images and videos, still within that protected environment and vetted content, just to try and cater to a wider range of learning preferences or learning needs. And now that the AI study tools are being introduced to textbooks for a wider global audience, we'll have tools available in languages other than English, where people can input things in their native language and have it translated for them, and work with it on that basis.
Daniel Emmerson 17:04
When you're looking at the success of the tool, then, again going back to the importance of the research behind it, what are the main signals that indicate real learning transformation? Not just something that's convenient and easy to access, but something that's really having a positive impact there?
Muireann Hendriksen 17:23
Yeah. So we try to think about learning activities and behaviours along a continuum, from passively listening to something or reading something and doing nothing else with it, just kind of ingesting content, to very active learning behaviour where you're really making the work your own: you're trying to reformulate things in your own words, you're actively manipulating information. And so we look at our AI study tools in conjunction with everything else that the learner might be doing within this space. What notes are they taking? Are they using flashcards? Are they trying practice questions? We know that those more active study behaviours are associated with better outcomes in the real world, like better grade performance, better exam scores and so on. So we try to look at it in that context. Obviously, you know, we're not in the classroom with the learner. We have access to what we have access to, so that's as far as we can go. We know that we are seeing the study behaviours that point to better outcomes in the real world. In time, we would love to develop that into more formal research where we could understand those real world outcomes. But for the moment, it's more about understanding: are we encouraging, and hopefully seeing, more of those active study behaviours?
Daniel Emmerson 18:41
And is there anything that stemmed from the research that you think will impact perhaps future designs or future iterations of the product?
Muireann Hendriksen 18:49
I think, you know, it's an ongoing conversation, both internally, from the data that we're seeing, but also, you know as well as anyone working in AI, it's changing all the time, right? So you're constantly keeping up with things that are changing in terms of weeks and months, not years. So it's balancing those two things while also, I guess, trying to protect the integrity of the learning and study process. At the end of the day, it's always a consideration: there's no need to throw in, you know, 20 shiny features if they have no proven impact on learning. We need to be designing in accordance with learning science and really thinking carefully and thoughtfully about what we're putting in front of students. So that's always kind of the guiding principle. So I think where we're heading is trying to think about how we can use the capabilities that AI affords us, and the kind of insights that we're getting from things like the AI study tools, to improve that study experience. So, for example, hypothetically, with the Explain inputs, can we use those internally to help us see: well, students always get confused here, a lot of inputs are at this place in the textbook, so maybe we need to revisit this content or bring it to life for them in a different way? It's this wonderful feedback loop that we've never had before, that is so telling of where students are in their process, and it really turns it into a dialogue. So, yeah, it's super exciting and interesting to be a part of.
Daniel Emmerson 20:17
I'm wondering also if there's anything from the research that might indicate an increase in over reliance on AI for learning. Is that something that you've seen patterns emerging in, or not so much? And what is over reliance, I suppose.
Muireann Hendriksen 20:33
Right, yeah, true. I think, when we see over reliance depicted in, I guess, media articles about it, for me over reliance is where AI is used to completely bypass the learning process, and there is no real evidence that any understanding or learning has taken place. It has just been kind of straight from A to Z. So that's, I guess, how I would think about over reliance. I think if we were seeing that, we would see huge drop offs in the other product features that we have to support active study practices. We're not seeing that. So that kind of suggests that this is one of a suite of tools that students are using, and not necessarily that they're putting everything into it. And also, the tool has been designed not to give them the answer, which is the first thing I should have said. The idea is that it will scaffold them through the process and encourage them to return to: why are you asking this question? Why are you confused about this? Rather than immediately giving them the answer.
Daniel Emmerson 21:37
Reflecting then on asking to learn. Is there a piece of advice that you might give to educators when it comes to embedding AI tools in a way that's thoughtful and responsible?
Muireann Hendriksen 21:50
You know, we're talking to educators and learners a lot, and I've seen a real shift in the conversation around generative AI. I think when these tools first came out and first started to be discussed, a lot of the conversation with educators was about: should we use AI or not? And now that discourse seems to really have moved on to: how and where can we use AI effectively? Pearson's recent UK Schools report shows that there's such a strong demand for more training around AI, both from teachers and students. So I think there's real appetite there, and certainly a real awareness from educators that they need to know more about this, because they're trying to prepare students for a world where a lot of things will be working with AI. And so I think that, as this understanding grows about how we can use AI to scale excellence in teaching and learning, a lot of it is about building connections between the technology and how it can be used to help achieve these goals. I think there's a real, maybe psychological, factor in this, where you need to build confidence, overcome fear, overcome overwhelm, because it is, you know, changing really quickly. It can be overwhelming to engage with these things. So it's just about seeing how these tools can be used meaningfully. It doesn't always have to be, you know, short circuiting learning; they can be part of a meaningful incorporation that works towards better cognitive engagement.
Daniel Emmerson 23:15
Asking to Learn is a fascinating project and a really wonderful read. Muireann, I'm wondering what next you have lined up on the research side. Is there anything that you can share with us?
Muireann Hendriksen 23:27
I think our plan is just to expand on this work. I mean, this was an exploratory study; we weren't expecting to have the findings that we did. It was super encouraging to see, so we'll hope to replicate it in other disciplines. And, you know, as I mentioned, our AI study tools are now available to a more global audience, and so it would be interesting to see how this plays out in other contexts, other ages and stages.
Daniel Emmerson 23:50
Looking forward to seeing where this goes. Muireann, thank you so, so very much for sharing your thoughts and reflections. It's been wonderful speaking with you today. Look forward to staying in touch and speaking with you again very soon.
Muireann Hendriksen 24:03
Thank you so much for having me. As a final step, I would just encourage any listeners to read our Asking to Learn report as a good example of what can happen when AI is built for learning.
Daniel Emmerson 24:13
A shining example. I'll make sure we share it in the notes, that's for sure.
Muireann Hendriksen 24:17
Brilliant. Thank you.
Voiceover 24:19
That's it for this episode. Don't forget, the next episode is coming out soon, so make sure you click that option to follow or subscribe. It just means you won't miss it. But in the meantime, thank you for being here and we'll see you next time.
