Thomas Sparrow: Navigating AI and the disinformation landscape

June 2, 2025

Daniel Emmerson​00:02

Welcome to Foundational Impact, a podcast series that focuses on education and artificial intelligence from a nonprofit perspective. My name is Daniel Emmerson and I'm the Executive Director of Good Future Foundation, a nonprofit whose mission is to equip educators to confidently prepare all students, regardless of their background, to benefit from and succeed in an AI-infused world.

Absolutely wonderful, Thomas Sparrow, to have you here with us today as part of Foundational Impact. Thank you so much for being a guest on this series. Thomas, as one of our guests that is most frequently in the media, I think it would be wonderful for our audience to know a little bit about your role, what you do on a day to day basis, and perhaps an example of something you've reported on recently.

Thomas Sparrow​00:52

So it's a pleasure to be joining you on this and I'm really happy to be discussing all these different topics that we've got ahead of us in the next few minutes. I really appreciate the invitation. My name's Thomas Sparrow and I work as a correspondent in Germany. I've been here in Germany now for 10 years, mostly working for Germany's international broadcaster, Deutsche Welle. So basically the equivalent of the BBC World Service. Before I came to Germany, I was actually working for the BBC as a correspondent in the United States, in Washington D.C. and in Florida. And since 2016 or 2017 I've been combining my work as a correspondent with media literacy work in schools. So basically traveling to schools in many different parts of the world to help students and more broadly the school community to learn about disinformation and also learn techniques on how to deal with disinformation. I am also a fact checker for Deutsche Welle. So that basically means that on a daily basis I am verifying disinformation that we find on social media platforms like X, or like TikTok or like Facebook or like Snapchat or other platforms as well. We do that based on reach, so if something has gone viral or is particularly relevant in one region of the world, and that is linked, as I'm mentioning already, to relevance. In other words, if the claim has any relevance, either political relevance, social relevance, economic relevance. So we basically use open source intelligence tools, from geolocation to satellite technology, to reverse image searches, to tracking, to face recognition in certain cases, to verify claims and provide elements that can help viewers, listeners, readers identify what's true and what's not true. So I divide my time between my work as a correspondent, so on camera or on the radio as well, reporting on what's happening here in Germany, and then working on the issue of disinformation, be it traveling to schools or fact checking claims that we find online.

Daniel Emmerson​03:05

That's a lot for our audience to take in, I think, Thomas, in terms of the scope of your work, I mean, it's absolutely fascinating, the era that we're living in, conversations around how people that go to social media, for example, for their news, are able to discern what is true and what is not. Maybe let's start by breaking that down. So you mentioned misinformation and disinformation. So for our audience, could you just give us a really quick overview of what those two things are and what is the difference between those two things?

Thomas Sparrow​03:38

Misinformation with an M is misleading or false information that does not have harmful intent. So imagine if I'm writing an article for a newspaper and I've got a picture with a certain politician that I want to publish, and by mistake I write the wrong name of the politician and that gets printed. So it's a mistake and I need to apologise. The media outlet needs to apologise. But the intent is not to deceive. It's simply something that happened as part of the editorial process and that you then obviously apologise for and repair.

Daniel Emmerson​04:10

Before we go for the disinformation, just to come back to you, if I may, with a quick question on that, because you're talking about that from the perspective of a broadcaster, but what about if that happens because so many people go to social media and you have influencers, for example, that share stories on current affairs and opinion pieces, does that same misinformation principle apply to them?

Thomas Sparrow​04:35

Yes.

Daniel Emmerson​04:35

Or are we getting into a different area?

Thomas Sparrow​04:38

So imagine there's an earthquake somewhere around the world and you receive something on WhatsApp or Signal or Telegram, and you don't know if it's true, but it's shocking. And then you send it to your relatives or, I don't know, to your friends around the world, and then three or four hours later you realise that what you've shared with them is actually false. So the intent was not to deceive your friends or family. The intent was to share it with them. But it was not a misleading intent. It was not something where you want to say, oh, I'm harming them. I'm going to send them something so they believe something that didn't actually happen. So the principle of misinformation works irrespective of whether it's for me as a journalist or for someone who's just sharing information online.

Daniel Emmerson​05:23

Okay, got it. Thank you. Let's go to disinformation.

Thomas Sparrow​05:27

Disinformation, again from both perspectives, so for me as a journalist, but also let's say for someone who's just sharing information online, is when something does have the intent to deceive, to manipulate, to cause harm. So it can be a disinformation campaign by a foreign actor that creates websites that share false stories, or it can be, again, a specific actor that uses automated bots and AI generated disinformation to spread a false claim about a politician. So that clearly goes beyond the scope of just spreading information that is not accurate by mistake. Because what's important there is the intent. So what's behind it, why it is actually being shared and what do they want to achieve by doing so? And that is mostly the focus of our work as fact checkers. So it's not only the mistakes, let's say you share something about an earthquake that is not accurate. It's more about trying to identify narratives that can affect public discourse. And why am I saying this? Because I've often heard, when I'm, for example, in schools, students say they don't see any problem in sharing something that's fake. If it's funny, they share it with their friends and they can just joke about it and they don't see any problem with it. And I say actually there is a problem, and actually there is a very serious problem, because fact based information, reliable, trustworthy information is at the core of how we communicate in a democracy. So think about what kind of information we have as a base for our decisions in a democracy: be it when we vote, if we can vote, or if we make a decision in our own neighborhood or in a local community, or any kind of decision that you make as part of a democratic process, you need to have reliable information. And if we are spreading, with or without intent, information that is not accurate, that ultimately is going to poison that public discourse that is so relevant to our democratic institutions.

Daniel Emmerson​07:34

So is it right to say that as a journalist, your concern with both misinformation and disinformation is more relevant today in terms of how people communicate and how people access their news than it was, I don't know, 10 or 20 years ago? Is this something that needs more attention paid towards it?

Thomas Sparrow​07:51

Absolutely. I mean, absolutely. There's no doubt about that. Disinformation is not something new. I mean, we've had disinformation for decades and decades. There have been actors in the past that have tried to spread information that is not accurate. The difference is how information and disinformation spread in today's world. So basically how we access information. Whereas in the past, let's say three, four, five decades ago, your main source of information would be the newspaper or would be the television, a news program in the evening or the main radio station in the morning while you're driving to work, now basically how we're getting our information is through social media platforms. And those social media platforms do not have the same kind of safeguards that the newspaper, the TV channel or the radio station do or did have in the past. So whereas in the past, and I guess even also now, for those radio stations and television channels and newspapers, you have journalists that are there to basically ask politicians and hold them accountable, where you have journalists there that are trying to identify whether something is accurate or not before publishing, on social media platforms you have a massive amount of information, a massive amount of data, and it is very difficult for anyone, a journalist or a non journalist, to identify in a reliable way whether something is true. So that's where the problem lies today. In addition to that, a separate issue that we have today compared to, let's say, 2016, 2017, when I began talking about disinformation in schools, is the rise of generative artificial intelligence that is either helping to spread disinformation or enabling the spread of disinformation. We'll get into that in a second. But that is creating a whole set of new challenges for us as fact checkers when it comes to, A, identifying disinformation and, B, providing also tools to our listeners, viewers, readers, so they can ultimately do that themselves. I'll give you a concrete example to make this a little bit more specific. I was giving a training virtually this morning for people who are in Lesotho, Zimbabwe, South Africa, Namibia and Malawi. And I was helping them, teaching them basically tools that they can use themselves in their own countries, in their own contexts to verify information.

Daniel Emmerson​10:24

These are teachers or these are journalists or what was your audience here?

Thomas Sparrow​10:27

These are journalists. Okay, but basically they work, let's say, for the local radio station in Malawi and they want to verify information that they get at the radio station in Malawi. And I'm helping them, providing them some free AI based tools that they can use, for example, to geolocate something, or to verify whether they're receiving an audio deepfake, or to check whether some pictures that they're receiving have been manipulated in one way or another. And just as I've been doing that with journalists in Southern Africa, we're also trying to provide similar kinds of tools and experiences to users that are not journalists. Because the understanding is that fact checking is not something that should just be exclusive to the world of journalists and fact checkers, but that fact checking should ultimately be something available to everyone in their daily lives as well.
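
One classic check for picture manipulation, offered here only as a sketch of the kind of thing such tools automate, not a description of what Deutsche Welle's own toolchain does, is error level analysis: resave a JPEG and look at where the recompression residue differs. A minimal Python example; the file names and quality setting are illustrative assumptions:

```python
# pip install pillow
from PIL import Image, ImageChops

# Error level analysis (ELA): resave a JPEG and diff it against the
# original. Regions pasted in or retouched often recompress differently
# from the rest of the picture and show up as bright patches.
original = Image.open("received_photo.jpg").convert("RGB")
original.save("resaved.jpg", quality=90)
resaved = Image.open("resaved.jpg")

diff = ImageChops.difference(original, resaved)

# Amplify the residual so it is visible to the eye.
max_channel = max(hi for _, hi in diff.getextrema()) or 1
scale = 255.0 / max_channel
diff = diff.point(lambda value: min(255, int(value * scale)))
diff.save("ela_map.jpg")  # bright areas deserve a closer manual look
```

ELA is a hint rather than proof: heavy recompression by social media platforms can wash the signal out, which is why fact checkers combine it with the other techniques discussed here.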

Daniel Emmerson​11:28

For sure. Particularly teachers, I would have said, who are working with young people, the majority of whom are going to be accessing their news content on social media platforms. And a lot of that is going to be AI generated. And as you said, we'll come to that in just a moment. I'm interested, though, in some of the things that you mentioned earlier, Thomas, around what it means to be a fact checker. And you mentioned geolocating as well as one or two other techniques that you employ as a journalist. Can you give us a sort of layman's overview as to what that might look like? Just to give people a bit of context, I think, before we go into the AI side of things.

Thomas Sparrow​12:08

I can give you plenty of examples. One, there was a picture that was spread online about an explosion in Beirut, and allegedly that explosion had happened last year. So we tried to verify whether it had actually happened last year.

Daniel Emmerson​12:26

So you saw this on social media or someone sent it to you?

Thomas Sparrow​12:30

On social media. So it was viral on social media. In fact, if I'm not mistaken, it was last year. I can't remember exactly. It was also spread by some official Israeli sites.

Daniel Emmerson​12:40

Okay.

Thomas Sparrow​12:41

So we tried to verify it. How did we go about it? We checked. We did a reverse image search on Google Images or on TinEye, the two tools that we normally use. And we realised that the BBC had published the same photo four or five years earlier.

Daniel Emmerson​12:56

Can you just talk us through that reverse image search, what that means?

Thomas Sparrow​13:00

Yes, and I'll give you another example in a minute. Basically, a reverse image search means that instead of using words or sentences for your search, and we use Google or TinEye, you search with an image. So you upload an image, and what the reverse image search tool gives you is whether that image was used in the past. And in the specific case of TinEye, you can filter the results to see when TinEye first found that picture.

Daniel Emmerson​13:36

Okay.

Thomas Sparrow​13:36

And that is a hint for you to say, okay, if some politician or someone on social media is claiming that it actually happened now, then there is a likelihood that it didn't happen, if we found the same picture published six years ago.

Daniel Emmerson​13:50

Right.
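
Under the hood, services like TinEye index compact fingerprints of images rather than raw pixels, which is why they can match a resized or recompressed copy. A minimal sketch of that idea in Python using the imagehash library; the library choice, file names and distance threshold are illustrative assumptions, not how TinEye is actually built:

```python
# pip install pillow imagehash
from PIL import Image
import imagehash

# Perceptual hashes give visually similar images similar fingerprints,
# even after resizing or recompression.
viral_image = imagehash.phash(Image.open("viral_post.jpg"))
archive_image = imagehash.phash(Image.open("bbc_archive_photo.jpg"))

# Subtracting two hashes yields a Hamming distance: 0 means identical
# fingerprints, small values mean the images are very likely the same.
distance = viral_image - archive_image
if distance <= 8:  # threshold chosen for illustration only
    print(f"Likely the same image (distance {distance}): check the older source's date.")
else:
    print(f"Probably different images (distance {distance}).")
```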

Thomas Sparrow​13:51

Last week, actually, when I was doing some fact checking, there was a viral story about a big protest in London in support of Donald Trump. And you could see this video published on X, which was actually also reposted by Elon Musk, in which people were chanting, we love Trump, we love Trump. The first thing that we did was ask, can this actually be true? And we checked the video in great detail and we realised that a lot of people were wearing shorts and T-shirts. So either the people were not aware of the weather in London in March, or the protest did not happen the first week of March in London. That was the first step, observation. I mention this not because it's funny, but because observation is the first thing we need to develop and enhance if we're going to be fact checkers.

Daniel Emmerson​14:43

Sure.

Thomas Sparrow​14:44

But the second step is that you take a screenshot of the video where you see key elements of it.

Daniel Emmerson​14:50

Oh, and what would a key element be? Someone in that case holding a sign or someone.

Thomas Sparrow​14:54

Someone holding a sign or with a T-shirt. Or if you see a specific element in, let's say, a lamppost or a building, or someone speaking at the front of the protest, or flags, something that you can recognise, you put it through Google Images or through a reverse image search tool. And then very quickly you find that the protest did not happen last week, but that it happened during Donald Trump's first term in office, when he was visiting London for the first time. And why is this important? It was not only just someone who published it by mistake. The person who was sharing it, in a post that was then reposted by Elon Musk, was actually criticising the current UK government, which, as you know, is a different UK government to the one that was in power during Donald Trump's first term in office. So back then it was Theresa May from the Conservatives. Now it's obviously Keir Starmer from the Labour Party. So basically that helps you to then identify that the claim that you saw on social media is misleading and that the video is not accurate.
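
That screenshot step can be partly automated before the manual reverse image search. A minimal sketch with OpenCV that saves a still every couple of seconds; the file names and the sampling interval are illustrative assumptions:

```python
# pip install opencv-python
import cv2

# Save one still roughly every two seconds; each saved frame can then
# be run through Google Images or TinEye by hand.
video = cv2.VideoCapture("protest_clip.mp4")
fps = video.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if metadata is missing
step = int(fps * 2)

frame_index = 0
saved = 0
while True:
    ok, frame = video.read()
    if not ok:
        break  # end of video
    if frame_index % step == 0:
        cv2.imwrite(f"keyframe_{saved:03d}.jpg", frame)
        saved += 1
    frame_index += 1

video.release()
print(f"Saved {saved} stills for reverse image searching.")
```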

Daniel Emmerson​16:00

You've given us some context already about why this is important to get right as far as the audience is concerned. And we're talking here about non-AI generated images and content. You mentioned earlier that now generative AI has come along. This poses a much more significant problem in terms of fact checking and identifying the source of content. Why is that the case, Thomas? And what are you seeing on the ground here? From a GenAI perspective?

Thomas Sparrow​16:36

One of the challenges that we have as journalists is to what extent we use AI to verify AI. So there are lots of AI detectors out there, some of them better than others. And the question is to what extent we can use them or how reliable they are. I'm talking here from a fact checker perspective, but I'll get into the broader perspective in a second. In any case, AI detectors, or in general, using AI to detect AI, can be one step in the verification process that we have as fact checkers, but also that someone at home could use, because some of these tools are actually available to anyone. So that's one challenge. A second challenge is to what extent you can use AI. So can we generate an AI picture by using a prompt and then put it, let's say, in our newspaper instead of a photo of a, I don't know, a court case? The consensus in most cases now is that you shouldn't, or if you are going to use it, if you are going to publish it, that it has to be labeled and identified as being AI generated. You can say, yes, that's great. I'll give you the problem. It is very easy for someone to then take that label off and repost it again. So basically you can take a screenshot of it, eliminate the label, and then you publish it again and say it's true. So basically those are the challenges that we are facing. But if you look at it from the perspective of not a journalist, but let's say someone who's in school, imagine you open your WhatsApp in the morning and you receive an audio, let's say a 10 second, 15 second audio of a politician saying that something very bad has happened in the country. And then you think, I imagine your gut feeling would be to believe what you're listening to and to say, wow, something really bad has happened in my country. The problem is now we don't know whether the politician did or didn't say that something bad had happened in the country. It is very easy, especially with audio, to use AI to generate synthetic audio, in other words, to manipulate someone's voice to make him or her say something that he or she didn't say. And it basically means that the challenge is not only for me as a journalist, trying to identify whether politician X, Y or Z did say that or whether it was something generated by AI. Concrete example: in the US election last year, a robocall, that's what they call it in the US, so a deepfake audio, was spread of Joe Biden telling people not to vote in one of the primary elections. But in reality it was not Joe Biden saying that. It was an AI deepfake audio where they took his voice, cloned it and made him say something that he hadn't said. The impact of that could be that someone decides not to go and vote, because if the president is saying that, then obviously you probably believe it. So this example shows you that disinformation, especially AI generated disinformation, is not something that is just abstract in nature, but that it can have concrete impact on people's lives.
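
The label-stripping problem described here is partly a metadata problem: provenance marks such as EXIF fields or C2PA content credentials travel inside the file, and a screenshot produces a fresh file that carries none of them. A minimal sketch with Pillow showing how little survives; the file names are illustrative assumptions, and a real provenance check would use a dedicated C2PA verifier rather than plain EXIF:

```python
# pip install pillow
from PIL import Image

def inspect_metadata(path: str) -> None:
    """Print whatever EXIF metadata a file carries, if any."""
    exif = Image.open(path).getexif()
    if not exif:
        # Screenshots and re-uploads usually land here: whatever camera
        # data or AI-generation label the original carried is simply gone.
        print(f"{path}: no EXIF metadata found")
        return
    for tag_id, value in exif.items():
        print(f"{path}: tag {tag_id} = {value}")

inspect_metadata("original_ai_image.jpg")  # may still carry generator tags
inspect_metadata("screenshot_of_it.png")   # almost certainly will not
```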

Daniel Emmerson​19:55

And is that something, Thomas, that you would say occurs frequently? So you mentioned WhatsApp earlier. I'm thinking about this from a student perspective, and obviously what I'd love to get into a little bit later on is how teachers can better prepare their students for being able to analyse this and navigate this themselves. But we're talking here about an isolated audio clip that's sent as a WhatsApp message. Is that going to have the same impact as a video that you might see on TikTok, or a post you might see from a verified account on another social media platform? What are your thoughts on that?

Thomas Sparrow​20:32

So it can, because in fact some studies have revealed that if you receive disinformation, and in that case it is disinformation, from someone you trust, from a school friend or from a relative, then the likelihood that you'll believe it is higher than if you just see it on a random social media platform. In other words, let's say you're in your school WhatsApp group. There are, I don't know, 60 students in it, right? And one of them posts this isolated WhatsApp audio of the president saying something. Then the likelihood is that you will end up believing it, because it's coming from someone you know. So that's one side of the story. The other side of the story, a different perspective on that, is if you use a social media platform and then you see something that has massive reach. So in this case, for example, not an audio deepfake but a video deepfake, and it's got massive reach and it's well done. So you don't realise at first glance that maybe the voice and the lips are not synced or that the movements are a bit strange. So if it's really, really well done and at first glance you believe it, if you see that it's got massive reach, so hundreds of thousands, if not millions of views, then you will doubt yourself. You may think that it's actually accurate, because if so many people have shared it, then they must be right. So basically there's a lot of psychology behind that, and that's why it's so important to actually know how to tackle disinformation, or in particular AI generated disinformation, because it challenges the basis of what we believe in. Whereas if you have just a very random shallow fake, so not a deepfake, a shallow fake, one that doesn't use AI, you can very easily identify it as being fake.

Daniel Emmerson​22:26

Can you give us an example of what a shallow fake might be?

Thomas Sparrow​22:29

A shallow fake is if you edit something just with Photoshop, and then it's very easily identified as being false. So it doesn't have that AI element behind it where you actually think, because you're seeing it and you're listening to it, that it may actually be right when it's not. So a deepfake is the opposite of a shallow fake. A shallow fake, for example, can be a video that has not been edited, that is actually accurate, but placed in a different context. So it has been completely taken out of context. Or it can be an absurd claim about, I don't know, a politician that likes this or that dance when he or she doesn't. So it doesn't have that actual relevance. So basically it's shallow fake versus deepfake. And the focus is more on deepfakes than on shallow fakes, although there are many more shallow fakes than deepfakes. But the focus is on deepfakes because they really challenge our own beliefs in a much more specific way. Because you're actually seeing, let's say, Obama or Trump saying something in his own voice. And then you still have to ask yourself, is that the President of the United States, or is it someone who has just manipulated his voice or his face?

Daniel Emmerson​23:48

This has implications that go way beyond current affairs, right? If you are able to impact individuals' beliefs in how they understand the world to work. If you're a teacher and you're trying to teach a concept, but your students have been exposed to disinformation that's adverse to what you're trying to teach, this could have very problematic consequences. Is that something that you've seen in the schools that you're working with?

Thomas Sparrow​24:20

So if you visit schools, and I'm sure you visit plenty of schools as well, you will realise that probably the most challenging aspect for many schools is not necessarily the big sort of foreign policy or political stories, but actually the deepfakes that may even be created by students themselves against other students. I'm not saying that every school faces that, but it is something that you encounter in some schools. Or disinformation that doesn't have to be a deepfake, that is also spread on social media, specifically against some of the other students in the classroom. So there's that level. If you move it one level further, then you have the political level as well. And that is something that not only affects what students, what teachers, what the school community might end up discussing or even believing in, but it can also affect obviously how well you can talk about issues that are very controversial but that are still necessary in the school curriculum and on the school platform. So if you're talking, for example, about climate change, which is a topic that, at least from the schools that I have visited, is a very important one, one that is discussed in several subjects, there is so much disinformation about climate change, there is so much misleading information about it, that it's sometimes very difficult for teachers and for schools more broadly to really focus on information that is fact based, that is trustworthy, that is reliable. I mention both trustworthy and reliable because there's a little bit of a difference there; they're not necessarily always synonyms. But in general, basically, it makes it more difficult for teachers to also guide students in a way in which they can end up really focusing on fact based information. And I'm not just saying it's a problem with students. I've had really a lot of teachers approach me to say that they are also facing issues when it comes to identifying what's true and what's not. It's not only something that you see among school students; you see it very clearly among teachers themselves, who are concerned about what kind of information they are getting and how they can then transmit it to the students in a responsible way. And that's why when I visit schools, I do not only visit schools to talk to students, but I also visit schools to talk to teachers, because I think it's equally important to not only provide students with the skills that they need to tackle disinformation, but also to guide teachers, respectfully, on how they can approach the topic in a better way. And I say respectfully because I don't want to come to a school and say, look, I am here, the one who knows everything about disinformation, and I'm going to tell you how to do it. I am not a teacher. But what I can do is, in a very respectful way, suggest issues that they could incorporate in their own school curriculum. Suggest, I'm not telling them this is the way forward. I'm just telling them, look, I see a problem, and maybe these are tools that you can implement to guide your students and yourselves better when it comes to the issue of disinformation.

Daniel Emmerson​27:47

So what are the main approaches that you might recommend to a teacher in that situation, particularly if they come from a non-technical background? Something that's easy to implement.

Thomas Sparrow​27:59

One thing that may work, and that's an initial step, is to have someone external talk about the issue of disinformation. Why? Because when you have a guest, that tends to break up the dynamic between the teacher and the pupils a bit. And you have someone there who's external, who has maybe the experience of a journalist, or the experience of a fact checker, or the experience of, I don't know, an academic, and who can basically present his or her views and his or her analysis. It doesn't have to come specifically from the teacher. So that's one thing. You see that more and more, whether it is project days at the end of the school term or whether it is inviting a journalist to spend the day in school; you see that very frequently. So that's one element. A second element is incorporating very small bits and pieces into the school curriculum itself. So if you're talking about democracy as part of a political science class, I don't know the name of the specific class in a UK school, but let's say it's called political science and you're talking about democracy, then instead of just focusing, or as well as focusing, on the theory that you have to talk to students about and on the concrete history of democracy, you could begin by bringing one disinformation case on democracy, discussing it with the students, going through the issue with them and verifying it yourself. Now, the question that I get from many teachers is where do they get the example from? Obviously if they have the example, they might use it, but where do they get it from? And that's where the next step has to come in, namely that they get in touch with those journalists that I mentioned in the first place to provide assistance, so that they can have the examples that they need to use as part of the school curriculum. Because I don't expect, and I can't expect, teachers to come up with disinformation examples just for the sake of it. But there are also specific tools provided by institutions to help teachers have those elements at hand when they need them, so they can then implement them if necessary. So it's not very difficult. I know it's a challenge, but it's not the most difficult challenge. The third element, which I obviously tell them about and which is related to the second one, is that there are fact checking sites that they can rely on. So basically, if you do an advanced Google search on a topic, say democracy, you can put democracy plus disinformation or false stories, see what comes up, and then see what fits into the topic that you are discussing with your students. So I guess a little bit of advanced searching could help. But I understand perfectly that teachers are not experts in disinformation, so they're not just going to come up with a viral disinformation case to then discuss with their students.
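
As a concrete illustration of that kind of advanced search, here is a minimal sketch that builds Google queries restricted to fact checking outlets with the site: operator; the topic and the list of sites are illustrative assumptions, and any reputable fact checking outlet would work:

```python
from urllib.parse import urlencode

# Restricting a search to known fact-checking outlets turns a broad
# curriculum topic into a shortlist of already-debunked claims.
FACT_CHECK_SITES = ["fullfact.org", "snopes.com", "apnews.com"]

def fact_check_queries(topic: str) -> list[str]:
    """Build one Google search URL per fact-checking site."""
    urls = []
    for site in FACT_CHECK_SITES:
        query = f"{topic} disinformation site:{site}"
        urls.append("https://www.google.com/search?" + urlencode({"q": query}))
    return urls

for url in fact_check_queries("democracy"):
    print(url)
```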

Daniel Emmerson​31:01

I think it's helpful for folks to be mindful of the implications of misinformation and disinformation, particularly through the lens of generative AI, Thomas. And I hope that this conversation will have been intriguing and hopefully helpful as well to many who are listening. We are working on some research together as well, specifically relating to this, which I'm very excited for us to be publishing in the very near future. Thomas, as always, it's absolutely fascinating speaking with you on this subject. You're doing incredible work and we're very, very grateful to have had you today on Foundational Impact. Thanks ever so much for your time.

Thomas Sparrow​31:44

Always a pleasure. And as a final sort of note, as a final message, I can only encourage teachers, and I can only encourage pupils as well, to be curious but at the same time cautious when they're on social media, and to reach out to journalists, wherever they might be, to ask for assistance when it comes to dealing with disinformation. So if the school is in a specific town in England, in Scotland, in Wales, wherever, in Germany, in France, go to the local newspaper, call them, say, look, I want someone to come to the school to talk about disinformation, to talk about how journalism works. I'm pretty sure that in most cases someone will be available to go to the school. This is something that, by the way, is not only important in the school community. Among journalists, there is a growing understanding that we have to go beyond our television channels and our radio stations and our newspapers, that we have to go where people are actually listening to us, where people are watching us, where people are reading our reports. We have to go and talk to other parts of society. So I'm pretty sure that if school X, Y or Z reaches out to the newspaper in their local community, someone would be able to provide assistance and maybe guide students on the issue of disinformation. And if not, then they can contact you or they can contact me, and then we will be able to help.

Daniel Emmerson​33:09

A wonderful place to leave off. Thomas, thank you ever so much once again. Really appreciate it.

Thomas Sparrow​33:13

Always a pleasure, Daniel.

Voice Over​33:15

That's it for this episode. Don't forget, the next episode is coming out soon, so make sure you click that option to follow or subscribe. It just means you won't miss it. But in the meantime, thank you for being here and we'll see you next time.

About this Episode

Thomas Sparrow: Navigating AI and the disinformation landscape

This episode features Thomas Sparrow, a correspondent and fact checker, who helps us differentiate misinformation and disinformation, and understand the evolving landscape of information dissemination, particularly through social media and the challenges posed by generative AI. He is also very passionate about equipping teachers and students with practical fact checking techniques and encourages educators to incorporate discussions about disinformation into their curricula.

Daniel Emmerson

Executive Director, Good Future Foundation

Thomas Sparrow

Correspondent in Germany


Thomas Sparrow: Navigating AI and the disinformation landscape

Published on
June 2, 2025
Speakers

Transcript

Daniel Emmerson​00:02

Welcome to Foundational Impact, a podcast series that focuses on education and artificial intelligence from a non profit perspective. My name is Daniel Emmerson and I'm the Executive Director of Good Future Foundation, a nonprofit whose mission is to equip educators to confidently prepare all students, regardless of their background, to benefit from and succeed in an AI infused world.

Absolutely wonderful, Thomas Sparrow, to have you here with us today as part of Foundational Impact. Thank you so much for being a guest on this series. Thomas, as one of our guests that is most frequently in the media, I think it would be wonderful for our audience to know a little bit about your role, what you do on a day to day basis, and perhaps an example of something you've reported on recently.

Thomas Sparrow​00:52

So it's a pleasure to be joining you on this and I'm really happy to be discussing all these different topics that we've got ahead of us in the next few minutes. I really appreciate the invitation. My name's Thomas Sparrow and I work as a correspondent in Germany. I've been here in Germany now for 10 years, mostly working for Germany's international broadcaster, Deutsche Welle. So basically the equivalent to the BBC World Service. Before I came to Germany, I was actually working for the BBC as a correspondent in the United States, in Washington D.C. and in Florida. And since 2016 or 2017 I've been combining my work as a correspondent with media literacy work in schools. So basically traveling to schools in many different parts of the world to help students and more broadly the school community to learn about disinformation and also learn techniques on howto deal with disinformation. I am also a fact checker for Deutsche Welle. So that basically means that on a daily basis I am verifying disinformation that we find on social media platforms like X, or like TikTok or like Facebook or like Snapchat or other platforms as well. We do that based on reach. So if something has become viral or is particularly relevant in one region of the world and that is linked, as I'm mentioning already, to relevance. In other words, if the claim has any relevance, either political relevance, social relevance, economic relevance. So we use basically open source intelligence tools. So from geolocation to satellite technology, to reverse image searches, to tracking, to face recognition in certain cases to verify claims and provide elements that can help viewers, listeners, readers identify what's true and what's not true. So I divide my between my work as a correspondent, so on camera or in the radio as well, reporting on what's happening here in Germany and then working on the issue of disinformation, be it traveling to schools or fact checking claims that we find online.

Daniel Emmerson​03:05

That's a lot for our audience to take in, I think, Thomas, in terms of the scope of your work, I mean, it's absolutely fascinating, the era that we're living in, conversations around how people that go to social media, for example, for their news, are able to discern what is true and what is not. Maybe let's start by breaking that down. So you mentioned misinformation and disinformation. So for our audience, could you just give us a really quick overview of what those two things are and what is the difference between those two things?

Thomas Sparrow​03:38

Misinformation with an M is misleading or false information that does not have harmful intent. So imagine if I'm writing an article for a newspaper paper and I've got a picture with a certain politician that I want to publish, and by mistake I write the wrong name of the politician and that gets printed. So it's a mistake and I need to apologise. The media outlet needs to apologise. But the intent is not to deceive. It's simply something that happened as part of the editorial process and that you then obviously apologise and repair.

Daniel Emmerson​04:10

Before we go for the disinformation, just to come back to you, if I may, with a quick question on that, because you're talking about that from the perspective of a broadcaster, but what about if that happens because so many people go to social media and you have influencers, for example, that share stories on current affairs and opinion pieces, does that same misinformation principle apply to them?

Thomas Sparrow​04:35

Yes.

Daniel Emmerson​04:35

Or are we getting into a different area?

Thomas Sparrow​04:38

So imagine there's an earthquake somewhere around the world and you receive something on WhatsApp or Signal or Telegram, and you don't know if it's true, but it's shocking. And then you send it to your relatives or, I don't know, to your friends around the world, and then three or four hours later you realise that what you've shared with them is actually false. So the intent was not to deceive your friends or family. The intent was to share it with them. But it was not a misleading intent. It was not something where you want to say, oh, I'm harming them. I'm going to send them something so they believe something that didn't actually happen. So the principle of misinformation works irrespective of whether it's for me as a journalist or for someone who's just sharing information online.

Daniel Emmerson​05:23

Okay, got it. Thank you. Let's go to disinformation.

Thomas Sparrow​05:27

Disinformation, again from both perspectives. So for me as a journalist, but also let's say for someone who's just sharing information online, is when something does have the intent to deceive, to manipulate, to cause harm. So it can be a disinformation campaign by a foreign actor by creating websites that share false stories, or it can be again, a specific actor that uses automated bots and AI generated disinformation to spread a false claim about a politician. So that clearly goes beyond the scope of just spreading information that is not accurate by mistake. Because what's important there is the intent. So what's behind it, why it is actually being shared and what do they want to achieve by doing so? And that is mostly the focus of our work as fact checkers. So it's not only the mistakes. Let's say you share something about an earthquake that is not accurate. It's more about trying to identify narratives that can affect public discourse. And why I'm saying this because I've heard often when I'm, for example, in schools, students can say they don't see any problem in sharing something that's fake. If it's funny, they share it with their friends and they can just joke about it and they don't see any problem with it. And I say actually there is a problem and actually there is a very serious problem because fact based information, reliable, trustworthy information is at the core of how we communicate in a democracy. So what kind of information we have as a base for our decisions in a democracy, be it when we vote, if we can vote, or if we make a decision in our own neighborhood or in a local community, or any kind of decision that you make as part of a democratic process, you need to have reliable information. And if we are spreading, with or without intent, information that is not accurate, that ultimately is going to poison that public discourse that is so relevant to our democratic institutions.

Daniel Emmerson​07:34

So is it right to say that as a journalist, your concern with both misinformation and disinformation is more relevant today in terms of how people communicate and how people access their news than it was, I don't know, 10 or 20 years ago? Is this something that needs more attention paid towards it?

Thomas Sparrow​07:51

Absolutely. I mean, absolutely. There's no doubt about that. Disinformation is not something new. I mean, we've had disinformation for decades and decades. There have been actors in the past that have tried to spread information that is not accurate. The difference is how information and disinformation spread in today's world. So basically how we access information, whereas in the past, let's say three, four, five decades ago, your main source of information would be the newspaper or would be the television, a news program in the evening or the main radio station in the morning while you're driving to work. Now basically how we're getting our information is through social media platforms. And those social media platforms do not have the same kind of safeguards that the newspaper, the TV channel or the radio station do or did have in the past. So whereas in the past, and I guess even also now, for those radio stations and television channels and newspapers, you have journalists that are there to basically ask politicians and hold them accountable whether you have journalists there that are trying to identify whether something is accurate or not before publishing. On social media platform, you have a massive amount of information, a massive amount of data, and it is very difficult for anyone, a journalist or a non journalist, to identify in a reliable way whether something is true. So that's where the problem lies today. In addition to that, a separate issue that we have today compared to, let's say 2016, 2017, when I began talking about disinformation in schools, is the rise of generative artificial intelligence that is either helping to spread disinformation or enabling the spread of disinformation. We'll get into that in a second. But that is creating a whole set of new challenges for us as fact checkers when it comes to, to A. identifying disinformation and B. providing also tools to our listeners, viewers, readers, so they can ultimately do that themselves. I'll give you a concrete example to make this a little bit more specific. I was giving a training this morning virtually for people who are in Lesotho, Zimbabwe, South Africa, Namibia and Malawi. And I was helping them, teaching them basically tools that they can use themselves in their own countries, in their own contexts to verify information.

Daniel Emmerson​10:24

These are teachers or these are journalists or what was your audience here?

Thomas Sparrow​10:27

These are journalists. Okay, but basically they work, let's say for the local radio station in Malawi and they want to verify information that they get at the radio station in Malawi. And I'm helping them, providing them some free AI based tools that they can use, for example to geolocate something or to verify whether, whether they're receiving an audio deepfake or to check whether some pictures that they're receiving have been manipulated in one way or another. And just as I've been doing that with journalists in Southern Africa, we're also trying to provide similar kinds of tools and experiences to users that are not journalists. Because the understanding is that fact checking is not something that should just be exclusive to the world of journalists and fact checkers, but that fact checking should ultimately be something available to everyone in their daily lives as well.

Daniel Emmerson​11:28

For sure. Particularly teachers, I would have said, who are working with young people, the majority of which are going to be accessing their news content on social media platforms. And a lot of that is going to be AI generated. And as you said, we'll come to that in just a moment. I'm interested, though, in some of the things that you mentioned earlier, Thomas, around what it means to be a fact checker. And you mentioned geolocating as well as one or two other techniques that you employ as a journalist. Can you give us a sort of layman's overview as to what that might look like? Just to give people a bit of context, I think before we go into the AI side of things.

Thomas Sparrow​12:08

I can give you plenty of examples. One, there was a picture that was spread online about an explosion in Beirut, and allegedly that explosion had happened last year. So we tried to verify whether it had actually happened last year.

Daniel Emmerson​12:26

So you saw this on social media or someone sent it to you?

Thomas Sparrow​12:30

On social media. So it was viral on social media. In fact, I think some, if I'm not mistaken, it was last year. I can't remember exactly. It was also spread by some official Israeli sites.

Daniel Emmerson​12:40

Okay.

Thomas Sparrow​12:41

So we tried to verify it. How did we go about it? We checked. We did a reverse image search on Google Images or on TinEye, the two tools that we normally use. And we realised that the BBC had published the same photo four or five years earlier.

Daniel Emmerson​12:56

Can you just talk us through that reverse image search, what that means?

Thomas Sparrow​13:00

Yes, I'll give you another example in a minute. Basically, a reverse image search is instead of using words or sentences for your search on Google and we use Google or Tineye, what you do is you search with an image. So you upload an image and then what the reverse image search tool gives you is whether that image was used in the past. And in the specific case of Tineye, it provides you. You can filter it to see when Tineye first found that picture.

Daniel Emmerson​13:36

Okay.

Thomas Sparrow​13:36

And that is a hint for you to say, okay, if some politician or someone on social media is claiming that it actually happened now, then there is a likelihood that didn't happen. If we found the same picture published six years ago.

Daniel Emmerson​13:50

Right.

Thomas Sparrow​13:51

Last year, last week, actually, when I was doing some fact checking, there was a viral story about a big protest in London in support of Donald Trump. And you could see this video published on X that was actually also reposted by Elon Musk in which people were chanting, we love Trump. We love Trump. The first thing that we did was, can this actually be true? And we checked the video in great detail and we realised that a lot of people were wearing shorts and T-shirts. So either the people were not aware of the weather in London in March, or the protest did not happen the first week of March in London. The second thing we did, after observation, I mentioned this not because it's funny, but because observation is the first thing we need to develop and enhance if we're going to be fact checkers.

Daniel Emmerson​14:43

Sure.

Thomas Sparrow​14:44

But the second step is to take a screenshot of the video where you see key elements of it.

Daniel Emmerson​14:50

Oh, and what would a key element be? Someone holding a sign, in that case, or...?

Thomas Sparrow​14:54

Someone holding a sign, or with a T-shirt, or a specific element like a lamppost or a building, or someone speaking at the front of the protest, or flags: something that you can recognise. You put it through Google Images or through a reverse image search tool, and very quickly you find that the protest did not happen last week, but during Donald Trump's first term in office, when he was visiting London for the first time. And why is this important? It was not just someone publishing it by mistake. The person who shared it, whose post was then reposted by Elon Musk, was actually criticising the current UK government, which, as you know, is a different UK government from the one that was in power during Donald Trump's first term. Back then it was Theresa May from the Conservatives; now it's obviously Keir Starmer from the Labour Party. So that helps you identify that the claim you saw on social media is misleading and that the video is not accurate.
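
For video, the same technique works frame by frame. Here is a minimal sketch, assuming OpenCV is installed and using a placeholder file name and timestamp, of grabbing a still that can then be put through a reverse image search.

```python
# Sketch: extract a still frame from a video so it can be put through
# a reverse image search. Requires OpenCV (pip install opencv-python).
# The file name and timestamp are placeholders.
import cv2

cap = cv2.VideoCapture("protest_clip.mp4")
cap.set(cv2.CAP_PROP_POS_MSEC, 5_000)  # jump to the 5-second mark
ok, frame = cap.read()                 # grab one frame as an image
cap.release()

if ok:
    # Save the frame; this screenshot can now be uploaded to Google
    # Images or TinEye to look for earlier uses of the same scene.
    cv2.imwrite("key_frame.jpg", frame)
else:
    print("Could not read a frame at that timestamp.")
```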

Daniel Emmerson​16:00

You've given us some context already about why this is important to get right as far as the audience is concerned, and we're talking here about non-AI-generated images and content. You mentioned earlier that, now that generative AI has come along, this poses a much more significant problem in terms of fact checking and identifying the source of content. Why is that the case, Thomas, and what are you seeing on the ground here from a GenAI perspective?

Thomas Sparrow​16:36

One of the challenges that we have as journalists is to what extent we can use AI to verify AI. There are lots of AI detectors out there, some of them better than others, and the question is to what extent we can use them, or how reliable they are. I'm talking here from a fact checker's perspective, but I'll get to the broader perspective in a second. In any case, AI detectors, or in general using AI to detect AI, can be one step in the verification process that we have as fact checkers, but also one that someone at home could use, because some of these tools are actually available to anyone. So that's one challenge.

A second challenge is to what extent you can use AI yourself. Can we generate an AI picture using a prompt and then put it, let's say, in our newspaper instead of a photo of, I don't know, a court case? The consensus in most cases now is that you shouldn't, or that if you are going to publish it, it has to be labeled and identified as being AI generated. You can say, yes, that's great. But here's the problem: it is very easy for someone to then take that label off and repost it. You can take a screenshot of it, eliminate the label, publish it again and say it's true. So those are the challenges that we are facing.

But look at it from the perspective of not a journalist, but let's say someone who's in school. Imagine you open your WhatsApp in the morning and you receive an audio clip, let's say 10 or 15 seconds, of a politician saying that something very bad has happened in the country. I imagine your gut feeling would be to believe what you're hearing and to say, wow, something really bad has happened in my country. The problem is that now we don't know whether the politician did or didn't say it. It is very easy, especially with audio, to use AI to generate synthetic audio, in other words, to clone someone's voice and make him or her say something that he or she didn't say. So the challenge is not only for me as a journalist, trying to identify whether politician X, Y or Z actually said that or whether it was generated by AI. A concrete example: in the US election last year, a robocall, as they call it in the US, was spread, a deepfake audio of Joe Biden telling people not to vote in one of the primary elections. But in reality it was not Joe Biden saying that. It was an AI deepfake where they took his voice, cloned it and made him say something that he hadn't said. The impact of that could be that someone decides not to go and vote, because if the president is saying that, then you probably believe it. So this example shows you that disinformation, especially AI-generated disinformation, is not something abstract in nature; it can have a concrete impact on people's lives.
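
The labelling problem Thomas describes, that an "AI generated" tag can simply be screenshotted away, is easy to demonstrate when the label lives in image metadata. Below is a minimal Python sketch using the Pillow library; the file names and the ai_generated key are hypothetical placeholders, not any platform's actual labelling scheme.

```python
# Sketch: why metadata-based "AI generated" labels are easy to lose.
# Requires Pillow (pip install Pillow). File names and the label key
# are hypothetical placeholders for illustration.
from PIL import Image, PngImagePlugin

# 1. Save an image with an "AI generated" label stored as PNG metadata.
img = Image.new("RGB", (64, 64), "gray")  # stand-in for an AI image
meta = PngImagePlugin.PngInfo()
meta.add_text("ai_generated", "true")     # hypothetical label
img.save("labeled.png", pnginfo=meta)

# 2. The label is readable as long as the metadata survives.
print(Image.open("labeled.png").text)     # {'ai_generated': 'true'}

# 3. Re-creating the image from raw pixels (which is effectively what
#    a screenshot does) drops the metadata, and the label with it.
original = Image.open("labeled.png")
stripped = Image.frombytes(original.mode, original.size, original.tobytes())
stripped.save("reposted.png")
print(Image.open("reposted.png").text)    # {}
```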

Daniel Emmerson​19:55

And is that something, Thomas, that you would say occurs frequently? You mentioned WhatsApp earlier. I'm thinking about this from a student perspective, and what I'd love to get into a little later on is how teachers can better prepare their students to analyse and navigate this themselves. But we're talking here about an isolated audio clip sent as a WhatsApp message. Is that going to have the same impact as a video you might see on TikTok, or a post from a verified account on another social media platform? What are your thoughts on that?

Thomas Sparrow​20:32

So it can, because in fact some studies have revealed that if you receive disinformation from someone you trust, from a school friend or a relative, then the likelihood that you'll believe it is higher than if you just see it on a random social media platform. In other words, let's say you're in your school WhatsApp group, and there are, I don't know, 60 students in it, and one of them posts this isolated WhatsApp audio of the president saying something. Then the likelihood is that you will end up believing it, because it's coming from someone you know. So that's one side of the story. The other side, a different perspective, is when you use a social media platform and you see something that has massive reach. In this case, for example, not an audio deepfake but a video deepfake, and it's well done, so you don't realise at first glance that maybe the voice and the lips are not synced or that the movements are a bit strange. If it's really well done, and you see that it's got massive reach, hundreds of thousands if not millions of views, then you will start to doubt your own scepticism: you may think that it's actually accurate, because if so many people have shared it, then they must be right. So there's a lot of psychology behind this, and that's why it's so important to know how to tackle disinformation, in particular AI-generated disinformation, because it challenges the basis of what we believe in. Whereas if you have just a very random shallow fake, so not a deepfake but a shallow fake, one that doesn't use AI, you can very easily identify it as being fake.

Daniel Emmerson​22:26

Can you give us an example of what a shallow fake might be?

Thomas Sparrow​22:29

A shallow fake is when you edit something just with Photoshop, and it's very easily identified as being false. It doesn't have that AI element behind it where, because you're seeing it and hearing it, you think it may actually be right when it's not. So a deepfake is the opposite of a shallow fake. A shallow fake, for example, can be a video that has not been edited and is actually accurate, but that has been completely taken out of context. Or it can be an absurd claim about, I don't know, a politician who likes to dance this or that dance when he or she doesn't. So it's shallow fakes versus deepfakes. And the focus is more on deepfakes than on shallow fakes, although there are many more shallow fakes than deepfakes, because deepfakes challenge our own beliefs in a much more specific way. You're actually seeing, let's say, Obama or Trump saying something in his own voice, and you still have to ask yourself: is that the President of the United States, or is it someone who has manipulated his voice or his face?

Daniel Emmerson​23:48

This has implications that go way beyond current affairs, right? If you can affect individuals' beliefs about how they understand the world to work: if you're a teacher and you're trying to teach a concept, but your students have been exposed to disinformation that runs counter to what you're trying to teach, that could have very problematic consequences. Is that something that you've seen in the schools that you're working with?

Thomas Sparrow​24:20

So if you visit schools, and I'm sure you visit plenty of schools as well, you will realise that probably the most challenging aspect for many schools is not necessarily the big foreign policy or political stories, but actually the deepfakes that may even be created by students themselves against other students. I'm not saying that every school faces that, but it is something that you encounter in some schools. And disinformation doesn't have to be a deepfake; it can also be spread on social media specifically against some of the other students in the classroom. So there's that level.

If you move one level further, then you have the political level as well. And that is something that not only affects what students, teachers and the school community might end up discussing or even believing, but can also affect how well you can talk about issues that are very controversial but still necessary in the school curriculum. If you're talking, for example, about climate change, which, at least in the schools I have visited, is a very important topic, one that is discussed in several subjects, there is so much disinformation and misleading information about it that it's sometimes very difficult for teachers, and for schools more broadly, to really focus on information that is fact based, trustworthy and reliable. I mention both trustworthy and reliable because there's a little bit of a difference there; they're not always synonyms. But in general, it makes it more difficult for teachers to guide students towards genuinely fact-based information.

And I'm not just saying it's a problem with students. I've had a lot of teachers approach me to say that they are also facing issues when it comes to identifying what's true and what's not. It's not only something you see among school students; you see it very clearly among teachers themselves, who are concerned about what kind of information they are getting and how they can then transmit it to the students in a responsible way. And that's why, when I visit schools, I do not only talk to students; I also talk to teachers, because I think it's equally important not only to provide students with the skills they need to tackle disinformation, but also to guide teachers, respectfully, on how they can approach the topic in a better way. And I say respectfully because I don't want to come to a school and say, look, I am the one who knows everything about disinformation and I'm going to tell you how to do it. I am not a teacher. But what I can do, in a very respectful way, is suggest issues that they could incorporate into their own school curriculum. Suggest, not dictate; I'm not telling them this is the way forward. I'm just telling them, look, I see a problem, and maybe these are tools that you can implement to guide your students, and yourselves, better when it comes to the issue of disinformation.

Daniel Emmerson​27:47

So what are the main approaches that you might recommend to a teacher in that situation, particularly if they come from a non-technical background? Something that's easy to implement.

Thomas Sparrow​27:59

One thing that may work, as an initial step, is to have someone external talk about the issue of disinformation. Why? Because having a guest tends to break up the dynamic between the teacher and the pupils a bit. You have someone there who's external, who has maybe the experience of a journalist, or a fact checker, or, I don't know, an academic, and who can present his or her views and analysis. It doesn't have to come specifically from the teacher. You see that more and more, whether it is project days at the end of the school term or inviting a journalist to spend the day in school. So that's one element.

A second element is incorporating very small bits and pieces into the school curriculum itself. If you're talking about democracy as part of a political science class, I don't know the name of the specific class in a UK school, but let's say it's called political science, then instead of, or as well as, focusing on the theory and the history of democracy, you could begin by bringing in one disinformation case about democracy, discussing it with the students and verifying it together. Now, the question I get from many teachers is: where do they get the example from? Obviously, if they have an example, they might use it, but where does it come from? That's where the next step comes in, namely getting in touch with those journalists I mentioned in the first place, who can provide assistance so that teachers have the examples they need for the school curriculum. I don't, and can't, expect teachers to come up with disinformation examples just for the sake of it. But there are also specific tools provided by institutions to help teachers have those elements at hand when they need them, so they can implement them if necessary. I know it's a challenge, but it's not the most difficult one.

The third element, which is related to the second, is that there are fact-checking sites that they can rely on. If you do an advanced Google search on a topic, say democracy, you can put in democracy plus disinformation or false stories, see what comes up, and then see what fits the topic you are discussing with your students. So a little bit of advanced search can help. But I understand perfectly that teachers are not experts in disinformation, so they're not just going to come up with a viral disinformation case to discuss with their students.
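
To make that advanced-search idea concrete, here is a small Python sketch that builds a Google query restricted to a few well-known fact-checking sites; the topic and the site list are examples only, and the same query can of course be typed straight into the search box.

```python
# Sketch: build an advanced Google search that restricts results to a
# few fact-checking sites. The topic and site list are examples only.
import webbrowser
from urllib.parse import quote_plus

topic = "democracy disinformation"
fact_check_sites = ["fullfact.org", "snopes.com", "politifact.com"]

# Google's site: operator limits results to the listed domains.
site_filter = " OR ".join(f"site:{site}" for site in fact_check_sites)
query = f"{topic} ({site_filter})"

url = "https://www.google.com/search?q=" + quote_plus(query)
print(url)            # inspect the query
webbrowser.open(url)  # open it in the default browser
```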

Daniel Emmerson​31:01

I think it's helpful for folks to be mindful of the implications of misinformation and disinformation, particularly through the lens of generative AI, Thomas, and I hope that this conversation has been intriguing, and hopefully helpful as well, to many who are listening. We are working on some research together specifically relating to this, which I'm very excited for us to be publishing in the very near future. Thomas, as always, it's absolutely fascinating speaking with you on this subject. You're doing incredible work and we're very, very grateful to have had you today on Foundational Impact. Thanks ever so much for your time.

Thomas Sparrow​31:44

Always a pleasure. And as a final note, a final message, I can only encourage teachers, and pupils as well, to be curious but at the same time cautious when they're on social media, and to reach out to journalists wherever they might be to ask for assistance when it comes to dealing with disinformation. So if the school is in a specific town in England, in Scotland, in Wales, in Germany, in France, wherever: go to the local newspaper, call them, say, look, I want someone to come to the school to talk about disinformation, to talk about how journalism works. I'm pretty sure that in most cases someone will be available to go to the school. This is something that, by the way, is not only important in the school community. Among journalists, there is a growing understanding that we have to go beyond our television channels, our radio stations and our newspapers; that we have to go where people are actually listening to us, watching us and reading our reports. We have to go and talk to other parts of society. So I'm pretty sure that if school X, Y or Z reaches out to the newspaper in their local community, someone would be able to provide assistance and maybe guide students on the issue of disinformation. And if not, then they can contact you or they can contact me, and we will be able to help.

Daniel Emmerson​33:09

A wonderful place to leave off. Thomas, thank you ever so much once again. Really appreciate it.

Thomas Sparrow​33:13

Always a pleasure, Daniel.

Voice Over​33:15

That's it for this episode. Don't forget, the next episode is coming out soon, so make sure you click that option to follow or subscribe. It just means you won't miss it. But in the meantime, thank you for being here and we'll see you next time.
