Thomas Sparrow: Navigating AI and the disinformation landscape

Transcript
Daniel Emmerson00:02
Welcome to Foundational Impact, a podcast series that focuses on education and artificial intelligence from a nonprofit perspective. My name is Daniel Emmerson and I'm the Executive Director of Good Future Foundation, a nonprofit whose mission is to equip educators to confidently prepare all students, regardless of their background, to benefit from and succeed in an AI-infused world.
Absolutely wonderful, Thomas Sparrow, to have you here with us today as part of Foundational Impact. Thank you so much for being a guest on this series. Thomas, as one of the guests who appears most frequently in the media, I think it would be wonderful for our audience to know a little bit about your role, what you do on a day-to-day basis, and perhaps an example of something you've reported on recently.
Thomas Sparrow00:52
So it's a pleasure to be joining you on this and I'm really happy to be discussing all these different topics that we've got ahead of us in the next few minutes. I really appreciate the invitation. My name's Thomas Sparrow and I work as a correspondent in Germany. I've been here in Germany now for 10 years, mostly working for Germany's international broadcaster, Deutsche Welle, so basically the equivalent of the BBC World Service. Before I came to Germany, I was actually working for the BBC as a correspondent in the United States, in Washington D.C. and in Florida. And since 2016 or 2017 I've been combining my work as a correspondent with media literacy work in schools. So basically traveling to schools in many different parts of the world to help students, and more broadly the school community, to learn about disinformation and also learn techniques on how to deal with disinformation. I am also a fact checker for Deutsche Welle. That basically means that on a daily basis I am verifying disinformation that we find on social media platforms like X, or TikTok, or Facebook, or Snapchat, or other platforms as well. We do that based on reach, so if something has become viral or is particularly relevant in one region of the world, and, as I'm mentioning already, based on relevance. In other words, if the claim has any relevance, either political relevance, social relevance, economic relevance. We use basically open source intelligence tools, from geolocation to satellite technology, to reverse image searches, to tracking, to face recognition in certain cases, to verify claims and provide elements that can help viewers, listeners, readers identify what's true and what's not true. So I divide my time between my work as a correspondent, so on camera or on the radio as well, reporting on what's happening here in Germany, and then working on the issue of disinformation, be it traveling to schools or fact checking claims that we find online.
Daniel Emmerson03:05
That's a lot for our audience to take in, I think, Thomas, in terms of the scope of your work. I mean, it's absolutely fascinating, the era that we're living in, and the conversations around how people who go to social media, for example, for their news are able to discern what is true and what is not. Maybe let's start by breaking that down. So you mentioned misinformation and disinformation. For our audience, could you just give us a really quick overview of what those two things are and what the difference is between them?
Thomas Sparrow03:38
Misinformation with an M is misleading or false information that does not have harmful intent. So imagine I'm writing an article for a newspaper and I've got a picture with a certain politician that I want to publish, and by mistake I write the wrong name of the politician and that gets printed. It's a mistake and I need to apologise, the media outlet needs to apologise. But the intent is not to deceive. It's simply something that happened as part of the editorial process and that you then obviously apologise for and repair.
Daniel Emmerson04:10
Before we go to disinformation, just to come back to you, if I may, with a quick question on that. You're talking about that from the perspective of a broadcaster, but what about if that happens elsewhere? Because so many people go to social media, and you have influencers, for example, who share stories on current affairs and opinion pieces. Does that same misinformation principle apply to them?
Thomas Sparrow04:35
Yes.
Daniel Emmerson04:35
Or are we getting into a different area?
Thomas Sparrow04:38
So imagine there's an earthquake somewhere around the world and you receive something on WhatsApp or Signal or Telegram, and you don't know if it's true, but it's shocking. And then you send it to your relatives or, I don't know, to your friends around the world, and then three or four hours later you realise that what you've shared with them is actually false. The intent was not to deceive your friends or family; the intent was simply to share it with them. It was not something where you want to say, oh, I'm harming them, I'm going to send them something so they believe something that didn't actually happen. So the principle of misinformation works irrespective of whether it's for me as a journalist or for someone who's just sharing information online.
Daniel Emmerson05:23
Okay, got it. Thank you. Let's go to disinformation.
Thomas Sparrow05:27
Disinformation, again from both perspectives, so for me as a journalist, but also, let's say, for someone who's just sharing information online, is when something does have the intent to deceive, to manipulate, to cause harm. So it can be a disinformation campaign by a foreign actor creating websites that share false stories, or it can be, again, a specific actor that uses automated bots and AI generated disinformation to spread a false claim about a politician. That clearly goes beyond the scope of just spreading information that is not accurate by mistake, because what's important there is the intent. So what's behind it, why it is actually being shared, and what do they want to achieve by doing so? And that is mostly the focus of our work as fact checkers. So it's not only the mistakes, let's say you share something about an earthquake that is not accurate. It's more about trying to identify narratives that can affect public discourse. And why am I saying this? Because I've heard often, when I'm in schools for example, students say they don't see any problem in sharing something that's fake. If it's funny, they share it with their friends and they can just joke about it, and they don't see any problem with it. And I say actually there is a problem, and actually there is a very serious problem, because fact based information, reliable, trustworthy information, is at the core of how we communicate in a democracy. For the decisions we make in a democracy, be it when we vote, if we can vote, or when we make a decision in our own neighborhood or in a local community, or any kind of decision that you make as part of a democratic process, you need to have reliable information. And if we are spreading, with or without intent, information that is not accurate, that ultimately is going to poison the public discourse that is so relevant to our democratic institutions.
Daniel Emmerson07:34
So is it right to say that, as a journalist, your concern with both misinformation and disinformation is more relevant today, in terms of how people communicate and how people access their news, than it was, I don't know, 10 or 20 years ago? Is this something that needs more attention paid to it?
Thomas Sparrow07:51
Absolutely. I mean, absolutely. There's no doubt about that. Disinformation is not something new. I mean, we've had disinformation for decades and decades. There have been actors in the past that have tried to spread information that is not accurate. The difference is how information and disinformation spread in today's world, so basically how we access information. Whereas in the past, let's say three, four, five decades ago, your main source of information would be the newspaper, or the television, a news program in the evening, or the main radio station in the morning while you're driving to work, now we're basically getting our information through social media platforms. And those social media platforms do not have the same kind of safeguards that the newspaper, the TV channel or the radio station do or did have in the past. Whereas in the past, and I guess even also now, for those radio stations and television channels and newspapers, you have journalists who are there to ask politicians questions and hold them accountable, and journalists who try to identify whether something is accurate or not before publishing, on social media platforms you have a massive amount of information, a massive amount of data, and it is very difficult for anyone, a journalist or a non journalist, to identify in a reliable way whether something is true. So that's where the problem lies today. In addition to that, a separate issue that we have today compared to, let's say, 2016, 2017, when I began talking about disinformation in schools, is the rise of generative artificial intelligence, which is either helping to spread disinformation or enabling the spread of disinformation. We'll get into that in a second. But that is creating a whole set of new challenges for us as fact checkers when it comes to, A, identifying disinformation and, B, providing tools to our listeners, viewers, readers, so they can ultimately do that themselves. I'll give you a concrete example to make this a little bit more specific. I was giving a training this morning, virtually, for people who are in Lesotho, Zimbabwe, South Africa, Namibia and Malawi. And I was helping them, teaching them basically tools that they can use themselves in their own countries, in their own contexts, to verify information.
Daniel Emmerson10:24
These are teachers or these are journalists or what was your audience here?
Thomas Sparrow10:27
These are journalists. But basically they work, let's say, for the local radio station in Malawi and they want to verify information that they get at the radio station in Malawi. And I'm helping them, providing them some free AI-based tools that they can use, for example, to geolocate something, or to verify whether they're receiving an audio deepfake, or to check whether some pictures that they're receiving have been manipulated in one way or another. And just as I've been doing that with journalists in Southern Africa, we're also trying to provide similar kinds of tools and experiences to users who are not journalists. Because the understanding is that fact checking is not something that should just be exclusive to the world of journalists and fact checkers, but that fact checking should ultimately be something available to everyone in their daily lives as well.
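To make one of those basic checks concrete: the sketch below is a minimal first-pass look at an image file's metadata in Python, using the open-source Pillow library. It illustrates the kind of check anyone can start with, and is not one of the specific Deutsche Welle tools mentioned here; the file name is hypothetical. EXIF metadata is easily stripped or forged, so it is only ever a hint, never proof.

```python
# pip install pillow
# A very basic first-pass check on an image file's metadata.
from PIL import Image, ExifTags

img = Image.open("received_photo.jpg")  # hypothetical file name
exif = img.getexif()

if not exif:
    # Common for images re-saved or re-compressed by social media apps.
    print("No EXIF metadata found.")
else:
    # Print a few tags that hint at when and how the photo was made.
    for tag_id, value in exif.items():
        tag = ExifTags.TAGS.get(tag_id, tag_id)
        if tag in ("DateTime", "Make", "Model", "Software"):
            print(f"{tag}: {value}")
```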
Daniel Emmerson11:28
For sure. Particularly teachers, I would have said, who are working with young people, the majority of whom are going to be accessing their news content on social media platforms. And a lot of that is going to be AI generated. And as you said, we'll come to that in just a moment. I'm interested, though, in some of the things that you mentioned earlier, Thomas, around what it means to be a fact checker. You mentioned geolocating as well as one or two other techniques that you employ as a journalist. Can you give us a sort of layman's overview as to what that might look like, just to give people a bit of context, I think, before we go into the AI side of things?
Thomas Sparrow12:08
I can give you plenty of examples. One, there was a picture that was spread online about an explosion in Beirut, and allegedly that explosion had happened last year. So we tried to verify whether it had actually happened last year.
Daniel Emmerson12:26
So you saw this on social media or someone sent it to you?
Thomas Sparrow12:30
On social media. So it was viral on social media. In fact, if I'm not mistaken, it was last year; I can't remember exactly. It was also spread by some official Israeli sites.
Daniel Emmerson12:40
Okay.
Thomas Sparrow12:41
So we tried to verify it. How did we go about it? We did a reverse image search on Google Images or on TinEye, the two tools that we normally use. And we realised that the BBC had published the same photo four or five years earlier.
Daniel Emmerson12:56
Can you just talk us through that reverse image search, what that means?
Thomas Sparrow13:00
Yes, I'll give you another example in a minute. Basically, a reverse image search means that instead of using words or sentences for your search, as you would on Google, and we use Google or TinEye, you search with an image. So you upload an image, and what the reverse image search tool tells you is whether that image was used in the past. And in the specific case of TinEye, you can filter the results to see when TinEye first found that picture.
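For readers curious about the mechanics: one way such engines match near-duplicate images is perceptual hashing, where visually similar images produce hashes differing in only a few bits. The Python sketch below uses the open-source imagehash library on two hypothetical files; real services like Google Images and TinEye use far more sophisticated matching at index scale, so this is only a conceptual illustration.

```python
# pip install pillow imagehash
from PIL import Image
import imagehash

# Hypothetical files: the image going viral now, and a candidate
# match found in an older news archive.
viral = imagehash.phash(Image.open("viral_post.jpg"))
archive = imagehash.phash(Image.open("archive_photo_2019.jpg"))

# Perceptual hashes of near-identical images differ by only a few
# bits, even after resizing or re-compression.
distance = viral - archive  # Hamming distance between the hashes
if distance <= 8:  # a common, tunable threshold
    print(f"Likely the same image (distance {distance}); check the archive date.")
else:
    print(f"Probably different images (distance {distance}).")
```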
Daniel Emmerson13:36
Okay.
Thomas Sparrow13:36
And that is a hint for you. You can say, okay, if some politician or someone on social media is claiming that it actually happened now, then there is a likelihood that it didn't happen, if we found the same picture published six years ago.
Daniel Emmerson13:50
Right.
Thomas Sparrow13:51
Last week, actually, when I was doing some fact checking, there was a viral story about a big protest in London in support of Donald Trump. And you could see this video published on X, which was actually also reposted by Elon Musk, in which people were chanting, we love Trump, we love Trump. The first thing that we asked was, can this actually be true? And we checked the video in great detail and we realised that a lot of people were wearing shorts and T-shirts. So either the people were not aware of the weather in London in March, or the protest did not happen the first week of March in London. The second thing we did, after observation, and I mention this not because it's funny, but because observation is the first thing we need to develop and enhance if we're going to be fact checkers.
Daniel Emmerson14:43
Sure.
Thomas Sparrow14:44
But the second is you take a screenshot of the video where you see key elements of it.
Daniel Emmerson14:50
Oh, and what would a key element be? Someone in that case holding a sign or someone.
Thomas Sparrow14:54
Someone holding a sign or with a T-shirt. Or if you see a specific element, let's say a lamppost, or a building, or someone speaking at the front of the protest, or flags, something that you can recognise, you put it through Google Images or through a reverse image search operator. And then very quickly you find that the protest did not happen last week, but that it happened during Donald Trump's first term in office, when he was visiting London for the first time. And why is this important? It was not only just someone who published it by mistake; the person who was sharing it, which was then reposted by Elon Musk, was actually criticising the current UK government, which, as you know, is a different UK government to the one that was in power during Donald Trump's first term in office. Back then it was Theresa May from the Conservatives; now it's obviously Keir Starmer from the Labour Party. So basically that helps you to then identify that the claim that you saw on social media is misleading and that the video is not accurate.
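As a small illustration of that screenshot step: the sketch below crops a distinctive region out of a saved video frame with Pillow so it can be uploaded to a reverse image search. The file names and coordinates are hypothetical; in practice you would choose the region by eye from the frame.

```python
# pip install pillow
from PIL import Image

# Hypothetical frame grabbed from the viral video.
frame = Image.open("protest_frame.png")

# Crop a recognisable element (a sign, a flag, a building facade).
# Coordinates are (left, top, right, bottom) in pixels, chosen by eye.
crop = frame.crop((420, 180, 780, 460))
crop.save("sign_crop.png")  # then upload this to Google Images or TinEye
```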
Daniel Emmerson16:00
You've given us some context already about why this is important to get right as far as the audience is concerned. And we're talking here about non-AI generated images and content. You mentioned earlier that now generative AI has come along, and this poses a much more significant problem in terms of fact checking and identifying the source of content. Why is that the case, Thomas? And what are you seeing on the ground here from a GenAI perspective?
Thomas Sparrow16:36
One of the challenges that we have as journalists is to what extent we use AI to verify AI. There are lots of AI detectors out there, some of them better than others, and the question is to what extent we can use them, or how reliable they are. I'm talking here from a fact checker's perspective, but I'll get into the broader perspective in a second. In any case, AI detectors, or in general using AI to detect AI, can be one step in the verification process that we have as fact checkers, but also one that someone at home could use, because some of these tools are actually available to anyone. So that's one challenge. A second challenge is to what extent you can use AI yourself. So can we generate an AI picture by using a prompt and then put it, let's say, in our newspaper instead of a photo of a, I don't know, a court case? The consensus in most cases now is that you shouldn't, or that if you are going to use it, if you are going to publish it, it has to be labeled and identified as being AI generated. You can say, yes, that's great. I'll give you the problem: it is very easy for someone to then take that label off and repost it again. So basically you can take a screenshot of it, eliminate the label, and then you publish it again and say it's true. Those are the challenges that we are facing. But if you look at it from the perspective of not a journalist, but let's say someone who's in school, imagine you open your WhatsApp in the morning and you receive an audio, let's say a 10 second, 15 second audio, of a politician saying that something very bad has happened in the country. I imagine your gut feeling would be to believe what you're listening to and to say, wow, something really bad has happened in my country. The problem is, now we don't know whether the politician did or didn't say that something bad had happened in the country. It is very easy, especially with audio, to use AI to generate synthetic audio, in other words, to manipulate someone's voice to make him or her say something that he or she didn't say. And it basically means that the challenge is not only for me as a journalist, trying to identify whether politician X, Y or Z did say that or whether it was something generated by AI. A concrete example: in the US election last year there was a robocall, as they call it in the US, a deepfake audio of Joe Biden telling people not to vote in one of the primary elections. But in reality it was not Joe Biden saying that. It was an AI deepfake audio, where they took his voice, cloned it, and made him say something that he hadn't said. The impact of that could be that someone decides not to go and vote, because if the president is saying that, then obviously you probably believe it. So this example shows you that disinformation, especially AI generated disinformation, is not something that is just abstract in nature, but something that can have a concrete impact on people's lives.
Daniel Emmerson19:55
And is that something, Thomas, that you would say occurs frequently? You mentioned WhatsApp earlier, and I'm thinking about this from a student perspective; what I'd love to get into a little bit later on is how teachers can better prepare their students to analyse and navigate this themselves. But we're talking here about an isolated audio clip that's sent as a WhatsApp message. Is that going to have the same impact as a video that you might see on TikTok, or a post you might see from a verified account on another social media platform? What are your thoughts on that?
Thomas Sparrow20:32
So it can, because in fact some studies have revealed that if you receive disinformation, and in that case it is disinformation, from someone you trust, from a school friend or from a relative, then the likelihood that you'll believe it is higher than if you just see it on a random social media platform. In other words, let's say you're in your school WhatsApp group and there are, I don't know, 60 students in it, right? And one of them posts this isolated WhatsApp audio of the president saying something. Then the likelihood is that you will end up believing it, because it's coming from someone you know. So that's one side of the story. The other side of the story, a different perspective on that, is if you use a social media platform and then you see something that has massive reach. So in this case, for example, not an audio deepfake but a video deepfake, and it's got massive reach and it's well done, so you don't realise at first glance that maybe the voice and the lips are not synced or that the movements are a bit strange. If it's really, really well done and at first glance you believe it, and you see that it's got massive reach, so hundreds of thousands, if not millions, of views, then you will doubt. You will doubt in the sense that you may think that it's actually accurate, because if so many people have shared it, then they must be right. So basically there's a lot of psychology behind that, and that's why it's so important to actually know how to tackle disinformation, or in particular AI generated disinformation, because it challenges the basis of what we believe in. Because if you have just a very random shallow fake, so not a deepfake, a shallow fake, one that doesn't use AI, you can very easily identify it as being fake.
Daniel Emmerson22:26
Can you give us an example of what a shallow fake might be?
Thomas Sparrow22:29
A shallow fake is if you edit something just with Photoshop, and then it's very easily identified as being false. It doesn't have that AI element behind it where you actually think, because you're seeing it and you're listening to it, that it may actually be right when it's not. So a deepfake is the opposite of a shallow fake. A shallow fake, for example, can be a video that has not been edited and is actually accurate, but placed in a different context, so it has been completely taken out of context. Or it can be an absurd claim about, I don't know, a politician who likes to dance this or that dance when he or she doesn't. So it doesn't have that actual relevance. Basically it's shallow fake versus deepfake. And the focus is more on deepfakes than on shallow fakes, although there are many more shallow fakes than deepfakes. But the focus is on deepfakes because they really challenge our own beliefs in a much more specific way. Because you're actually seeing, let's say, Obama or Trump saying something in his own voice, and then you still have to ask yourself: is that the President of the United States, or is it someone who has manipulated his voice or his face?
Daniel Emmerson23:48
This has implications that go way beyond current affairs, right? If you are able to impact individuals' beliefs and how they understand the world to work... If you're a teacher and you're trying to teach a concept, but your students have been exposed to disinformation that runs counter to what you're trying to teach, this could have very problematic consequences. Is that something that you've seen in the schools that you're working with?
Thomas Sparrow24:20
So if you visit schools, and I'm sure you visit plenty of schools as well, you will realise that probably the most challenging aspect for many schools is not necessarily the big foreign policy or political stories, but actually the deepfakes that may even be created by students themselves against other students. I'm not saying that every school faces that, but it is something that you encounter in some schools. Or disinformation that doesn't have to be a deepfake, which is spread on social media specifically against some of the other students in the classroom. So there's that level. If you move it one level further, then you have the political level as well. And that is something that not only affects what students, what teachers, what the school community might end up discussing or even believing in, but it can also affect, obviously, how well you can talk about issues that are very controversial but that are still necessary in the school curriculum and on the school platform. If you're talking, for example, about climate change, which is a topic that, at least in the schools that I have visited, is a very important one, one that is discussed in several subjects, there is so much disinformation about climate change, there is so much misleading information about it, that it's sometimes very difficult for teachers and for schools more broadly to really focus on information that is fact based, that is trustworthy, that is reliable. Trustworthy and reliable, I mention both because there's a little bit of a difference there; they're not always synonyms. But in general it basically makes it more difficult for teachers to guide students in a way in which they can end up really focusing on fact based information. And I'm not just saying it's a problem with students. I've had really a lot of teachers approach me to say that they are also facing issues when it comes to identifying what's true and what's not. It's not just something that you see among school students; you see it very clearly among teachers themselves, who are concerned about what kind of information they are getting and how they can then transmit it to the students in a responsible way. And that's why, when I visit schools, I do not only visit schools to talk to students, but I also visit schools to talk to teachers, because I think it's equally important to not only provide students with the skills that they need to tackle disinformation, but also to guide teachers, respectfully, on how they can approach the topic in a better way. And I say respectfully because I don't want to come to a school and say, look, I am here, the one who knows everything about disinformation, and I'm going to tell you how to do it. I am not a teacher. But what I can do, in a very respectful way, is suggest issues that they could incorporate in their own school curriculum. Suggest: I'm not telling them this is the way forward. I'm just telling them, look, I see a problem, and maybe these are tools that you can implement to guide your students and yourselves better when it comes to the issue of disinformation.
Daniel Emmerson27:47
So what are the main approaches that you might recommend to a teacher in that situation, particularly if they come from a non-technical background? Something that's easy to implement.
Thomas Sparrow27:59
One thing that may work, and that's an initial step, is to have someone external talk about the issue of disinformation. Why? Because having a guest tends to break a bit the dynamic between the teacher and the pupils. You have someone there who's external, who has maybe the experience of a journalist, or the experience of a fact checker, or the experience of, I don't know, an academic, and who can basically present his or her views and his or her analysis. It doesn't have to come specifically from the teacher. So that's one thing, and you see it more and more, whether it's project days at the end of the school term or inviting a journalist to spend the day in school; you see that very frequently. A second element is incorporating very small bits and pieces into the school curriculum itself. So if you're talking about democracy as part of a political science class, I don't know the name of the specific class in a UK school, but let's say it's called political science and you're talking about democracy, then instead of just focusing, or as well as focusing, on the theory that you have to talk to students about and on the concrete history of democracy, you could begin by bringing in one disinformation case on democracy, discussing it with the students, and then going through the issue with the students and verifying it yourselves. Now, the question that I get from many teachers is: where do they get the example from? Because obviously, if they have the example, they might use it, but where do they get it from? And that's where the next step has to come in, namely that they get in touch with those journalists that I mentioned in the first place, who can provide assistance so that teachers have the examples they need to use as part of the school curriculum. Because I don't expect, and I can't expect, teachers to come up with disinformation examples just for the sake of it. But there are also specific tools provided by institutions to help teachers have those elements at hand when they need them, so they can then implement them if necessary. So it's not very difficult. I know it's a challenge, but it's not the most difficult challenge. The third element, which I obviously tell them about and which is related to the second one, is that there are fact checking sites that they can rely on. So basically, if you do an advanced Google search on a topic, say democracy, you can just go to Google, put democracy plus disinformation or false stories, and see what comes up, and then you see what fits into the topic that you are discussing with your students. So I guess a little bit of advanced searching could help. But I understand perfectly that teachers are not experts in disinformation, so they're not just going to come up with a viral disinformation case to then discuss with their students.
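As a small sketch of that advanced-search idea: the snippet below builds a Google search URL that pairs a curriculum topic with the word "disinformation" and restricts results to a single fact-checking site using the site: operator. The site used here, fullfact.org, is just one example of a well-known fact checker, not a recommendation from the episode.

```python
from urllib.parse import urlencode

def fact_check_search_url(topic: str, site: str = "fullfact.org") -> str:
    """Build an advanced Google search URL for fact checks on a topic."""
    # The site: operator limits results to one fact-checking site.
    query = f"{topic} disinformation site:{site}"
    return "https://www.google.com/search?" + urlencode({"q": query})

print(fact_check_search_url("democracy"))
# -> https://www.google.com/search?q=democracy+disinformation+site%3Afullfact.org
```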
Daniel Emmerson31:01
I think it's helpful for folks to be mindful of the implications of misinformation and disinformation, particularly through the lens of generative AI, Thomas. And I hope that this conversation will have been intriguing, and hopefully helpful as well, to many who are listening. We are working on some research together as well, specifically relating to this, which I'm very excited for us to be publishing in the very near future. Thomas, as always, it's absolutely fascinating speaking with you on this subject. You're doing incredible work and we're very, very grateful to have had you today on Foundational Impact. Thanks ever so much for your time.
Thomas Sparrow31:44
Always a pleasure. And as a final note, as a final message, I can only encourage teachers, and I can only encourage pupils as well, to be curious but at the same time cautious when they're on social media, and to reach out to journalists, wherever they might be, to ask for assistance when it comes to dealing with disinformation. So if the school is in a specific town in England, in Scotland, in Wales, in Germany, in France, wherever, go to the local newspaper, call them, say, look, I want someone to come to the school to talk about disinformation, to talk about how journalism works. I'm pretty sure that in most cases someone will be available to go to the school. This is something that, by the way, is not only important in the school community. Among journalists, there is a growing understanding that we have to go beyond our television channels and our radio stations and our newspapers, that we have to go where people are actually listening to us, where people are watching us, where people are reading our reports. We have to go and talk to other parts of society. So I'm pretty sure that if school X, Y or Z reaches out to the newspaper in their local community, someone would be able to provide assistance and maybe guide students on the issue of disinformation. And if not, then they can contact you or they can contact me, and then we will be able to help.
Daniel Emmerson33:09
A wonderful place to leave off. Thomas, thank you ever so much once again. Really appreciate it.
Thomas Sparrow33:13
Always a pleasure, Daniel.
Voice Over33:15
That's it for this episode. Don't forget, the next episode is coming out soon, so make sure you click that option to follow or subscribe. It just means you won't miss it. But in the meantime, thank you for being here and we'll see you next time.