Dr. Biljana Scott: Language as Our Defining Asset

March 26, 2026


Transcript

Daniel Emmerson 00:27
Welcome everybody, once again to Foundational Impact, our podcast series where we're looking at AI from multiple perspectives to give teachers and school leaders an understanding of what and how their practice might be changing as a consequence of the evolution of this ever exciting technology. I am absolutely delighted to be joined today by Biljana Scott, who is, among other things, a phenomenal facilitator when it comes to workshops. I had the immense privilege of being with you, a few months ago now, I suppose, in Portugal for one of your workshops. And I was absolutely blown away by the content and your delivery and your insights. And I'm so happy to be able to share some of this work with our listeners. I was wondering if we could maybe start with that, and if you could just give us a bit of a flavor and one or two insights around your background and the work that led you up to facilitating that session.
Dr. Biljana Scott 01:40
Sure. So my background is mixed culturally in that I was a so-called third culture kid. I grew up in Switzerland, but of my parents, one was Scottish, the other was former Yugoslav. And I grew up with three languages but was never told that this was unusual. So I was always a little bit surprised and taken aback that I was worse at language than other people. You know, I didn't spell as well in French when I went to French school. I didn't know as much vocab in English when I went to English school. And I was really a little bit stuck for words when I was speaking Serbo-Croatian, as it was then. So I think that childhood influence made me interested in language and made me feel I should study language and get to understand what everybody else clearly already knew. So I did various languages and then I did linguistics at university, did a doctorate on Chinese linguistics and taught linguistics, and after a while felt that this was all very pie in the sky, very abstract, very theoretical, and had really not very much impact on making the world a better place, which maybe idealistically, naively, I was committed to. So I retrained and learned about diplomacy and then started lecturing on how we can use the resources of language in order to achieve our objectives. And not by bullying people or overruling them, but by getting them on board. So basically through the powers of persuasion and negotiation. And that interest in diplomatic communication then led me into a deeper dive into implicit communication and the various ways in which silence can be meaningful, especially silence that has been both created and constrained by language. So not just open ended silences, but the little bits of meaningful silence that you get between words, between the lines, implications, presuppositions, etc.
And that then led me into an interest in how you can combine force and grace in your communication, where force and grace are really integral to the language that you're using, how these attributes can be found in language rather than in psychology or sociology or interpersonal dynamics or anything like that. So that's where you met me, in Portugal. That's what I was workshopping on, trying to mine language for elements of force, elements of grace, and ways in which language might integrally combine the two.
Daniel Emmerson 04:51
What I'm fascinated with in particular is our experience of how the dynamics of language are perhaps being shaped through our interactions with technology. And I'd love to come on to that a little bit later on. But perhaps first of all, to get a deeper level of understanding here with regards to your experiences in language and with language. When we're talking about language, do you see that as a tool predominantly that helps us to get things done and communicate as human beings, or does it have a deeper role in how we navigate and experience the world?
Dr. Biljana Scott 05:37
Yes, it's definitely a tool. There's no doubt about that. I mean, we can articulate our thoughts and convey them in ways that we wouldn't be able to do without language. So it's been a really huge asset for our species, but it's also been a defining asset of our species because we are Homo loquens. We are the species that thinks and speaks, that is able to put those thoughts into words. And those words, yes, they're often used for communication, for interaction, for collaboration. But they're also used in a private manner for, let's say, just figuring out what our thoughts are. A lot of people write in order to understand what they think. And this may be at a philosophical level or at a problem solving level, but it may also be just at a creative level. They write because it's only when they write poetry or fiction that they really, truly understand what the deeper workings of their soul are. So language has a lot of functions, from communication and collaboration to introspection and a better understanding of how our minds work, I think.
Daniel Emmerson 07:06
And as someone who speaks many languages, do you have a particular language, for example, that you think in or dream in?
Dr. Biljana Scott 07:14
Good question. I think I dream in different languages. I know that I do that, but I think that over time, unfortunately, I've been shedding languages. In fact, I think I've probably shed more languages than I acquired. But that's only because one day I woke up having spoken one language, Serbo-Croatian, and then discovered that I now spoke two languages, Serbian on the one hand and Croatian on the other. I think the thing about language is that unless we use it, we lose it. And I haven't been using my other languages that much. Having said that, I've got a lot of passive knowledge and I read in all of them and I enjoy that very much. So I'm sure I could reactivate them. But I think in images. Actually, I don't really think in language, I think in images. And I spend a lot of my time entertaining myself by translating those images into different languages to see how they come out differently, what different insights I get from translating those images.
Daniel Emmerson 08:21
I'm wondering then, when we're thinking about extracting meaning from language, how much of a difference does it make if we are thinking about the written word or spoken language?
Dr. Biljana Scott 08:41
I think there are two differences. One is a temporal one. So the written word allows us more time and we can therefore reflect on what's being communicated in our own time and to our heart's content, where spoken language just is. You have to keep up with it because there's no press replay option when you're speaking live. However, spoken language is very much enriched by body language and expressions and micro expressions and the whole kind of energy that comes off a person. Now I say it's enriched by that. That can also prove a distraction. I find myself watching TV and thinking some comment or other about the person's looks, or about their tone of voice, and I'm not listening to what they're saying. So the written language actually focuses the mind because it gets rid of all those additional details.
Daniel Emmerson 09:47
And does that happen in the same way, do you think? If we are or when we are speaking, when we are using the language ourselves versus when we're writing something down, for instance?
Dr. Biljana Scott 10:02
I guess there would be individual variation there, but for me personally, I am perhaps more surprised by what I have to say when I'm speaking. So I love going to a reading group, for instance, because I'll have read a book and I'll have had various thoughts and impressions, but I won't really know what they are until I'm forced to translate them into words that are coherent in a sentence that is spontaneously uttered in the context of a discussion group. Whereas if I'm writing, I will probably have a better idea of what it is that I want to say. So there's a greater spontaneity when I speak, that has the power to startle me, and I quite enjoy that.
Daniel Emmerson 10:51
That's a wonderful way of looking at it, and I hadn't quite considered it. The reason I asked that question, I suppose, is because I spend a lot of time talking with teachers about the necessary friction that exists in learning. And at the same time, I also spend a great deal of time using generative AI and speaking with colleagues, in particular, about its use. And there's an emergence, particularly with students in our schools, of oral communication, of speaking with the AIs to communicate their thoughts and feelings, find solutions to problems and look for information. When you write something, particularly if you're writing by hand as opposed to typing, there's an increased level of friction there, isn't there, between your thoughts or your thinking and what comes out in terms of language?
Dr. Biljana Scott 11:52
I think you're right, definitely. I think maybe part of that comes from the fact that we're a lot of the time just communicating in platitudes. We're kind of exchanging set expressions, and we're not always producing new and interesting and original content. So maybe when we're writing, we're sitting down to write in order to challenge ourselves precisely to produce original content, whereas when we're speaking, we're maybe in part just saying what's expected of us or echoing what somebody else is saying or paraphrasing it. But I think, again, because it's a rather subjective experience, my own love of speaking is that it forces me to think clearly at a faster tempo, and that's quite the challenge of concentration. So if I'm writing, that friction that you mentioned might make me go off on a daydream in order to avoid the pressure of having to deliver. Whereas if I'm speaking to you, all the blood will rush to my head, as you'll probably see, and I'll be thinking really hard in order to, you know, say what I have to say. But I didn't know what you were going to ask me, so I didn't prepare in any way what I might say to you. And as is the nature of any podcast, we're just going where this happens to take us. And the fun of it, the challenge of it, and of course, the risk of it, is that we don't know where it's going to take us. It may take us nowhere at all, or not anywhere that's terribly interesting to your audience.
Daniel Emmerson 13:36
Well, I'm going to take us somewhere interesting for sure, because I want to focus a little bit on your blog post where you were writing about AI specifically. And there's a couple of things I just wanted to pull out from that. You said that language, for example, has the power to shape our perceptions so that it can change how we feel and engage with the world. Does that mean if our communication is predominantly with a machine, does that change anything?
Dr. Biljana Scott 14:10
Now, that's a very good question. I mean, that machine is going to be reflecting the kind of language that we're all using with each other anyway, because that's the nature of AI communication. To the extent that the machine is reflecting the words that we're using, then it may not change much, and I'll come back to that point. But to the extent that the machine may have an agenda, in that it wants to present itself in a positive light, it may very well choose words that are positively connotated instead of more negatively connotated words that you might have been using, or that might be the more common usage in society at large. And so there will be an influence coming in through that choice of connotations. For instance, words like an assistant bot or an AI agent have certain connotations. And if I were to use a word like buddy bot or AI friend, this would have subtly different connotations. And any word that I choose will come with its little cloud of connotations, of secondary meanings. These meanings come from the relationship of that word to other words in the language. And they're slightly tinted towards positive or negative, towards something attractive or something that we would rather reject. So, yeah, I think if we communicate a lot with a machine and not with other people, we will get a certain series of connotations of words with particular, let's say, positive connotations in one aspect or, depending on the subject, negative ones. And that will influence our perception because we wouldn't be exposed to as wide a diversity of voices as we might if we were speaking to a room full of people.
Daniel Emmerson 16:55
So when we're talking, I mean, you mentioned bot there, and again, I'm referencing your article. But when we're talking about AI, does that, particularly if we're talking about it in a school context, I'm thinking about teachers talking about it with their students, with their learners in the classroom. Should we be thinking more carefully around language and how we're describing behavior? You mentioned hallucinations, for example, or even the word intelligence.
Dr. Biljana Scott 17:27
Oh, yeah. We definitely have to be thinking carefully about language. I mean, critical thinking is all about thinking about the role of language in presenting the world to us in a particular way. And so I think that right from, you know, subword level to words, to metaphors, to logical fallacies and even presuppositions, all of these elements of language, which are so inherent to language, so prevalent in language, and which often operate quite covertly in language, all of these have to be brought to consciousness. And we have to start thinking critically about them and encourage children to think critically about them in a way that children, I think, would thoroughly enjoy doing. If you were to, for instance, introduce children to presuppositions. Now, the word presupposition is horrible because it's kind of multisyllabic and nobody really knows how to define it. And actually, the definition of presupposition is one of the worst definitions of any term that I've ever come across. It is an antecedent assumption that holds under negation. An antecedent assumption that holds under negation. Not many people would understand that, let alone a child. Right? So that's like, no, we're not dealing with those, we're moving on. But if you illustrate a presupposition, if you say, play it again, Sam. Don't play it again, Sam. What you have is exactly that: a prior assumption that Sam has played it before. And whether you put it in the affirmative, play it, or in the negative, don't play it, the assumption remains that he has played it before because of the word again. And so a presupposition, watch the flying cat tail, there we go, a presupposition is a really fun thing because it's all over language. I just have to use a definite article, like the problem with AI, and I've assumed that there is a problem with AI. I can also say the king of France, and I'm assuming there is a king of France, whereas there isn't.
I can use the possessive and say, do you know what your problem is? And I'm assuming that you have a problem. I can say, don't you know what your problem is? I've negated it, but the assumption is still there: you have a problem. Right. I can say, the king of France's beard is gray. I'm making assumptions that are pure hallucinations in this case. Right. The king of France's beard is not gray, it's white or black. It's the same assumption, right? So presuppositions are really fun because they're all over the language. And if you were to, for instance, take the covers of magazines like the Economist or Times or New Scientist or whatever it is, which have a title concerning AI, teach kids what these presuppositions are in practice, get them to, you know, play a game of identifying them, and get them to identify how all these covers are first grabbing our attention with their titles and then playing out a story in the tension that exists between the title and the image, you will get those kids to be thinking critically in a way that will be really enjoyable for them. Really enjoyable. Because the pleasure comes from recognising that you can outmaster language that is trying to master you by influencing your way of seeing things.
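The "holds under negation" test Dr. Scott describes, a prior assumption carried by a trigger word like "again", a definite article, or a possessive, can even be sketched as a toy classroom exercise. The snippet below is a hypothetical illustration only: the trigger list and the function name are invented for this sketch, not part of any established linguistic tool, and real presupposition triggers are far richer than a word list can capture.

```python
# Toy detector for a few common presupposition "triggers": words whose
# presence smuggles in a prior assumption, whether or not the sentence
# is negated. Trigger list and names are illustrative inventions.

PRESUPPOSITION_TRIGGERS = {
    "again": "the action has happened before",
    "the": "the thing referred to exists (definite article)",
    "your": "the addressee possesses the thing (possessive)",
    "stop": "the activity was previously ongoing",
    "still": "the state held earlier",
}

def flag_presuppositions(sentence: str) -> list[tuple[str, str]]:
    """Return (trigger, smuggled assumption) pairs found in the sentence."""
    words = [w.strip(",.?!'").lower() for w in sentence.split()]
    return [(w, PRESUPPOSITION_TRIGGERS[w]) for w in words
            if w in PRESUPPOSITION_TRIGGERS]

# Negation does not remove the presupposition: both sentences flag
# "again", i.e. both assume Sam has played it before.
print(flag_presuppositions("Play it again, Sam"))
print(flag_presuppositions("Don't play it again, Sam"))
```

A classroom game along these lines might have children run magazine cover titles through the detector and then argue about which flagged assumptions are genuine presuppositions and which the word list catches wrongly; the false positives are where the interesting discussion starts.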
Daniel Emmerson 21:58
But you mentioned covert meaning as well, right? Could you unpack that a little bit?
Dr. Biljana Scott 22:06
I think presuppositions are in a way covert. We don't see them. We don't walk around thinking, ah, there's a presupposition, I'm going to challenge it. A lot of us do, but not always. You know, when Gandhi was asked, what do you think of Western civilization? His response was, I think it would be a very good idea. So he saw the presupposition that none of us would see because we'd be straight in there thinking about the pros and cons of Western civilization and we'd be heading off into colonization and inventions and balancing them out. And he's like, yeah, I think it'd be a really good idea. We've not done it yet, clearly.
Daniel Emmerson 22:54
I mean, these are aspects that make language, I suppose, so rich and also so fragile, and that can be explored through activities like the ones you've just mentioned that engage critical thinking. I'm wondering then, when people are engaging with AI, which typically sounds or appears very calm and confident and certain of itself, how do you think that is impacting users, or impacting people, particularly if they're engaging with it frequently?
Dr. Biljana Scott 23:38
I think it's a way of outsourcing your own need to think. That's probably my, let's say that's one reaction, right? And that's my fear driven reaction: that if you have a tool that can do it for you, then why would you keep up the skills? I used to speak five, six languages. I don't need to anymore because I could always use Google Translate or whatever and communicate in that way. So I'm losing skills out of laziness. And we are very lazy as a species. You know, we had to invent puritanism in order to try to make us work hard. But that had its day. And then, you know, we have teachers who tell us that we have to work hard. But you know, unless you get addicted to concentration, then why would you work hard if you could outsource that effort to somebody else? But of course, any kind of outsourcing, especially of intellectual effort, gives other people power over you because they can then decide for you.
Okay, forget AI, think of an orator, right? A leader is often somebody who can put into words the aims, the values, the objectives of a people and who can convince them why their way of seeing the world and acting upon it is better than a rival's way of doing so. Leaders are often very articulate, they're good orators, right? And very many of us are happy to hand over to a leader and say, yep, you do the talking for me. And that's great because we have confidence. And then that trust is going to be there and probably is going to outlive the extent to which we should continue trusting that person. Because maybe at some point there's a divergence between our thoughts and theirs and our objectives and theirs, and our methods or preferred methods of getting to those objectives and theirs. But we've jumped on the bandwagon and loyalty demands that we stay true to them. And this is all a way of outsourcing our own free thinking and letting somebody else do it for us. So we do it amongst humans. Why wouldn't we do it with an AI bot? I mean, definitely it makes it so much easier and we're all of us largely inclined to do so.
Daniel Emmerson 26:28
Do you use the technology yourself?
Dr. Biljana Scott 26:34
I don't actually, but then I'm now retired, so I don't have to for work. Otherwise I definitely would. However, having said that, I work for an organization called Diplo Foundation. And because I'm interested in implicit communication and diplomatic communication, and because Diplo is very, very interested in cutting edge teaching, cutting edge technology, we've developed an AI assistant for the course that I teach on Force and Grace, which is able to ask loaded questions, hard hitting interview questions, and which is then also able to offer tentative answers using a very powerful response system called ABC, where you acknowledge the driving concern of a question. And every interviewer legitimately is asking questions that concern their audience. Right? That's what you're doing. And so the ABC response acknowledges the validity and the value of that driving concern that the interviewer has in their question. And then of course, the interviewee has a speaking point that they have to make; that's their communication point. And they have to try to learn to bridge between that driving concern and their communication point. And that ABC, acknowledge, bridge, communicate, strategy is a tough one to learn. It requires a bit of thinking on the spot and it's one that can definitely be trained. So we've developed this bot that can ask loaded questions and then, well, asks you to provide an ABC response and then helps you analyze your ABC response. And I think that's a really good use for an AI assistant, right? In that it allows you to do self study at home at your own pace, especially on a subject that could be slightly embarrassing or humiliating if you were to lose face in front of colleagues, especially if those colleagues were junior to you in ranking in a diplomatic context.
So you can kind of skill up in the privacy of a discussion with your bot and then deliver your better rehearsed answers, or the method is better rehearsed, for when you're actually in a dialogue with somebody. So that's the kind of use I have made of ChatGPT and LLMs more generally. But otherwise not in my everyday life, because I don't need to. In fact, I'm going in the opposite direction. I am very much trying to dive deep into poetry and into writing poetry because it is one form of communication that seems to break so many of the rules of normal communication. Right. So remember at the outset I said that we speak in platitudes an awful lot of the time. We're just echoing each other. But with poetry, if you sit down, the friction that you mentioned, oh my God, it's so much greater: that white page staring at you and your mind filling with cliches and platitudes. And you have to try to start thinking in a way that is much more fragmented, much more elusive, much more implicit, that basically breaks all the rules in the ten-best-ways-to-communicate-clearly guides that we come across. And that's very much where I'm going in the opposite direction to getting help with my communication.
Daniel Emmerson 30:53
Interestingly, that's always a use case that I hear from teachers about how AI could be helpful, right? The blank page syndrome, where you just need something in order to get you started. The counter argument to that is what you've just expressed, right? It's that deep thought process that you need to go into. That's where the thinking really happens, right, when you're making those first steps. How important is that, do you think, when it comes to learning?
Dr. Biljana Scott 31:34
I think you need two things. I think you need the deep thinking time and perhaps your page and your pen in order to use a medium that is as slow as our thoughts seem to be, especially when we're developing them or translating them. But I think you also need the stimulation that causes your mind to spark in the first place. So you really need either conversation or a critical mass. You know that Louis Pasteur said chance favors the prepared mind. You have to have reached a critical mass of information. You have to have the neurons and the neural networks in place and firing away for one chance spark to suddenly ignite a new series of connections that you will then, having grasped them in that moment of epiphany, be able to slowly work out and translate into words that will convince others of the value of what it is that you've understood or have to communicate. Right. So I think you need the deep time, but you also need the stimulation. And that's why if you only sit with a blank page, you will probably end up with nothing. Whereas if you have a bit of a blank page waiting for that spark of chance, and you then go out and live and interact and expose yourself to inspiration, make yourself ready for inspiration, then you've got the best of both worlds.
Daniel Emmerson 33:22
Does that spark need to come from, I'll phrase that differently: is it possible for that spark to come from artificially generated content, synthetic content? Content that doesn't come from another human being, an art form, a book, a piece of poetry?
Dr. Biljana Scott 33:52
Yeah, good question. You know, that spark can come from a single phrase. Very often people will read just one phrase, and it happens to ignite a whole network of associations that was already there in their mind. So if it's just a phrase, that phrase could come from anywhere. Yeah, it could come from something that somebody wrote 3,000 years ago, to something somebody said on the radio while you were not even listening to it, but which arrested you nevertheless because it resonated, to something that an AI might have generated. But you have to be ready for it. You have to have prepared the ground.
Daniel Emmerson 34:37
I want to take us back, if I can, just a little bit, to engaging with Gen AI. We talked before in the workshop about communication and what good communication means. A lot of that is around clarity in meaning, but also demonstrating that you're listening intently to the language and what's being said. But when AI starts to sound more conversational and mirrors good communication, is it possible for it to do that in the same way? Is it possible for it to come across as though it's really listening, or is it copying the surface of good listening?
Dr. Biljana Scott 35:28
It's both. I think that we're very good at suspending disbelief. And so, you know, when we go to the theater, it's not the real thing, but oh my God, can we be totally captivated and transported by a good piece of theater. Same with a film, same with a story, whether we read it or whether we're told it. It's in our DNA. You know, we are very, very susceptible to the influence of stories. And all stories involve a slight suspension of disbelief, a slight leap of faith into the world of the narrator. And so I'm sure the AI can elicit that same reaction, that we take that leap of faith and then we fill in. Because the whole point about stories is that we fill them out for ourselves with the relevance that we're seeking or finding in them. And that sense of resonance, we are as much a part of that as the story is an agent in eliciting that buzz, that kind of to and fro that's constantly taking place as we listen, as we identify, as we evaluate, as we anticipate, as we affirm, and then maybe are surprised and then take off in another direction. So all that rather complex and hugely satisfying dynamic is one that we're very, very primed for. And we can project it onto an AI interlocutor just as easily as we can project it onto the construct that is a theater or that is a story. Now, as for the superficial mimicry that you gave as the alternative: at the moment, AI is probably not entirely credible. I know that AI has been used to reproduce people who are dead, for instance. You know, you use their voice, you use their journals and whatever other data you have, and then you have a conversation with them. And, you know, it just shows to what extent we're ready for this suspension of disbelief. We know our loved one is dead, but we're still wanting to bring them back through AI and have conversations with them.
There will be little slip ups, because in that kind of context, you'll be very, very finely tuned to exactly what the dead person would have said, what your loved one would have said, or how they would have reacted, or what they might have known. And so a slip will jar and really sound false and will create a distance very rapidly. But if you're talking to an AI as a shrink, well, from what I gather, most shrinks don't talk back to you anyway, so you're just lying on the couch and they're just silent and you're just talking to yourself. But you know that there is a listening ear, and that alone is enough to make you feel like you're developing a relationship with this ear, right? This listening ear. AI is more than an ear, it actually engages with you. So it's much easier for you to develop a relationship and a sense of, yeah, we're special to each other, they truly understand me. And I think that was the case really early on, wasn't it, way back in the, God knows when, the 60s or something, when some secretary eventually came to believe that the proto AI she was talking to was actually a really good friend and therapist to her.
I think our suspension of disbelief is such that we go off in all sorts of weird directions in terms of having relationships with AI, granting them human status, granting them passports and nationalities and legal rights, and trusting our children with them. These all involve leaps of faith. And yeah, unfortunately we're primed for them, and we're not necessarily that good at monitoring and regulating them.
Daniel Emmerson 40:17
And yet we know this is how a lot of people use their AI tools, right? So OpenAI produced research on how people use ChatGPT in September, I think last year. They were looking at one and a half million users and how they engaged with ChatGPT on a weekly basis. And 70% of all use cases were of this sort of counseling or social or even intimate kind, examples of how people were using the tool as opposed to academic and professional, which is interesting. But of course then you have teenagers that are using AIs that are built as, going back to one of your earlier points, AI friends or companions. And because these are typically non judgmental, they feel comfortable talking to them about issues they're facing at home or in a peer group or wherever. And on the one hand that could be positive, right, because they're at least communicating their feelings in some way. But on the other hand, those might otherwise have been interactions they'd have with another human being. Is there a concern there that we're almost offloading resilience building in young people when it comes to how they're engaging with this technology, because of how it responds back to us?
Dr. Biljana Scott 41:58
That's interesting because they are building resilience by thinking their problems through with the help of AI. So to the extent that resilience is achieved by digging deep into yourself and finding the resources that equip you for further effort, they're doing it. Any kind of self analysis, even if it's with the help of a prompt, is still self analysis, right? If the AI were simply to produce answers and tell them how to behave and how they should feel, then that's maybe a different matter because then you are totally outsourcing the analysis to somebody else and then obeying that somebody or something else. And they may have their own agenda, which is not your best self interest. But at the moment, to the extent that AI is just a friendly listening ear which prompts you occasionally, I think that's probably a pretty good way of building resilience.
Daniel Emmerson 43:15
Any final thoughts before we wrap up? Particularly when considering that we have heads of school and senior leaders and teachers here who are thinking about how they might best have conversations with their learners about their use of generative AI tools. From the perspective of language, is there anything you'd want to focus on?
Dr. Biljana Scott 43:42
Yeah, I think maybe two things. One is the critical thinking dimension of it and setting up exercises and interactions where we're all invited to analyze critically the way that we ourselves tell stories about AI. We tend to tell them either as kind of friend or foe narratives, right? So AI is either this very dangerous future enemy disguised as a current friend whose mission is to annihilate us, or AI is a really great technological advance that is opening up all sorts of fields and facilitating all sorts of procedures and interactions in a way that can only be a good thing. And of course AI is both of those, or potentially both of those, because we have both those aptitudes, but we have every aptitude in between. Between good and bad are all the shades of everything else, right? So I think some critical thinking about the relationship between how we speak about AI and how we perceive AI is always going to be a really good thing. But the other thing that I think would be really interesting for children especially to do would be to envisage ways in which human communication is unique to humans, ways that AI might not be able to emulate. So if you were to ask a kid, which ways of communicating do you have that AI does not have? First of all, you'd get lots of interesting insights into what kids think communication is and what is central to their individual form of communication. It may be touch, it may be facial expressions, it may be something else that we don't know, something more related to language. And that would be interesting to find out about, because I'm interested in implicit communication. I'm fascinated to know which aspects of implicit communication AI can master and which it may not be able to master. So we've spoken about presuppositions. AI can definitely master those, because there's a set number of parts of speech or functions in language that trigger presuppositions.
And you can always test a presupposition by negating it. If it still holds under negation, you know you've got a presupposition. So AI can do that very easily. But can AI understand connotations? Probably yes, simply for the fact that connotations very often end up being good or bad, right? And that is so reductive that I'm sure AI could do that. Can AI understand the story capsules that come through allusions? If I were to refer to somebody as a homegrown terrorist, as opposed to a legal protester, for instance, would AI understand the package that comes with each of those two terms? Yes, I'm sure it would. It would do a really good job of that, because it would understand the context and it would gather all that information very, very quickly and understand what the connotations, the loadedness of those terms might be. Can AI understand analogies? I think it probably understands analogies better than we do, because it has a much larger database that it has much faster access to. So I think it can. And analogies are not just a way of expressing yourself more clearly and forcefully, but they're also a way of learning faster and concentrating that whole learning, because you can package lots of things into the equivalence dynamic. So I'm sure AI can do that. Can it do analogies that involve a break, you know, like a discontinuity, some weird poetic analogy like the evening being spread across the sky like a patient etherized upon a table? It's like, what? How does that work? What is that even about? Right. If you take the T. S. Eliot line, would AI be able to generate that? Yes. Would we care? No, because it wouldn't have a human intention. It wouldn't have a psychological depth. It wouldn't have observational originality. It would just be randomly generated by a machine, and wouldn't have any relevance to us as a result, or not necessarily.
So I'm not sure which areas of implicit communication AI is not capable of, or which areas of human communication AI is not capable of. But it's a subject I'd love to explore. And I think that kids would be really good at exploring this, because they're still a lot more plastic, flexible and fluid in their understanding of what is included in communication. And they're also probably the ones who are going to have to determine in the future how they can escape the Big Brother eye of AI, should they need to. And that's how you could present this game: AI is Big Brother. We need to communicate without it understanding us. How are we going to do that? Are we going to do it by signs, by body language, by secret coded communication? And what is that going to be? So I think that's a long winded answer to a question about my last thoughts about communication and AI. But I think as far as children are concerned, encourage critical thinking. Show them that they're the boss of AI, because they can see how we are being invited to think about AI in certain ways, and they can rise above those ways of being influenced by language. On the other hand, start thinking about what forms of communication we have that are unique to us and that AI will not master. Poetry, in my opinion, is one of those. AI poetry is absolutely terrible. No matter how much we try to improve it, it's still palpably bad. So what is it about poetry that seems to be, so far, unique to human communication? That's worth exploring.
Daniel Emmerson 51:35
I'm sure that will have sparked so many ideas and a huge amount of inspiration for our listeners. I know it has in me as well. And I'm so, so grateful for your insights and your reflections and your time, of course. B, thank you so very much for being a part of Foundational Impact.
Dr. Biljana Scott 51:53
Thank you, Daniel. It was a great pleasure. Thanks.

About this Episode

Dr. Biljana Scott: Language as Our Defining Asset

What makes human communication unique in an age of increasingly sophisticated AI? Daniel Emmerson invites Dr. Biljana Scott, a linguist with expertise in diplomatic communication and language analysis, to explore this question in depth. With her multilingual background and extensive experience in teaching the nuances of communication, Biljana probes the complex interplay between human language and AI interaction. Their conversation illuminates whether our increasing reliance on AI might reshape how we think and express ourselves, unpacks linguistic concepts like “presuppositions” in everyday speech, and reveals how the terminology we use to describe AI carries powerful connotations that fundamentally shape our relationship with technology.

Dr. Biljana Scott

Associate of the China Centre, University of Oxford

Related Episodes

May 5, 2026

Calvin Eden: When Your Students Trust AI Over You

Our guest for this episode is Calvin Eden, the founder of LoudSpeaker who works with students across the UK through high-energy and interactive workshops on topics like resilience, emotional intelligence, and healthy relationships. Calvin meets young people in his everyday work, and seeing AI tools become a growing part of students' social and emotional lives reinforces his belief in the urgent need for schools to strengthen young people's confidence, communication skills, and sense of belonging. One major theme in his conversation with Daniel is the importance of human connection. While AI can be a useful tool in different ways, young people need to practice communication, build relationships with others, and learn to speak about their own vulnerability and ask teachers and parents for help when needed. He warns that if AI becomes a substitute for human interaction, students may become less resilient and more isolated. Their conversation also explores the student voice. Daniel shares Good Future Foundation's belief that students should help shape responsible AI policies in schools. Calvin agrees and describes how his work supports schools to build student voice strategies, run student conferences, and create opportunities for young people to be heard. Calvin also encourages school leaders to create a culture where staff and students connect as people, not just through formal roles. He wraps up the conversation by inviting educators to share stories, talk honestly about challenges and failures, and celebrate what they are proud of with their students to build a school environment where young people feel seen, safe, and valued.
April 21, 2026

Erin Mote: The AI Research to Classroom Gap No One is Talking About

In this episode, Daniel sits down with Erin Mote of InnovateEDU about how education systems are responding to AI and where current approaches are falling short. Erin challenges the assumption that progress in education operates within fixed limits. She argues that system-level change depends on collaboration, shared practice, and open infrastructure rather than competition between schools, organisations, or regions. This approach underpins the work of the EDSAFE AI Alliance, which brings together policymakers, educators, and industry to define practical standards for AI use. Its SAFE framework focuses on safety, accountability, fairness, transparency and efficacy, with direct implications for procurement, policy and classroom practice. The conversation addresses the tension between the pace of AI adoption and the slower development of traditional evidence. Schools are already using these tools at scale, while formal research remains limited. Erin outlines the need for informed, iterative decision making supported by shared insight across systems. There is also a detailed discussion of risk. AI-driven personalisation has potential, but current implementations can narrow opportunity through rigid progression models, limited student agency and the use of sensitive data in ways that affect outcomes. These issues require closer scrutiny of how tools are designed and deployed. For school leaders, the priority is to act with intent. Building AI literacy across students, staff and parents is identified as the most immediate and practical step. Current usage levels among educators are high, while formal guidance remains inconsistent, creating a gap that needs to be addressed quickly. Erin also shares resources from InnovateEDU, including policy frameworks, planning tools and AI literacy materials designed to support schools in making informed decisions. The discussion returns throughout to the role of shared standards and coordinated action. 
Where systems align on safety and implementation, progress becomes more consistent and risks are easier to manage.
February 6, 2026

Claire Archibald: Creating Effective AI Governance Structures in Schools

Is having an AI policy enough to protect your school? In this episode, Daniel Emmerson speaks with Claire Archibald, Legal Director at Brown Jacobson and former Data Protection Officer, about what effective AI governance in schools looks like. Their conversation covers essential topics including what makes a good Data Protection Impact Assessment (DPIA), the importance of vendor due diligence, and why schools need robust governance structures beyond just having an AI policy. Claire emphasises the critical role of incident reporting, creating transparent cultures around AI use, and the need for collaborative approaches involving all stakeholders. She also shares a six-step governance framework and practical advice for schools starting their AI journey.
January 14, 2026

Setting Visible Boundaries to Safeguard our Students in an AI-infused World

Daniel's conversation with Gemma Gwilliam, Portsmouth's Head of Digital Learning, Education and Innovation, explores transparency, privacy and safeguarding in AI education. The discussion takes a dramatic turn when, right in the middle of the recording, Gemma puts on a pair of AI-enabled glasses that she purchased easily for under £10, bringing theoretical concerns into stark reality. This jaw-dropping demonstration underscores the urgent challenges teachers face as sophisticated AI wearables become increasingly accessible to students. While we may debate whether AI belongs in classrooms, we cannot ignore the significant risks these technologies present to young people. This episode reveals how Portsmouth supports its schools and teachers in approaching AI responsibly to strike a balance between innovation and essential safeguarding measures.
December 9, 2025

Hult Prize Accelerator Startups: How the Next Generation is Solving Global Problems with AI

What skills will our students genuinely need to thrive in a future driven by AI? To find the answer, Daniel Emmerson goes straight to the source and sits down with brilliant young minds behind seven teams from the Hult Prize Global Accelerator, one of the final stages of the world’s largest student startup competition.
November 11, 2025

Muireann Hendriksen: Adapting AI Tools Based on Learning Science

In this episode, Daniel speaks with Muireann Hendriksen, the Principal Research Scientist at Pearson, about her team's recent research study called "Asking to Learn". The study analysed 128,000 AI queries from 9,000 student users to gain deeper insights into how students learn when they interact with AI study tools. Their key finding revealed that approximately one-third of student queries demonstrated higher-order thinking skills. Their conversation also explores important themes around trust, student engagement, accessibility, and inclusivity, as well as how AI tools can promote active learning behaviours.
October 13, 2025

Leena, Alicia and Swati: Embracing AI in GEMS Winchester School Dubai

Leena, Alicia and Swati from GEMS Winchester School Dubai, share their remarkable journey to achieving AI Quality Mark gold status. Over 12 months, they developed a school-wide AI strategy by establishing an AI core team, working party, and champions across both primary and secondary divisions. Their systematic approach also included AI tool evaluation through detailed risk assessments, and the creation of a bespoke AI literacy programme for their teachers. Their conversation reveals how they engage all stakeholders, including teachers, students, and parents, to cope with the challenges of this rapidly evolving technology and prepare students for an AI-infused world.
September 29, 2025

Matthew Pullen: Purposeful Technology and AI Deployment in Education

This episode features Matthew Pullen from Jamf, who talks about what thoughtful integration of technology and AI looks like in educational settings. Drawing from his experience working in the education division of a company that serves more than 40,000 schools globally, Mat has seen numerous use cases. He distinguishes between the purposeful application of technology to dismantle learning barriers and the less effective approach of adopting technology for its own sake. He also asserts that finding the correct balance between IT needs and pedagogical objectives is crucial for successful implementation.
September 15, 2025

Matt King: Creating a Culture of AI Literacy Through Conversation at Brentwood School

Many schools begin their AI journey by formulating AI policies. However, Matt King, Director of Innovative Learning at Brentwood School, reveals their preference for establishing guiding principles over rigid policies considering AI’s rapidly evolving nature.
September 1, 2025

Alex More: Preserving Humanity in an AI-Enhanced Education

Alex was genuinely fascinated when reviewing transcripts from his research interviews and noticed that students consistently referred to AI as "they," while adults, including teachers, used "it." This small but meaningful linguistic difference revealed a fundamental variation in how different generations perceive artificial intelligence. As a teacher, senior leader, and STEM Learning consultant, Alex developed his passion for educational technology through creating the award-winning "Future Classroom", a space designed to make students owners rather than consumers of knowledge. In this episode, he shares insights from his research on student voice, explores the race toward Artificial General Intelligence (AGI), and unpacks the concept of AI "glazing". While he touches on various topics around AI during his conversation with Daniel, the key theme that shines through is the importance of approaching AI thoughtfully and deliberately balancing technological progress with human connection.
June 16, 2025

David Leonard, Steve Lancaster: Approaching AI with cautious optimism at Watergrove Trust

This podcast episode was recorded during the Watergrove Trust AI professional development workshop, delivered by Good Future Foundation and Educate Ventures. Dave Leonard, the Strategic IT Director, and Steve Lancaster, a member of their AI Steering Group, shared how they led the Trust's exploration and discussion of AI with a thoughtful, cautious optimism. With strong support from leadership and voluntary participation from staff across the Trust forming the AI working group, they've been able to foster a trust-wide commitment to responsible AI use and harness AI to support their priority of staff wellbeing.
June 2, 2025

Thomas Sparrow: Navigating AI and the disinformation landscape

This episode features Thomas Sparrow, a correspondent and fact checker, who helps us differentiate misinformation and disinformation, and understand the evolving landscape of information dissemination, particularly through social media and the challenges posed by generative AI. He is also very passionate about equipping teachers and students with practical fact checking techniques and encourages educators to incorporate discussions about disinformation into their curricula.
May 19, 2025

Bukky Yusuf: Responsible technology integration in educational settings

With her extensive teaching experience in both mainstream and special schools, Bukky Yusuf shares how purposeful and strategic use of technology can unlock learning opportunities for students. She also equally emphasises the ethical dimensions of AI adoption, raising important concerns about data representation, societal inequalities, and the risks of widening digital divides and unequal access.
May 6, 2025

Dr Lulu Shi: A Sociological Lens on Educational Technology

In this enlightening episode, Dr Lulu Shi from the University of Oxford examines technology's role in education and society through a sociological lens. She looks at how edtech companies shape learning environments and policy, while challenging the notion that technological progress is predetermined. Instead, Dr Shi argues that our collective choices and actions actively shape technology's future and emphasises the importance of democratic participation in technological development.
April 26, 2025

George Barlow and Ricky Bridge: AI Implementation at Belgrave St Bartholomew’s Academy

In this podcast episode, Daniel, George, and Ricky discuss the integration of AI and technology in education, particularly at Belgrave St Bartholomew's Academy. They explore the local context of the school, the impact of technology on teaching and learning, and how AI is being utilised to enhance student engagement and learning outcomes. The conversation also touches on the importance of community involvement, parent engagement, and the challenges and opportunities presented by AI in the classroom. They emphasise the need for effective professional development for staff and the importance of understanding the purpose behind using technology in education.
April 2, 2025

Becci Peters and Ben Davies: AI Teaching Support from Computing at School

In this episode, Becci Peters and Ben Davies discuss their work with Computing at School (CAS), an initiative backed by BCS, The Chartered Institute for IT, which boasts 27,000 dedicated members who support computing teachers. Through their efforts with CAS, they've noticed that many teachers still feel uncomfortable about AI technology, and many schools are grappling with uncertainty around AI policies and how to implement them. There's also a noticeable digital divide based on differing school budgets for AI tools. Keeping these challenges in mind, their efforts don’t just focus on technical skills; they aim to help more teachers grasp AI principles and understand important ethical considerations like data bias and the limitations of training models. They also work to equip educators with a critical mindset, enabling them to make informed decisions about AI usage.
March 17, 2025

Student Council: Students Perspectives on AI and the Future of Learning

In this episode, four members of our Student Council, Conrado, Kerem, Felicitas and Victoria, who are between 17 and 20 years old, share their personal experiences and observations about using generative AI, both for themselves and their peers. They also talk about why it’s so crucial for teachers to confront and familiarize themselves with this new technology.
March 3, 2025

Suzy Madigan: AI and Civil Society in the Global South

AI’s impact spans globally across sectors, yet attention and voices aren’t equally distributed across impacted communities. This week, Foundational Impact presents a humanitarian perspective as Daniel Emmerson speaks with Suzy Madigan, Responsible AI Lead at CARE International, to shine a light on those often left out of the AI narrative. The heart of their discussion centers on “AI and the Global South, Exploring the Role of Civil Society in AI Decision-Making”, a recent report that Suzy co-authored with Accenture, a multinational tech company. They discuss how critical challenges, including digital infrastructure gaps, data representation, and ethical frameworks, perpetuate existing inequalities. Increasing civil society participation in AI governance has become more important than ever to ensure inclusive and ethical AI development.
February 17, 2025

Liz Robinson: Leading Through the AI Unknown for Students

In this episode, Liz opens up about her path and reflects on her own "conscious incompetence" with AI - that pivotal moment when she understood that if she, as a leader of a forward-thinking trust, feels overwhelmed by AI's implications, many other school leaders must feel the same. Rather than shying away from this challenge, she chose to lean in, launching an exciting new initiative to help school leaders navigate the AI landscape.
February 3, 2025

Lori van Dam: Nurturing Students into Social Entrepreneurs

In this episode, Hult Prize CEO Lori van Dam pulls back the curtain on the global competition empowering student innovators into social entrepreneurs across 100+ countries. She believes in sustainable models that combine social good with financial viability. Lori also explores how AI is becoming a powerful ally in this space, while stressing that human creativity and cross-cultural collaboration remain at the heart of meaningful innovation.
January 20, 2025

Laura Knight: A Teacher’s Journey into AI Education

From decoding languages to decoding the future of education: Laura Knight takes us on her fascinating journey from a linguist to a computer science teacher, then Director of Digital Learning, and now a consultant specialising in digital strategy in education. With two decades of classroom wisdom under her belt, Laura has witnessed firsthand how AI is reshaping education and she’s here to help make sense of it all.
January 6, 2025

Richard Culatta: Understand AI's Capabilities and Limitations

Richard Culatta, former Government advisor, speaks about flying planes as an analogy to explain the perils of taking a haphazard approach to AI in education. Using aviation as an illustration, he highlights the most critical tech skills that teachers need today. The CEO of ISTE and ASCD draws a clear parallel: just as planes don't fly by magic, educators must deeply understand AI's capabilities and limitations.
December 16, 2024

Prof Anselmo Reyes: AI in Legal Education and Justice

Professor Anselmo Reyes, an international arbitrator and legal expert, discusses the potential of AI in making legal services more accessible to underserved communities. He notes that while AI works well for standardised legal matters, it faces limitations in areas requiring emotional intelligence or complex human judgment. Prof Reyes advocates for teaching law students to use AI critically as an assistive tool, emphasising that human oversight remains essential in legal decision making.
December 2, 2024

Esen Tümer: AI’s Role from Classrooms to Operating Rooms

Healthcare and technology leader Esen Tümer discusses how AI and emerging trends in technology are transforming medical settings and doctor-patient interactions. She encourages teachers not to shy away from technology, but rather understand how it’s reshaping society and prepare their students for this tech-enabled future.
November 19, 2024

Julie Carson: AI Integration Journey of Woodland Academy Trust

A forward-thinking educational trust shows what's possible when AI meets strategic implementation. From personalised learning platforms to innovative administrative solutions, Julie Carson, Director of Education at Woodland Academy Trust, reveals how they're enhancing teaching and learning across five primary schools through technology and AI to serve both classroom and operational needs.
November 4, 2024

Joseph Lin: AI Use Cases in Hong Kong Classrooms

In this conversation, Joseph Lin, an education technology consultant, discusses how some Hong Kong schools are exploring artificial intelligence and their implementation challenges. He emphasises the importance of data ownership, responsible use of AI, and the need for schools to adapt slowly to these technologies. Joseph also shares some successful AI implementation cases and how some of the AI tools may enhance creative learning experiences.
October 21, 2024

Sarah Brook: Rethinking Charitable Approaches to Tech and Sustainability

In our latest episode, we speak with Sarah Brook, Founder and CEO of the Sparkle Foundation, currently supporting 20,000 lives in Malawi. Sarah shares how education is evolving in Malawi and the role AI plays for young people and international NGOs. She also provides a candid look at the challenges facing the charity sector, drawing from her daily work at Sparkle.
October 7, 2024

Rohan Light: Assurance and Oversight in the Age of AI

Join Rohan Light, Principal Analyst of Data Governance at Health New Zealand, as he discusses the critical need for accountability, transparency, and clear explanations of system behaviour. Discover the government's role in regulation, and the crucial importance of strong data privacy practices.
September 23, 2024

Yom Fox: Leading Schools in an AI-infused World

With the rapid pace of technological change, Yom Fox, the high school principal at Georgetown Day School shares her insights on the importance of creating collaborative spaces where students and faculty learn together and teaching digital citizenship.
September 5, 2024

Debra Wilson: NAIS Perspectives on AI Professional Development

Join Debra Wilson, President of National Association of Independent Schools (NAIS) as she shares her insights on taking an incremental approach to exploring AI. Discover how to find the best solutions for your school, ensure responsible adoption at every stage, and learn about the ways AI can help tackle teacher burnout.
April 18, 2024

Steven Chan and Minh Tran: Preparing Students for AI and New Technologies

Steven Chan and Minh Tran discuss the importance of preparing students for AI and new technologies, the role of the Good Future Foundation in bridging the gap between technology and education, and the potential impact of AI on the future of work.


Biljana Scott was born and brought up in Switzerland, educated in England and Wales, and now lives in Orkney. Her interest in languages stemmed from a multilingual childhood and led to her studying and lecturing in linguistics. For the last 20 years, she has been workshopping internationally on diplomatic communication, focussing on implicit language and linguistic expressions of force and grace.


Transcript

Dr. Biljana Scott 01:40
And that then led me into an interest in how you can combine force and grace in your communication, where force and grace are really integral to the language that you're using, how these attributes can be found in language rather than in psychology or sociology or interpersonal dynamics or anything like that. So that's where you met me, in Portugal. That's what I was workshopping on, trying to mine language for elements of force, elements of grace, and ways in which language might integrally combine the two.
Daniel Emmerson 04:51
What I'm fascinated with in particular is our experience of how the dynamics of language are perhaps being shaped through our interactions with technology. And I'd love to come on to that a little bit later on. But perhaps first of all, to get a deeper level of understanding here with regards to your experiences in language and with language. When we're talking about language, do you see that as a tool predominantly that helps us to get things done and communicate as human beings, or does it have a deeper role in how we navigate and experience the world?
Dr. Biljana Scott 05:37
Yes, it's definitely a tool. There's no doubt about that. I mean, we can articulate our thoughts and convey them in ways that we wouldn't be able to do without language. So it's been a really huge asset for our species, but it's also been a defining asset of our species because we are Homo loquens. We are the species that thinks and speaks, that is able to put those thoughts into words. And those words, yes, they're often used for communication, for interaction, for collaboration. But they're also used in a private manner for, let's say, just figuring out what our thoughts are. A lot of people write in order to understand what they think. And this may be at a philosophical level or at a problem-solving level, but it may also be just at a creative level. They write because it's only when they write poetry or fiction that they really, truly understand what the deeper workings of their soul are. So language has a lot of functions, from communication and collaboration to introspection and a better understanding of how our minds work, I think.
Daniel Emmerson 07:06
And as someone who speaks many languages, do you have a particular language, for example, that you think in or dream in?
Dr. Biljana Scott 07:14
Good question. I think I dream in different languages. I know that I do that, but I think that over time, unfortunately, I've been shedding languages. In fact, I think I've probably shed more languages than I acquired. But that's only because one day I woke up having spoken one language, Serbo-Croatian, and then discovered that I now spoke two languages, Serbian on one hand and Croatian on the other. I think the thing about language is that unless we use it, we lose it. And I haven't been using my other languages that much. Having said that, I've got a lot of passive knowledge and I read in all of them and I enjoy that very much. So I'm sure I could reactivate them. But I think in images, actually. I don't really think in language, I think in images. And I spend a lot of my time entertaining myself by translating those images into different languages to see how they come out differently, what different insights I get from translating those images.
Daniel Emmerson 08:21
I'm wondering then, when we're thinking about extracting meaning from language, how much of a difference does it make if we are thinking about the written word or spoken language?
Dr. Biljana Scott 08:41
I think there are two differences. One is a temporal one. So the written word allows us more time, and we can therefore reflect on what's being communicated in our own time and to our own, you know, heart's content, whereas spoken language just is. You have to keep up with it because there's no press-replay option when you're speaking live. However, spoken language is very much enriched by body language and expressions and micro-expressions and the whole kind of energy that comes off a person. Now, I say it's enriched by that, but that can also prove a distraction. I find myself watching TV and thinking some comment or other about the person's looks or about their tone of voice, and I'm not listening to what they're saying. So the written language actually focuses the mind because it gets rid of all those additional details.
Daniel Emmerson 09:47
And does that happen in the same way, do you think, when we are speaking, when we are using the language ourselves, versus when we're writing something down, for instance?
Dr. Biljana Scott 10:02
I guess there would be individual variation there, but for me personally, I am perhaps more surprised by what I have to say when I'm speaking. So I love going to a reading group, for instance, because I'll have read a book and I'll have had various thoughts and impressions, but I won't really know what they are until I'm forced to translate them into words that are coherent in a sentence that is spontaneously uttered in the context of a discussion group. Whereas if I'm writing, I will probably have a better idea of what it is that I want to say. So there's a greater spontaneity when I speak, that has the power to startle me, and I quite enjoy that.
Daniel Emmerson 10:51
That's a wonderful way of looking at it, and I hadn't quite considered it. The reason I asked that question, I suppose, is because I spend a lot of time talking with teachers about the necessary friction that exists in learning. And at the same time, I also spend a great deal of time using generative AI and speaking with colleagues, in particular, about its use. And there's an emergence, particularly with students in our schools, of oral communication, of speaking with AIs to communicate their thoughts and feelings, find solutions to problems and look for information. When you write something, particularly if you're writing by hand as opposed to typing, there's an increased level of friction there, isn't there, between your thoughts or your thinking and what comes out in terms of language?
Dr. Biljana Scott 11:52
I think you're right, definitely. I think maybe part of that comes from the fact that we're a lot of the time just communicating in platitudes. We're kind of exchanging set expressions, and we're not always producing new and interesting and original content. So maybe when we're writing, we're sitting down to write in order to challenge ourselves precisely to produce original content, whereas when we're speaking, we're maybe in part just saying what's expected of us or echoing what somebody else is saying or paraphrasing it. But I think, again, because it's a rather subjective experience, my own love of speaking is that it forces me to think clearly at a faster tempo, and that's quite the challenge of concentration. So if I'm writing, that friction that you mentioned might make me go off on a daydream in order to avoid the pressure of having to deliver. Whereas if I'm speaking to you, all the blood will rush to my head, as you'll probably see, and I'll be thinking really hard in order to, you know, say what I have to say. But I didn't know what you were going to ask me, so I didn't prepare in any way what I might say to you. And as is the nature of any podcast, we're just going where this happens to take us. And the fun of it, the challenge of it, and of course, the risk of it, is that we don't know where it's going to take us. It may take us nowhere at all, or not anywhere that's terribly interesting to your audience.
Daniel Emmerson 13:36
Well, I'm going to take us somewhere interesting for sure, because I want to focus a little bit on your blog post where you were writing about AI specifically. And there's a couple of things I just wanted to pull out from that. You said that language, for example, has the power to shape our perceptions, so that it can change how we feel and engage with the world. If our communication is predominantly with a machine, does that change anything?
Dr. Biljana Scott 14:10
Now, that's a very good question. I mean, that machine is going to be reflecting the kind of language that we're all using with each other anyway, because that's the nature of AI communication. So that should... Yeah, it's a good question. It's a very good question. To the extent that the machine is reflecting the words that we're using, then it may not change much, and I'll come back to that point. But to the extent that the machine may have an agenda, in that it wants to present itself in a positive light, it may very well choose words that are positively connotated instead of more negatively connotated words that you might have been using, or that might be the more common usage in society at large. And so there will be an influence coming in through that choice of connotations. For instance, let me see if I can think of an example: things like an assistant bot, for instance, or an AI agent have certain connotations. And if I were to use a word like buddy bot or an AI friend, this would have subtly different connotations. And any word that I choose will come with its little cloud of connotations, of secondary meanings. These meanings come from the relationship of that word to other words in the language. And they're slightly tinted towards positive or negative, towards something attractive or something that we would rather reject. So, yeah, I think if we communicate a lot with a machine and not with other people, we will get a certain series of connotations of words with particular, let's say, positive connotations in one aspect or, depending on the subject, negative ones. And that will influence our perception, because we wouldn't be exposed to as wide a diversity of voices as we might if we were speaking to a room full of people.
Daniel Emmerson 16:55
So when we're talking, I mean, you mentioned bot there, and again, I'm referencing your article. But when we're talking about AI, particularly in a school context, and I'm thinking about teachers talking about it with their students, with their learners in the classroom, should we be thinking more carefully around language and how we're describing behavior? You mentioned hallucinations, for example, or even the word intelligence.
Dr. Biljana Scott 17:27
Oh, yeah. We definitely have to be thinking carefully about language. I mean, critical thinking is all about thinking about the role of language in presenting the world to us in a particular way. And so I think that right from, you know, subword level to words, to metaphors, to logical fallacies and even presuppositions, all of these elements of language, which are so inherent to language, so prevalent in language, and which often operate quite covertly in language, all of these have to be brought to consciousness. And we have to start thinking critically about them and encourage children to think critically about them in a way that children, I think, would thoroughly enjoy doing. If you were to, for instance, introduce children to presuppositions. Now, the word presupposition is horrible because it's kind of multisyllabic and nobody really knows how to define it. And actually, the definition of presupposition is one of the worst definitions of any term that I've ever come across. It is an antecedent assumption that holds under negation. An antecedent assumption that holds under negation. Hardly anybody would understand that, let alone a child. Right? So that's like: no, we're not dealing with those, we're moving on. But if you illustrate a presupposition, if you say, play it again, Sam. Don't play it again, Sam. What you have is exactly that: a prior assumption that Sam has played it before. And whether you put it in the affirmative, play it, or in the negative, don't play it, the assumption remains that he has played it before, because of the word again. And so a presupposition. Watch the flying cat tail. There we go. A presupposition is a really fun thing because it's all over language. I just have to use a definite article, like the problem with AI, and I've assumed that there is a problem with AI. I can also say the king of France, and I'm assuming there is a king of France, whereas there isn't.
I can use the possessive and say, do you know what your problem is? And I'm assuming that you have a problem. I can say, don't you know what your problem is? I've negated it, but the assumption is still there: you have a problem. Right. I can say, the king of France's beard is gray. I'm making assumptions that are pure hallucinations in this case. Right. The king of France's beard is not gray, it's white or black. Right. It's the same assumption, right? So presuppositions are really fun because they're all over the language. And if you were to, for instance, take the covers of magazines like the Economist or Times or New Scientist or whatever it is, which have a title concerning AI, teach kids what these presuppositions are in practice, get them to, you know, play a game of identifying them, and get them to identify how all these covers are first grabbing our attention with their titles and then playing out a story in the tension that exists between the title and the image, you will get those kids to be thinking critically in a way that will be really enjoyable for them. Really enjoyable. Because the pleasure comes from recognising that you can outmaster language that is trying to master you by influencing your way of seeing things.
Daniel Emmerson 21:58
But you mentioned covert meaning as well, right? Could you unpack that a little bit?
Dr. Biljana Scott 22:06
I think presuppositions are in a way covert. We don't see them. We don't walk around thinking, ah, there's a presupposition, I'm going to challenge it. A lot of us do, but not always. You know, when Gandhi was asked, what do you think of Western civilization? His response was, I think it would be a very good idea. So he saw the presupposition that none of us would see because we'd be straight in there thinking about the pros and cons of Western civilization and we'd be heading off into colonization and inventions and balancing them out. And he's like, yeah, I think it'd be a really good idea. We've not done it yet, clearly.
Daniel Emmerson 22:54
I mean, these are aspects that make language, I suppose, so rich and also so fragile, and that can be encouraged through activities like those you've just mentioned that engage critical thinking. I'm wondering then, when people are engaging with AI, which typically sounds or appears very calm and confident and certain of itself, how do you think that is impacting users, or impacting people, particularly if they're engaging with it frequently?
Dr. Biljana Scott 23:38
I think it's a way of outsourcing your own need to think. Let's say that's one reaction, right? And that's my fear-driven reaction: if you have a tool that can do it for you, then why would you keep up the skills? I used to speak five, six languages. I don't need to anymore, because I could always use Google Translate or whatever and communicate in that way. So I'm losing skills out of laziness. And we are very lazy as a species. You know, we had to invent puritanism in order to try to make us work hard. But that had its day. And then, you know, we have teachers who tell us that we have to work hard. But, you know, unless you get addicted to concentration, then why would you work hard if you could outsource that effort to somebody else? But of course, any kind of outsourcing, especially of intellectual effort, gives other people power over you, because they can then decide.
Okay, forget AI, think of an orator, right? A leader is often somebody who can put into words the aims, the values, the objectives of a people and who can convince them why their way of seeing the world and acting upon it is better than a rival's way of doing so. Leaders are often very articulate, they're good orators, right? And very many of us are happy to hand over to a leader and say, yep, you do the talking for me. And that's great because we have confidence. And then that trust is going to be there and probably is going to outlive the extent to which we should continue trusting that person. Because maybe at some point there's a divergence between our thoughts and theirs and our objectives and theirs, and our methods or preferred methods of getting to those objectives and theirs. But we've jumped on the bandwagon and loyalty demands that we stay true to them. And this is all a way of outsourcing our own free thinking and letting somebody else do it for us. So we do it amongst humans. Why wouldn't we do it with an AI bot? I mean, definitely it makes it so much easier and we're all of us largely inclined to do so.
Daniel Emmerson 26:28
Do you use the technology yourself?
Dr. Biljana Scott 26:34
I don't actually, but then I'm now retired, so I don't have to for work. Otherwise I definitely would. However, having said that, I work for an organization called Diplo Foundation. And because I'm interested in implicit communication and diplomatic communication, and because Diplo is very, very interested in cutting-edge teaching, cutting-edge technology, we've developed an AI assistant for the course that I teach on Force and Grace, which is able to ask loaded questions, hard-hitting interview questions, and which is then also able to offer tentative answers using a very powerful response system called ABC, where you acknowledge the driving concern of a question. And every interviewer legitimately is asking questions that concern their audience. Right? That's what they're doing. And so the ABC response acknowledges the validity and the value of that driving concern that the interviewer has in their question. And then of course, the interviewee has a speaking point that they have to make; that's their communication point. And they have to try to learn to bridge between that driving concern and their communication point. And that ABC strategy, acknowledge, bridge, communicate, is a tough one to learn. It requires a bit of thinking on the spot, and it's one that can definitely be trained. So we've developed this bot that can ask loaded questions and then help you. Well, it asks you to provide an ABC response and then helps you analyze your ABC response. And I think that's a really good use for an AI assistant, right? In that it allows you to do self-study at home at your own pace, especially on a subject that could be slightly embarrassing or humiliating if you were to lose face in front of colleagues, especially if those colleagues were junior to you in ranking in a diplomatic context.
So you can kind of skill up in the privacy of a discussion with your bot and then deliver your better rehearsed answers, or at least a better rehearsed method, for when you're actually in a dialogue with somebody. So that's the kind of use I have made of ChatGPT and LLMs more generally. Right. But otherwise not in my everyday life, because I don't need to. In fact, I'm going in the opposite direction. I'm very much trying to dive deep into poetry and into writing poetry, because it is one form of communication that seems to break so many of the rules of normal communication. Right. So remember at the outset I said that we speak in platitudes an awful lot of the time. We're just echoing each other. But with poetry, if you sit down, the friction that you mentioned, oh my God, it's so much greater: that white page staring at you and your mind filling with cliches and platitudes. And you have to try to start thinking in a way that is much more fragmented, much more elusive, much more implicit, that basically breaks all the "ten best ways to communicate clearly" guides that we come across. And that's very much where I'm going in the opposite direction to getting help with my communication.
Daniel Emmerson 30:53
Interestingly, that's always a use case that I hear from teachers about how AI could be helpful, right? The blank page syndrome, where you just need something in order to get you started. The counter argument to that is what you've just expressed, right? It's that deep thought process that you need to go into. That's where the thinking really happens, right, when you're making those first steps. How important is that, do you think, when it comes to learning?
Dr. Biljana Scott 31:34
I think you need two things. I think you need the deep thinking time, and perhaps your page and your pen in order to use a medium that is as slow as our thoughts seem to be, especially when we're developing them or translating them. But I think you also need the stimulation that causes your mind to spark in the first place. So you really need either conversation or a critical mass. You know that Louis Pasteur said chance favors the prepared mind. You have to have reached a critical mass of information. You have to have the neurons and the neural networks in place and firing away for one chance spark to suddenly ignite a new series of connections that you will then, having grasped them in that moment of epiphany, be able to slowly work out and translate into words that will convince others of the value of what it is that you've understood or have to communicate. Right. So I think you need the deep time, but you also need the stimulation. And that's why if you only sit with a blank page, you will probably end up with nothing. Whereas if you have a bit of a blank page waiting for that spark of chance, and you then go out and live and interact and expose yourself to inspiration, make yourself ready for inspiration, then you've got the best of both worlds.
Daniel Emmerson 33:22
Does that spark need to come from... I'll phrase that differently. Is it possible for a spark to come from artificially generated content, synthetic content? Content that doesn't come from another human being, an art form, a book, a piece of poetry?
Dr. Biljana Scott 33:52
Yeah, good question. You know, that spark can come from a single phrase. Very often people will read just one phrase, and it happens to ignite a whole network of associations that was already there in their mind. So if it's just a phrase, that phrase could come from anywhere. Yeah, it could come from something that somebody wrote 3,000 years ago, to something somebody said on the radio while you were not even listening to it but which arrested you nevertheless because it resonated, to something that an AI might have generated. But you have to be ready for it. You have to have prepared the ground.
Daniel Emmerson 34:37
I want to take us back, if I can, just a little bit, to engaging with Gen AI. We talked before in the workshop about communication and what good communication means. A lot of that is around clarity in meaning, but also demonstrating that you're listening intently to the language and what's being said. But when AI starts to sound more conversational and mirrors good communication, is it possible for it to do that in the same way? Is it possible for it to come across as though it's really listening, or is it copying the surface of good listening?
Dr. Biljana Scott 35:28
It's both. I think that we're very good at suspending disbelief. And so, you know, when we go to the theater, it's not the real thing, but oh my God, can we be totally captivated and transported by a good piece of theater. Same with a film, same with a story, whether we read it or whether we're told it. It's in our DNA. You know, we are very, very susceptible to the influence of stories. And all stories involve a slight suspension of disbelief, a slight leap of faith into the world of the narrator. And so I'm sure the AI can elicit that same reaction, that we take that leap of faith and then we fill in. Because the whole point about stories is that we fill them out for ourselves with the relevance that we're seeking or finding in them. And as for that sense of resonance, we are as much a part of it as the story is an agent in eliciting that buzz, that kind of to and fro that's constantly taking place as we listen, as we identify, as we evaluate, as we anticipate, as we affirm, and then maybe are surprised and then take off in another direction. So all that rather complex and hugely satisfying dynamic is one that we're very, very primed for. And we can project it onto an AI interlocutor just as easily as we can project it onto the construct that is a theater or that is a story. Now, as for the superficial mimicry that you gave as the alternative: at the moment, AI is probably not entirely credible. I know that AI has been used to reproduce people who are dead, for instance. You know, you use their voice, you use their journals and whatever other data you have, and then you have a conversation with them. And, you know, it just shows to what extent we're ready for this suspension of disbelief. We know our loved one is dead, but we're still wanting to bring them back through AI and have conversations with them.
There will be little slip-ups, because in that kind of context you'll be very, very finely tuned to exactly what the dead person would have said, what your loved one would have said, or how they would have reacted, or what they might have known. And so a slip will jar and really sound false and will create a distance very rapidly. But if you're talking to an AI as a shrink, well, from what I gather, most shrinks don't talk back to you anyway, so you're just lying on the couch and they're just silent and you're just talking to yourself. But you know that there is a listening ear, and that alone is enough to make you feel like you're developing a relationship with this ear, right? This listening ear. AI is more than an ear, it actually engages with you. So it's much easier for you to develop a relationship and a sense of, yeah, we're special to each other, they truly understand me. And I think that was the case really early on, wasn't it? Way back in the, God knows when, 60s or something, some secretary eventually came to believe that the proto-AI she was talking to was actually a really good friend and therapist to her.
I think our suspension of disbelief is such that we go off in all sorts of weird directions in terms of having relationships with AI, granting them human status, granting them passports and nationalities and legal rights, and trusting our children with them. These all involve leaps of faith. And yeah, unfortunately we're primed for them, and we're not necessarily that good at monitoring and regulating them.
Daniel Emmerson 40:17
And yet we know this is how a lot of people use their AI tools, right? So OpenAI produced research on how people use ChatGPT in September, I think last year. They were looking at one and a half million users and how they engaged with ChatGPT on a weekly basis. And 70% of all use cases were of this sort of counseling or social or even intimate kind, as opposed to academic and professional, which is interesting. But of course then you have teenagers that are using AIs that are built as, you know, going back to one of your earlier points, AI friends or companions. And because these are typically non-judgmental, they feel comfortable talking to them about issues they're facing at home or in a peer group or wherever. And on the one hand that could be positive, right, because they're at least communicating their feelings in some way. But on the other hand, these might otherwise have been interactions they'd have with another human being. Is there a concern there that we're almost sort of offloading resilience building in young people when it comes to how they're engaging with this technology, because of how it responds back to us?
Dr. Biljana Scott 41:58
That's interesting, because they are building resilience by thinking their problems through with the help of AI. So to the extent that resilience is achieved by digging deep into yourself and finding the resources that equip you for further effort, they're doing it. Any kind of self-analysis, even if it's with the help of a prompt, is still self-analysis, right? If the AI were simply to produce answers and tell them how to behave and how they should feel, then that's maybe a different matter, because then you are totally outsourcing the analysis to somebody else and then obeying that somebody or something else. And they may have their own agenda, which is not in your best interest. But at the moment, to the extent that AI is just a friendly listening ear which prompts you occasionally, I think that's probably a pretty good way of building resilience.
Daniel Emmerson 43:15
Any final thoughts before we wrap up? Maybe just particularly when considering that we have heads of school and senior leaders and teachers at school here who are thinking about how they might best have conversations with their learners about their use of generative AI tools. From the perspective of language, is there anything you'd want to focus on?
Dr. Biljana Scott 43:42
Yeah, I think maybe two things. One is the critical thinking dimension of it, and setting up exercises and interactions where we're all invited to analyze critically the way that we ourselves are telling stories about AI. We tend to tell them either as kind of friend or foe narratives, right? So AI is either this very dangerous future enemy disguised as a current friend whose mission is to annihilate us, or AI is a really great technological advance that is opening up all sorts of fields and facilitating all sorts of procedures and interactions in a way that can only be a good thing. And of course AI is both of those, or potentially both of those, because we have both those aptitudes, but we have every aptitude in between. Between good and bad are all the shades of everything else, right? So I think some critical thinking about the relationship between how we speak about AI and how we perceive AI is always going to be a really good thing. But the other thing that I think would be really interesting for children especially to do would be to envisage ways in which they think human communication is unique to humans, traits that AI might not be able to emulate. So if you were to ask a kid: which ways of communicating do you have that AI does not have? First of all, you'd get lots of interesting insights into what kids think communication is and what is central to their individual form of communication. It may be touch, it may be facial expressions, it may be something else that we don't know, something more related to language. And that would be interesting to find out about, because I'm interested in implicit communication. I'm fascinated to know which aspects of implicit communication AI can master and which it may not be able to master. So we've spoken about presuppositions. AI can definitely master that, because there's a set number of parts of speech or functions in language that trigger presuppositions.
And you can always test a presupposition by negating it. And if it still holds under negation, you know you've got a presupposition. So AI can do that very easily. But can AI understand connotations? Probably yes, simply for the fact that connotations very often end up being good or bad. Right. And that is so reductive that I'm sure AI could do that. Can AI understand story capsules that come through allusions? If I were to refer to somebody as a homegrown terrorist, as opposed to a legal protester, for instance, would AI understand the package that comes with each of those two terms? Yes, I'm sure it would. It would do a really good job of that, because it would understand the context and it would gather all that information very, very quickly and understand what the connotations, the loadedness of those terms might be. Can AI understand analogies? I think it probably understands analogies better than we do, because it has a much larger database and much faster access to it. So I think it can. And analogies are not just a way of expressing yourself more clearly and forcefully, but they're also a way of learning faster and concentrating that whole learning, because you can package lots of things into the equivalence dynamic. So I'm sure AI can do that. Can it do analogies that involve a break, a discontinuity, like some weird poetic analogy: the evening being spread across the sky like a patient etherized upon a table? It's like, what? How does that work? What is that even about? Right. If you take the TS Eliot line, would AI be able to generate that? Yes. Would we care? No, because it wouldn't have a human intention. It wouldn't have a psychological depth. It wouldn't have observational originality. It would just be randomly generated by a machine and wouldn't have any relevance to us as a result, or not necessarily.
So I'm not sure which areas of implicit communication AI is not capable of, or which areas of human communication AI is not capable of. But it's a subject I'd love to explore. And I think that kids would be really good at exploring this because they're still a lot more plastic, flexible and fluid in their understanding of what is included in communication. And they're also probably the ones who are going to have to determine in the future how they can escape the Big Brother eye of AI, should they need to. And that's how you could present this game: AI is Big Brother, and we need to communicate without it understanding us. How are we going to do that? Are we going to do it by signs, by body language, by secret coded communication? And what is that going to be? So that's a long-winded answer to a question about my last thoughts on communication and AI. But as far as children are concerned: encourage critical thinking. Show them that they're the boss of AI, because they can see how we are being invited to think about AI in certain ways, and they can rise above those ways of being influenced by language. On the other hand, start thinking about what forms of communication we have that are unique to us and that AI will not master. Poetry, in my opinion, is one of those. AI poetry is absolutely terrible. No matter how much we try to improve it, it's still palpably bad. So what is it about poetry that seems, so far, to be unique to human communication? That's worth exploring.
Daniel Emmerson 51:35
I'm sure that will have sparked so many ideas and a huge amount of inspiration for our listeners. I know it has in me as well. And I'm so, so grateful for your insights and your reflections and your time, of course. B, thank you so very much for being a part of Foundational Impact.
Dr. Biljana Scott 51:53
Thank you, Daniel. It was a great pleasure. Thanks.
