Dr. Biljana Scott: Language as Our Defining Asset
Transcript
Daniel Emmerson 00:27
Welcome, everybody, once again to Foundational Impact, our podcast series where we're looking at AI from multiple perspectives to give teachers and school leaders an understanding of what and how their practice might be changing as a consequence of the evolution of this ever exciting technology. I am absolutely delighted to be joined today by Biljana Scott, who is, among other things, a phenomenal facilitator when it comes to workshops. I had the immense privilege of being with you, a few months ago now I suppose, in Portugal for one of your workshops. And I was absolutely blown away by the content and your delivery and your insights. And I'm so happy to be able to share some of this work with our listeners. I was wondering if we could maybe start with that, and if you could just give us a bit of a flavor and one or two insights around your background and the work that led you up to facilitating that session.
Dr. Biljana Scott 01:40
Sure. So my background is mixed culturally in that I was a so-called third culture kid. I grew up in Switzerland, but of my parents, one was Scottish, the other was former Yugoslav. And I grew up with three languages but was never told that this was unusual. So I was always a little bit surprised and taken aback that I was worse at language than other people. You know, I didn't spell as well in French when I went to French school. I didn't know as much vocab in English when I went to English school. And I was really a little bit stuck for words when I was speaking Serbo-Croatian, as it was then. So I think that childhood influence made me interested in language and made me feel I should study language and get to understand what everybody else clearly already knew. So I did various languages and then I did linguistics at university, did a doctorate on Chinese linguistics and taught linguistics, and after a while felt that this was all very pie in the sky, very abstract, very theoretical, and had really not very much impact on making the world a better place, which, maybe idealistically, naively, I was committed to. So I retrained and learned about diplomacy and then started lecturing on how we can use the resources of language in order to achieve our objectives. And not by bullying people or overruling them, but by getting them on board. So basically through the powers of persuasion and negotiation. And that interest in diplomatic communication then led me into a deeper dive into implicit communication and the various ways in which silence can be meaningful, especially silence that has been both created and constrained by language. So not just open-ended silences, but the little bits of meaningful silence that you get between words, between the lines, implications, presuppositions, etc.
And that then led me into an interest in how you can combine force and grace in your communication, where force and grace are really integral to the language that you're using, how these attributes can be found in language rather than in psychology or sociology or interpersonal dynamics or anything like that. So that's where you met me, in Portugal. That's what I was workshopping on, trying to mine language for elements of force, elements of grace, and ways in which language might integrally combine the two.
Daniel Emmerson 04:51
What I'm fascinated with in particular is our experience of how the dynamics of language are perhaps being shaped through our interactions with technology. And I'd love to come on to that a little bit later on. But perhaps first of all, to get a deeper level of understanding here with regards to your experiences in language and with language. When we're talking about language, do you see that as a tool predominantly that helps us to get things done and communicate as human beings, or does it have a deeper role in how we navigate and experience the world?
Dr. Biljana Scott 05:37
Yes, it's definitely a tool. There's no doubt about that. I mean, we can articulate our thoughts and convey them in ways that we wouldn't be able to do without language. So it's been a really huge asset for our species, but it's also been a defining asset of our species because we are Homo loquens. We are the species that thinks and speaks, that is able to put those thoughts into words. And those words, yes, they're often used for communication, for interaction, for collaboration. But they're also used in a private manner for, let's say, just figuring out what our thoughts are. A lot of people write in order to understand what they think. And this may be at a philosophical level or at a problem-solving level, but it may also be just at a creative level. They write because it's only when they write poetry or fiction that they really, truly understand what the deeper workings of their soul are. So language has a lot of functions, from communication and collaboration to introspection and a better understanding of how our minds work, I think.
Daniel Emmerson 07:06
And as someone who speaks many languages, do you have a particular language, for example, that you think in or dream in?
Dr. Biljana Scott 07:14
Good question. I think I dream in different languages. I know that I do that, but I think that over time, unfortunately, I've been shedding languages. In fact, I think I've probably shed more languages than I acquired. But that's only because one day I woke up having spoken one language, Serbo-Croatian, and then discovered that I now spoke two languages, Serbian on the one hand and Croatian on the other. I think the thing about language is that unless we use it, we lose it. And I haven't been using my other languages that much. Having said that, I've got a lot of passive knowledge and I read in all of them and I enjoy that very much. So I'm sure I could reactivate them. But actually, I think in images. I don't really think in language, I think in images. And I spend a lot of my time entertaining myself by translating those images into different languages to see how they come out differently, what different insights I get from translating those images.
Daniel Emmerson 08:21
I'm wondering then, when we're thinking about extracting meaning from language, how much of a difference does it make if we are thinking about the written word or spoken language?
Dr. Biljana Scott 08:41
I think there are two differences. One is a temporal one. So the written word allows us more time, and we can therefore reflect on what's being communicated in our own time and to our own, you know, heart's content, whereas spoken language just is. You have to keep up with it because there's no press-replay option when you're speaking live. However, spoken language is very much enriched by body language and expressions and micro expressions and the whole kind of energy that comes off a person. Now, I say it's enriched by that, but that can also prove a distraction. I find myself watching TV and thinking some comment or other about the person's looks, or about their tone of voice, and I'm not listening to what they're saying. So the written language actually focuses the mind because it gets rid of all those additional details.
Daniel Emmerson 09:47
And does that happen in the same way, do you think? If we are or when we are speaking, when we are using the language ourselves versus when we're writing something down, for instance?
Dr. Biljana Scott 10:02
I guess there would be individual variation there, but for me personally, I am perhaps more surprised by what I have to say when I'm speaking. So I love going to a reading group, for instance, because I'll have read a book and I'll have had various thoughts and impressions, but I won't really know what they are until I'm forced to translate them into words that are coherent in a sentence that is spontaneously uttered in the context of a discussion group. Whereas if I'm writing, I will probably have a better idea of what it is that I want to say. So there's a greater spontaneity when I speak, that has the power to startle me, and I quite enjoy that.
Daniel Emmerson 10:51
That's a wonderful way of looking at it, and I hadn't quite considered it. The reason I asked that question, I suppose, is because I spend a lot of time talking with teachers about the necessary friction that exists in learning. And at the same time, I also spend a great deal of time using generative AI and speaking with colleagues, in particular, about its use. And there's an emergence, particularly with students in our schools, of oral communication, of speaking with the AIs to communicate their thoughts and feelings and find solutions to problems and look for information. When you write something, particularly if you're writing by hand, as opposed to typing, there's an increased level of friction there, isn't there, between your thoughts or your thinking and what comes out in terms of language?
Dr. Biljana Scott 11:52
I think you're right, definitely. I think maybe part of that comes from the fact that a lot of the time we're just communicating in platitudes. We're kind of exchanging set expressions, and we're not always producing new and interesting and original content. So maybe when we're writing, we're sitting down to write in order to challenge ourselves precisely to produce original content, whereas when we're speaking, we're maybe in part just saying what's expected of us or echoing what somebody else is saying or paraphrasing it. But I think, again, because it's a rather subjective experience, my own love of speaking is that it forces me to think clearly at a faster tempo, and that's quite the challenge of concentration. So if I'm writing, that friction that you mentioned might make me go off on a daydream in order to avoid the pressure of having to deliver. Whereas if I'm speaking to you, all the blood will rush to my head, as you'll probably see, and I'll be thinking really hard in order to, you know, say what I have to say. But I didn't know what you were going to ask me, so I didn't prepare in any way what I might say to you. And as is the nature of any podcast, we're just going where this happens to take us. And the fun of it, the challenge of it, and of course, the risk of it, is that we don't know where it's going to take us. It may take us nowhere at all, or not anywhere that's terribly interesting to your audience.
Daniel Emmerson 13:36
Well, I'm going to take us somewhere interesting for sure, because I want to focus a little bit on your blog post where you were writing about AI specifically. And there's a couple of things I just wanted to pull out from that. You said that language, for example, has the power to shape our perceptions so that it can change how we feel and engage with the world. Does that mean if our communication is predominantly with a machine, does that change anything?
Dr. Biljana Scott 14:10
Now, that's a very good question. I mean, that machine is going to be reflecting the kind of language that we're all using with each other anyway, because that's the nature of AI communication. To the extent that the machine is reflecting the words that we're using, then it may not change much, and I'll come back to that point. But to the extent that the machine may have an agenda, in that it wants to present itself in a positive light, it may very well choose words that are positively connotated instead of more negatively connotated words that you might have been using, or that might be the more common usage in society at large. And so there will be an influence coming in through that choice of connotations. For instance, if I think of an example, things like an assistant bot or an AI agent have certain connotations. And if I were to use a word like buddy bot or AI friend, this would have subtly different connotations. And any word that I choose will come with its little cloud of connotations, of secondary meanings. These meanings come from the relationship of that word to other words in the language. And they're slightly tinted towards positive or negative, towards something attractive or something that we would rather reject. So, yeah, I think if we communicate a lot with a machine and not with other people, we will get a certain series of connotations of words with particular, let's say, positive connotations in one aspect or, depending on the subject, negative ones. And that will influence our perception because we wouldn't be exposed to as wide a diversity of voices as we might if we were speaking to a room full of people.
Daniel Emmerson 16:55
So when we're talking, I mean, you mentioned bot there, and again, I'm referencing your article. But when we're talking about AI, does that, particularly if we're talking about it in a school context, I'm thinking about teachers talking about it with their students, with their learners in the classroom. Should we be thinking more carefully around language and how we're describing behavior? You mentioned hallucinations, for example, or even the word intelligence.
Dr. Biljana Scott 17:27
Oh, yeah. We definitely have to be thinking carefully about language. I mean, critical thinking is all about thinking about the role of language in presenting the world to us in a particular way. And so I think that right from, you know, subword level to words, to metaphors, to logical fallacies and even presuppositions, all of these elements of language, which are so inherent to language, so prevalent in language, and which often operate quite covertly in language, all of these have to be brought to consciousness. And we have to start thinking critically about them and encourage children to think critically about them in a way that children, I think, would thoroughly enjoy doing. If you were to, for instance, introduce children to presuppositions. Now, the word presupposition is horrible because it's kind of multisyllabic and nobody really knows how to define it. And actually, the definition of presupposition is one of the worst definitions of any term that I've ever come across. It is an antecedent assumption that holds under negation. An antecedent assumption that holds under negation. Nobody would particularly understand that, let alone a child. Right? So it's like, no, we're not dealing with those, we're moving on. But if you illustrate a presupposition, if you say, play it again, Sam; don't play it again, Sam. What you have is exactly that: a prior assumption that Sam has played it before. And whether you put it in the affirmative, play it, or in the negative, don't play it, the assumption remains that he has played it before, because of the word again. And so a presupposition. Watch the flying cat tail. There we go. A presupposition is a really fun thing because it's all over language. I just have to use a definite article, like the problem with AI, and I've assumed that there is a problem with AI. I can also say the king of France, and I'm assuming there is a king of France, whereas there isn't.
I can use the possessive and say, do you know what your problem is? And I'm assuming that you have a problem. I can say, don't you know what your problem is? I've negated it, but the assumption is still there: you have a problem. Right. I can say, the king of France's beard is gray. I'm making assumptions that are pure hallucinations in this case. Right. The king of France's beard is not gray, it's white or black. It's the same assumption, right? So presuppositions are really fun because they're all over the language. And if you were to, for instance, take the covers of magazines like the Economist or Times or New Scientist or whatever it is, which have a title concerning AI, teach kids what these presuppositions are in practice, get them to, you know, play a game of identifying them, and get them to identify how all these covers are first grabbing our attention with their titles and then playing out a story in the tension that exists between the title and the image, you will get those kids to be thinking critically in a way that will be really enjoyable for them. Really enjoyable. Because the pleasure comes from recognising that you can outmaster language that is trying to master you by influencing your way of seeing things.
Daniel Emmerson 21:58
But you mentioned covert meaning as well, right? Could you unpack that a little bit?
Dr. Biljana Scott 22:06
I think presuppositions are in a way covert. We don't see them. We don't walk around thinking, ah, there's a presupposition, I'm going to challenge it. A lot of us do, but not always. You know, when Gandhi was asked, what do you think of Western civilization? His response was, I think it would be a very good idea. So he saw the presupposition that none of us would see because we'd be straight in there thinking about the pros and cons of Western civilization and we'd be heading off into colonization and inventions and balancing them out. And he's like, yeah, I think it'd be a really good idea. We've not done it yet, clearly.
Daniel Emmerson 22:54
I mean, these are aspects that make language, I suppose, so rich and also so fragile, and they're encouraged through activities like those you've just mentioned that engage critical thinking. I'm wondering then, when people are engaging with AI, which typically sounds or appears very calm and confident and certain of itself, how do you think that that is impacting users, or impacting people, particularly if they're engaging with it frequently?
Dr. Biljana Scott 23:38
I think it's a way of outsourcing your own need to think. Let's say that's one reaction, right? And that's my fear-driven reaction: that if you have a tool that will do it for you, then why would you keep up the skills? I used to speak five, six languages. I don't need to anymore because I could always use Google Translate or whatever and communicate in that way. So I'm losing skills out of laziness. And we are very lazy as a species. You know, we had to invent puritanism in order to try to make us work hard. But that had its day. And then, you know, we have teachers who tell us that we have to work hard. But unless you get addicted to concentration, why would you work hard if you could outsource that effort to somebody else? But of course, any kind of outsourcing, especially of intellectual effort, gives other people power over you, because they can then decide.
Okay, forget AI, think of an orator, right? A leader is often somebody who can put into words the aims, the values, the objectives of a people and who can convince them why their way of seeing the world and acting upon it is better than a rival's way of doing so. Leaders are often very articulate, they're good orators, right? And very many of us are happy to hand over to a leader and say, yep, you do the talking for me. And that's great because we have confidence. And then that trust is going to be there and probably is going to outlive the extent to which we should continue trusting that person. Because maybe at some point there's a divergence between our thoughts and theirs and our objectives and theirs, and our methods or preferred methods of getting to those objectives and theirs. But we've jumped on the bandwagon and loyalty demands that we stay true to them. And this is all a way of outsourcing our own free thinking and letting somebody else do it for us. So we do it amongst humans. Why wouldn't we do it with an AI bot? I mean, definitely it makes it so much easier and we're all of us largely inclined to do so.
Daniel Emmerson 26:28
Do you use the technology yourself?
Dr. Biljana Scott 26:34
I don't actually, but then I'm now retired, so I don't have to for work. Otherwise I definitely would. However, having said that, I work for an organization called Diplo Foundation. And because I'm interested in implicit communication and diplomatic communication, and because Diplo is very, very interested in cutting-edge teaching and cutting-edge technology, we've developed an AI assistant for the course that I teach on Force and Grace, which is able to ask loaded questions, hard-hitting interview questions, and which is then also able to offer tentative answers using a very powerful response system called ABC, where you acknowledge the driving concern of a question. Every interviewer is legitimately asking questions that concern their audience, right? That's what you're doing. And so the ABC response acknowledges the validity and the value of that driving concern that the interviewer has in their question. And then of course, the interviewee has a speaking point that they have to deliver; that's their communication point. And they have to try to learn to bridge between that driving concern and their communication point. And that ABC strategy, acknowledge, bridge, communicate, right, is a tough one to learn. It requires a bit of thinking on the spot, and it's one that can definitely be trained. So we've developed this bot that can ask loaded questions and then, well, asks you to provide an ABC response and then helps you analyze your ABC response. And I think that's a really good use for an AI assistant, right? In that it allows you to do self-study at home at your own pace, especially on a subject that could be slightly embarrassing or humiliating if you were to lose face in front of colleagues, especially if those colleagues were junior to you in ranking in a diplomatic context.
So you can kind of skill up in the privacy of a discussion with your bot and then deliver your better-rehearsed answers, or at least the better-rehearsed method, when you're actually in a dialogue with somebody. So that's the kind of use I have made of ChatGPT and LLMs more generally. But otherwise not in my everyday life, because I don't need to. In fact, I'm going in the opposite direction. I am very much trying to dive deep into poetry and into writing poetry, because it is one form of communication that seems to break so many of the rules of normal communication. Right. So remember at the outset I said that we speak in platitudes an awful lot of the time. We're just echoing each other. But with poetry, if you sit down, the friction that you mentioned, oh my God, it's so much greater: that white page staring at you and your mind filling with cliches and platitudes. And you have to try to start thinking in a way that is much more fragmented, much more elusive, much more implicit, that basically breaks down all the ten-best-ways-to-communicate-clearly guides that we come across. And that's very much where I'm going: in the opposite direction to getting help with my communication.
Daniel Emmerson 30:53
Interestingly, that's always a use case that I hear from teachers about how AI could be helpful, right? The blank page syndrome, where you just need something in order to get you started. The counter argument to that is what you've just expressed, right? It's that deep thought process that you need to go into. That's where the thinking really happens, right, when you're making those first steps. How important is that, do you think, when it comes to learning?
Dr. Biljana Scott 31:34
I think you need two things. I think you need the deep thinking time, and perhaps your page and your pen, in order to use a medium that is as slow as our thoughts seem to be, especially when we're developing them or translating them. But I think you also need the stimulation that causes your mind to spark in the first place. So you really need either conversation or a critical mass. You know that Louis Pasteur said chance favors the prepared mind. You have to have reached a critical mass of information. You have to have the neurons and the neural networks in place and firing away for one chance spark to suddenly ignite a new series of connections that you will then, having grasped them in that moment of epiphany, be able to slowly work out and translate into words that will convince others of the value of what it is that you've understood or have to communicate. Right. So I think you need the deep time, but you also need the stimulation. And that's why, if you only sit with a blank page, you will probably end up with nothing. Whereas if you have a bit of a blank page waiting for that spark of chance, and you then go out and live and interact and expose yourself to inspiration, make yourself ready for inspiration, then you've got the best of both worlds.
Daniel Emmerson 33:22
Does that spark need to come from... I'll phrase that differently. Is it possible for that spark to come from artificially generated content, synthetic content, content that doesn't come from another human being, an art form, a book, a piece of poetry?
Dr. Biljana Scott 33:52
Yeah, good question. You know, that spark can come from a single phrase. Very often people will read just one phrase, and it happens to ignite a whole network of associations that was already there in their mind. So if it's just a phrase, that phrase could come from anywhere. It could come from something that somebody wrote 3,000 years ago, to something somebody said on the radio while you were not even listening but which arrested you nevertheless because it resonated, to something that an AI might have generated. But you have to be ready for it. You have to have prepared the ground.
Daniel Emmerson 34:37
I want to take us back, if I can, just a little bit, to engaging with generative AI. We talked before, in the workshop, about communication and what good communication means. A lot of that is around clarity of meaning, but also demonstrating that you're listening intently to the language and what's being said. But when AI starts to sound more conversational and mirrors good communication, is it possible for it to do that in the same way? Is it possible for it to come across as though it's really listening, or is it copying the surface of good listening?
Dr. Biljana Scott 35:28
It's both. I think that we're very good at suspending disbelief. And so, you know, when we go to the theater, it's not the real thing, but oh my God, can we be totally captivated and transported by a good piece of theater. Same with a film, same with a story, whether we read it or whether we're told it. It's in our DNA. You know, we are very, very susceptible to the influence of stories. And all stories involve a slight suspension of disbelief, a slight leap of faith into the world of the narrator. And so I'm sure the AI can elicit that same reaction: that we take that leap of faith and then we fill in. Because the whole point about stories is that we fill them out for ourselves with the relevance that we're seeking or finding in them. And that sense of resonance, we are as much a part of that as the story is an agent in eliciting that buzz, that kind of to and fro that's constantly taking place as we listen, as we identify, as we evaluate, as we anticipate, as we affirm, and then maybe are surprised and then take off in another direction. So all that rather complex and hugely satisfying dynamic is one that we're very, very primed for. And we can project it onto an AI interlocutor just as easily as we can project it onto the construct that is a theater or that is a story. Now, as for the superficial mimicry that you gave as the alternative: at the moment, AI is probably not entirely credible. I know that AI has been used to reproduce people who are dead, for instance. You know, you use their voice, you use their journals and whatever other data you have, and then you have a conversation with them. And, you know, it just shows to what extent we're ready for this suspension of disbelief. We know our loved one is dead, but we're still wanting to bring them back through AI and have conversations with them.
There will be little slip-ups, because in that kind of context you'll be very, very finely tuned to exactly what the dead person would have said, what your loved one would have said, or how they would have reacted, or what they might have known. And so a slip will jar and really sound false and will create a distance very rapidly. But if you're talking to an AI as a shrink, well, from what I gather, most shrinks don't talk back to you anyway, so you're just lying on the couch and they're just silent and you're just talking to yourself. But you know that there is a listening ear, and that alone is enough to make you feel like you're developing a relationship with this ear, right? This listening ear. AI is more than an ear; it actually engages with you. So it's much easier for you to develop a relationship and a sense of, yeah, we're special to each other, they truly understand me. And I think that was the case really early on. Wasn't it way back in the, God knows when, the 60s or something, that some secretary eventually came to believe that the proto-AI she was talking to was actually a really good friend and therapist to her.
I think our suspension of disbelief is such that we go off in all sorts of weird directions in terms of having relationships with AI, granting them human status, granting them passports and nationalities and legal rights, and trusting our children with them. These all involve leaps of faith. And yeah, unfortunately we're primed for them, and we're not necessarily that good at monitoring and regulating them.
Daniel Emmerson 40:17
And yet we know this is how a lot of people use their AI tools, right? So OpenAI produced research on how people use ChatGPT in September, I think, last year. They were looking at one and a half million users and how they engaged with ChatGPT on a weekly basis. And 70% of all use cases were of this sort of counseling or social or even intimate kind, examples of how people were using the tool as opposed to academic and professional uses, which is interesting. But of course then you have teenagers that are using AIs that are built as, you know, going back to one of your earlier points, AI friends or companions. And because these are typically non-judgmental, they feel comfortable talking to them about issues they're facing at home or in a peer group or wherever. And on the one hand that could be positive, right, because they're at least communicating their feelings in some way. But on the other hand, these might otherwise have been interactions they'd have with another human being. Is there a concern there that we're almost sort of offloading resilience building in young people when it comes to how they're engaging with this technology, because of how it responds back to us?
Dr. Biljana Scott 41:58
That's interesting, because they are building resilience by thinking their problems through with the help of AI. So to the extent that resilience is achieved by digging deep into yourself and finding the resources that equip you for further effort, they're doing it. Any kind of self-analysis, even if it's with the help of a prompt, is still self-analysis, right? If the AI were simply to produce answers and tell them how to behave and how they should feel, then that's maybe a different matter, because then you are totally outsourcing the analysis to somebody else and then obeying that somebody or something else. And they may have their own agenda, which is not in your best interest. But at the moment, to the extent that AI is just a friendly listening ear which prompts you occasionally, I think that's probably a pretty good way of building resilience.
Daniel Emmerson 43:15
Any final thoughts before we wrap up? Particularly considering that we have heads of school and senior leaders and teachers listening here who are thinking about how they might best have conversations with their learners about their use of generative AI tools. From the perspective of language, is there anything you'd want to focus on?
Dr. Biljana Scott 43:42
Yeah, I think maybe two things. One is the critical thinking dimension of it: setting up exercises and interactions where we're all invited to analyze critically the way that we ourselves tell stories about AI. We tend to tell them as either friend or foe narratives, right? So AI is either this very dangerous future enemy disguised as a current friend whose mission is to annihilate us, or AI is a really great technological advance that is opening up all sorts of fields and facilitating all sorts of procedures and interactions in a way that can only be a good thing. And of course AI is both of those, or potentially both of those, because we have both those aptitudes, and we have every aptitude in between. Between good and bad are all the shades of everything else, right? So I think some critical thinking about the relationship between how we speak about AI and how we perceive AI is always going to be a really good thing. The other thing that I think would be really interesting for children especially would be to envisage the ways in which they think human communication is unique to humans, and which of those traits AI might not be able to emulate. If you were to ask a kid, "which ways of communicating do you have that AI does not have?", first of all you'd get lots of interesting insights into what kids think communication is and what is central to their individual form of communication. It may be touch, it may be facial expressions, it may be something else that we don't know, something more related to language. And that would be interesting to find out about, because I'm interested in implicit communication. I'm fascinated to know which aspects of implicit communication AI can master and which it may not be able to master. So we've spoken about presuppositions. AI can definitely master those, because there's a set number of parts of speech or functions in language that trigger presuppositions.
And you can always test a presupposition by negating it: if it still holds under negation, you know you've got a presupposition. So AI can do that very easily. But can AI understand connotations? Probably yes, simply because connotations very often end up being good or bad, and that is so reductive that I'm sure AI could do it. Can AI understand the story capsules that come through allusions? If I were to refer to somebody as a homegrown terrorist, as opposed to a legal protester, for instance, would AI understand the package that comes with each of those two terms? Yes, I'm sure it would. It would do a really good job of that, because it would understand the context, gather all that information very quickly, and understand the connotations, the loadedness, of those terms. Can AI understand analogies? I think it probably understands analogies better than we do, because it has a much larger database that it has much faster access to. So I think it can. And analogies are not just a way of expressing yourself more clearly and forcefully; they're also a way of learning faster and concentrating that learning, because you can package lots of things into the equivalence dynamic. So I'm sure AI can do that. Can it do analogies that involve a break, a discontinuity, some weird poetic analogy like the evening being spread across the sky like a patient etherized upon a table? It's like, what? How does that work? What is that even about? If you take that T. S. Eliot line, would AI be able to generate it? Yes. Would we care? No, because it wouldn't have a human intention. It wouldn't have psychological depth. It wouldn't have observational originality. It would just be randomly generated by a machine, and wouldn't have any relevance to us as a result, or not necessarily.
So I'm not sure which areas of implicit communication, or of human communication more broadly, AI is not capable of. But it's a subject I'd love to explore. And I think that kids would be really good at exploring it, because they're still a lot more plastic, flexible and fluid in their understanding of what is included in communication. And they're also probably the ones who are going to have to determine in the future how they can escape the Big Brother eye of AI, should they need to. That's how you could present this game: AI is Big Brother, and we need to communicate without it understanding us. How are we going to do that? Are we going to do it by signs, by body language, by secret coded communication? And what is that going to be? So that's a long-winded answer to a question about my last thoughts on communication and AI. But as far as children are concerned: encourage critical thinking. Show them that they're the boss of AI, because they can see how we are being invited to think about AI in certain ways, and they can rise above those ways of being influenced by language. And on the other hand, start thinking about what forms of communication are unique to us, ones that AI will not master. Poetry, in my opinion, is one of those. AI poetry is absolutely terrible. No matter how much we try to improve it, it's still palpably bad. So what is it about poetry that seems, so far, to be unique to human communication? That's worth exploring.
Daniel Emmerson 51:35
I'm sure that will have sparked so many ideas and a huge amount of inspiration for our listeners. I know it has in me as well. And I'm so grateful for your insights and your reflections and your time, of course. Biljana, thank you so very much for being a part of Foundational Impact.
Dr. Biljana Scott 51:53
Thank you, Daniel. It was a great pleasure. Thanks.
