In this episode of Technology & Security, Dr Miah Hammond-Errey is joined by Professor Lyria Bennett Moses, one of Australia's foremost experts in technology and law. We explore how government responses to AI often focus on regulating the technology rather than addressing the human and social challenges these systems create. We discuss how to centre humanity in legal responses to technology. We examine regulatory approaches, anti-discrimination laws and governance structures to better address the realities of AI-driven decision-making. As AI is increasingly embedded in daily life, much like past technological shifts, its influence may become invisible, but its impact on knowledge, democracy and security will be significant.
Future leaders must develop systems thinking, recognising the deep interconnections between technology, law, politics and security. Education must move beyond data literacy to equip students with an understanding of how different systems function and their limitations. AI is reshaping how we access information, formulate ideas and tell stories, and it is shifting power in ways we are only beginning to grasp. In this episode, we explore the evolving role of search and AI-generated knowledge and the geopolitical tensions shaping the future of technology. This thought-provoking conversation will change the way you think about AI, law, knowledge creation and the future of regulation.
Professor Lyria Bennett Moses is the head of the School of Law, Society and Criminology and a professor at the University of New South Wales. She was previously the director of the Allens Hub for Technology, Law and Innovation and has held many academic leadership and research roles related to law, data, cybersecurity and AI. She has worked on AI standards with Standards Australia and the Institute of Electrical and Electronics Engineers and has published extensively on technology and law. Lyria is a member of numerous editorial boards. She is a fellow of the Australian Academy of Law, the Royal Society of New South Wales and the Academy of the Social Sciences in Australia.
Resources mentioned in the recording:
+ The Rest Is History podcast, www.therestishistory.com
+ The Machine Stops, E.M. Forster
This podcast was recorded on the lands of the Gadigal people, and we pay our respects to their Elders past, present and emerging. We acknowledge their continuing connection to land, sea and community, and extend that respect to all Aboriginal and Torres Strait Islander people.
Music by Dr Paul Mac and production by Elliott Brennan.
Transcript, check against delivery:
Dr Miah Hammond-Errey: My guest today is Professor Lyria Bennett Moses. Lyria is one of Australia's most respected and well-recognised technology and law thinkers. She's the head of the School of Law, Society and Criminology and a professor at the University of New South Wales. She was previously the director of the Allens Hub for Technology, Law and Innovation and has held many academic leadership and research roles related to law, data, cybersecurity and AI. She's worked on AI standards with Standards Australia and the Institute of Electrical and Electronics Engineers and has published extensively on technology and law. Lyria is also a member of numerous editorial boards. She is a fellow of the Australian Academy of Law, the Royal Society of New South Wales and the Academy of the Social Sciences in Australia. It's a real pleasure to have you join me on the podcast, Lyria.
Prof Lyria Bennett-Moses: Thank you.
Dr Miah Hammond-Errey: We're coming to you today from the lands of the Gadigal people. We pay our respects to elders past, present and emerging and acknowledge their continuing connection to land, sea and community.
Dr Miah Hammond-Errey: So, Lyria, in your 2022 TEDx talk, you argued that government responses need to centre human needs and human rights in the systems they build, rather than focusing only on technology and regulation. So in 2025, how do you think we fare in this ambition?
Prof Lyria Bennett-Moses: Not terribly well, to be honest. We're still very much seeing an approach where governments try to understand a technology, try to define that technology, and then think about what the law needs to be for the thing they've just defined. You can see that in Europe's approach to the AI Act, but also in what's being discussed in Australia at the moment, this idea that high-risk artificial intelligence is going to need certain kinds of controls around it. And my point is not that we don't need to think about the legal rules within which this technology operates. My point is very much that if we start with the technology, rather than starting with the problems we're trying to solve, we end up with a very skewed vision of what the problem is and hence what the legal solution to that problem might be.
Dr Miah Hammond-Errey: How can we then better centre humanity in our technologies and systems and government responses?
Prof Lyria Bennett-Moses: Let me maybe give an example, because I think that makes things a little bit more concrete. One of the concerns that people have about artificial intelligence is the problem of bias. In other words, we know that machine learning, for example, which learns from data collected from some place and then tries to use those learnings, those patterns, to make, say, future decisions that might affect individual people, often does so in a way that is biased. Now, why does that happen? Why is there bias? Because whenever you take a data set generated in a world where there is bias, at least some of the patterns in that data are going to reflect the biases that already exist in terms of how people treat one another, what people assume about one another, and indeed how people might behave given their different histories and so forth. So the learnings built from that duplicate those biases, and in many cases, given the kinds of machine learning algorithms and models being used, actually enhance them. So it actually makes the bias...
Dr Miah Hammond-Errey: Scale the bias.
Prof Lyria Bennett-Moses: So this is a problem. Now, one approach says, okay, this is a problem with artificial intelligence or machine learning or some such technology label, and what we need to do is say that when those kinds of systems are built, some kind of thing needs to happen. That's certainly the approach in the European AI Act. You can see there that discrimination is referred to throughout the legislation as a justification for it, and also in the context of many, not all, but many of its provisions. So one of the issues that legislation is trying to solve is this kind of bias and discrimination problem. However, let's start with people and problems. We know that discrimination and bias is a problem. We also, when we're being honest with ourselves, know that it is not a problem that was invented with machine learning and AI systems. We know that it is a problem that has been around for a long time, and in fact, part of the reason why the machines are learning what they're learning is because it's been a problem for a long time. So instead, let's think about what we need to do to solve this kind of problem. Now we have laws. We have discrimination laws, and there are many of them.
Prof Lyria Bennett-Moses: And in Australia it's complicated because of the federal system. Some are state, some are federal, some are about sex discrimination, some are more universal, and so forth. But many of those laws are written in such a way that they don't deal effectively with the kind of discrimination and bias that occurs in machine learning systems, and the reason for that is just some of the language used. So, for example, a decision is made 'by reason of' sex, race or what have you. Now, is a machine learning system, from which that variable might have been removed for strategic reasons, making a decision 'by reason of' sex, race and so on, if what the system is ultimately doing might have biased outcomes along that dimension, but that isn't actually the basis for the system design or deliberate in any way? That's an open question. The legislation clearly wasn't written with that in mind. It was written with the idea of the sexist, racist, what-have-you boss who is making decisions about who they hire and how much they pay people based on their wrong assumptions. They are making decisions by reason of sex, race and so on, but the system? Not necessarily. So the language of that legislation is wrong. So what do we do about it? My view, and this is where maybe I differ from some of the other policy thinkers in this space, is that rather than saying, oh, because the problem is with the technology, let's regulate the technology.
Prof Lyria Bennett-Moses: I want to go back to discrimination law. Ask what we're trying to do there. What is its ultimate function? What are we trying to achieve? Are there circumstances in which it's not achieving it? I've given an example of one, and I could go through the provisions more closely and explain that, but broadly speaking, yes, these laws are not applying effectively in this new context. And then ask the far more open-ended question of what we want to do about that. We might say, and I don't think in this case we should, that regulating AI is the answer. But I don't think so, because once we recognise that this is merely one context in which the way the laws are drafted isn't working, once we recognise that we can't neatly divide decisions into human and system, that more and more they're hybrid, then we also need to recognise that we shouldn't have laws over here for human decisions and laws over there for system decisions, but actually an integrated approach that achieves our objectives, which in this case is ultimately fairness.
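To make the proxy point above concrete, here is a minimal sketch in Python. It is illustrative only, not code from the episode; the data is synthetic and the variable names (group, postcode, skill) are invented. It shows how a model trained with the protected attribute removed can still produce biased outcomes via a correlated feature:

```python
# Minimal sketch: a model trained WITHOUT the protected attribute can still
# reproduce historical bias through a correlated proxy. Synthetic data only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)               # protected attribute (e.g. sex)
postcode = group + rng.normal(0, 0.3, n)    # proxy feature correlated with group
skill = rng.normal(0, 1, n)                 # genuinely relevant feature

# Historical decisions were biased against group 1, independent of skill.
hired = ((skill - 0.8 * group + rng.normal(0, 0.5, n)) > 0).astype(int)

# Train only on skill and postcode; the protected attribute is "removed".
X = np.column_stack([skill, postcode])
model = LogisticRegression().fit(X, hired)
pred = model.predict(X)

# The model still disadvantages group 1, despite never seeing `group`.
print("Predicted hire rate, group 0:", pred[group == 0].mean())
print("Predicted hire rate, group 1:", pred[group == 1].mean())
```

Because the model never sees the protected attribute, it arguably isn't deciding 'by reason of' sex or race in the statutory sense, yet the outcomes remain skewed along that dimension, which is exactly the drafting gap Lyria describes.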
Dr Miah Hammond-Errey: That's really illustrative. I'm not sure if you've just answered this, but if you were going to do a 2025 TED talk, what would it be?
Prof Lyria Bennett-Moses: I think what I'd do in a 2025 TED talk is focus more on technologies that are becoming increasingly important at the moment. When I did the original TEDx talk, it was decision systems and that kind of thing. I think we're now looking much more at large language models and image generators and the whole package of generative AI. And sometimes I think that people talk about those in ways that don't recognise that the problems that existed in the previous round of these technologies still exist. So I think what I'd focus on is that new context, because I think that's far more important and far more familiar to most people now, and really try to talk about how people use those and what I think the legal and standards-based responses to those need to be.
Dr Miah Hammond-Errey: What are some of the trends in technology and law that have most interested you in your career, or have kind of been the threads through your career?
Prof Lyria Bennett-Moses: When I started doing this, which was when I was doing my doctorate at Columbia Law School, I was really fascinated by how some really old technologies, so familiar to us that we don't think of them as technologies, had a profound impact on law. A really good example of that is easy to find. I've never met anyone nowadays who calls themselves a railroad lawyer, but apparently that's what Abraham Lincoln was, for example. These were the cutting-edge lawyers, the cyber lawyers of their time. And in fact, so much corporations law, tort law and so forth came out of the challenges of railroads as they developed, as they created needs for different kinds of funding models and so forth. So it doesn't necessarily have to be the newest and greatest thing, but that was probably the first really historical one that I got interested in. And then I think what fascinates me is that every time we hit a new problem, we pull out the old familiar tropes. I've gone through times when the big topics were medical technologies, nanomaterials and nanotechnology, big data, AI. And each time we hit something, we actually forget the lessons that we've learned from every time we've had this challenge before. We don't go back and look at railroads and older examples, or at least very rarely; we tend to be constantly thinking, this is new, everything's changing fast, and now we need something completely different. I do think it's the links between them, in the sense of the kinds of challenges they throw up and the kinds of rhetoric we build around them, that are more interesting to me than any given example.
Dr Miah Hammond-Errey: Yeah. I mean, much of your often multidisciplinary work seems to involve understanding complex systems. So how does that shape your thinking on technology and law?
Prof Lyria Bennett-Moses: I might give two answers to that. One is that I've always seen myself in a good position to understand some kinds of systems that other people with purely legal training might struggle with, just without the background understanding. Because I did a science degree majoring in maths, and it's not that I know everything, or that we did AI or machine learning back then, but I have enough core knowledge to be able to approach more technical articles and textbooks. Not perfectly, not at the same level as someone who might be a professor in those disciplines, but with at least enough understanding that I can see some of the problems that this is going to generate. So that's part of the answer as to what attracts me to them. The other part is that I think complexity is always its own challenge. Anyone listening to a technology and security podcast probably already knows this, but when something is really complex, it's actually really hard to say, I've got the one solution, this is the one thing that's going to fix this. It's actually complicated. What we're seeing is complex human-technology systems, not purely technical or purely human, integrated in ever more different ways. To be able to say, oh, the problem is this thing called AI, which I've defined as such and such, and when you do this thing you have to satisfy these criteria and so forth, that's not really a solution to what is becoming an increasingly complicated world with, as I said, sociotechnical systems, for lack of a better word. I think the complexity is part of the answer to why I think the way I think.
Dr Miah Hammond-Errey: Familiar technology tropes are actually a way of reducing the complexity, because they focus on a system or a definition, something that feels more tangible than the complex system you've just described. Let's go to a segment. What are some of the interdependencies and vulnerabilities of technology and security, or technology and law, that you wish were better understood?
Prof Lyria Bennett-Moses: I think for both security and law, in some sense they're always about technology, and in some sense technology is beside the point. It doesn't make sense to look at cyber security purely from a software perspective. If anyone can walk through a door, look at the hardware and do something physically to it, you've sort of missed the point, right? But by the same token, from a security perspective, it doesn't make sense to look only at physical security, personnel security and so forth and ignore cyber. It has to be looked at in an integrated way, and I think most security professionals actually understand that, in fairness. But I'm not sure that people understand that in the same way about law, even though it's equally true: law always regulates technology. When you have a new technology, some new thing that's never been possible before, it is already regulated. It's not born into a world of legal absence, because all law deals with everything. If you're going to sell this new widget, you're going to enter into a contract; there are going to be goods, and the Sale of Goods Act might be relevant; you're going to be on land, and whether you can do this new activity on that land might be regulated, depending on the zoning and who owns the land and so forth. So everything comes into the world already entwined with the law, if you like, even before lawyers have started to think about it or even know it might be happening. Conversely, all law is about technology. Contract law is about technology and everything, right? In a sense, everything is technology, and the technology vanishes over time. We don't think about road rules as technology regulation.
Prof Lyria Bennett-Moses: Yet other than the pedestrians, everything on the roads, most of what's being regulated, is technology. We could call it that, but we don't, because the technology has become invisible. On the other hand, when we're talking about AI, suddenly we need technology regulation. So I think there's this sense in both cases, and maybe this is my primary message, that either ignoring technology, or thinking that technology is the sole focus of thinking, or that technology is somehow more special than it really is, are all unhelpful in understanding the other phenomenon, whether that's law or security.
Dr Miah Hammond-Errey: I want to go to technology trends. How do you frame the challenges and opportunities of AI for business and government when there are so many loud voices at either end of that spectrum?
Prof Lyria Bennett-Moses: I'm sort of against anyone who is just rah-rah technology, and I'm also against anyone who says this is the great thing to fear. I think both of those are very instinctive reactions rather than necessarily thought through. For me, when any entity is considering the use of technology, what is important is to understand enough about the technology in the context of whatever it is you're trying to do. Sometimes technology is a solution in search of a problem, and that's not a helpful approach. You have to understand: this is what we're trying to achieve, this is the problem we're trying to fix. And then you have to look at what the technology does. Will it actually do the thing we want it to do? Will it only do it under certain conditions, which we then need to ensure are met for our particular needs? There's an important technology literacy that I would argue needs to be taught more and more in high school. And it's not necessarily how to program a computer or how to build an AI system or anything else. It's really understanding enough about how different kinds of systems work that you have a sufficient sense of what they do and what their limitations might be.
Prof Lyria Bennett-Moses: And I think everyone, no matter what role they're in, in government, business, any sector, is going to need that understanding in order to use the technology more effectively in whatever they're trying to do, because otherwise it can seem like magic fairy dust. In particular, as performance improves, large language models can seem like magic fairy dust. But if you understand how they actually work, you understand when they will be useful to you, when they will perform well, and when you might need to be a little bit more cautious with the outputs, on various levels. It's going to depend on the context. So understanding your own domain, but then having sufficient technological literacy to understand broadly how things work, up to a point where you can sense their affordances and limitations.
Dr Miah Hammond-Errey: As a professional educator, do you have some areas of focus in that sort of introduction to what is essentially science, maths and engineering, in terms of how systems work? It's a really interesting proposition, in that so much of our technology literacy is focused on data literacy, or even just computer literacy, essentially that sort of digital literacy. What you're thinking about is critical-thinking literacy, actually.
Prof Lyria Bennett-Moses: When I was at school, we learnt a sort of basic maths literacy, right? Like, if a litre of milk costs X and a litre and a half of milk costs Y, which represents the better value per millilitre? And that was really important, not only because it was a mathematical exercise, but because ultimately people are going to be out buying milk, trying to work this out for themselves and pick the right thing off the shelf. That is only part of the world these kids are going to be in. They are going to be in a world where, depending on what the computer system and the internet know about them, they might be offered different prices in the first place, where certain things will be marketed to them in different ways because of the marketer's perception of who they are based on previous interactions with websites and so forth, or even just assumptions from gender and other characteristics that the system has managed to work out. So it's not enough anymore, right, to tell people how to work out the relative value of different-shaped milk containers. They need to understand that they're going to be in a world where governments are making decisions about them based on various kinds of data-driven assumptions and inferences.
Prof Lyria Bennett-Moses: They're going to make decisions about what law should look like in this world and what laws should say is okay or not. So they're going to need a number of understandings to be able to navigate this effectively as good consumers, thinking citizens and so forth. So how do you do that? And you pointed it out: in a way that's not just some sort of lecture on how systems work that feels irrelevant to their universe, but in a way that leads to critical thinking. I've got lots of ideas, and I'm not a high school teacher, so I'll leave it to others to work out how these might be integrated. But one idea I had is something like a science class with experiments, where different students, who are going to have different search patterns on the internet, do certain things on particular websites, which you can test in advance to yield a range of different results, from websites where everyone gets the same answer to websites where people are treated differently when they perform the same searches, and then compare the results with each other and see what they can deduce about what the marketers are trying to do.
Prof Lyria Bennett-Moses: Right. So that's an example of critical thinking. It's actually experiment design, which is an important skill in science anyway, but it makes it really stick and it makes it really real how different these kinds of things are. You can do the same thing in attempting to find out facts on the internet. Try to find something about the current government or whatever it might be, and see whether your searches return more critical or more favourable news sources, or whether you're getting more Australian content or more international content. Maybe set your computer to different countries and time zones and see if that makes a difference. You can run this and see what makes a difference and make it real. And then you can have the critical discussion about what this might mean and how we should use the internet, and all sorts of questions that can fall out from that kind of an experiment.
Dr Miah Hammond-Errey: I think that would be useful for many people, not just high school students.
Prof Lyria Bennett-Moses: Well, I think high school is the last place we have everyone, right? So absolutely. I can think about what future lawyers might need to know, and I can put that into their legal education. But people who are going on to other kinds of learning, or more practical learning, or whatever they might be doing towards their ultimate careers, are all going to be in different places, and coordinating education at that level is hard. The other thing to do is mass adult education campaigns, like Slip, Slop, Slap, but that tends to have to be simple; you can't fit that much in a paid advertisement. So I like to think of high school as the last time we have everyone, and we have to recognise that what we're teaching them is not so much how Google works or whatever, because Google may or may not be the leading search engine in 20 years' time. It's more how to think about it, how to work out how it's treating you.
Dr Miah Hammond-Errey: Is there anything that you're watching in the technology space at the moment?
Prof Lyria Bennett-Moses: We are moving from a world of search to a world where people get large language model answers to their questions, right? Search engines are changing in that way. Even legal research tools are increasingly providing things in that kind of format. So your point about what truths people receive is even more critical. In search, most people didn't go past the first result anyway, but there was at least a sense that there might be 100,000 answers, and maybe you only had time to read three of them, but at least it was very visible on the screen that that was the case. We are now moving past that point. These models have a lot of choice within them. So to give you an example, and this is going to take a little bit of time to explain, and it's going to be simplified, because large language models are very complicated and have lots of features and I'm not going there. But broadly speaking, one of the things they do in training is put tokens, for which read words, because it's easier to think about it as words, on a giant, multi-dimensional graph.
Prof Lyria Bennett-Moses: Right? So imagine every word is put on a multi-dimensional graph. And this isn't random. It's done in such a way that words with similar meanings are going to be close to each other in some sense in this giant multi-dimensional space, and words that have the same meaning in some context will be close on certain dimensions of that space, and so forth. So, for example, there'll be a cluster somewhere of colour words; they're not all in the same spot, but they're all about the same kind of thing. So that all sounds fine. Subtraction also works. Think of these as vectors in a giant multi-dimensional space; in other words, you can add them and subtract them in some sense. Subtraction is the more interesting one for me. If you take the vector for man and subtract the vector for woman, and you take the vector for boy and subtract the vector for girl, and so on, you will get a vector, right? That is, if you like, a representation of how we use language differently for men and women. And some of that will be non-controversial; we use he for men, she for women, sure. But some of it will potentially...
Prof Lyria Bennett-Moses: We would have to measure it, and I don't have access to the back end of any of these systems, but some of it is potentially more controversial. And that's not just gendered language; that's one example. This is also the way we talk about particular kinds of people, or the way we talk about particular historical events, and which words are associated with which kinds of things that are happening or have happened or anything else. With all of that, we can just say, well, wherever it lands on the internet, that's what we're doing. But then, which part of the internet are you training on? That's still a choice. Or we can say, actually, we could change this in some way. At the moment we're not necessarily thinking about this enough, and we're certainly not thinking about where the decision lies. The reality is, it lies with the big tech companies, and they may have their own agendas. We think about the agendas of these companies. Everyone focuses on TikTok as an example, because of its relationship with the Chinese government and what that might mean, and everyone worries about its algorithms favouring a particular point of view. That can be a national security threat and so forth, which is why there have been decisions either made or in the process of being made about TikTok; I'm not commenting on whether that's the right move, just noting it. But this is in fact everywhere. It's all of it, right? It's going to be every system. It's going to be how everyone answers all of their questions: about history, about the world, about how to compose an email to their boss, about everything.
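A minimal sketch of the vector arithmetic described above, using publicly available pre-trained word embeddings via the gensim library (an illustration added for readers, not code discussed in the episode; the GloVe model named is one of gensim's standard downloadable datasets):

```python
# Minimal sketch of the "vector subtraction" idea: pre-trained embeddings
# place words in a high-dimensional space where directions can encode how
# a corpus uses language. Assumes the gensim library is installed.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")  # small pre-trained GloVe model

# A direction that loosely captures gendered language use in the corpus.
gender_direction = vectors["man"] - vectors["woman"]
print(vectors.similar_by_vector(gender_direction, topn=5))

# The classic analogy: king - man + woman lands near "queen".
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=3))

# The same arithmetic can surface less benign corpus associations, which is
# the point about biases and choices baked into training data.
print(vectors.most_similar(positive=["doctor", "woman"], negative=["man"], topn=3))
```

Which corpus those vectors are trained on, and whether anyone intervenes in such associations, are exactly the choices that, as Lyria notes, currently sit with the big tech companies.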
Dr Miah Hammond-Errey: It is ceding an immense amount of power over subjective decisions inside those processes. They may or may not be conscious decisions about the training data; you may not choose data specifically for that, you're just obtaining the data you can obtain. But those are still decisions, and they have power over the ultimate outcome.
Prof Lyria Bennett-Moses: Power over people's communications when they're using it to help formulate their own words, over their storytelling if they're using it for that, but also, as I said, just over how they find answers and information. One can think of it as a national security issue, one can think of it as an epistemological issue, but either way, it's a really big, hugely important set of issues. So in terms of technology and the issues that keep me awake at night, that is the big one. On the one hand, if government regulates it, that's not necessarily helpful either, because I don't necessarily want them deciding, but someone's going to be making decisions. And a lot of the time these systems are not just taking the data as given; they actually make rules around it. The answers are really hard and nuanced. They're deeply politically controversial. And we are just moving into that without really thinking through what the very serious implications might be, including for national security.
Dr Miah Hammond-Errey: Do you see the global AI landscape, and particularly the regulatory landscape, shifting? I'm thinking of developments like DeepSeek, and the US and UK recently declining to sign at the Paris AI summit.
Prof Lyria Bennett-Moses: Technology has actually had geopolitical features for a long time, and it is another landscape for geopolitical influence. That's not new with DeepSeek. Even before the large language model issue, we could see in some areas very different kinds of technologies and systems developing in China compared to what was being developed in the United States, which largely reflected the different approaches of the two governments. Even though in America it was being done in Silicon Valley, it was still influenced by a way of thinking. So that's not a new problem. I think that large language models, for the reasons I raised earlier, are a particular site of contestation and are going to be a pretty deep site of contestation, much like social media platforms have been. I don't know if I have the answer to how we deal with this. Australia's position in this is also interesting; we talk about ourselves as a middle power sometimes, and we have influence in some places and not in others.
Prof Lyria Bennett-Moses: We are not a big player in the large language model market; selling to us is not the big question that companies consider when they're developing products. So we can absolutely make rules around the kinds of systems we might want here. Already, I think, governments have put various kinds of restrictions on the use of DeepSeek in particular sectors like government, and that's certainly influenced universities, at least in my experience. But that's not really changing the global game. I can note that this is going to be a place where these questions are argued, because it is not just a contest of technology; because of the nature of the kinds of technology we're talking about, it is a place of contestation of ideology and modes of government. So I don't know if that's going away anytime soon.
Dr Miah Hammond-Errey: You've taken us to a segment. This is a new segment for 2025 and it's called the Contest Spectrum. What's a new cooperation, competition or conflict you see coming in 2025?
Prof Lyria Bennett-Moses: The conflict, and it might not be new, it might just be bigger. If you think about it, there are the tech companies, and they have systems that have huge impact in the countries in which those systems are being used. There are then the governments of those countries, which might want to ensure that systems fit in line with their values, as different as those may be around the world. But there is also the influence that countries might have on technologies in other countries. In other words, it's not only a question of countries trying to influence how technology is used in their own countries; in much the same way as people try to spread ideologies, ideas or general chaos, everyone is trying to influence how technologies are being used in other countries. They might want to influence the kinds of ideas that are shared and held in those countries, or they might simply want to cause chaos and disrupt those countries in various forms, or disrupt democracies or change democratic outcomes, all sorts of things. To me, this is between the countries themselves, because they're all making different plays in this space, but the tech companies are kind of in the middle of it and are going to face different kinds of attempts at controlling that in different ways. I don't know where the cooperations are going to be in this game. You can see some cooperative ideas internationally around the kinds of common agreements we might have about the future of technology in particular areas, but you can also see very strong differences. You can see differences in the US approach between the Biden and Trump administrations; this is intra-country as well as between countries. That is going to be a place where there is not only conflict, but maybe strange bedfellows and cooperation as well.
Dr Miah Hammond-Errey: Let's go to a segment on alliances. What alliances will be most important for Australia in AI in the next few years?
Prof Lyria Bennett-Moses: I think there are different kinds of alliances that Australia has formed, and will form, around AI, or you could talk more broadly about systems. Some are going to be where we are the relatively big fish, if you like. The kinds of alliances we have around cybersecurity in our region would be a really good example of that, and there is a not small overlap between cybersecurity and AI, because ultimately, the more we rely on AI systems, the more their security is critical. So I see those kinds of alliances, which have been regional, and where Australia is sometimes in a position of offering resources or sharing ideas in the region. There are also the global alliances, the sort of global statement kind of stuff, and this is an evolving space with the changed administration in the US, but you can see those kinds of alliances, where everyone comes together and makes a common statement. There is various potential for treaties in this area, so there might be alliances that form out of those common ways of thinking that become more formalised in terms of treaty obligations.
Prof Lyria Bennett-Moses: And again, you can see this in the area of cyber security, which is not unrelated to AI. So there might be more of those kinds of alliances. I think ultimately, though, in that broader international sphere, and I'm not the expert in international relations, it seems to me that the role Australia might have is more contingent. We're not necessarily a big player, though we have a lot of influence in some ways. As for the extent to which we are joining things versus shaping things in those larger international alliances: at the moment, Australia hasn't fully worked out its own position yet. It's very easy to see what America's current position is, and what Europe's position has been. I don't know if we are yet a strong voice that knows what our own position is. We've done a lot of talking about it, a lot of ideas, but we haven't actually decided to what extent we're going in which direction yet. We sort of need to know where we sit ourselves, and then maybe step in and have a role to play.
Dr Miah Hammond-Errey: Emerging technology for emerging leaders. I obviously knew you were incredibly accomplished before inviting you on the podcast, but in my interview prep I discovered you were awarded Equal Dux of your school year, you scored a remarkable TER of 100 in the HSC, and you studied maths as well as law. So what is your advice for leaders who want to blaze new trails?
Prof Lyria Bennett-Moses: Maybe I'll just say what my advice to young people usually is: there is so much to learn now, and there is no such thing as knowledges that don't go well together. When I did maths and law together at university, people would say, that's ridiculous, what's the point? There's no way that both are going to be relevant to one career path. And it turned out that was wrong. But I actually think it's uniformly wrong. We were talking about complexity earlier; the world is a really complex place. It's not like only engineers need to know about AI systems, or only lawyers need to understand the regulatory environment. There is so much in these important policy questions. If we think about AI and the future, if we think about cyber security, if we think about climate change, any of these things, it's not like one person with one set of disciplinary knowledge is going to be able to sit there alone with pen and paper and solve it. These are complex. So think about all the things you're interested in, and don't be afraid to do things that might seem a little bit unusual as a combination, because I do think that bringing different kinds of knowledges together often highlights problems people didn't know about, and also helps to solve them. So that's kind of the first thing.
Dr Miah Hammond-Errey: A segment coming up is eyes and ears. What have you been reading, listening to or watching lately?
Prof Lyria Bennett-Moses: I've actually been doing a lot of history and philosophy, believe it or not, in reading but also in podcasts, because more and more in the world today I come back to big questions, and I come back to understanding things, whether that's current affairs, in technology or not, in the context of how they have played out in other times and places. I actually think that understanding history and philosophy is really critical, for me anyway. I love The Rest Is History podcast.
Dr Miah Hammond-Errey: A segment called disconnect. How do you wind down and unplug?
Prof Lyria Bennett-Moses: Exercise.
Dr Miah Hammond-Errey: And the final segment is need to know. Is there anything I haven't asked that would have been great to cover?
Prof Lyria Bennett-Moses: I suppose one thing we didn't really get to, because the focus was on AI when we were talking about technology. I mentioned it, but maybe just a bit more: the time of AI is also a really critical time to go back and look at what our legal framework around cyber security is, whether the systems involved are AI or not, because I think they're all related. We don't just need to worry about the AI part of systems; the fact that we are relying increasingly on systems because of their greater capacities means we need to go back to those fundamental security questions. There is actually a great short story, I don't know if you've read it, called The Machine Stops. The more we are dependent on machines, the more a machine going wrong becomes a fundamental national security threat, as opposed to an annoying delay to the start of my work day or something like that. I do think that the more we rely on systems, the more cyber security is important, but I don't think we pay enough attention to cyber security in the context of law and regulation. In particular, we have many lawyers and legal academics who talk a lot about AI and very few who talk about the law and policy around cyber security. I could start a whole new conversation about that, and I won't, but I did want to put a flag in there and say there are really important questions there. It doesn't get enough attention, and we tend to think about it in quite a fragmented way: data protection over here, critical infrastructure over there, hacking somewhere else, disinformation campaigns somewhere else, and so forth, rather than as a joined-up problem space. So I do think it needs more attention.
Dr Miah Hammond-Errey: Lyria, thank you so much for joining me today. It was a real pleasure to have you on the podcast.
Prof Lyria Bennett-Moses: Thank you for having me.