In this episode of Technology & Security, Dr Miah Hammond-Errey is joined by Dr Zena Assaad to explore the technical, human, ethical and geopolitical dimensions of artificial intelligence. From workforce disruption to military application, this episode unpacks the complex ways AI is reshaping leadership, war, jobs and global power structures. Dr Assaad challenges common misconceptions about AI’s capabilities, explaining why understanding its limits is just as crucial as understanding its potential. From code to command, the conversation explores the relationships between human decision-makers and machines. The episode examines why leadership, and human decision-making, is key in technology: it is poor human decision-making and inappropriate use of technology that drive harmful outcomes like job loss and civilian casualties. It also covers why algorithmic transparency is key to security, and why interactive and non-linear complexity are underappreciated aspects of AI systems.
Resources mentioned in the recording:
This podcast was recorded on the lands of the Gadigal people, and we pay our respects to their Elders past, present and emerging. We acknowledge their continuing connection to land, sea and community, and extend that respect to all Aboriginal and Torres Strait Islander people. Music by Dr Paul Mac and production by Elliott Brennan.
Transcript, check against delivery
Dr Miah Hammond-Errey: My guest today is Dr Zena Assaad. Dr Assaad is a senior lecturer in the School of Engineering at the Australian National University. She has held fellowships with the Australian Army Research Centre and Trusted Autonomous Systems. Her research explores the safety of human-machine teaming and the regulation and assurance of autonomous and AI systems. Zena is a member of the expert advisory group for the Global Commission on Responsible AI in the Military Domain. She’s received numerous AI awards, which you can check out in her bio. I recently had the pleasure of joining Zena on her podcast Responsible Bytes, which you should absolutely check out. Thanks so much for joining me, Zena.
Dr Zena Assaad: Thank you for having me. I'm so excited.
Dr Miah Hammond-Errey: We're coming to you today from the lands of the Gadigal people. We pay our respects to elders past, present and emerging and acknowledge their continuing connection to land, sea and community.
Dr Miah Hammond-Errey: So following on from our chat on Responsible Bytes, I want to cover off on AI hype. A report called Trust, Attitudes and Use of Artificial Intelligence: A Global Study was released this month, and it ranked Australia last out of 47 countries in believing that AI's benefits outweigh the risks. Firstly, what do you think this says about Australia and Australians? And secondly, how do we make sure we have a sensible amount of skepticism but also embrace some of the opportunities from AI?
Dr Zena Assaad: What I think it says about Australia is that we are a risk-averse nation. And I would say that AI is not the only example of where we are very risk averse. So I'm not surprised that we came... what did you say, 47th on the list?
Dr Miah Hammond-Errey: Yeah. We came last out of the 47 countries.
Dr Zena Assaad: Right, I see, so 47 out of 47. I'm not surprised. Australia's a very risk-averse country, which has its benefits in some ways. But it also can come with a few complications. So in the context of AI specifically, one of the issues that comes with a very risk-averse culture is that we will be slow on the uptake, but also slow on sovereign development of the capability. Now, I'm not somebody who believes that AI is the be all and end all, and that it is a technology everybody has to adopt. And, you know, I know that there are a lot of statistics that get thrown around. I saw one the other day, and I don't know where it's from or how it was substantiated, but it was like, if you're not using AI in your workplace today, then you're losing out on 40% efficiency. These kinds of numbers get thrown around, so I don't actually believe in those numbers. But what I do believe in is the opportunities that these technologies can provide. So for example, they are used a lot for early disease diagnosis, and they're really fantastic for diagnosing quite rare diseases very early. So things like that are really positive uses of that technology, and that's something that we miss out on by not adopting it. But then the sovereign capability perspective is really where we start getting into things like national security. If we are behind on developing our own sovereign capabilities, what ends up happening is that we have to rely on other nations for their capabilities, to acquire and adopt those technologies, and that comes with a whole list of different complications.
Dr Miah Hammond-Errey: Do you think it's ever too late to get into the sovereign development stage, or is it always a good time to start?
Dr Zena Assaad: I don't think it's ever too late. What I will say is the infrastructure underpinning AI is enormously costly in every sense of the word. So it is costly in terms of money. It's costly in terms of the environmental footprint. It's costly in terms of the geographical footprint. So from that perspective, I think it's important for us to consider whether or not it's worth it, given the size of our nation, and whether or not we invest in the infrastructure. But what we absolutely can invest in and should invest in, and it is not too late to invest in, is building the capability and building the skill set at a national level.
Dr Miah Hammond-Errey: And so that's an incredible suggestion, but really juxtaposed then with this idea, essentially, that we don't believe the benefits outweigh the risks. And another stat from that report, which was published by Melbourne University and KPMG, was that only 30% of Australians see AI as a net positive. In some ways it's easy to see why, you know: there's an immense amount of hype, companies are overvalued, VC is kind of throwing huge amounts of cash at this problem, and we're not actually seeing a lot of tangible benefits to the individual user or to companies. Everyone has an AI solution, but what the AI solution actually is kind of gets lost.
Dr Zena Assaad: AI solutions to problems I didn't even know I had.
Dr Miah Hammond-Errey: Right? But if we are to embrace some of those opportunities, exactly like you said, and as the World Economic Forum's report brought out, we're going to need to focus on a number of really specific skills and areas of job growth. I noted in there that they had things like curiosity and human-machine teaming knowledge. And so there were a whole bunch of factors that they felt were really critical for improving workforce skills in AI. Where do you think investment is most needed in Australia to improve the view of AI as a positive, where it actually is having a really incredible impact?
Dr Zena Assaad: From my perspective, where we need to be investing is public education. As sophisticated as these technologies may be, and with all of the benefits that they may provide, if we don't have general public education, the way that this kind of technology will be adopted and accepted by the general public is not going to be very effective. So I'm not sure how you kind of meant that question, maybe at a broader level, but as soon as you asked that, I immediately thought of that kind of general public. And the reason I think about that is, I use this a lot, but I always think about my parents, right? My parents immigrated to Australia in the 70s and the 80s. English is not their first language. So I think about, you know, if my parents were to go see a doctor and they were using some kind of AI-enabled tool, what general education have they been provided to give them trust and confidence in the use of that tool, and allow them to have some level of personal security and safety in being able to say yes or no, and having an educated reason for saying yes or no?
Dr Miah Hammond-Errey: That's a great segue, because a lot of your research has been on AI safety, in particular in trusted autonomous systems. But can you take us back and tell us what does give people confidence in AI capabilities and how do people feel safe about them?
Dr Zena Assaad: What gives people confidence in any technology, not just AI, is a history of use. When you use a piece of technology and it has worked for you a majority of the time, you are going to have a greater level of confidence in it. Google Maps is a really, really good example. When Google Maps first became really popular, I don't know if you remember, but it would take you down these, like, random streets. It would make you do really random U-turns in places where you didn't actually have to do a U-turn, right?
Dr Miah Hammond-Errey: Or that it was illegal to do a U-turn.
Dr Zena Assaad: Yeah, it was so bizarre. But this was, you know, when the technology was still kind of coming about. And granted, it still makes those mistakes sometimes. But for the most part, the extended use of this technology over time not only improved the technology, it also gave people a greater level of confidence, because they had a history of use with this technology. And it's the same with AI. What gives people confidence in that technology, and what makes them feel safe using it, is that they have a history of use that they can lean back on. So if you're using, for example, ChatGPT every single day at work, which I don't recommend, but let's assume you're using it every single day at work, if it is working for you 95% of the time, of course you're going to have a high level of confidence in the system. But if it's working for you barely 20, 30% of the time, then you're not likely to have a greater level of confidence in that system. So the history of use of technologies is really where the confidence building happens.
Dr Miah Hammond-Errey: So it's really interesting, because for many of us who love technology, we kind of leap in on things early and we try them out. As you were saying that, I'm thinking back going, oh yeah, I remember how bad some of those early directions were. I remember when Maps came out and it was so bad that everyone just refused to use it. I haven't gone back to Maps to see if it ever improved, because I just never...
Dr Zena Assaad: Gained your confidence again.
Dr Miah Hammond-Errey: I just used it once and was like, what is this program… but obviously it had a much stronger competitor. Some of these platforms have huge user bases, so they are learning really quickly. How long does it take for a system to learn and grow from being something that's quite nascent and emerging to something that is robust and able to withstand daily use?
Dr Zena Assaad: What I can say is that the growth is directly related to the use. So the reason a lot of these technologies get released to the public on a completely free basis is to encourage the uptake of these technologies, because if people had to pay for them, they would not use them in the same way that they have used them now that they're free. And if you notice now, most of the models, so OpenAI and Google and Microsoft, now have a payment service for their large language models. So you can still get a free version of them, but if you want improved outputs, faster outputs, what I think they're calling more intelligent outputs, if you want the better quality, the version 2.0, you now have to pay for it. So how quickly these kinds of technologies develop really is directly correlated to how large the user base is and how that grows over time.
Dr Miah Hammond-Errey: That is really helpful. And I think for people, it's a great way to conceptualize why that 100 million user mark is so helpful. You know, it's why so many tech companies talk about it. Because once you have that kind of level of pervasiveness into a population, as you say, you get that feedback and the actual capability grows. But you did just touch on something, so I have to go there. Is artificial intelligence really intelligent?
Dr Zena Assaad: So there's a huge debate about this, and it's largely a semantic debate on the idea of intelligence. So some people view intelligence from the perspective of memory. Some people view intelligence from the perspective of the way that you treat other people. Some people view intelligence from the perspective of, I don't know, how quick you are at doing a math problem. It varies, right? So with this idea of intelligence, we can't even agree on what it means to be intelligent. So to label an artificial system intelligent is contentious in and of itself. But one of the really popular ways to measure machine intelligence is obviously the Turing test. And there was a recent study done by two people at UC San Diego, where they put five large language models through a Turing-like test, and ChatGPT came out on top. So 73% of the time, I believe it was, people believed ChatGPT to be equivalent to a human. What's interesting about the Turing test is that it's actually highly contested. Not everybody unanimously agrees that the Turing test is an adequate way to test machine intelligence, and more often than not, what it does is test an imitation of intelligence: how much can it imitate human intelligence? So do I believe that artificial intelligence is intelligent? No. I believe that they're complex systems, and I believe that they can produce very convincing outputs, but I do not believe that they are intelligent in the human sense of the term.
Dr Miah Hammond-Errey: Yeah. Thank you. There is this huge conversation about artificial general intelligence and, you know, computers taking over the world. And I think it's...
Dr Zena Assaad: AI agents are the really big one now.
Dr Miah Hammond-Errey: Agents, of course. How could I forget? I want to go to a segment. What are some of the interdependencies and vulnerabilities in AI that you wish were better understood?
Dr Zena Assaad: I can talk about this from a technical level. AI-enabled systems are complex systems, and the complexity is multifaceted. One of the complexities of AI systems is what we call interactive complexity and nonlinear complexity, and I usually put those two things together because they're quite interrelated. So software is basically a scaffold of different things that come together to form a larger system, and the interactions between the different parts of that software is what we call interactive complexity. So a problem in one might lead to a problem in another, because they're so intertwined. And then where nonlinear complexity comes in is that sometimes it's really hard to figure out the point of failure. Not everything is linked in a linear way in a software system, and so nonlinear complexity kind of reflects that: it is really hard to find a linear point of failure. And when you put these two things together, this interactive and this nonlinear complexity, what you get are these quite complex systems that are really difficult to penetrate. So when something goes wrong, or when you get an output that you weren't expecting, it is really hard to penetrate these systems and to figure out where it went wrong.
Dr Zena Assaad: Now, this isn't to say that all AI-enabled systems are black box systems. That's not correct. There are some that are complex enough that they are black box systems. This is where even if you try to investigate or try to look under the lid, you couldn't really figure out how you got from point A to point B. But a lot of these systems are complex enough that finding a point of failure would be really difficult. And the reason I bring this up is because with a lot of the work that I do around safety, a really big part of safety is finding points of failure and putting in redundancy measures for them. Right. Making sure that if this system fails in this way, these are the measures that we have put in place. And when it's hard to find those points of failure, it becomes very, very hard to figure out what kind of redundancy measures you would put in place. And this is where that safety conversation gets a little bit more complicated.
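As a rough illustration of the interactive and nonlinear complexity described above, here is a minimal Python sketch. The components and values are hypothetical, not drawn from any real system: a subtle fault introduced in a configuration step only surfaces as a crash two components downstream, which is what makes tracing the original point of failure, and deciding where to put redundancy, difficult.

```python
# Minimal sketch of interactive, nonlinear failure propagation.
# All components and values here are hypothetical, for illustration only.

def load_config():
    # The originating fault: the range is written in kilometres,
    # but the downstream code expects metres.
    return {"sensor_range": 0.5}  # intended: 500 (metres)

def read_sensor(config):
    # This component behaves "correctly" for the input it was given,
    # so no error is raised here.
    readings = (120, 340, 480)  # distances in metres
    return [d for d in readings if d <= config["sensor_range"]]

def plan_route(detections):
    # The failure only surfaces here, two components away from its origin:
    # an empty detection list causes a divide-by-zero.
    return sum(detections) / len(detections)

if __name__ == "__main__":
    detections = read_sensor(load_config())
    try:
        plan_route(detections)
    except ZeroDivisionError:
        # The traceback points at the planner, not at the config where the
        # fault actually originated, which is what makes root-cause analysis hard.
        print("Planner failed, but the fault originated elsewhere.")
```

In a real AI-enabled system, with many more interacting components and feedback between them, the distance between an originating fault and the visible failure grows, which is why finding points of failure and designing redundancy measures is so hard.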
Dr Miah Hammond-Errey: Single points of failure and redundancy feel like something that the whole world cottoned on to when the CrowdStrike outage occurred, right?
Dr Zena Assaad: Yes, because that was global, really. Everyone was freaking out. Airports were closing down. It was insane.
Dr Miah Hammond-Errey: And I mean, it ended up being a poorly assessed system patch, right? But, like, at the time, people didn't know that.
Dr Zena Assaad: And that's a really good example of that interactive complexity, right? It was this one system update that went into this software system and created multiple different points of failure because of that intertwined interaction between the different parts.
Dr Zena Assaad: There was a similar outage of an air traffic management system across the UK. There's this really fantastic blog post that goes, like, step by step through where the points of failure were. And once you go step by step by step, it turns out that the original point of failure was an incorrect waypoint that had been put into the system. So aircraft fly along a trajectory and there are waypoints along that trajectory, and one incorrect waypoint caused this snowballing and cascading effect. And it took a really long investigation, because what happens with these interactive systems is when you get a failure in one particular part of the system, it can manifest in a different way in another part of the system. So it's not always obvious or clear where the origin of the fault is. And so this one is really, really interesting, because it was something so, well, not insignificant, but in the grand scheme of things kind of insignificant, a single incorrect waypoint across an entire trajectory of an aircraft, and it caused this huge cascading effect. And the system ended up going down for quite a while, I think.
Dr Miah Hammond-Errey: Yeah. Wow. And both those examples highlight how reliant we are on systems that we often don't fully understand.
Dr Zena Assaad: It's really important to remember that technology doesn't kind of evolve overnight, right? It is incremental changes over time. So we didn't get to the smartphone overnight. We started with radios. We started with the first telephones in the home. We had corded telephones; you know, if you wanted to go into another room, you had to take the phone with you because it was on a cord. We then had the cordless telephones, which were amazing, and then we had mobile phones. We now have smartphones, right? These were incremental changes over time. It's the same with these systems. Even though they may look different in their current form, they are an example of an incremental change of technology over time, and with autonomous and AI-enabled capabilities, often they are integrated into existing systems. So a really fantastic example is a drone. A drone is, in its essence, an aircraft. And an aircraft has been around for decades. We are familiar with that system. We know how it works. We know how to regulate it. The challenge comes when we integrate autonomous and AI-enabled capabilities into it. But the positive thing is that we're not starting from scratch. We are starting from an existing system that we do know and that we are familiar with, and then we just have to fill in the holes that autonomous and AI-enabled capabilities produce when they're integrated into those existing systems.
Dr Zena Assaad: So all technologies are used in ways that they're not intended to be used. And the thing with safety regulations and policies and standards is they actually safeguard humans, not the technology. What we are doing is putting guardrails around the way that people can and can't use technology. And so what's interesting about drones is that they changed the way aircraft were able to operate in different environments. Aircraft are traditionally very large, very noisy, and they require a lot of intervention. So you've got the pilots, the regulators, the air traffic controllers. And then we introduced drones, and suddenly they were very lightweight, some are noisy, some are not, and anyone could fly them. You didn't need to have a pilot who had gone through years and years of training and was constantly retrained. And so what this did was it opened up a pathway of opportunities for people to do what they do best, which is to be really curious and creative about the way that they use these technologies. But the technology itself is not where that problem originates. It originates from people. We are very curious, we are very creative, and we do like to use things in ways that they're not supposed to be used. The one that always baffles me is cars. You know, I don't know exactly how far the speedometer goes on my car. Maybe it's 220 or 200, I don't know. I'm never going to drive my car at 200 km an hour, but the possibility for me to do so is still there. And it's the same with these technologies, right? They have quite a wide array of possible applications. We're not always going to use them for all those different applications. But what that spectrum does is it opens up the possibilities for people to use these things in very different and unforeseen ways.
Dr Miah Hammond-Errey: I think you've just outed yourself as a non-speeding driver there. Very strategic of you. Thank you. I think there are quite a lot of people who would like to test their car at 220.
Dr Zena Assaad: But you know what I mean, right? Have you ever looked at the speedometer in your car and been like, why can I? There is not a single road in Australia that will allow you to go 200 km an hour. Why does my car go to this speed?
Dr Miah Hammond-Errey: So for you, this is the aspirational AI dream, the 220 on the dial. I like it. How do you see AI impacting national security in the coming few years?
Dr Zena Assaad: In a few ways. I think the first is, I was talking before about black box systems, and not all AI-enabled capabilities are black box systems, that's true, but most of them are quite opaque, and they're opaque for a reason. And so in terms of national security, one of the challenges that we're going to have is that an opaque system is a vulnerable system. Because when a system is opaque, it is going to be more difficult to safeguard against its vulnerabilities. In order to safeguard a system, you need to be fully aware of what it is and is not capable of, and where its sensitivity points are. And if you don't have all of that information, it is a lot harder for you to safeguard that particular system. So if we're using AI-enabled capabilities in the context of national security and we are not able to safeguard all of their vulnerabilities, we're leaving those open for other people to discover. And if we don't have our own sovereign capabilities here in Australia, and we're acquiring our capabilities from overseas, that information is not then just specific to Australia. And so we're really opening ourselves up to potential security threats in that regard.
Dr Miah Hammond-Errey: Why do you think so many of these systems are sold in opaque ways?
Dr Zena Assaad: I think they're opaque for a few reasons. So I think the first reason that they are opaque is because there is an elusiveness around AI, an elusive narrative, and it pushes the idea of AI being more intelligent than it actually is. And this is what's fueling the uptake, and fueling people using it in ways that it potentially shouldn't be used. And so I think part of the opaqueness of AI is linked to that kind of narrative, to fuel this idea of a mystery. Yeah, to fuel the mystery of AI. The other reason I think it's opaque is because it's not yet regulated. And so the more opaque the system is, the more the regulation lags. So I sit on a lot of international forums that talk about regulating AI, and one of the challenges I have is that a lot of people say, well, we don't know how these systems work, so we can't regulate them. And I believe that the opacity of these systems, or the perceived opacity of these systems, is what's pushing that lag in regulation, because it's easy to hide behind the narrative of, well, these systems are opaque, so we can't regulate them.
Dr Miah Hammond-Errey: So how do you think we should approach AI safety or AI regulation? Should we be setting a standard and saying, actually, we want transparency or accountability in these circumstances?
Dr Zena Assaad: So it's context specific, right? If we're looking at the medical industry, you absolutely cannot put an opaque system in that context and then shrug your shoulders and say, well, it's opaque, I can't regulate it. This is people's health and their lives and their livelihood. Where you get a little bit more of a gray zone is the military domain. The military domain in and of itself is an opaque industry. And so when you embed these systems into that area, that's where it gets a little bit harder to navigate, even though the consequences of the uses of these technologies in that domain can be catastrophic.
Dr Zena Assaad: So what I'm saying is that the illusion of opacity is what makes it more difficult to regulate, because you have to navigate the arguments of, oh, well, this system is too opaque, we don't understand how it works, that's why we can't regulate it. And so then myself and some of my other colleagues who come from a technical background have to step in and say, well, no, they're not as opaque as they're being made to seem. This is how these systems work. This is how we could regulate them. So that's what I'm saying from the perspective of the relationship between a perceived level of opacity and regulation.
Dr Miah Hammond-Errey: You're an expert on the Global Commission on Responsible AI in the Military Domain. Can you tell us a little bit about this forum?
Dr Zena Assaad: So GC REAIM was established by the government of the Netherlands in 2023, and its role is to bring together a group of international experts to provide guidance around the governance of the use of AI in the military domain. So they have an initial amount of funding for two years, which I believe this year is the last year of. And at the end of this year, there will be a final report that comes forward, which is a set of recommendations for the responsible governance of AI in the military domain.
Dr Miah Hammond-Errey: And this is, I'm guessing, a bunch of get-togethers. What are some of the issues that you've covered?
Dr Zena Assaad: So the issues have varied, and they vary because of the interdisciplinary nature of the group of people that make up GC REAIM. You've got technical people like myself, you've got lawyers, you've got policymakers, you've got political scientists. And when you bring these very disparate people into one room together, you get quite an energized discussion on what is and isn't important to focus on. And so because of that, we've had multiple different areas of focus. We've had focus on things like the extreme end of the spectrum, so the use of AI-enabled systems to take life in military contexts, and then we've had discussions on the other end of the spectrum, the use of AI-enabled systems for research and education in the military domain. So it has varied, and I think it's important for it to vary, right? And this goes back to that spectrum that I was talking about. There will be a lot of applications of AI in the military domain that are not problematic, right? Like, if you embedded an AI-enabled capability into a drone to improve its trajectory optimization, that's not problematic. And so it's important for us to have this really wide-ranging discussion. Now the challenge for us is going to be how we synthesize that very wide-ranging discussion in that final report in a cohesive way that still has a point of view and puts forward a very cohesive argument.
Dr Miah Hammond-Errey: I do want to talk a little bit about human-machine teaming. It's something that is described in a lot of literature as an area that will really be of significance for the national security community, particularly in relation to information collection and triage, and then preparing that information in a way that a decision-maker can make a very quick decision. Can you describe for us what human-machine teaming actually is?
Dr Zena Assaad: Yeah. So it's important to note that there's not a universally accepted definition of human-machine teaming. But in general, human-machine teaming represents the combination of humans and machines working together to achieve an output that neither of them could effectively achieve on their own. So it's about enhancing both the human role and the machine role to achieve this optimal output that neither could have done as effectively on their own. And what distinguishes human-machine teaming from something like human-machine interaction, or HMI, is that it has more of a reciprocal and interdependent nature. So usually the way that we use a lot of machines is a very kind of one-sided interaction. But with human-machine teaming, we get a little bit more of a reciprocal kind of relationship between the two. And what I mean by reciprocal is that there's give and take. You are waiting on a piece of information from the machine; the machine is waiting on information from you. There is a little bit of back and forth. And so that is the difference between human-machine teaming and other relationships like human-machine interaction.
Dr Miah Hammond-Errey: So for people that might have only ever used a chatbot, how is it different?
Dr Zena Assaad: So I actually think chatbots are a really fantastic example of human machine teaming, because they do require a substantial amount of effort from both parties.
Dr Miah Hammond-Errey: Very recently there was an Independent Intelligence Review, which was publicly released last month. The co-authors have considered deeply the role of AI in intelligence. And one of the fundamentals that I have always struggled with when it comes to chatbots is the fact that they're often not accurate. How do you think about those challenges?
Dr Zena Assaad: So they are very real challenges, and they are often ignored because the pace of technology improvement surpasses people's concerns. So the current state of, I'm just going to use ChatGPT again, because it's something that most people will be familiar with, and I'm not saying that ChatGPT is used in intelligence contexts, I'm hoping it is absolutely not, but the state of ChatGPT today versus when it was first released a few years ago is leaps and bounds ahead. And this is the reason why some of those concerns dissipate: because the technology is improving over time, and it improves with more consistent and frequent use of the technology. Now, something I will say is, a few years ago I did a research project with the Australian Institute of Marine Science, and they wanted to better understand whether they could use an AI-enabled system to help process the copious amounts of data that they collect across the Great Barrier Reef, because they monitor the Great Barrier Reef. And, you know, it spans hundreds and hundreds of kilometers, and it takes a very, very long time to monitor that particular geographical area. And so they wanted to use AI-enabled technologies to help them analyze the information. And one of the concerns that they had was that usually what they have is divers.
Dr Zena Assaad: They have divers who go down and collect all the information. And as they're collecting all that information, because they are so familiar with that environment, they notice little things along the way. They notice discoloration in some of the coral, which happens because of, you know, the temperature of the water increasing over time because of climate change. I'm not a marine scientist, but they noticed these very small, kind of nuanced details, and it's those small details that really do make up a substantial amount of the work in the assessment that they have. And so when we bring this back to the intelligence sector, it'll be a similar thing. There's a lot of nuance and gray that comes with intelligence, and what AI-enabled systems don't do well is capture nuance. Now, do I think that the use of AI-enabled systems in intelligence, in terms of filtering through large amounts of information, is going to be useful? Absolutely. But it needs to be used in the right way. And this goes back to what I was talking about with the example I gave with how I use ChatGPT, where I don't expect the first outcome that comes out of ChatGPT to be the one that I'm going to use.
Dr Zena Assaad: Quite the opposite, actually. I know it's going to be crap, and I know I have to put in energy to make it better. And it's the same with the use of AI-enabled systems in this context. If we're going to use it in an intelligence context, that's great. You might be able to filter through a lot more information in a shorter amount of time, but what we then have to do is train our intelligence officers in how to use this. Do we accept the first output from this technology? How do we use it in the most effective way? How do we try to capture nuance in the way that is important to us? So one of the things I always talk about is that these technologies will not replace people; they will augment people's roles. And this is an example of how someone's role would be augmented. It's not that we would no longer have intelligence officers, it's that the way that they do their work will look different.
Dr Miah Hammond-Errey: I completely agree with you. I also see a large number of people deeply concerned about their jobs, and I'm not talking necessarily about the national security community here. How do we ensure that people really understand the risks they're taking on when they do embrace technologies, so that the humans actually can be involved in some way in those processes?
Dr Zena Assaad: I think it's a pendulum swing. I think with the current AI wave that we're all in, the pendulum has swung quite far in one direction. But what we're going to see over time is that pendulum swing kind of come back a little bit to the middle. And what we're going to see with that is people realizing, oh, this AI tool actually doesn't do all the things that I need it to do, and it is creating more work for me to manage it. So a really fantastic example is, there was an editing company in Australia that fired all of its editors because they believed they could use ChatGPT for everything, and within a few months they had to hire everyone back because they realized the limitations of the tool. And now their new model is that they get their editors to use ChatGPT to improve the efficiency of their work. This is what I think we will see. We're going to see that huge pendulum swing of people losing their jobs, then people are going to realize they made terrible decisions because the technology does not work the way they think it works, and then we're going to get the pendulum kind of balancing a little bit towards the middle. But one thing I will say is that the obsolescence of jobs over time is never going to change. The world is evolving. It will always evolve. Our needs as a society change; the way the world looks changes.
Dr Zena Assaad: We will always have obsolescence of jobs over time. Do I think that technology has potentially accelerated it? Sure. It is not the cause of the obsolescence of jobs. It's a natural progression of things over time. And I think that once that pendulum swing kind of balances in the middle a little bit, we'll kind of get a better understanding of how we can work alongside these technologies instead of them replacing us. But the one important point I need to make here is that even with us kind of having this pendulum swing towards that extreme of people losing their jobs, that's not the fault of technology. That's the fault of management making poor decisions because they don't want to pay workers. Right? Like, we can't blame AI and ChatGPT for that. That is not the fault of the technology. That is the fault of a management person who didn't value the work of the staff and was like, you know what, I reckon I could just get ChatGPT to do all of your jobs, so I'm going to fire you all. And then within a couple of months, and I'm assuming it's a he, he had to bring back, you know, all of his staff and say, I'm so sorry, I'd love to hire you again. And so we need to also just remember that humans are still making all of the decisions around technology. And so it's very easy for us to villainize technology, but really we are still making the decisions around this technology.
Dr Miah Hammond-Errey: I'll go to a segment on this. What do you see as the biggest challenge for leaders in AI? And what do you think will be some of the future challenges as we look forward to the new technologies coming in?
Dr Zena Assaad: So I think the biggest challenge for leaders with AI today is a complete misunderstanding of the technology. Without a proper understanding of this technology, it's really difficult to implement it in not just a safe way, but also an effective way, if you really want to go down, you know, the AI and productivity train and all that kind of stuff. If you don't actually understand how this technology works, and that also includes how it doesn't work, so understanding the limitations of this technology, you're not going to get very far. And then as far as the future considerations of AI, for me a big one is going to be around data. So we're really starting to see a lot of the concerns around data privacy and data curation. And we're at a point where we have artificially generated data that has become part of the datasets for training these systems, and it's because we have, quote unquote, run out of data. And that is what I think is going to be really interesting about future technology: how will these technologies look, and how will they be shaped, when they are being trained on artificially generated data? I think that's going to be very interesting.
Dr Miah Hammond-Errey: How do you suggest leaders try to learn more?
Dr Zena Assaad: Using it is the best way to learn about it. Use the system in a bunch of different ways, in different random ways. Try and break the system.
Dr Miah Hammond-Errey: Try to get to your 220.
Dr Zena Assaad: Okay, yes, literally. Use these systems and interact with them. It'll also be quite a humbling experience, because you'll realize they're not as scary and elusive and intelligent as you once thought that they were.
Dr Miah Hammond-Errey: We have a segment in 2025 called The Contest Spectrum. What's a new cooperation, competition or conflict you see coming?
Dr Zena Assaad: We are seeing an increasing number of private technology companies partnering with defense and military industries in the production of these technologies. And what's concerning about the current state of technology is that really it's so ubiquitous, right? We're all using these technologies. We're all inputting our data into these technologies. And we've all basically signed away the rights to our information when we click yes to those terms and conditions. And originally, most of these companies had clauses about the use of their technologies in the military domain, clauses basically saying that we will not use our technologies for harm, we will not use them in military settings. Those clauses have now disappeared. They have the right to just remove them; they're not legally binding. And so this, I think, is going to be a really complicated and concerning future for us, because I don't think that we have ever seen such a close intersection between the general public and military applications of technology.
Dr Miah Hammond-Errey: Yeah. So you're bringing these two things together: this ubiquity of technology and the kind of growing relationship between technology and warfare, with what I would say is a real lack of understanding that, at the end of the day, no matter how advanced a society is, war involves people.
Dr Zena Assaad: You know, the whole purpose of war is to bring more harm onto another party than is brought onto you. That is the purpose of war. And what these kind of ubiquitous technologies are doing is changing the way that we are able to bring harm to another party. And from a civilian casualty perspective, if we look at the Israel-Gaza war, the use of AI-enabled systems there really did increase the civilian casualty rate, and they increased it because those systems weren't accurate and those inaccuracies were just accepted. So they were saying, from their perspective, and we don't have actual evidence other than what is reported, but from their perspective, the benefit that they felt these particular technologies gave them was enough of a justification to use them, despite them not being accurate in the way that they needed to be. And this accelerated the civilian death toll in that particular war. And this is the issue that we see with these technologies. They're being implemented into different settings when they're not properly regulated, people are hiding behind the elusive opacity of these systems, and we are seeing forms of harm being imposed onto people in ways that we haven't seen before. And that's really challenging.
Dr Miah Hammond-Errey: Let's go to a segment on alliances. How do you see the role of alliance building in AI?
Dr Zena Assaad: Because we are seeing a very strong disparity in the sovereign capabilities of this technology across different nations, I think that alliances are going to be very strategic and very interesting to see play out. So we're seeing these technologies predominantly being built in the US, and then the second largest competitor is China. And the geopolitical tensions between these two nations are quite high and quite sensitive at the moment. Quite temperamental. And then from a broader perspective, where is Australia getting most of their AI-enabled capabilities from? Where is Europe getting theirs from, and where are different parts of the world getting theirs from? And when thinking about alliances, I think this is going to open up the discussion more prominently about national sovereignty, about sovereign capabilities. This is where that conversation is really going to be unavoidable.
Dr Miah Hammond-Errey: And do you think there is potential for countries other than China and the US to develop or co-develop parts of the infrastructure for AI?
Dr Zena Assaad: I do. My concerns around the infrastructure are obviously the environmental footprint and how costly it is. It requires a lot of electricity, a lot of water. And so I'm more hesitant to encourage all nations to be like, you can do it all on your own, because if every nation had their own sovereign infrastructure for these AI-enabled capabilities, I think it would be horrific for our environmental footprint. While not every nation may have the capacity to develop their own infrastructure, we do have the capacity to develop our own capabilities and our own skill sets. So thinking here about the actual education that we provide for our nation, the kinds of skill sets that we encourage, the way that technologies are developed in different countries, those are the things I think that absolutely every country can pursue. But maybe let's take it a little bit easy on the infrastructure.
Dr Miah Hammond-Errey: Let's go to a segment. It's called Disconnect. How do you wind down and unplug personally?
Dr Zena Assaad: Yeah I'm a big reader. I like to read things that are not related to my job at all. That is my way to unwind. I think sometimes I get so caught up in the weeds of my job. And all of you know, when you work in academia, there's a new article and a new journal paper and eve