In this episode of the Technology & Security podcast, host Dr Miah Hammond-Errey is joined by Connor Leahy, CEO of Conjecture. This episode unpacks the transformative potential of AI and AGI and the need for responsible, global governance, drawing parallels to historical successes in treaties for ethical science practices, such as the moratorium on human cloning. It covers the current and potential impacts of AI monopolisation and centralisation of power, and what AGI could mean if achieved. The episode also explores the different risk profile that complex cyber and cyber-physical systems present for kinetic warfare.
This episode offers a deeply considered perspective on how to steer emerging technologies toward an inclusive, secure and human-centred future. It considers interdependencies in AI development, including the need for more recognition by technologists of the social and political implications of advanced AI systems. The conversation covers the California Governor’s veto of SB 1047, a bill designed to hold companies accountable for AI-caused catastrophic damage, and the necessity for international AI safety frameworks.
Connor Leahy is the co-founder and CEO of Conjecture, an AI control and safety company. Previously, he co-founded EleutherAI, which facilitated early discussions on the risks of LLM-based advanced AI systems. He is also a prominent voice warning of AI existential threats. He recently co-authored 'The Compendium', which aims to explain the race to AGI, extinction risks and what to do about them in a way that is accessible to non-technical readers with no prior knowledge of AI.
Resources mentioned in the recording:
· The Compendium https://www.thecompendium.ai
· Byte-sized diplomacy, Lowy Interpreter, 11 Sep 2024, The search for safe AI
· East Asia Forum, 12 Aug 2024, Laying the foundations for AI in Australia and the Pacific
This podcast was recorded on the lands of the Gadigal people, and we pay our respects to their Elders past, present and emerging. We acknowledge their continuing connection to land, sea and community, and extend that respect to all Aboriginal and Torres Strait Islander people.
Thanks to the talents of those involved. Music by Dr Paul Mac and production by Elliott Brennan.
Transcript: please check against delivery
[00:00:01] Dr Miah Hammond-Errey: Welcome to Technology and Security. TS is a podcast exploring the intersections of emerging technologies and national security. I'm your host, Doctor Miah Hammond-Errey. My guest today is Connor Leahy. Thanks for joining me, Connor.
[00:00:20] Connor Leahy: Thanks for having me.
[00:00:21] Dr Miah Hammond-Errey: Connor Leahy is the co-founder and CEO of Conjecture. He's also one of the most prominent voices warning of AI existential threats, and has a long history in machine learning, which he's going to share a little bit about with us today. I'm coming to you today from the lands of the Gadigal people. I pay my respects to their elders, past, present and emerging, and acknowledge their continued connection to sea, land and community. So you co-founded Conjecture in 2022. Can you start off by telling me a little bit about the company?
[00:00:50] Connor Leahy: Yeah. So Conjecture is kind of the latest in my longer career goal of addressing the problems of AI control. When I first got into machine learning, I was a teenager and my thinking was, how can I solve the most problems? And I thought, well, if I figure out intelligence, then I can automate science and I can solve all the problems. So I'll just go do that. How hard can it be? But what I realized pretty quickly was that the hard part of AI and of AGI is not just how to build it, it's how to control it, how to make it actually beneficial for society rather than just powerful. At Conjecture, we work specifically on that, and we're trying to build a commercial angle on this question: how do you build useful systems that are useful to people, that provide value, that are good, but do not have these kinds of existential risks that we're concerned about? But I also want a positive vision of the future, of how we can harness technology in a beneficial way, in a pro-social way. So Conjecture specifically focuses on the narrow technical aspect of control, this question of how do you build useful AI systems that do what you want and nothing else? Next to Conjecture, I also spend quite a lot of time as a policy advocate, because fundamentally, what I deeply believe is that the problem of how to build a good future with technology is not a purely technical question. It is also a social and a political question.
[00:02:18] Dr Miah Hammond-Errey: What are the existential threats you see? But I want you to balance that out by saying, what is the good future that you're imagining?
[00:02:27] Connor Leahy: So fundamentally, if you build things that are smarter than humans, that are more capable at business, politics, science, warfare, everything, and you don't control them, what do you think happens? Humanity gets out-competed, and then the future belongs to something that is not human. This is the default outcome of where the market and things are currently being pushed. It's not that existential risk is some kind of overdetermined thing that must happen. That is completely wrong. I don't believe this. It's a decision that people are making today. The problem today is that people are deciding to give up the future of humanity to machines, and they're doing this on purpose. It's very clear that people are racing. They're trying to get to more and more intelligent, more and more autonomous systems as quickly as possible, to spend as little on safety as possible, to understand this as little as possible. There's a race ongoing here. There's a strong profit motive here, and also a kind of libertarian ideology of, you know, things should be free or whatever. The fundamental thing is that our society has certain mechanisms to deal with new threats as they emerge, but they are very slow and they are very inefficient. Our politicians, you know, many of them are trying their best and are working very hard.
[00:03:49] Connor Leahy: They don't understand the technology. The technology moves on a cadence of months, you know, even weeks, while government works on a cadence of years or decades, and fundamentally, things are just moving too fast. The problem is that technology is an exponential. It's going faster and faster. It's not just going faster, it's accelerating, and the time between breakthroughs is shrinking. While in government, the time between meaningful legislation and meaningful decision making on a social level is increasing. It's getting slower. It's getting harder to make big decisions. It's getting harder for citizens to become informed, understand what's going on, make informed decisions and have those decisions be effected in the world. And this mismatch is the core of the problem. So by default there will just be no oversight. Whoever is the least careful is going to be first. Whoever has the least moral and ethical constraints is going to leapfrog everyone else, which we're seeing, you know, with companies like Meta or Facebook basically becoming more and more aggressive, cutting more and more and making fun of safety concerns. And so these people are going to be the ones who push technology as fast as possible, as quickly as possible. And the consequences, I think, are predictable.
[00:05:09] Dr Miah Hammond-Errey: I'm going to come back to the good future part in a second, but how can you be so sure that AGI is a real possibility, or that we are heading towards this catastrophic risk if we don't do anything?
[00:05:22] Connor Leahy: I mean, a couple hundred thousand years ago, there were chimps and there were our ancestors. And from the perspective of anyone who would look at this, our ancestors looked just kind of like a slightly different chimp, right? It was just like, oh, they stand up a bit, they're a bit hairless and they have a little bit of a bigger brain. So if you had at that time asked the question, well, what do you think this new chimp is going to do? You'd say, I don't know, maybe it'll throw rocks a bit harder or something. But that's not what happened. Turns out if you take a chimp brain and you scale it up about three times, you get nuclear weapons and they go to the moon. You don't just get slightly harder rock throwing. And it happened within a very short time frame in terms of evolutionary timescales; it wasn't like it took us millions of years of evolution to get to nuclear weapons. So fundamentally, we have to first acknowledge that this kind of intelligence is possible, and that it can go very quickly depending on the time frame. Now imagine the equivalent happening on computers: systems that are already running at thousands or millions of times our speed, that can remember billions of facts, that can be perfectly copied hundreds or thousands of times, that can communicate with each other at light speed and so on. If you transport this kind of general intelligence onto a substrate like that, you would immediately have something that was thousands or millions of times smarter.
[00:06:43] Connor Leahy: And none of our science rules this out. As far as we can tell, this is algorithmic progress; this is a process that can be done. I'm not talking about consciousness or whatever, because everyone disagrees on what that means. What I care about is competence. And if we look just empirically, you can now talk to your computer. This was completely unthinkable just a couple of years ago. And the technologies are continuing to improve. When they first came out, you know, we were shocked that they could spell sentences correctly. Now we have them explain quantum physics to us in Shakespearean English, and we laugh because they made a math error halfway through. This is a boiling frog effect while the technology keeps progressing. It turns out the algorithms that we're using to build these systems are just incredibly general purpose. Could there be some big barrier that we currently don't know of? Totally possible. It is totally possible that next year something happens and for some reason AI just stops dead in its tracks. It's possible, but we should take seriously the null hypothesis of what happens if it just keeps going at this pace. That should be our default assumption, because so far that's been how things have gone.
[00:08:03] Dr Miah Hammond-Errey: Can you describe the AI market to me a little bit? What do you see in the market, and how is that changing?
[00:08:11] Connor Leahy: The AI market? The AI world is very weird. A lot of things happening in AI right now are not actually driven by market forces, which is quite confusing from the outside. It's a very, very strange world where a lot of the things that are happening right now are driven far more by ideology than they are by hard-nosed business logic. You're seeing billions of dollars flowing around, you know, your OpenAIs, your Anthropics, or whatever. Most of those companies do not make money. They're actually hemorrhaging billions of dollars per year. They're extremely unprofitable. They're spending ungodly amounts of money on these big supercomputers that they need to do their AI work, with no clear path to profitability, etc. And this is very confusing to people, because from a Wall Street perspective, a lot of the AI stuff so far has not paid off. It's not very profitable, it's extremely expensive and so on. But there are two reasons these things are still being pushed. To be clear, there are some smaller AI things that are profitable, but in terms of this very powerful, frontier kind of stuff, there are two main reasons. The first is the concept of venture capital, where if you don't have profitability, this is fine. You're supposed to lose money. What's important is how many users you have, how much revenue you have, how quickly you're growing. Because the venture playbook generally is you burn tons and tons and tons of cash to get to an extremely dominant market position.
[00:09:42] Connor Leahy: And then once you're extremely dominant in the market, you leverage your monopoly position in order to squeeze users and become ultra profitable. This is, you know, the Google, Facebook, Uber playbook. So part of what's going on here is that we see this super venture-capital-driven market. There's a second component of this which I think is worth talking about briefly, which is the ideological component. A lot of the people who live in Silicon Valley, who work at these companies, who build these things, not everyone, but many, many of these people, including many of the CEOs, for them this is a religious thing. It's not just a technical or a business thing for a lot of these people. They're building AI and AGI because they want to live forever, because they think AGI will be like God. It will solve all humanity's problems. It will create world peace. It will, you know, end world hunger. It will cure all diseases. It will cure cancer. And everyone will live happily ever after. This is a thing that a lot of these people truly believe. We can argue about whether this makes sense or not, but it's important to understand that a lot of these people really do believe this. They want to upload their brains to computers. They're transhumanists, and a lot of them have got a lot of power and a lot of money to try to fulfill this vision, because people now feel that it is maybe within reach, within striking distance.
[00:11:05] Dr Miah Hammond-Errey: Can you outline what you see as the biggest threats and maybe opportunities of AI for security?
[00:11:12] Connor Leahy: Yeah. So the way I think of AI, very much as a general purpose technology, is that it is the ability to mass-produce cognition. In the past, if you wanted cognition, if you wanted intellectual labor to happen, you were limited to a certain degree. There was a kind of fundamental difference between capital and labor, where labor was a very specific kind of capital: you need people, and they have rights, and they need food and they need breaks and things like this. That is unlike other capital, like machines, where you just buy them and you just use them. And as AI goes towards AGI, labor becomes capital, in that you can convert capital into intellectual labor and, once robotics is figured out, physical labor too. And so you get this great liquidity, where you can just buy more cognitive labor for arbitrary things. My question is, what changes if the price of labor goes to zero, or towards zero, if it becomes a commodity? And this changes many, many things. It makes many kinds of attacks, and many kinds of systems or defenses, unviable. In the past it was fine to have an okay lock on your physical door, because how many thieves are going to try to pick your lock? Not that many.
[00:12:44] Connor Leahy: You know, so it doesn't have to be resistant to the best lock picker in the world. But if you have a server online, that's very different, because the best hacker can just scan all computers. They can just send millions of hacking attempts. A lot of systems have this shape where attacking is hard, it takes time, you need to hire people. Imagine if you can now have 24/7 workers at this level of skill, and you can have thousands or millions of them running digitally, you know, designing attacks, exploiting infrastructure, producing malware, misinformation, etc. This completely breaks the economics of attack versus defence. There's a big question about the offence-defence balance, and the answer is that offence is always cheaper than defence. Defence is hard. You're trying to maintain a complex system. To succeed as the blue team, to succeed as defence, you have to ward off every single attack. As the attacker, you just have to get lucky once. So it becomes much easier to cause damage than it is to defend against it. It's much easier to release a virus somewhere than it is to deploy a mass vaccine. As we build towards more and more intelligent and agentic systems, the systems themselves become actors.
[00:14:01] Connor Leahy: Once you build sufficiently powerful software that can make its own decisions, that can reason, that can learn from its environment, and that you don't control, that you don't understand, you are creating insider threats. You are creating systems that can optimize and take actions and do things that you might not be able to understand, simply from the complexity. It's already the case that if you're managing a very complicated cyber-physical security system, it's very hard to understand what's going on in many cases. A very important thing to understand about AI is that it's not really software, not in the traditional sense. Traditional software is written by humans. Humans sit down, they type code, and they tell the computer step by step what to do. This is not how AI works. It's very important to understand that neural network AI is more grown than it is written. You have these huge piles of data, and you use these big supercomputers to grow a program to solve your problem. But we don't actually know how these programs work internally and what they do. And we see all kinds of extremely strange and anomalous behavior from these systems already. We're going to have more and more strange systems that have strange, unexplainable behaviors in weird edge cases, or that can coordinate or work together in various ways, potentially without a human in the loop.
[00:15:20] Connor Leahy: We're not quite there yet, but we're getting really close. So the problem, I think, is pretty clear and understandable: there is catastrophic, systemic risk being created by deploying systems that can themselves become insider risks, that can become agentic, or that are being made to be agentic. And this can also lead to an arms race where, you know, your adversary might deploy some kind of agentic AI system that they don't really understand, but it's really competent. And every time they try to make it understandable, that slows it down. So you want to take off more and more of the guardrails, because that makes it faster and faster and stronger and stronger if you don't have humans in the loop. We're already starting to see this with drone warfare systems taking humans out of the kill loop. If your enemy is taking the human out of the kill loop decision, they have an advantage, because their system can react faster than a human can. So there's this deadly spiral towards taking human supervision out of the system and empowering these automated decision-making systems to make these decisions. And I think this is the kind of systemic way you get towards catastrophe.
[00:16:30] Dr Miah Hammond-Errey: You've mentioned control quite a few times. Do you think control is possible?
[00:16:36] Connor Leahy: I think control is possible in theory. I don't think there's any law of physics that forbids us from building systems that do what we want. I think it's just very hard. Humanity is just not very good at building complex software. And it's not just humanity's fault; software is complicated. Building a complex cyber-physical control system is just very, very hard, and it's not something we've been doing for a very long time. We've only had software engineering as a discipline for like 70 years. It's still a relatively young science and a relatively young engineering discipline. We're still learning so much, and we are getting better. We have learned many things about how to produce formally verified software, about how to produce memory-safe software and efficient software, better programming languages, better compilers, and so on. So when people ask, well, why do you think AI will not be safe? I'm like, wow, you think humanity, faced with the biggest software engineering task ever, using techniques we don't understand that are non-auditable, being done by all these actors in a racing environment who don't care about safety and are in an adversarial race with each other, will somehow produce the first piece of bug-free software in history? It's just not how things work.
[00:17:50] Dr Miah Hammond-Errey: So, a utopian dream.
[00:17:52] Connor Leahy: It's utopian. It is ridiculous. This is just not how real things work. It's nice to imagine. Sometimes people are like, well, but we could do it. And I'm like, yes, we could, but we won't. The same way we could produce formally verified, you know, operating systems or consumer software that is mathematically proven to be unhackable. This is a thing the military has done, especially the US military, like mathematical verification of the safety of their helicopter control software or airline control software and things like this. And it does work. It's just unbelievably expensive. It takes like a decade and costs hundreds of millions or billions of dollars to do for even a moderately complex piece of software. It's doable. It's just extremely hard, and the same thing applies here as well. I think there's no physical reason we couldn't. I just don't think that's what happens by default. And this is a political problem. It's an economic problem. It's not just a technical problem. The thing that makes helicopter control software possible is huge, sprawling institutions: scientific institutions, academia, bureaucracies building grindingly slow, tricky theoretical and bureaucratic processes over decades. This is how things work in the real world. And this is just slow and expensive and stupid, and it is not what's going to happen by default. But it is possible. By default, if you build computer systems, they will be sociopathic. They lack empathy. They lack a lot of basic human emotions and social pressures, and by default machines lack that as well. While ChatGPT might seem really nice and friendly and emotional, it's important to understand there are no emotions there. It is a simulation.
[00:19:40] Dr Miah Hammond-Errey: I'm going to pivot and go to a segment. What are some of the interdependencies and vulnerabilities in AI that you wish were better understood?
[00:19:49] Connor Leahy: I think the main one is just the interdependence of technology with everything else. This goes beyond just AI; it's technology and politics in general. I think from the humanist perspective there is more appreciation of this interdependency, but the interdependency from the technology side, I feel, is underappreciated. Often they don't really think as much about: if I succeed, what happens? This is the thing I see a lot when I talk to AI people. They talk super excitedly about how they're going to make their AI system remember things better, or be more agentic. And then when I ask them, okay, but what if you succeed? What if it works? What if you build human-level intelligence? Then what? They just look at you with a completely blank stare. They've just never thought about that. But all these things are interdependent. Of course they are. A good future is a political problem, a civics problem, as much as it's a technical problem. This interdependency, but also the one between what people believe is possible and what is possible, is very, very important. One more subtle thing: the ideology of people who think we can live forever, who think that AGI will get us there, is a very strong driver of these things.
[00:20:54] Dr Miah Hammond-Errey: On AI safety, what were your thoughts on the Californian governor's veto of SB 1047?
[00:21:01] Connor Leahy: The first word that comes to mind is predictable. I do think this was extremely predictable just from following where the money was going. In this case, I think it was pretty blatant.
[00:21:14] Dr Miah Hammond-Errey: Shut down protocols for critical harm are really important?
[00:21:18] Connor Leahy: I mean, it's kind of crazy that we're even having this debate, right? The bill is basically saying, if you cause catastrophic damage, if you cause more than $500 million of damage, you could be liable. That's what it said, and that's what we're arguing about. Really. What these companies are saying is that they want to have their cake and eat it too. They say our system could never cause so much damage, but also, don't regulate it, don't make us liable if it does happen. So which one is it? Either the technology cannot cause this much harm, and then you shouldn't care about regulating it, or it can, and then we should have a really serious conversation about regulating it. I think any powerful technology that can cause $500 million plus in damage, and I think it can cause way more than that, or kill people, should be regulated, whether it's through emergency shutdown protocols, liability, many things like this. This is just obviously the case. Software has wriggled its way around normal amounts of regulation. If we just regulated software and AI the same way we do airplanes or nuclear power, I think we would mostly be fine.
[00:22:38] Connor Leahy: Fundamentally, any civilization, any society that truly grapples with technology has to grapple with the fact that software is an integral component of a functioning society. In a sense, software is often more important than the physical side of things. Software controls your life and my life in extremely important ways. And so the idea that something that is such an integral and huge component of society should not be regulated and integrated well into society, and should just be allowed to basically tumultuously grow however it wants into whatever dangerous aspects it wants, is just utterly fallacious. Now, I'm not saying that SB 1047, the California bill, is necessarily the perfect way to do it, or that shutdown protocols are the perfect way to do it. We can argue about that, but it's something. It's such a minimal attempt that the thing I find most interesting about SB 1047 is how much it baited out just how bad-faith these lobbyists and these people really are. They really showed their true colors.
[00:23:51] Dr Miah Hammond-Errey: We've spoken about this before, but can you just briefly outline whether you think an AI Safety Institute model would work for Australia?
[00:24:00] Connor Leahy: Yeah, I absolutely think it would. I have questions about the AI Safety Institute model. Is it the best possible model in all possible worlds? Probably not. Is it a lot better than nothing? Yes, absolutely. If you have a problem, you do want some institution whose job it is to think about that problem. This is generally a good thing. And now there is a burgeoning international scene of these various institutes that are coordinating on an international scale. They talk to each other, they know each other, and so on. I think this is very good. This is an international problem. It is also a national problem. It's a very important national security issue, but it's also an international security issue. And one of the most important things for starting to address international security issues is that you have to open dialogue. So one of the great things AI safety institutes give us is this mechanism. They produce a natural network of people plugged into the government, plugged into the technical side, who can communicate and coordinate with each other. It's something from which other things can burgeon and grow. So I have some critiques of some of the AI safety institutes; some of the research, I think, is maybe not the best or could be done differently. Sure, whatever. Literally, who cares? The important thing is that there need to be open lines. There need to be people thinking about these kinds of things, people whose job this is, looking forward to the future to actually engage with these things, and also building the government capacity to think about these issues and make tactical decisions.
[00:25:32] Connor Leahy: That's not to say that there aren't other parts of the government that should also be thinking about all of these kinds of things. But so far, at least in terms of joining the international community in addressing these things, I think Australia would be a wonderful country to have join. I think Australia specifically has a lot of unique things going for it. Australia has a long history in addressing catastrophic risks, from climate change to nuclear to biorisk and pandemic preparedness. Australia has this really great culture, which I quite admire, of diplomacy, of thinking about diplomacy both to the West but also to the East, towards China and so on, of thinking about how to manage these kinds of relationships, how to address catastrophic risks. Australia also has, quite frankly, between you and me, a quite functional government compared to some other Anglophone countries, you know, with preferential voting and mandatory voting. That's amazing. Like, wow, you did it. Everyone keeps talking about it, but Australia actually has it. I've been very, very impressed with the Australian civil servants and politicians I have talked to in this small domain that I'm aware of. I'm quite impressed by people caring and being willing to engage with this issue. And another thing is that Australia does not have the same kind of lobbying that exists, for example, in California, where these tech lobbies have a stranglehold over politicians.
[00:27:06] Connor Leahy: And so I was talking to the eSafety Commissioner in Australia, Julie Inman Grant, and I was just very impressed. Australia has laws about, you know, misinformation, hate speech and so forth that it has tried to enforce, and it actually went to bat, actually went to fight, and got absolute hell from Elon Musk and X, with companies threatening and yelling, saying, we're going to pull out of Australia, we're going to, you know, ruin your whole economy. Let's call that what it is: threats, blackmail. Australia has done a really good job of actually standing up to this and also thinking, wait, hold on, this isn't okay. We can't have these foreign companies that are under a different jurisdiction telling us what laws we can enforce in our jurisdiction. This is a crazy threat to national sovereignty. And this applies to many countries, including the US, by the way. I think the US is also being limited in what it can enforce within its own national jurisdiction by these companies. But Australia, I feel, has this sense that this is wrong and we need to do something about it, and has the guts to actually do something about it. And also, of course, it is a highly educated populace, an extremely competent, rich country, extremely well respected, a great ally and a Five Eyes member. There are a lot of things going for Australia, and I would just really, really love to see more of the Australian perspective and the Australian voice in these conversations.
[00:28:45] Dr Miah Hammond-Errey: Thank you, Connor. We've just kind of segued perfectly, because I did want to ask you about power and technology. What are your reflections on that, particularly on frontier models being centralised in a handful of companies?
[00:29:01] Connor Leahy: A long time ago, it feels like, we thought the internet would make us free. You think back to the Arab Spring and things like this. We thought that, wow, if information is just free, if we just don't regulate, then there will be this plurality and democracy will flourish and people will be free and get their information everywhere. And this is not what happened. This monopolization, this concentration of power, is not new to AI, but we're seeing it happen again, this time to an even more extreme degree, where this monopoly on power, on data and also on computing power decides who gets to make these decisions. If the companies have smarter AI systems and stronger AI systems, and everyone is dependent on using their hardware to build the AIs, then no one is free. It just concentrates the power more. They get even more decision-making power, even more control.
[00:30:02] Dr Miah Hammond-Errey: Does the current state of play blunt our ability to wage war independently, as we're so reliant on companies?
[00:30:11] Connor Leahy: Absolutely, yeah. The US military apparatus cannot function without the private market. Now, this might be fine, right? Part of the reason the American military is so powerful is because it can leverage the private markets to get the best talent and the best competition and so on. So there are reasons why this is the case, but there are very, very concerning facts here. You know, is Lockheed Martin going to pull out? I don't think so. Will Google or Microsoft? That's a bit scarier, a bit more uncertain. It's even worse for other countries, which are extremely dependent upon software supplied by mostly American and Western developers and suppliers. And as I said, software is deeply integrated into society and is as important as hardware. The thing is, Microsoft can pull back their software, including deployed software, to a much larger degree than you could pull back, say, deployed hardware. To a certain degree, you can take Microsoft Windows back. Not perfectly, but to a certain degree. Or at least you can insert a back door. You know, if I was Russia, I would not be running Microsoft Windows on anything important. I think the US has the least to be concerned about here, but it should still be concerned. Other countries should have massive concerns. Again, it's a sovereignty issue: governments cannot function, to varying degrees, without software, without Microsoft Office. And so this is a big, big interdependency. These companies operate to a large degree like nation states in many capacities, and have geopolitical power comparable to or even in excess of many nation states.
[00:31:55] Dr Miah Hammond-Errey: We've recently seen US concern around Chinese EVs with the proposed ban, as well as national security interference and data collection risks. Do you see these concerns shifting into other technology areas and becoming a greater part of this kind of software challenge?
[00:32:13] Connor Leahy: This is very complicated, obviously, but there is obviously a Cold War, or at least an economic war and an informational war, at this moment, especially between China and the US, but also, even more directly, between Russia and the West. In the modern world, if you wanted to do warfare and prepare for war, a massive part, or even the main thing you would want to do, is prepare cyber capabilities. You would want to deploy deep-seated, you know, logic bombs. You would want modified hardware, cyber-physical systems that you have backdoors into, and things like this. And the thing with cyber is this correlated loss, where you can just turn off every single thing all at once. This is not the case with physical systems generally. In kinetic warfare, you might be able to destroy a certain building, but what if you can turn off every building, or every building of a certain make, no matter the physical distance? This is a very different risk profile. And to talk about how complex cyber and cyber-physical systems are: it's just impossible to secure these things. They're too complicated. An EV is so complicated. There are so many parts, there is so much software, there are so many computer chips, and there are so many ways you can backdoor these. There is simply no way anyone could convince me with 100% probability that an EV I was sold is definitely secure and hack-proof, including from the US, to be clear.
[00:33:50] Dr Miah Hammond-Errey: Going to a segment on alliances. How does AI create opportunities for alliances and collaboration?
[00:33:57] Connor Leahy: I think it's very important to never forget that this is a humanity issue. It's not just an Australia issue or a US or China issue. It is a thing that affects all of us. Whenever people talk of international treaties, everyone kind of chuckles politely, you know? But I think this undersells how much it is possible to work together at larger scales, how important this is, and how it can align people's interests. Long before human cloning was possible, scientists figured out it should be possible, and they came together at conferences, multiple conferences, and they found, wow, this could be extremely disruptive to society. Seems like a big risk. We should at least have a moratorium and not do this until we've figured out how we want to handle it. And people took this very, very seriously. People were very concerned about this. And so they came together and they built an international treaty: we don't clone humans. This is a massive success. This is a huge story of how humanity came together, decided this is not what we want for the future of our species, and then we actually didn't do it. This is incredible. And I think AI is an opportunity for us, across the East and the West, kind of under duress, but also an opportunity, to come together and ask: how do we want to set our rules of engagement? How do we want to see the 21st century play out, and what do we not want to play out? We did it also with banning weapons of mass destruction: biological, chemical, nuclear.
[00:35:34] Dr Miah Hammond-Errey: I totally agree, and I think it's really important to take that moment and say, this is also about inspiring people, right? It's about inspiring our friends, our neighbors, our colleagues, our countries: imagining a strategic future that we actually want to live in is really important, and working towards that is something we need to do together. Let's go to another segment. It's called disconnect. How do you wind down and unplug?
[00:36:05] Connor Leahy: Um, I basically do three things in my life. I work, I eat good food, and I play Dungeons and Dragons with my friends. That's my big hobby, tabletop role-playing games. I design my own and write my own rule systems, stuff like that.
[00:36:27] Dr Miah Hammond-Errey: We'll go to another segment. It's emerging tech for emerging leaders. What do you see as some of the biggest shifts for leadership from the introduction of new technologies?
[00:36:37] Connor Leahy: I think leadership is very important right now. One of the main things I've found in general is that as technology progresses and as more emerging technology emerges, things accelerate, things move faster, and as things move faster and become more uncertain, leadership becomes more important, not less. It becomes more important for leaders to be able to take decisive actions, to take responsibility, to take ownership of what decisions get made. So as technology progresses faster, I think it's more important that leaders step up and take responsibility for understanding what's going on. If you don't understand it, find out who to talk to. Do something, you know. Figure it out, make a decision and actually push for it. Actually talk to other leaders. Talk to other people. It's a generic answer, I'm afraid.
[00:37:25] Dr Miah Hammond-Errey: Coming up is a segment called Eyes and Ears. What have you been reading, listening to, or watching lately that might be of interest to the technology and security audience?
[00:37:33] Connor Leahy: So this is probably not a new recommendation, but I have recently, actually just finished, Yuval Noah Harari's new book, Nexus. I thought it was really good. It falls apart a little bit in the latter half, but the first half of the book is some of the best stuff I've read in a very long time. It's about information: what is information, and how does it shape history and human society? I highly recommend it. I don't quite agree with all his suggestions in the second half of the book, I think they're a bit simplified, but overall I highly recommend it.
[00:38:10] Dr Miah Hammond-Errey: And the final segment is Need to Know. Is there anything I didn't ask that would have been great to cover?
[00:38:16] Connor Leahy: That's a great question. I mean, we could have gone a lot harder into what should be done in terms of regulation, how much time we have, the technical things and so on. Luckily, probably by the time this is out, a document that my colleagues, my friends and I have been working on will be available, which goes into great detail on all of these things from our perspective. I think it would be of great interest for people here to read. If it's out, hopefully I'll have a link to share.
[00:38:46] Dr Miah Hammond-Errey: I have one final question for you. What is the great future that you imagine with tech?
[00:38:51] Connor Leahy: So the boring answer, of course, is that I don't really know, because I don't really know how the future works. At the very minimum, we can create a better humanist future. But the true answer, the true meta answer to this question, is that I don't know, and I don't think I should get to decide the true future. The thing I really want, more than anything else in the world, romantic as it is, is choice. Consent and choosing are very core to how we think about ethics. I want people to be allowed to choose what they want, and I also want people to be allowed to make choices that I disagree with. The ultimate liberal dream of, you know, people having their own pursuit of happiness. I think it's doable. It's just very hard.
[00:39:34] Dr Miah Hammond-Errey: Thank you. Connor, thank you so much for joining me today.
[00:39:38] Connor Leahy: Thank you as well.
[00:39:41] Dr Miah Hammond-Errey: Thanks for listening to Technology and Security. I've been your host, Dr Miah Hammond-Errey. If there was a moment you enjoyed today or a question you have about the show, feel free to tweet me @Miah_HE or send an email to the address in the show notes [drmiah@stratfutures.com]. You can find out more about the work we do on our website [stratfutures.com], also linked in the show notes. If you liked this episode, please rate, review and share it with your friends.