[00:38:42] Dr Simon Longstaff AO: On the issue of trust, we can talk about big data, but we have to also consider the role that AI plays. And although there have been some recent breakthroughs in this, until today we still don't have what is called explainable AI. We can have data sets which are being interrogated. We can know that they produce results, but providing the specific mechanism by which that result was achieved — in machine vision and a whole lot of other things — we cannot do that. So it's not like the old Boolean logic, where you could follow it step by step and you could say, well, this happened and that happened. It's not like the old human intelligence, where you could say, these are the steps that were taken and this person is responsible. So in that world where we can't explain what's happening inside the black box — all we can explain is why we used the box and what we were hoping to achieve — how do you think that's going to affect trust, not just in specific agencies, but in the intelligence process, to the extent that they rely on this? And, if it's not too complicated a question, is it that problem that is driving those you interviewed to be so sure that there must be a human in the loop, as often as they seemed to say?
[00:39:51] Dr Miah Hammond-Errey: In one section of the book I talk about the unautomated, and I really talk about the fact that many participants said there are just huge intelligence processes you cannot automate, and there are huge parts of human intelligence that you can definitely target better with big data sets. You can do lots of things using data, but ultimately there are some things that analytics and AI can't do. I actually got a sense from my research here, but also other research I've done subsequently, that they think about these things deeply because they're committed to the democratic process. I mean, they're driven by service. This is a career people choose that they can't talk about, and so they're driven by trying to improve something. And so I really did get the sense they thought about it deeply because they were passionate about it, and because, like Paul said, there are lots of conundrums. I mean, I'm hoping, you know, shortly you can draw one of those out — some of the really difficult ethical challenges in the actual practice of intelligence. I mentioned the concept of ethics at scale earlier, and the idea that we are consciously or unconsciously coding individual human decisions ahead of time using algorithms. And obviously we see this playing out with autonomous vehicles, for example. This is a shift from our current state, where individual humans are actors and respond in the moment. What do you think of this? And do you have any good ways of looking at this approach?
[00:41:14] Dr Simon Longstaff AO: Well, first of all, I think the issue of responsibility when it comes to AI and other things is of the same order as a concept we already embrace: the notion, somewhat weakened of late, of ministerial responsibility. We expect an individual Minister of the Crown to be responsible for everything that is done within their department, even though it is probably impossible for them to know every single decision that's made. And it's a fiction that we must rely upon in order for our democracy to work — at least the kind of representative democracy we have. We cannot have the executive held accountable by our Parliament unless a member of the executive will be responsible. They can say, "Oh, I didn't know, I wasn't told," but the system requires them to be responsible, and for us to accept that. Even though it's not literally true, it must be institutionally true. And I think the same applies to this structure that we've got now with new technology — big data, AI and the other things — where you cannot literally know every step that's taken, particularly when you start to put the human together with the information. What's the intersection of these things? How do we then hold on to the concept of responsibility in those areas as well? So that's where I sort of start with that particular problem, and possibly also where it needs to end as well: in a democracy you've got that sort of system in place.
[00:42:43] Maj Gen Paul Symon AO: Back in 2014, I wrote a piece with Arzan Tarapore on emerging big data, as we saw it in 2014. And I came at it as the director of the Defence Intelligence Organisation, from this perspective: when I'd walk around the workplace, our people were spending so much time fighting for information — you know, systems weren't connected, and they were spending their day fighting for information. And the genesis of the paper we wrote was, you know, how do we take this further forward?
What does it all mean? Well, you know, the vision I had at the time, if you like, was purely a productivity gain. And that is that if you were a North Korea missile analyst, you could come to work, you'd be immersed in this little sort of module that would know that you enjoy a cappuccino, and at the same time as your cappuccino is coming through a slot, all of the information that has come in over the preceding 24 hours, triaged specifically for you, is presented to you on a plate — open source, secret intelligence, you know, all sorts of diplomatic cables. And they're all brought forward, all curated for your identity and for your position. So there were enormous productivity gains to be had eventually. Now, we're still not there. But first the...
[00:44:07] Dr Simon Longstaff AO: The cappuccino or the curated data?
[00:44:09] Maj Gen Paul Symon AO: Of course, the cappuccino first. We're Australians. You know, the interesting thing, I think, about artificial intelligence — and maybe I'm being idealistic here — is the potential to be able to generate for that North Korean weapons analyst so much information, so quickly, in their working day that potentially it gives them more time to actually check the bona fides, think about the application of that test, the assumptions that sit behind it, ask ethical questions — if we've trained and developed our people correctly to think not only about legality and propriety, but the ethical dimension of, you know, what they've received, what its provenance is, how it could be and how it should be presented to decision makers. So I know that there's a range of challenges here, but the optimist in me believes that, if we train our people properly, we're liberating time so that they can do what humans do best: apply careful scrutiny, that moral compass, over the sort of work they're doing.
[00:45:30] Dr Simon Longstaff AO: The great analogy is the medical practitioner who is able to rely on a diagnosis — from pathology and other things — that there is cancer, with a degree of accuracy greater than a human can have. But only another human being can put their hand on your shoulder and say, "I've got some terrible news. You have cancer," because they know what it means to be mortal, and the person being told that knows that they understand that. So I think your optimistic view is possible, but I'm going to channel Miah for a minute, because — well, you ask, what's the sticky case you can talk about? You know, a practical example of where this works. Is there one that you can?
[00:46:09] Dr Miah Hammond-Errey: I was hoping Paul had a sticky case for me.
[00:46:11] Dr Simon Longstaff AO: I was going to — I'm channeling you to him.
[00:46:14] Maj Gen Paul Symon AO: A sticky case of the application of data?
[00:46:18] Dr Miah Hammond-Errey: A practical example of an ethical conundrum in using technology and intelligence.
[00:46:23] Maj Gen Paul Symon AO: Of using technology. Um...
[00:46:28] Dr Simon Longstaff AO: Can you think of a case, which you can talk about, where the technology that was able to be deployed changed the ethical landscape — in other words, changed what might otherwise have been understood? Is that another way of reframing the question that you're trying to get to?
[00:46:44] Dr Miah Hammond-Errey: I think I was trying to draw out some real, practical ethical conundrums — the way that practitioners might actually have to approach and think about, you know, what some of those tensions are. So, briefly, some of the examples that participants spoke to me about were things like using machine learning to triage potential suspects — in this case they were talking about terrorist activity — where they were looking at how you might triage a huge list of thousands of people to look at. How might you prioritize that, essentially? And they were trying to come up with different machine learning algorithms, and many of them were talking around the boundaries of what they thought were okay. So they thought it was okay for a list to be prioritized, but then, as you said, humans would need to come over and do the critical reflection. From your experience, have there been situations — even if they're not technology-focused — where those ethical conundrums have had to be thought through by practitioners?
[00:47:51] Maj Gen Paul Symon AO: What comes to mind is the ethical dilemmas our people faced when they were operating in the Middle East. They were obtaining intelligence, either through technology or through human sources or the like, and the passage of that intelligence to a partner, you know, was potentially going to lead to kinetic action against them. Our people — I mean, we had very well-defined processes and steps that we needed to go through, given how important getting this right was. So we certainly had solid processes in place, but I would say, equally importantly, our people that were at the coalface of this were dealing with their own sort of ethical dilemmas, pairing their ethical compass with the work that they were doing. You know, in the time that I was director general, I came to Simon and the St James Ethics Centre deliberately for this purpose, and we brought an ethics counsellor into ASIS, which we still have to this day. And the purpose of bringing the ethics counsellor in there was that if a staff member did have an ethical dilemma, they could talk it through, address the issues, you know, and...
[00:49:21] Dr Simon Longstaff AO: Build the language to do so.
[00:49:22] Maj Gen Paul Symon AO: And build the language to do so. And I gave the staff the undertaking — I mean, I've said this publicly — that if, off the back of that conversation, they wanted to opt out of an activity, then I guaranteed that there would be no career detriment. It's that important an issue that this ethical dimension of the work we do has equal place with, you know, the sort of roles, tasks and responsibilities that government asks of us. So, yeah, I think that's probably the most contemporary example I can think of. I do think that a big part of the work that we do — because we're dealing with humans who are willing to betray the secrets of their country, and for us it's a lawful activity to act unlawfully overseas — causes our people to think very, very carefully about, you know, motive, other issues of the individual: their mental health, their state of mind in wanting to have a relationship with us.
And of course we employ operational psychologists and the like to help in that conversation about the nature of the relationship that we're forming, you know, with another person. So I guess that's code for saying that with human intelligence in particular, I think there is always going to be that parallel ethical framework that is considered by staff from most junior through to most senior — at the individual level, as you talk about in the book, but at the organizational level as well.
[00:51:19] Dr Simon Longstaff AO: So that's a very powerful thing — we'll call it an empathic response — when you've got a person in front of you. I just think about the sorts of things you then see in other areas of life, where people say the most terrible things online because they're anonymous, or they break up with people using a text message or something of that kind. And I just wonder whether or not the intelligence that confronts you with another person draws forth certain ethical responses which it's more difficult to sustain when it's just data or something like that. Did you pick up anything — and then we'll go to the audience for questions — about whether the anonymity of dealing with data and things deadens that ethical sense that would be there? Or is it still just as strong, because of that sense of purpose you described, and people know that even if they're dealing with a data set, it still has human impact beyond what's seen on the screen?
[00:52:16] Dr Miah Hammond-Errey: I don't know that it was necessarily as clear as that. But I think what came through for me was also just a sense of not enough time to do the job — so really focusing very clearly on purpose, and so not kind of drifting through data sets to try and find things, but really clearly trying to achieve quite a specific goal which has been set out, usually in legislation or, you know, organizationally, but, as Paul said, through very clear processes and procedures. I think what's interesting there — and we probably don't have time to talk about it now — in terms of the idea of ethics at scale and how we might think about new ethical processes being algorithmically defined, is what kind of questions organizations have actually worked through to create those procedures. Because — and it's a simple example — will the decisions of an Uber car versus a Google car versus a Meta car all have the same ethical decision framework? Presumably not. And they...
[00:53:22] Dr Simon Longstaff AO: Might have to. I mean, when we've done work on autonomous vehicles, one of the principles we came to recognize is you have to have a clear distinction between those who consent to be in the car and those who just happen to be the hapless pedestrian. There's not a direct symmetry. There's a greater responsibility owed to the person who never consented to be at risk than to those who did. And that has to be across all the car companies. I don't think you can sort of treat that as a relative obligation. It must be a general one.
[00:53:50] Dr Miah Hammond-Errey: I suppose when you move that conversation out to different nation states, when you're talking about autonomous weapons, for example, I think there are different approaches.
[00:53:59] Dr Simon Longstaff AO: Where do you think we draw the boundaries then? I mean, did you pick up any kind of general consensus amongst the people with whom you were speaking?
Where they said, this technology, you know, it can be wonderfully enabling, it can save us time, it can do all sorts of things — but this we would not do, that's just where we won't go?
[00:54:18] Dr Miah Hammond-Errey: I mean, I kind of want to draw on something each of you have said. You know, Paul talked really clearly about the line between intelligence and policy. It is this kind of mythical thing that has been there forever — it does exist, of course, but it has a real mythology around it, and it's been challenged extensively within the intelligence community. The idea is that intelligence work gets you to a certain point, but the ultimate decision to act on it is a political or policymaker decision. And, as I said to you at the beginning, many of the participants really clearly upheld that: when it came to the use of data and the use of technology in intelligence, practitioners very clearly communicated that they felt that line was a policy choice.
[00:55:03] Dr Simon Longstaff AO: But the reason why I wanted to press a little bit is because Paul also just gave us a very good example as to how, when he was director general, and in a particular context, those who were serving in the service could say, "Ah, but I won't do that." In other words, they had their own potential ethical red line they wouldn't cross, and there would be no adverse implications for them in terms of their continuing service. In other words, the ethical decision not to do something comes before the policy decision. Did you pick up any of those in relation to what people saw in the tech space? I mean...
[00:55:39] Dr Miah Hammond-Errey: I think ASIS has a really different purpose. The way that tech affects each of the intelligence agencies is actually really closely linked to their legislative framework and purpose, so the other agencies don't experience a similar kind of purpose challenge.
[00:55:54] Dr Simon Longstaff AO: Ah, yes — here I am thinking in ASIS land when it's much broader. Look, it would be lovely to provide an opportunity for people within the audience to ask a question. Would anybody? Yes, up the back there.
[00:56:07] Audience question: Thanks very much. That was very interesting. So I was just going to ask both Miah and Paul. There's a company called Brainwave Sciences that's got this product called AI Cognitive, and basically it's a neural device to enhance interrogations — to provide some more information about, um, neural data. And they market it towards security services around the world. So that might be an example of a kind of technology that raises some maybe slightly new ethical questions, privacy questions. And I know Miah's been thinking about this. My question is: of course, a decision might need to be made whether to employ such technology — it might not be much of an enhancement at the moment, but perhaps in the future it will become better — so how will security services go about evaluating such questions about whether to make use of that or other similar tech?
[00:57:21] Dr Miah Hammond-Errey: I am thinking a lot about neurotechnology. One of the things that I see as inevitable for most intelligence organizations, in terms of the adoption of technology, is that once a technology is broadly accepted by the wider public, at some level it becomes more acceptable.
And when I think about the level of data collection from private companies — and particularly, obviously, the big tech players — historically, even a small component of that data collection by intelligence organizations would have really raised red flags. And yet now we're looking at mass data collection and surveillance and targeting capabilities in the private sector. So we are seeing adoption of technologies in the public before they're adopted in intelligence agencies — not in totality, of course; there are definite research programs, and I definitely look at places like DARPA and the CIA leading the way in those kinds of pilot examples. So I guess that's a long way of saying I absolutely think that intelligence agencies, like companies and individual members of the public, are going to use new technologies that come up. They do have a different ethical decision-making framework, I think, than you do as an individual. As an individual you can think, you know, do I want to consent to this, or have I already lost the right to my privacy — which many people would, you know, kind of lament. I do think intelligence agencies, of course, are going to try, like the rest of us, to access new and emerging technologies to try different things.
[00:58:58] Dr Miah Hammond-Errey: I did see in my research a real desire to do that in a way that reflects the Australian commitment to democracy, the kind of values that we have. Does that mean they get it right all the time? Absolutely not. And I do think that every agency is going to have a different framework for making those kinds of decisions. You know, Paul talked before about the kind of things that the AFP might be interested in versus ASIS versus ASIO. They are legislatively different, so they have different interests in using technology. I think it would be remiss of them not to explore technologies, just the same way that a company would be interested in using technologies. You know, I'd probably throw to you, Paul, for a more granular thought about how those decisions might actually progress.
[00:59:45] Maj Gen Paul Symon AO: The arrangement we have at the moment, I don't think, is quite fit for purpose in keeping track of, and surveillance of, all of the emerging technologies. I think some agencies are better at this than others. As a general rule, a number of Australian intelligence agencies maintain relationships with sort of venture capital and startups, because that's right at the forefront, and they're looking at technologies with a view to: which technologies can we use to protect ourselves, defend ourselves, give ourselves more resilience, and which can we use for the purpose for which we've been designed? So we do that, but it's done pretty quietly, and I think not very comprehensively. I think we are fast reaching the point — because of the heft of the intelligence community now, but also the speed with which technology is coming on, like you talk about — and this sounds a bit pedestrian, but just as we've been used for many, many years to having defence trade shows and the like, you know, the navy shows and army shows and all of that, where these players come together and use it as a forum to display technologies, emerging technologies and the like — I actually think that the Australian intelligence community is building the heft which the US and the Brits and other countries have.
I actually think we ought to be trying to set up a situation where some sort of show can be put on and we can actually display these things, and we can quietly get some of the analysts that are much younger than, you know, me, actually looking at some of these things and having quiet conversations about their utility going forward.
[01:01:36] Audience question: Can I bounce off that conversation about industry? I want to use an example where, I think up until recently, industry has been providing the technology that empowers the intelligence agencies. What we saw last year were examples of where they've gone beyond just providing the technology to now being the collector and the analyzer of information. One of the examples that was picked up a little bit by the media last year was around facial biometrics — basically scraping the internet of all the Australian faces against, you know, their names, and then selling it. Now, the Office of the Australian Privacy Commissioner took exception to that but wasn't really able to enforce any change, because this was an offshore company — their servers were offshore, so it was outside their jurisdiction. What prompted me, in the discussion that we're having here, is that here you have this case: a US-based company that was building a business model around scraping the internet — so, in the language you just had there with Paul, the collection — using AI to then identify the face and attach it to a name, and their business model was to sell it solely to law enforcement and national intelligence. It then prompts me to go: all of a sudden, now we've got a software-as-a-service player outside Australian jurisdiction, potentially empowering other foreign intelligence agencies — you know, some of our existing foes and the like — to actually almost leapfrog us. How do we see that playing out, and what are our abilities to rein in some of those ethical considerations where they are outside our jurisdiction?
[01:03:21] Maj Gen Paul Symon AO: That's a really difficult question.
[01:03:25] Dr Simon Longstaff AO: Gnarly question.
[01:03:25] Maj Gen Paul Symon AO: Because what distinguishes — I mean, you're hinting here at the fact that we're potentially, in the future, going to be operating, you know, with our hands tied behind our back compared to others offshore who have far more freedoms than us. But I guess, channeling Simon here and what we've learned from him over the years, we have no choice but to act legally and with propriety. And if the margin widens — in other words, if we're getting left behind — there is clearly a conversation we would have to have with government; director generals would need to have it with their ministers and the like, to sort of say, under the current rule set, we're falling behind, and we know that there are some very interesting and difficult ethical issues that we have to cross here, but ultimately it falls to the government of the day, and its own risk appetite, to adjust the sort of settings and the rules by which we abide. And, you know, you're right: I think there are examples where we don't have the freedoms that other players offshore have. I don't know what the answer is, but it is our obligation to be across what's going on. It is our obligation to ensure that, in our own national interest, we can deliver for government the best intelligence that we possibly can, and, if we're falling behind, give them options as to how we might sort of reduce that shortfall. I think that's...
[01:05:15] Dr Simon Longstaff AO: A delicate balance, because particularly for liberal democracies — if you go back to the world of the use of kinetic force in an asymmetric situation, you can absolutely bring about a desired effect, but the cost of doing so may be that you lose your moral authority. And there's a wonderful chapter in the US Army and Marine Corps counterinsurgency manual, the chapter on ethics, and it's got a picture, actually, of the French in Algeria. And it says: lose moral authority, lose the war. That's the kind of lesson. So what you might pick up in terms of technical proficiency by, say, buying into that — if it becomes disclosed, because it wasn't adequately managed and thought through in the way that Paul was saying — that's when you start not merely to threaten trust, but also potentially legitimacy. And as I said, you can lose a certain amount of trust, which can be offset by other deadweight costs, but once legitimacy is lost, it's a broken thing. And so I think, as Paul is wisely saying, if you are encountering a serious deficiency in your own capacity, such that it threatens the purposes for which, say, an agency exists, you've got to go and start to think about how we manage this. Is it a new inflection point we introduce, which re-tilts the table and does away with our disadvantage? Or is it the honest conversation that brings those whose moral authority you have to preserve along with you to that point? But simply to say, "We're getting left behind, let's do it or else," I think is a really dangerous position to follow. And I don't think anyone would be foolish enough to do that.
[01:06:57] Dr Miah Hammond-Errey: I do also think that governments are trying to connect up with other alliances to reduce that kind of grey space of operation, if you like. So, particularly with the recent cyber autonomous sanctions and the recent dual announcements about the Lottle cyber attacks, you are seeing — in this case — the Five Eyes, but in the future I think also places like ASEAN and India, coming together to say: we won't accept that kind of activity in our jurisdiction. And I would say that many of the cases — the case that you're talking about, and future cases like it — will kick-start legislative reform, because I think it has highlighted that there are significant gaps where players are operating in a grey space that we're actually not comfortable with domestically, but we're still steps away from that. And I'd say, when I started researching big data — and I said I researched big data and national security — people were like, "They're connected?" And now, you know, it's a really easy conversation to have. So I think it just takes time as people pick up that there are actually a lot of peripheral areas still unmanaged.
[01:08:09] Audience question: Thank you, everyone, for a very interesting discussion, and congratulations to Miah on the book. I really like the theory, actually, that no one would ever do a PhD if they knew what they were getting into — I think I can relate to that. But my question is: you said, you know, now big data is everywhere, and there are some people who say that, in actual fact, what we should have is a separate open source intelligence agency — you can do everything with open source, you could potentially even outsource it to a commercial provider. And I guess I'm interested in your views: is the future of intelligence open source, or is there still a role for secret intelligence, and if so, do they need to be tightly coupled or should they be separated?
[01:08:38] Dr Miah Hammond-Errey: I know Ben Scott has written a paper on this recently, and I just want to say I think it's a big call to make a request for a new intelligence agency, and he's put a lot of work into that proposal. So, you know, I do respect the kind of intellectual rigor behind it. I think there are some challenges. Firstly, the vastly greater volume of information globally doesn't render secret intelligence less important; in fact, it places a premium on what you choose to collect and keep secret. The challenge of outsourcing all of that research to another agency, in my mind, at least in the early days, creates many bureaucratic challenges. There is a kind of challenge here about: do you outsource all of this, or do you try to build it in? And there are technological and bureaucratic challenges with building it in. In the book, I talk a lot about how technology is going to change the role of the analyst in intelligence, which is kind of what Paul was talking about before. The analyst is currently very much the driver of the analytical work, and in the future you'd see a lot more technical capacity driving analytical capabilities. So in that sense, maybe you would have a much broader, perhaps open source, kind of technology that is not outsourced but is actually available to everyone. Personally, I think that what is kept secret needs to be kept secret — you keep that premium on it — and then you allow, I guess, a greater consideration for the fact that there is just more information in society. You know, it is possible to know or infer many things now; just because that wasn't possible 50 years ago doesn't change that fact.
[01:10:20] Dr Simon Longstaff AO: One of the interesting factors in this is that there's a growing movement and a growing technical capacity to support individual data sovereignty, where you will hold your data, and it will only be released with your permission for specific purposes. It will go through exchanges that validate the data without necessarily tracking back. And so I think a lot of the big data sets will begin to shrink to that degree, and the open source won't be there. And so there'll be some very interesting policy questions. Are there backdoors for these things, so that certain people, under certain circumstances, can bypass the data sovereignty of the individual? Or are there other ways in which you covertly fish out what you need? So I think the balance at the moment that we see around big data and open source and all the rest is going to shift quite profoundly, and that's going to have implications for the whole intelligence network as well, whether...
[01:11:17] Dr Miah Hammond-Errey: It's intelligence or our own phone data. It's cheaper and easier to exist on the current model. So protecting our data differently is going to cost more.
[01:11:29] Dr Simon Longstaff AO: My view is that even a country like Australia, for political reasons, will enable this as a political agenda. In other words, the cost of it will not be so burdensome as to deny citizens that opportunity. So I think this is going to be an interesting shift that is driven by politics and democratic pressure. Look, we have time for one more question, I think — has anyone else got another angle they'd like to explore? Yes, down the front here, please.
[01:12:08] Audience question: Fantastic discussion. I've just got a question — I'll try and paint a scenario, because I think the question is: if you've got access to data in one agency, can another agency use that to build some kind of AI model? So, for example, one agency's got all of the recordings of people from a particular ethnic group. And then another agency says, I'd like to be able to translate language from overseas citizens — I'll use that data to train a model, and I will take the weights and be able to then interpret language. Does that end up with an ethical obligation? Because the second agency hasn't violated the privacy of the first people, but you've taken away the characteristics from those people; you've essentially used that data. You might do this with health data: you might take all of the biometric data and learn the patterns of people, how they behave, and then use it to sort of go after a different set of people who are not bound by the same kind of controls. Is that one of the ethical problems you might have in intelligence sharing, where you're not sharing the data, but you're sharing the insights?
[01:13:08] Dr Simon Longstaff AO: A general point before they answer: the work we've done with a number of organizations sees the responsible or ethical use of data — the principles — applying irrespective of the source. It doesn't distinguish between open source, public or personal data; the obligation applies across all of it. So that's one particularity. That's just a general kind of thing about how one can think about that.
[01:13:35] Maj Gen Paul Symon AO: I mean, the way I... Firstly, you've got a very fluid imagination, and I applaud you for that. But I think, if I think about my own responsibilities when I was in the job, it was incumbent on me and my people that if I was doing work of the nature that you're talking about, I've got to satisfy myself that, firstly, it's within the purpose of the organization. And, to your point, if I'm then going to share — which I would frequently do with different data sets, depending on what the nature of the issue was, because, you know, we are a community, and we share information to assist other agencies with their purpose — it's incumbent on me, before so doing, that I satisfy myself that I understand for what purpose it's being used. And, you know, it becomes a conversation between two agencies, whether it's at the senior level or more junior levels or whatever, recorded for governance and accountability reasons, given the way we've designed ourselves as a community, so that the inspector general can satisfy herself or himself that, again, this information — or this development — is being used in accordance with the law, the acts and the proper purpose of each of the agencies. So for me, those questions are relevant to everyday life, including the movement of information, data and the like. So I think it would fall very much into the same class of decisions as we make pretty well routinely anyway. It's just a more sophisticated idea that you've developed.
[01:15:25] Audience question: I just think that, in the past, you copied data from one agency to the other — and that's a privacy breach — and now you don't. But now I can gather the insights about those particular people and understand them. I'm not breaching your data privacy, but I'm actually understanding the fundamental characteristics that are about you. So whether it be that I identify the physical traits of a set of families that have got connections to the Middle East, and then use that to try and find them — instead of taking their DNA, as in the past, I just find the common attributes — so I can basically fingerprint people somewhere else, or something like that. And then you go, well, I haven't breached your privacy, but I have taken away something intrinsic about you. It's one of these examples where you should be able to actually use this, but the only reason you wouldn't is this ethical grey area: have you actually broken someone's privacy? All you've done is taken away — or have you used something...
[01:16:14] Dr Simon Longstaff AO: That belongs to another without their permission?
[01:16:16] Audience question: I did use it without their permission, but that's not a privacy breach.
[01:16:18] Audience question: It may be.
[01:16:19] Audience question: It's just — it's not the collection of the data, it's how you use it.
[01:16:22] Dr Miah Hammond-Errey: I mean, another way of thinking about it, from a technical perspective too, would be: is that the most efficient and technically robust way of achieving that goal? You know, the vast array of classified systems generally have less capability than the big tech analytical players — they are incredible in niche capacities, right, and certain models they're excellent at building. But for other models, it may actually be cheaper and easier to build a model out in the open source and try it on data, rather than trying to actually go and access data and use it in a classified environment. It's totally possible, of course, but it's not always going to be your first option, because of complexity and the challenge of accessing the analytical and technical capacity needed to do that job in that environment. There are definitely going to be situations where I think that's really important, but I think overall we see a large portion of the innovation, the technical capacity and the data capabilities actually resident in industry, and it's about pulling those in and using them when it's most important. It is very difficult for intelligence agencies — I mean, government broadly — to keep up with technical capacity across all the areas, and so they actually need to really narrow down. I'm just saying that, like every business, they have to think about what's the most cost-effective solution. Sometimes that will be to do what you've suggested, but I think oftentimes that means the solution will be to develop a model in, you know, the public domain and try to adapt it in other domains.
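For readers who want to see the shape of the scenario raised in this exchange — a model trained on one party's data, with only the learned weights passed on — a minimal, hypothetical sketch in Python follows. The data, the "agencies" and the model are invented purely for illustration and are not drawn from the discussion; the sketch only shows that the raw records never need to leave the first environment for the learned patterns to be reused elsewhere.

```python
# Hypothetical sketch of "share the weights, not the data".
# All data and names here are invented for illustration.
import pickle
import numpy as np
from sklearn.linear_model import LogisticRegression

# --- Party A: trains a model on its own (sensitive) records ---
rng = np.random.default_rng(0)
X_a = rng.normal(size=(500, 8))                 # stand-in for Party A's raw data
y_a = (X_a[:, 0] + X_a[:, 1] > 0).astype(int)   # stand-in labels

model_a = LogisticRegression().fit(X_a, y_a)

# Only the fitted parameters leave Party A; X_a itself never moves.
shared_weights = pickle.dumps(model_a)

# --- Party B: reuses the learned parameters on different subjects ---
model_b = pickle.loads(shared_weights)
X_b = rng.normal(size=(10, 8))                  # Party B's own inputs
print(model_b.predict(X_b))
```

The point of the sketch is simply that the raw records stay where they were collected, yet the second party inherits patterns learned from them — the "sharing insights, not data" tension the questioner and the panel are discussing.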
[01:18:03] Dr Simon Longstaff AO: We're at the end of our time, believe it or not. Would you like to have the final word before we say our final thanks?
[01:18:10] Dr Miah Hammond-Errey: Thank you. Technology is transformative, though not always in the way that we think. The technology ecosystem is a global and interconnected one, and almost all our industries are digital now. It's a complex ecosystem, and we've touched on a few of the ethical dimensions of that, but we're going to see more of it — the way that a small shift in something like Twitter can have a huge, profound effect on an election or in another environment. It necessitates that we think in systems and that we work collaboratively to build solutions. So thank you for coming and having a chat about this. I'm hoping that the book and my podcast, Technology and Security, can help shine a light on some of these transformative shifts for technology and national security. We really need more diverse perspectives to come together to help make this happen. Thank you, Simon and the Ethics Centre, for offering and hosting this event. And thank you, Simon and Paul, for joining me in such a fascinating conversation. And thanks to the audience for coming and joining this conversation with us.
[01:19:11] Dr Simon Longstaff AO: Thank you, Paul, and thank you, Miah.
[01:19:16] Dr Miah Hammond-Errey: Thanks for listening to Technology and Security. I've been your host, Dr Miah Hammond-Errey. If there was a moment you enjoyed today or a question you have about the show, feel free to tweet me at M-I-A-H underscore H-E or send an email to the address in the show notes. You can find out more about the work we do on our website, also linked in the show notes. If you liked this episode, please rate, review and share it with your friends. Given the intense interest in the role of technology in intelligence production and security decision-making, from time to time I'll be adding special editions with a purple logo highlighting intelligence-specific conversations. Reach out and let me know how you find them.