Dr Miah Hammond-Errey talks with Australia’s eSafety Commissioner, Julie Inman Grant, about AI’s rapid rise, online harms, where banking has led the way in technology regulation, and much more. In this episode they discuss eSafety’s world-leading regulation work, why we can’t just regulate algorithms and how eSafety collaborates with industry to drive change. They also cover the TikTok bans and some of the wild technologies coming down the pipeline.
As Australia’s eSafety Commissioner, Julie Inman Grant focuses on keeping Australia’s citizens safe online. She has previously worked across the public and private sector in the US technology space, including at Microsoft and Twitter. Her work has helped drive world-first regulatory regimes under the Online Safety Act 2021, and positioned eSafety to harness proactive, systemic approaches and address online user safety through a range of schemes, education tools and strategies.
Technology and Security is hosted by Dr Miah Hammond-Errey, the inaugural director of the Emerging Technology program at the United States Studies Centre, based at the University of Sydney.
Resources mentioned in the recording:
Making great content requires fabulous teams. Thanks to the great talents of the following:
This podcast was recorded on the lands of the Ngunnawal people, and we pay our respects to their Elders past, present and emerging — here and wherever you’re listening. We acknowledge their continuing connection to land, sea and community, and extend that respect to all Aboriginal and Torres Strait Islander people.
Please check against delivery
Miah Hammond-Errey: [00:00:02] Welcome to Technology and Security. TS is a podcast exploring the intersections of emerging technologies and national security. I'm your host, Dr. Miah Hammond-Errey. I'm the inaugural director of the Emerging Technology Program at the United States Studies Centre, and we're based at the University of Sydney. My guest today is Julie Inman Grant. Thanks for joining me.
Julie Inman Grant: [00:00:25] Thank you for having me, Miah.
Miah Hammond-Errey: [00:00:27] Julie Inman Grant is the Australian eSafety Commissioner. She has more than 30 years’ experience working in technology, public policy and online safety. She's worked for Microsoft, Twitter and Adobe, as well as in government advisory and regulatory roles across the United States, Australia and the Asia Pacific. She is an incredibly influential voice in technology, regulation and governance in Australia as well as globally, and her career has been bookended by government service in the US and in Australia. We're coming to you today from the lands of the Gadigal people. We pay our respects to Elders past, present and emerging, here and wherever you're listening. We acknowledge their continuing connection to land, sea and community and extend that respect to all Aboriginal and Torres Strait Islander peoples. Julie, eSafety has a huge mandate. Can you briefly take us through the scope of your organisation and its work?
Julie Inman Grant: [00:01:17] Right. Well, nobody's handed me a card saying you need to regulate the entire internet for a range of harms, but that's effectively what we're up against. And it's not just the technology companies and the search engines and the app stores. We're also trying to regulate human behaviour to a certain degree, and the fact that people are weaponising platforms to deliberately cause harm to others. So that's a pretty big mandate, no matter how you scope it or scale it. And I don't think any agency can proactively moderate the internet; that's not how we were set up or what we're meant to do. We're here to serve as a safety net for Australians, particularly with our regulatory schemes, when things fall through the cracks, which they inevitably do, because even with humans in the loop, content moderation algorithms are very imperfect.
Miah Hammond-Errey: [00:02:22] You've just highlighted one of those, content moderation. But 2023 is shaping up to be huge for new technology development, as well as the tech policy space. What are you watching most closely?
Julie Inman Grant: [00:02:34] Right. Well, obviously, generative AI has come on the scene very, very rapidly. And I think what concerns me about that is we've been talking about bias in algorithms for quite a long time, and in AI and in large language models. But I don't think we were really prepared for how quickly this technology would be democratised. And it feels like we're not learning the lessons from Web 1.0 and Web 2.0, and we're back to moving fast and breaking things rather than moving mindfully.
Miah Hammond-Errey: [00:03:08] You recently wrote, 'We're back to moving fast and breaking things rather than taking a human-centred safety by design approach'. You're obviously talking about GPT-4 there. I guess a follow-on from that is, how would you like to see digital guardrails embedded before AI tools like this are released?
Julie Inman Grant: [00:03:26] Right, well, I don't think we're ever going to be able to anticipate every single risk and harm that there is. One thing I've learned over my 30 years is that human beings have ingenuity and they'll be able to find creative ways to misuse technology. But it is up to the companies that are developing, designing and deploying this technology to erect the digital guardrails, to assess some of the potential risks and to try and engineer out misuse. But instead, you're seeing this race to be out there first, and we've already seen things go off the rails, get pulled back and have protections put in retrospectively. And this is the whole principle of safety by design: embedding safety protections at the front end as a forethought, rather than as an afterthought when things do go wrong.
Miah Hammond-Errey: [00:04:18] Yeah. Thank you. That's a really good way of explaining it. You are a bit of a rarity with decades of experience in both the technology world and the regulatory space. How does this affect your perspective or approach?
Julie Inman Grant: [00:04:32] Right. Well, I do wonder how regulators will manage, particularly as we're entering a much more complex environment with algorithms and AI and quantum, and even things like neurotechnology coming onto the scene. If you don't really understand how technology companies think, what is driving them and what the real limitations are, I think it would be very, very hard to regulate this industry without that foundational understanding, or by only taking, say, a big stick approach to regulation. I'm trying to walk that fine line between the carrot and the stick, and collaborating to the extent we can with the industry. You know, I think we are aligned in certain areas. None of these companies want abusive material on their platforms or harmful conduct being enabled. So where we can actually collaborate and work informally, that actually gives us the best outcomes and the most expeditious ones. So to give you an example, with our cyberbullying scheme, we've got a 90% success rate through cooperative engagement with the sector when the cyberbullying content meets the threshold. I'd say the same thing about our image-based abuse scheme: we've got a 90% success rate, and almost none of this illegal and harmful content is based in Australia or hosted in Australia, because we've had such strong laws in place for so many years.
Miah Hammond-Errey: [00:06:11] I wanted to ask you about the legal notices that your office recently issued to large tech companies to answer questions about how they're tackling online child sexual abuse.
Julie Inman Grant: [00:06:21] Right. Well, we've been asking some of these companies for six years what technologies they're using on what services. We've obviously been working with our hotline partners, such as the National Center for Missing & Exploited Children (NCMEC) in the US, to explain some of the anomalies in the numbers. So to give you an idea, back in 2021, there were 29 million instances of child sexual abuse reported to NCMEC. Twenty-seven million of those came from Meta, but something like 40,000, I think, from Microsoft and 160 from Apple. Now, you can't tell me that with billions of handsets out there all connected to iCloud and iMessage, there were only 160 instances of child sexual exploitation. So we tried to have these conversations and we couldn't get the answers that we needed to form a full picture. And so in working with the government to shape the Online Safety Act, we said we really need these tools to compel greater transparency, because we really don't understand the scale and scope of what we're dealing with. And I think we got a lot of pushback through this initial process. It was very uncomfortable for the companies, because nobody's been able to lift the hood and really shine the sunlight where it needs to be. But it showed a lot of interesting things. And it is embarrassing for some of them, because they've been putting forward or signing up to voluntary principles to keep children safe online, and what this showed is they're not living up to those principles.
Miah Hammond-Errey: [00:08:07] It's one thing to have standards, but being able to enforce them or, you know, to ensure compliance is another matter altogether. I want to talk about alliances. Normally we ask our intelligence and security leaders about nation state alliances, but how do you see technology, digital infrastructure and the regulation of social harms impacting the Australia-United States alliance specifically, but also alliances more broadly?
Julie Inman Grant: [00:08:33] Right. Well, as someone who worked in Washington DC kind of at the beginning of the tech policy movement, and having shaped Section 230 of the Communications Decency Act, it was a very different time. The companies that were around then, Microsoft, Novell, AOL, Prodigy, CompuServe, it was a very different environment. And we in industry truly believed at the time that if the Internet was overregulated and overtaxed, it wouldn't reach its full potential. Unfortunately, you're seeing a lot more polarisation, I would say, in US politics. The debate here around online safety has been very robust, but it's largely been bipartisan and hasn't been tainted by politics. You know, the Australian Parliament decided to legislate based on minimising online harm. But in the US they're talking about conservative voices and progressive voices, and that makes it very difficult to actually arrive at a consensus on how or whether to regulate the Internet. And of course, so many of these companies are American companies that have, you know, generated tremendous innovation and growth of these industries. So I was just at the White House last week; I was at the Senate and talking to members of Congress last week. The US is very interested in what we're doing, and they're very interested to see that we have not only a regulatory role, but a coordination and education and prevention role. And I think that's the type of thing that they're looking at. They like the concept of safety by design because it's understandable to people. You know, we had to legislate seatbelts and airbags and safety features in cars over half a century ago. Of course, we have safety standards around food so we're not making people sick, and consumer protection laws. I think the question is why do you have this technological exceptionalism, particularly when the Internet is becoming almost an essential utility, and we're seeing more and more evidence that unbridled access to the Internet, without certain limitations or without guardrails preventing harm, can be very damaging to mental health and well-being.
Miah Hammond-Errey: [00:11:15] How does the Australian approach to technology harms compare to global regulation? You mentioned just before the new eSafety commissioners in Ireland and Fiji, but more broadly, you are the first eSafety Commissioner globally. What does that mean for Australia, and how does our regulation compare to other countries'?
Julie Inman Grant: [00:11:40] Um, well, it was really interesting being part of Australia's official delegation to the UN Commission on the Status of Women. The focus this year was on cracking the code, around what we call technology-facilitated gender-based violence. For shorthand, we call it technology-facilitated abuse, or TFA. And I was in a number of fascinating conversations, but so many people from around the world were talking about the types of things they would like to do or we should do, and I was able to actually stand up there and say, we're actually doing those things. We have a program called eSafety Women that helps women who are experiencing coercive control or technology-facilitated abuse as an extension of the coercion and control in domestic and family violence situations. I was able to say we have a program called Women in the Spotlight, where we're providing social media self-defence training for journalists, businesswomen and politicians, so that they know how to protect themselves online and so that they aren't silenced but are engaging in protective behaviours. We were able to say we have a legislated image-based abuse scheme and a serious adult cyber abuse scheme, so we can tackle things like doxing and cyberstalking from a regulatory perspective.
Miah Hammond-Errey: [00:13:12] Do you think technology is imbued with the cultural values of the context in which it was created and what does that mean for companies that operate globally?
Julie Inman Grant: [00:13:24] Having worked for three American companies, I remember having conversations in particular with folks at Twitter, because it was designed as that free speech platform for the whole world. But the First Amendment doesn't exist anywhere else, and even the First Amendment has limitations when harm is created. And so many of those arguments played out as I watched how women and those with intersectional factors, whether people with disabilities, LGBTIQ+ people or Indigenous Australians, were being disproportionately targeted. And we know that those groups are twice as likely to receive online hate compared with the general population. That's resulted in the silencing of voices, and so the undermining of free speech. And that's precisely what targeted online abuse is designed to do. And we're seeing more and more cases of what is often referred to as gendered disinformation or persistent gendered trolling, where it's like death by a thousand cuts. There may be a few elevated tweets or posts that are delivering direct threats of harm, but often it's about undermining a woman's credibility, her intelligence, focusing on her appearance, her supposed virtue or her fertility, not about the substance of what she's done or what she's said. So it manifests very differently against women than it does against men.
Miah Hammond-Errey: [00:15:12] Do you think that varies from culture to culture?
Julie Inman Grant: [00:15:14] Well, we certainly do see that in countries that are autocracies, or where there isn't the same level of gender equality, the gendered online abuse can be much more vicious and have much more serious implications or repercussions for women in their everyday lives. So you do see these online harms spilling into real-world, dangerous scenarios.
Miah Hammond-Errey: [00:15:46] Thank you. It's a topic really close to my heart, so I'm interested to hear your thoughts. What do you see as Australia's tech strengths as a nation from a user and an innovation standpoint?
Julie Inman Grant: [00:15:58] Well, I'm sure everybody holds up Atlassian and Canva as two really important jewels in the Australian crown. But I do think the mindset in Australia is quite innovative. I mean, you see so many companies using Australia as a test bed because we're early adopters of technology. I guess you could sort of look at the differences between what created a Silicon Valley versus a Silicon Beach. But I think we're definitely punching above our weight on the world stage when it comes to technology.
Miah Hammond-Errey: [00:16:43] We have a world-leading eSafety office, and I asked you a little bit before about, you know, a responsible AI network. I had a question in here about building out a responsible tech community and ecosystem, and I'd be interested in your thoughts. I know that you're interested in moving beyond networks and actually moving to action, but it feels in many ways like that community engagement and drive from the community needs to come first. So how do you see Australia continuing to engage in responsible tech?
Julie Inman Grant: [00:17:17] Well, I'd like to see organisations like the Technology Council of Australia really embracing initiatives like Safety by Design, where there is a set of principles and risk assessments that we agreed with technology companies. You know, I wish VCs and investment companies, when they're funding new start-ups, would use due diligence clauses and checklists so that these early-stage companies are thinking about how their technology might be misused. I've got to give credit to the Australian banks for really moving on financial safety by design when it was brought to them that their platforms, their online banking systems, were being weaponised through microaggressions, particularly when former partners were sending over child support payments. Westpac was the first, and they started by applying our safety by design framework, along with AI and machine learning, to try and detect and prevent these, and now every major bank has followed in its footsteps. So I think that's great. I would like to see us leaning in more on safety and responsible and ethical tech, and showing people how we're doing it. Principled frameworks are great, but they're only effective if they're implemented.
Miah Hammond-Errey: [00:18:40] Yeah, absolutely. And you hit on something there that I think is really important. It's a bit tangential, but you talked about banks, and I feel like so often when we talk about digital or tech, we're really talking about tech companies. But actually, so much of our data and so much of our digital infrastructure is provided by a range of industries that don't fall into traditional tech. Most companies now need huge tech departments and they need innovation. So I think it's a really important point, and I guess it must be a challenge for the eSafety office to get across online harms that may come from outside traditional tech spaces.
Julie Inman Grant: [00:19:15] Yeah, I think that's right. And you know, some might even argue that the banks are moving online and are already competing to a certain degree with some of these online payment systems, whether you're talking Apple Pay or Google Pay, and I think Libra was a failed experiment. But you get the picture. We're going to see more convergence. And I also think we're going to see much more invasive forms of technology. I mean, we've already seen that with things like drones flying over safe houses, or the fact that surveillance devices are so small as to be imperceptible. So in some technology-facilitated abuse situations, there may be surveillance devices under shoes, in prams and teddy bears, or spyware gifted to a child on a phone. I was just freaked out by a talk I heard on brain transparency. Obviously, there are some tremendous gains we could get with neurotechnology, and I gave the scenario of a truck driver starting to fall asleep at the wheel; something that can wake him up when that happens could help minimise, you know, tragic car and truck accidents. But if employers start using it to monitor productivity or efficiency, you know, the brain should be the last bastion against invasive technology. Yet companies are starting to use these technologies now. And so I think we need to get ahead of what this actually means for us as humans.
Miah Hammond-Errey: [00:21:10] Thank you. I have a question in there about what excites and terrifies you about technology. And you've kind of answered both in one question.
Julie Inman Grant: [00:21:17] I think the other thing to be mindful of, and this goes back to safety by design: I remember doing our first tech trends and challenges brief around deepfakes. We couldn't get any of the mainstream media to pick up the story on what deepfakes were, what they might mean or how they might be misused. Now you don't pick up an IT rag without seeing something about deepfakes, but we know that deepfake detection technologies are way behind the democratisation of the technologies. And we've just seen this with GPT-4. I was just listening to Nicholas Thompson of The Atlantic talking about how they were starting to red team after things went off the rails, but why weren't they red teaming before it was unleashed on the public? You can't put that genie back in the bottle once it's out. And we just really need to put more of a focus on technology companies being responsible and assessing and mitigating the risks that they can across the design, development and deployment process.
Miah Hammond-Errey: [00:22:27] We're about to move to a segment, but wanted to get one last question in. There's a debate in Australia about data localisation, and I wondered if you could share your thoughts on data localisation and security.
Julie Inman Grant: [00:22:37] This is the challenge of regulating the Internet: the Internet is global, laws are local. So we need to find a way to ensure that we're not undermining the efficiencies and the benefits that cloud computing provides. With the questions around data localisation, if you have data stored in every country, then that sort of invalidates some of the model. So I don't have an answer for that. Of course, after the Patriot Act, you saw a lot more focus on that. Any national government has a right to demand that kind of sovereignty, but over time it might erode some of the benefits of this distributed network.
Miah Hammond-Errey: [00:23:36] So I wanted to go to a segment we have called Emerging Tech for Emerging Leaders. You've held some leadership roles during big tech developments. Can you give insight into how you have led others to navigate major tech changes in your career?
Julie Inman Grant: [00:23:51] I was just thinking about 1999 and being terrified at what was going to happen with the year 2000, and trying to encourage people within Microsoft to really take it seriously. Of course, we didn't know if it was going to be a tech Armageddon or not. Nothing happened. You know, again, what I'm trying to do with eSafety is to make sure that we are one step ahead, that we're looking 18 months ahead, where we're seeing the technology paradigm shifts and what's coming. You know, another example would be immersive technologies: what do full-body haptic suits and hyper-realistic headsets mean for online harm in the metaverse? Clearly, if you're experiencing online harassment or sexual assault, it's going to feel more visceral, more extreme, more real. And part of my role is to keep needling the companies to say, safety by design, now. I mean, how are you going to deal with these issues? Just having a blocking and muting button is not going to help you in the metaverse. This is going to be happening in private spaces in real time. So how are you going to mitigate those risks and harms?
Miah Hammond-Errey: [00:25:19] You said that you look at those technologies kind of, I think 12 to 18 months out. Are there other technologies kind of akin to the ones you've just mentioned that you're thinking about, that you know, leaders of today and tomorrow really need to know about?
Julie Inman Grant: [00:25:33] Well, this seems obvious to say now that generative AI has really come on the scene, and we've all been talking about AI for a long time. And of course we see AI being used in things like content moderation, where it can be a really good and positive thing as long as there are humans in the loop, as well as in recommender systems and harmful algorithms, and we just put out a tech trends brief on that. But I do think we need to be thinking about quantum, and we do need to be thinking about neurotechnology. You know, everything is going to be connected: IoT devices, cars that are becoming more like computers. So with everything around us, we need to think about how we prevent harm and how we harness benefits. And I'm not sure that that's at the heart or the core of the technology design and development process today.
Miah Hammond-Errey: [00:26:38] Thank you. What are some of the key transferable skills for someone who has worked in both technology and security or regulatory roles?
Julie Inman Grant: [00:26:47] Well, I think you have to have a degree of technical fluency, but also understand frameworks. I think you have to understand human nature as well. So one of the things I've really tried to do with our investigative team is hire investigators who have significant technology skills and can do OSINT, but also people who can be compassionate, who can listen and can help, because people come to us in a great deal of distress, and delivering that compassionate citizen service is really important. And because there has been no other online harms regulator, we've had to really selectively hire people with a broad range of experiences and train them on the job.
Miah Hammond-Errey: [00:27:47] TikTok bans are a pretty major topic in the US at the moment, and obviously there's a distinction to be made here between TikTok on government devices and more broadly. But legislation is going ahead in the US to make it possible for the President to ban TikTok in the United States. What are your thoughts on that?
Julie Inman Grant: [00:28:09] My thoughts are that Minister O'Neil has a report before her and is going to make a determination about what she thinks is right for the broader populace. You know, the way I'm engaging with TikTok is around a range of online harms. Like it or lump it, you know, so many teens and tweens are on TikTok, and it is a cultural juggernaut. And I think we need to make sure we're understanding how the content is, you know, influencing our young people over time. And we've just issued a legal notice to TikTok, asking them about their approach to detecting child sexual exploitation, grooming, sexual extortion and harmful algorithms. And I guess in terms of thinking about how we regulate in the future, we're giving a lot of thought to what skills we need and what technologies we need to regulate. We're going to look at outcomes-based approaches. I don't think we're ever going to be able to break that black box; I think that's unrealistic. There just aren't the people out there who have experience in doing technical audits around algorithms. So we're going to have to find creative and smart ways to understand what's actually happening under the hood, and make sure that these companies are not building algorithms solely for engagement, that they're using the technology and nudges to detect harmful content, that they aren't sending people down not only a rabbit hole but a death spiral, and that they are using those technologies for good.
Miah Hammond-Errey: [00:30:03] My colleague Tom and I recently published a piece looking at trust and distrust in technology. How do you see tech companies building and maintaining user trust, and what sorts of things can erode or damage that trust?
Julie Inman Grant: [00:30:17] Well, it takes years and years to build trust, and probably a minute or a single data breach to break that trust. And I think about Zoom as a company, how they scaled so beautifully at the beginning of the pandemic. I think in December 2019 they had 10 million daily active users, and it moved up to 300 million by April. But then you started to see 'Zoom bombing' and security and privacy limitations, and entire school systems and government agencies and national governments started to pull out. And that actually gave Microsoft and Apple and others the opportunity to really gain traction in that space over time, because they had built that level of trust. So how do you actually measure that? I think the level of trust in technology companies writ large is probably lower than I've ever seen it. And that's saying something, because I worked for Microsoft in Washington, DC in the midst of the antitrust trial, when, whenever I told people where I worked, I'd get the 'evil empire' diatribe. I also worked for Twitter at a time when they really weren't doing the optimal work around safety and ISIS was proliferating. So I've been working for companies at crisis moments, and I don't think I've ever seen anything like this: we're seeing a sea change, really tectonic shifts, from technology companies being glorified to being vilified in many ways. And governments are starting to act and they're starting to regulate, and across the board, not just for online harms, but in the competition space, the consumer space, the privacy and security space, and obviously with disinformation. So there are a lot of tricky issues that these companies need to navigate, but they need to do a better job of ensuring that their technologies are used as tools rather than weapons.
Miah Hammond-Errey: [00:32:32] The US and China appear to be in a race for AI dominance, and we're seeing tech decoupling in some areas between the US and China. What are some of the tensions from your perspective and where might that leave users?
Julie Inman Grant: [00:32:45] I just met Doreen Bogdan-Martin, who is the first female Secretary-General of the ITU in its 158-year history. That chair was held by China before Doreen, who's an American, took the role. And you know, the ITU shapes telecoms and internet standards. China is really good at playing in these standards bodies and, you know, they're good at tapping into smaller markets, as we saw happening in the Pacific Islands. They're quite a formidable competitor in the technology space.
Miah Hammond-Errey: [00:33:42] I'm going to jump to a segment here. It's called Eyes and Ears. What have you been reading, listening to or watching lately that might be of interest to our audience?
Julie Inman Grant: [00:33:50] Well, I wish I could read more for pleasure. Over Christmas I read 'Lessons in Chemistry', which I really, really enjoyed. Right now I'm reading Dr Kirstin Ferguson's book 'Head & Heart', about leadership, and so much of it resonates with me. She's also a wonderful storyteller and an authentic leader herself.
Miah Hammond-Errey: [00:34:13] I get from that, that you don't have a lot of downtime, but what do you do when you disconnect and wind down?
Julie Inman Grant: [00:34:19] I try and spend time with my three kids and my dog and my husband. I'm fortunate to live near national parks, so I like to get out in nature and trail run and just decompress. And, you know, I love catching up with friends, and cooking is where I am able to exercise what creativity I have.
Miah Hammond-Errey: [00:34:48] Sounds fabulous. We've got a segment called Need to Know, which is our final segment. Is there anything I didn't ask you that would have been great to cover?
Julie Inman Grant: [00:34:59] Just that we want to reach more Australians with our resources and our services to help them when online abuse happens to them, and that is through esafety.gov.au.
Miah Hammond-Errey: [00:35:13] Excellent. Julie, thank you so much for joining me today. It's been such a pleasure.
Julie Inman Grant: [00:35:17] Thank you. I hope I made sense.
Miah Hammond-Errey: [00:35:19] Absolutely. Thanks for listening to Technology and Security. I've been your host, Dr. Miah Hammond-Errey. I'm the inaugural director of the Emerging Tech Program at the United States Studies Centre, based at the University of Sydney. If there was a moment you enjoyed today or a question you have about the show, feel free to tweet me @Miah_HE or send an email to the address in the show notes. You can find out more about the work we do on our website, also linked in the show notes. We hope you enjoyed this episode and we'll see you soon.