Recording: The State of AI Governance in Australia Webinar
In this webinar, Professor Nicholas Davis and Sophie Farthing discussed the key trends in corporate governance and the existing obligations that apply to organisations using AI today. The discussion took place following the launch of HTI’s The State of AI Governance in Australia report.
The human implications of corporate use of AI
Around two-thirds of Australian businesses report using or actively planning to use AI systems in their business operations. While a range of existing laws of general application pertain to the design, development and use of AI systems, Australia does not yet have AI-specific laws or regulations. Without proper governance, the rapid deployment of AI systems exposes organisations, employees, consumers and the broader community to severe harms and significant risks.
Professor Nicholas Davis and Sophie Farthing led a thought-provoking discussion on the human implications of corporate use of AI. The webinar presented the findings of HTI’s ground-breaking report: The State of AI Governance in Australia by Lauren Solomon and Professor Nicholas Davis.
23.06.02 State of AI Governance in Australia Webinar
Friday, Jun 02, 2023
SPEAKERS
Sophie Farthing, Nicholas Davis, Edward Santow
Sophie Farthing
Welcome, everyone. For those who don't know me, I'm Sophie Farthing. I'm head of the Policy Lab here at the Human Technology Institute. Before we kick off, I would like to acknowledge the Gadigal people of the Eora Nation, whose land I am joining this call from. I appreciate everyone is possibly all over Australia and maybe even beyond, but I would like to acknowledge the Gadigal people and pay my respects to elders past and present, as well as acknowledging emerging leaders and elders. We have a jam-packed hour.
Sophie Farthing
We are here today to talk about a very exciting report that HTI published this week on the state of AI governance in Australia.
Sophie Farthing
Just one housekeeping thing to note is that we are recording this session so just bear that in mind. And I will hand over now to our Co-Director, Professor Nicholas Davis to start us off, and give us the framework for what we're discussing today. Over to you, Nick.
Nicholas Davis
Thank you, Sophie, and welcome, everyone. I'm joining from Ngunnawal land down here in Canberra, and it's great to be with you all. It's amazing how literally just putting a link on LinkedIn can get so many people interested and engaged in AI governance, but it is a bit of the topic du jour. Today, we're really going to do only two things together. First of all, I'm going to have a chat with Sophie about some of the elements in the report; we thought we'd do that in a bit more of a conversational style than death by PowerPoint, for 15 minutes or so. And then by the time we get to the half hour, and maybe even before, we'd love to start engaging you all in conversation about this, because you are all experts in this area from different perspectives: as users of technology, as leaders in your organisations, as non-executive directors or senior executives, or as members of the media and commentators in this space. So we'd love for you to really push us and one another, particularly as to where all this goes. We've got a lot of talk around risks and harms today, and around the shortfalls in corporate governance, but it's quite good to think about what is next and how this gets taken forward. I will have a couple of slides running along as Sophie and I talk, and I'll stop that when we finish. There are a couple of ways that you can contribute directly in this session. You can throw your hand up if you're interested to jump in and speak; I think we'll leave it so that everyone can unmute themselves at the moment, but if for any reason we end up getting 500 people on the call, we might go to a little bit higher level of moderation. But I think we're pretty good with this group. Second, there is the chat function, and I can see that there's already a bit of chat coming in, so please do feel free to ask questions or exchange. And if you want to ask a question that gets recorded and moderated, there's the Q&A function as well on your panel there in Zoom. Well, first, maybe before I pass back to Sophie on this, I will say that
Nicholas Davis
the work that we're doing here in the corporate governance programme is a three-year programme that's supported by the Minderoo Foundation. We're also incredibly grateful to our HTI advisory partners at Atlassian, KPMG and Gilbert and Tobin. I know that some of you are here today, so thank you for joining.
Nicholas Davis
But really, this is just the first phase of a longer conversation around the corporate governance of AI. And I think the word conversation is our trigger. So, Sophie, time for me to pass back to you for a chat.
Sophie Farthing
Thanks, Nick. So yeah, as Nick mentioned, this is a chat about the content of this report, and we are really hoping you will join us in this conversation about halfway through. So my job is to keep Nick to time. But Nick, can we start off just with the timing of this project? Because the timing of this report this week is pretty incredible. I think we're all keenly aware of the kind of week we've had. On Tuesday, we had an open letter signed by AI experts from around the world, which was pretty alarming in terms of talking about human extinction and the existential risk of AI. We published our report on Wednesday, and yesterday, of course, we had the Australian government open up a pretty wide-ranging public consultation on what Australia needs to do to regulate AI. So briefly, Nick, can you tell us why this project and why now?
Nicholas Davis
Yeah, thanks, Sophie. Well, first, I guess it's important to recognise that we are really focused on the promise of AI as much as we are on the risks. This is really the Human Technology Institute taking a human-centred approach to looking at these issues. And it's good timing in the respect that the launch happened coincidentally with those other events. There's the launch, as you mentioned, but we've been working on this since September. And the reason why we focused on corporate governance here is partly in recognition that about 90% of all major research work in AI, and more than 95% of applications that you and I would encounter in day-to-day life, are managed and delivered, you know, dreamt up and invested in, by the private sector. So we have this wealth of engagement and design power, marketing power, and technical understanding that rests in the private sector. And given that being the case, we really wanted to take a hard look at how authority, management and governance worked inside private organisations, as the key players that are actually using and deploying these systems. So that's kind of the first answer to the question. The second answer to the question is, we do know, and we're keenly aware, that there are a couple of key gaps going on at the moment with corporate leaders. One is a general lack of awareness of current obligations, and even of the actual use of these systems. And the second is, as you say, this increasing set of calls for regulation, many of which are untethered from actual policy or legal experience in these areas. So we're hoping to bring those things together through this work.
Sophie Farthing
Yeah, certainly the calls can be pretty alarming. And when you get into the detail of them, there's a lot of nuance in these discussions, and this report fills in a lot of those gaps. So, in terms of looking at risk, we've heard for the last few years that there's a range of risks posed by AI, and we keep hearing of them. You know, we've got evidence of algorithmic bias entrenching inequality and discrimination, alarming things about social media algorithms undermining our democratic processes. And here in Australia, we've seen the pretty horrific impact of Robodebt, which is obviously less about machine learning and more about oversight, and about what effective oversight of automated systems looks like. So, going back to the report you've published, or we published, this week, what new ideas does this report contribute to our view of risk and AI?
Nicholas Davis
Yeah, a really important and interesting part of, I guess, shaping the narrative around artificial intelligence is what we really mean when we say risk, and that's one of the key and critical distinctions that we drew. And I should, by the way, recognise a person who isn't able to be on the webinar but is the lead author of this work, our colleague Lauren Solomon. Her deep insight was that unless you are really clear about distinguishing between a harm and a risk, you can use the word risk in ways that really take away the human being, or take away the kind of irreversible damage that AI systems can do. So I guess the first key thing that we do is take that point to heart and draw a clear distinction between a harm to an individual or to a group, which is in many cases irreversible and hard to compensate, versus a risk in terms of something perhaps financially quantified, potentially far in the future, and potentially occurring more to an organisation or a group, which kind of dehumanises, in a way, what could happen. So the first thing is really focusing, like many others have done in this space, ethnographically on the fact that there are real people involved when systems like the Robodebt system go wrong. I think the second big move for us was, and maybe I'll pull up a slide here and come back to those other ones, Sophie, we did find that when people talk about risk and harm in AI systems, they generally provide a laundry list of a variety of different things that can go wrong, or do go wrong, or ways people get harmed. And we found that, talking to policymakers and particularly the corporate leaders that we engaged in this project, which we defined as non-executive directors and senior executives leading organisations using or planning to use AI systems, they didn't have an organising concept for how AI risks develop and what those components look like in terms of both the harms to individuals and the risks to the organisations. So we spent a lot of time coming up with essentially a typology, which looks a little bit like this. The first one is that a lot of AI risks and harms flow from when an AI system fails, when it doesn't do what it's supposed to do. A completely different sort of risk that can lead to similar harms is when an AI system is used in malicious or misleading ways. But obviously the intention and the failure points of those two sources of risk are quite different. And the third is thinking about the overuse, the inappropriate or reckless use, and the downstream impacts of AI systems as externalities. And so by stretching these out into different harm categories, you can see in that second top box there's biased performance. That's where your algorithmic bias comes from: it's when the errors of an AI system are distributed in such a way as to harm a certain group of people and not others. And if that harm is distributed across a protected characteristic, that is, you know, in breach of discrimination law in Australia, and so we produce examples there. The same can be done for malicious or misleading systems. The Trivago case was an example of a misleading algorithmic system that produced really bad financial outcomes for consumers, so dark patterns, but you also have other sources in here.
And then there are a lot of more society-wide and environment-wide impacts in overuse. But there's also just the use of AI systems when it's not warranted. And I think we're seeing a lot of that where systems are not being used maliciously, and they're not failing, they're working as intended. But do you really need to capture everyone's licence plate entering your car park and match that up against a store card and gather all that information in order to do the job that your store is doing? That's a really important question that currently isn't really covered by our law.
Sophie Farthing
Obviously in the report, there are so many risks that organisations are grappling with. So can we get practical, which is what this report does incredibly well. So thinking about organisations, how can a generative AI system pose risks to an organisation? And how are companies governing those risks at the moment?
Nicholas Davis
Yeah, well, maybe I'll just jump back a few slides to show the data we've gathered on where organisations are using AI, because I think that gives a good insight into where those risks come from with an example like generative AI. So if I roll back to here, we found that really two thirds of Australian organisations say they are using or planning to use AI in the coming year. Now, when we dive deeper into this, I could not find a single organisation that wasn't using AI, when you think about how employees are actually going about their day-to-day business. We found that about half of the people that I spoke to who said they were using generative AI at work had not told their bosses about it. And that stacks up; it's about the same order of magnitude as the recent research from February this year, which shows that about 30% of professionals who report using generative AI at work haven't told their boss. And you can kind of understand why: when people stumble across something that feels like magic and saves them a lot of time, it's kind of uncomfortable to tell your supervisor or your manager, gosh, I actually spent a third of the time that I used to spend on that task, because often efficiency gets eaten up in other tasks in different ways, and you're not quite sure whether or not that's allowed or appropriate at that moment. The second thing I'll say is that when we think about risk at the organisational level, you are often trading off your risk appetite: what you want to gain out of it versus what risks you're incurring. And you can see here on the right-hand side of this diagram that business leaders, the senior executives in the dark blue, versus non-executive directors, had quite different perspectives on what they expected the benefits to be. The non-executive directors were really focused on customer experience, whereas the senior executives were really focused on business process efficiencies. So it's quite interesting to think about what risks you might take in different areas in order to get that upside. And then just in terms of the data here, if you look across the top five uses that our survey revealed, in terms of where AI was being applied at an organisational level rather than the individual employee level, three of the top five are really touching important stakeholders. Customer service, marketing and sales, and human resources are all systems that can make really critical decisions on behalf of the individuals that interact with your company. And so it was with that kind of view that we started to think, well, how do organisational risks in this area evolve? So I'll just step forward to this example here for generative AI, which I think illustrates really nicely the risks of using this in your organisation. About two weeks ago, one of the national organisations in the US that ran a telephone helpline for people with eating disorders introduced a bot called Tessa. They'd been testing it since February. But about two weeks ago they said, actually, we're going to fire 120 volunteers and let go all our helpline staff, because Tessa can replace them. Now, what was terrible about that, for someone like me that's worked in other areas, is that this wasn't really about Tessa being better than the helpline staff.
It was that the unionisation of those employees was seen as a threat to the company's wage bill, and so they said, no, we're going to replace them with a bot. About a week, 10 days later, they shut down the bot, because it just could not perform. It was producing really terrible outputs that put people at risk. And obviously, when you're talking about a helpline for people in crisis, that kind of edge case is, I think, incredible to think about as a live example. The other thing that I want to mention is that we do see that examples like that amplify risks in three different ways. So in that Tessa example, first of all, it just provided a terrible service. That's bad for your commercials, it's bad for your operational efficiency. If the AI system just doesn't perform as intended, you're going to lose business, you're going to be less efficacious, just because it's a worse product. But second, you're going to expose yourself to some pretty big regulatory risks as well, particularly if you get into any of those areas where there are duties on your business to perform at a certain level, or if those errors are distributed in ways that result in unlawful discrimination. And third, you get a reputational hit, because those headlines look pretty bad, and of course these risks often co-occur, all three of them. And just to mention, Sophie, to finish up on this point, a really interesting finding of our report was that when you ask organisations and executives in general what they think about the risk of these types of things going wrong, people with less knowledge, who are just coming to the party with AI, will tend to cluster their answers in the middle of the spectrum. It'll be a bell curve where most of the answers are low to moderate risk; the modal answers are in the middle of that distribution. But once you start speaking to organisations and executives, corporate leaders, who have spent a lot of time, or more time at least, with AI systems, the distribution goes bimodal. You get quite a lot of people clustering in the very low area, because these are systems either that they're very used to, and maybe they're a bit complacent, or, on the other hand, they are genuinely low-risk systems like mapping, optimisation, etc, that are not touching stakeholders or are not core to the business. But at the other end of the spectrum, you actually see quite a big spike in people thinking, oh no, there are a whole bunch of systems here that I view as very high risk in our organisation. And for people like us at HTI who also work heavily in the policy space, that's a really interesting and positive finding, because it means that risk-based regulation could work really well, because you can separate out those different use cases and create a clear line between those systems which are lower risk and those which are higher risk. It also shows that there is that recognition evolving in the market.
Sophie Farthing
And that, of course, as head of the Policy Lab at HTI, is something I'm thinking a lot about, this question of regulation. So, Nick, obviously, as I've mentioned before, and I'm sure it's at the front of a lot of people's minds on this call, there are all these calls for regulation and questions about what that might look like. So can you talk to us a little bit about how you deal with these calls for regulation in the report, and what regulation might look like?
Nicholas Davis
Yeah. I think it's first important to recognise that despite a lot of chat about the risks, and awareness of the risks, there's been very little action actually going on in terms of organisations changing their behaviour and investing in governance systems to deal with these risks. So that's a key finding of the report: essentially, corporate governance of AI is unsystematic, it's unstrategic, it's unequal to the risks that have emerged. And this is actually from some McKinsey data, global data, but it's entirely backed up by what we've been looking at as well. And I might come back to this governance approaches slide if people are interested a little bit later. But I'll take it to the point about existing obligations. The fear that we have, and that I think really crystallised during our interviews and workshops, was that when people hear public calls for regulation of AI, their assumption is that there is currently no regulation of AI. So there's this kind of unstated sense of, oh well, if it's needed in the future, there must not be much there today. And while it is true that in Australia we don't have AI-specific laws, we do have a range of existing laws of general application that span a huge range of areas and are directly applicable to how organisations should be managing and governing their AI systems. And from a corporate governance perspective, of course, the most important of those are the duties that go directly to the director out of sections 180 and 181 of the Corporations Act, as well as the common law fiduciary duties that are owed to the company. Those are, you know, due care and diligence, good faith, proper purpose; they're about the kind of reasonable awareness, skill and capability that you bring as a director. And I think a lot of the non-executive directors we were speaking to hadn't really thought about their directors' duties in a similar way as they might have with, say, cybersecurity, as starting to encompass taking care of what could go wrong, these kinds of critical risks. And I will also say that AI system use is not just growing, it's also becoming more core to organisational business models. So it's not just about saying, oh, do directors need to be aware of the fact that we're using AI in recruitment? It's the fact that our company is using AI in multiple areas, and often mission-critical areas, that become very strategic, very important, and expose us to a whole range of risks which could engage those duties. But beyond the directors' duties and those Corporations Act duties, we have a number of areas that we present in the report where there are really important specific legal obligations to look out for, because they are particularly pertinent to AI systems. So consumer protection, particularly around misleading information and unfair dealing; cybersecurity, of course; anti-discrimination, which I mentioned operates at multiple levels; duty of care; work health and safety, a really interesting one; and of course privacy and data use, given first that AI systems do tend to engage data sources from a wider variety of areas than your traditional IT system.
But also, as soon as you are using an AI system to solve problems that are directly consumer-facing, you are almost by definition pulling in personal information, and sometimes sensitive information, and you are exposed to a higher burden of data management. If those systems are not yours, they're not in your control, but you're still responsible for them, so you need to be really careful. And I guess another thing we found on the existing obligations is that both senior executives and company directors weren't really on top of how their organisation was using third-party services that deployed AI, and they were not at all aware of the risks and of how the legal obligations applied through those third-party services as well.
Nicholas Davis
Did we lose Sophie? Right. Well, in that case, since she might have had some internet problems, I might just take us down to one final question that I know she's gonna ask, which is: what can we actually do with all of this? And it's really important that we give people a way forward out of this kind of 'gosh, people aren't doing enough'. It seems that corporate Australia is really only at the early end of the maturity curve in dealing with the risks and harms we can already see. And yet the explosion of use is threatening this big gap that presents these commercial risks, regulatory risks and reputational risks. We came up with these four actions that we thought were particularly pertinent. The first one really comes out of the fact that directors and senior executives repeatedly told us that they didn't think their organisation had strategic expertise in artificial intelligence. Many of them said, look, we do have quite a good data and analytics group, we have some technical assets, some of which we borrow and some of which we outsource. But internally, across teams like procurement or HR, across senior management, and definitely on the board, we're not sure how to leverage this, what it means, where the risks are, how it fits into our strategy. And only 10% of the people we surveyed or spoke to even had AI, or anything related to AI, in their strategy, or an AI strategy at all. So these first two points are basically around skilling up on the strategic side of artificial intelligence, not becoming data scientists and knowing how to specifically deploy or manage or operate it, but knowing how to decide when it's appropriate and how to govern it well internally. And then literally just having an AI strategy which sets out your risk tolerance, your risk appetite, was critical for the board members. That was the biggest request from boards: we don't see an AI strategy in our organisations, and we want that present to help us do our job as company directors. The third thing is that when we asked our survey group, so we surveyed about 268 people, those of them who were currently using AI systems, how they governed those systems, about a third of them said they had some form of assessment or governance system in place. But when we dived into what those systems actually looked like, we found that they were hugely diverse and fragmented and really unsystematic. Apart from the number one answer being 'we don't have a system in place', which was two thirds of all organisations that we spoke to, a lot of the ones that did have a governance system in place were using just an Excel spreadsheet to record risks. And this includes some of Australia's biggest corporations using a spreadsheet just to record risk controls. Many, many organisations reported that they used a form of governance that I've named guru-led governance, which is just where there's one person in the organisation who everyone points to and says, you know, Sally, you know about AI, should we do this project or not? And that's remarkably common, including being a big feature of one of the world's biggest tech companies that we spoke to, where a single guru effectively rules the governance of AI systems in the organisation.
And then a lot of companies reported that they were sending AI plans through either IT processes that weren't suited to the specifics of AI, or to legal teams and privacy teams for sign-off, where those legal teams and privacy teams had no training and no real experience with these systems. So action three is really around starting to cut away at those deficiencies, in terms of getting something that's more integrated and fit for purpose. And then finally, as we might pivot towards discussion with you all, we know from corollaries in other areas of governance and management, particularly work health and safety and financial services, that at the end of the day this really does come back to how people behave even when they're not being watched, and even when they don't have to fill out a compliance checklist. And so having a really human-centred AI culture, where your frontline staff are trained and know when an error affecting a customer is actually a big deal that should be investigated and could be systematic, as opposed to just an unfortunate 'computer says no' outcome, those things are really important to be inculcated and part of the way that organisations work. And yet, unfortunately, in some of the big tech-driven companies that we spoke to and work with on this, business model drivers are currently at odds with a lot of the kind of human-centred AI approach that we'd like to see. That doesn't have to be the case here in Australia for companies using and deploying AI. And I had some really encouraging conversations, particularly with startups who were basing their organisation around AI-driven processes and platforms, and whose first question was: how do we make this human-centred? How do we build an organisation around these platforms that is completely safe and inclusive, and protects people, particularly marginalised people and people with less voice? So I'm encouraged by this. But we are at the very beginning of this journey. So, I promised, maybe even only tacitly, that we would move over at half past. Sophie, are you back with us?
Sophie Farthing
I am back. My apologies for dropping out. But yes, we do want to switch now, and we want to hear what you have to say. I've already got a couple of questions in the Q&A, so I might just draw everyone's attention to that. We've got a pretty big group now, so perhaps if people can pop their questions into the chat, we'll do our best to get to them all. So Chris has put a question in that I think relates to the point you were just making, Nick, about being encouraged by some of the feedback that you'd had from organisations. Chris is, I guess, not so encouraged. His question in the chat, Nick, I think is a really pertinent one for us: we've had some Australian federal government consultations, and Chris pointed to the one last year, which a lot of people contributed to and to which we haven't had a government response yet. So Chris has commented that he feels like we're starting this conversation again. Can you reflect a bit on that, especially because you have been talking really practically, and what this report does is give a really practical, day-to-day view of that regulatory framework and some of the gaps. Can you speak a little bit about what you think this latest government consultation will do to contribute and move this conversation along?
Nicholas Davis
Yeah, I think the current consultation is, first of all, an indication that the minister and the department are finally moving into a bit more of a policy focus rather than an industry focus in this area. There seems to me to be a signal that, perhaps unlike or different from previous consultations, there are now the resources and the focus to move on this. And, Chris, you might say, well, that's a little bit belated, and not using the work that's already been done. I'm sure much of that will be rolled in. But I have been encouraged by the fact that the department has certainly been much more proactive in reaching out and engaging on this. I might also ask Ed to comment on this, because I want to make sure that we put this regulatory discussion also in the frame of another project that we're working on, which is the future of AI regulation, which goes hand in hand with corporate governance. Ed, how do you view this recent consultation? And is it meaningful? Is it a signal of actual change?
Edward Santow
Oh look, I think it's positive. I think the federal government has not said anything really significant up until this point about what it wants to do by way of reform in AI. So this is, I think, the first major marker that they've put down, so that's a good thing. But I also share some of the frustration of Chris and others that there are some really important reform processes that have set out clear, actionable reforms, and they are just sort of sitting on a shelf somewhere. I say that, you know, with the bruises of having worked at the Human Rights Commission, with Sophie Farthing and Lauren Perry and others, to deliver the Human Rights Commission's report, which has some really clear reforms, but also the privacy reform and others. So I think what the government should do is two things. One, it should do a clear audit of what it wants to take forward in terms of really carefully considered reform that is already on its plate. And two, I think it should identify where the gaps are. So one of the things I really like in the new discussion paper is that it acknowledges, if tacitly, that we need to move from high-level ethics principles to practice. And that's something that's really exciting. We've seen, for example, here in New South Wales, the AI assurance framework applied to government agencies, which is designed to do just that. It's not perfect, but it is, I think, a really good first effort in that regard.
Nicholas Davis
Maybe, Sophie, do you mind if I just bounce straight into Charles's question in the chat from there? I'm sure you were about to do so. Charles wrote in the chat that he leads a government legal working group in the New South Wales Government, and asked a question about the role of government in legislating codes of practice, etc. I think we're firmly on the side here, and at least I speak for myself here, that it is time for government to step in and create a series of quite firm guidelines that still allow organisations to do all the great innovation we want, but provide really positive rights and positive guardrails for systems that can go awry. Even if that regulation is just forced reflection and transparency, as is the case for most of the AI assurance framework, so basically forcing organisations, or saying that in order to deploy a system which is of a certain risk, you need to have done a review, you need to have thought about it, you need to have registered it and gone through this process. In the facial recognition model law that Lauren Perry and Ed and I worked on and published last September, we proposed a model law for one subset of AI, facial recognition technologies, that would be risk-based and would clearly prohibit a certain set of system uses. So not the actual underlying technology, but the use in certain cases: for mass surveillance and public surveillance, etc, and biometric facial analysis drawing characteristics for anything other than entertainment. So I certainly think there's a case for government to put those kinds of high-risk guardrails in place, but also to promote the use of mandatory instruments like assurance, or, even if they're not mandatory instruments, to support the use of international standards that really firmly encourage organisations, whether through the market or through mandatory standards in legislation, to put in place governance systems that are fit for purpose. And this year, towards the end of the year, we will see published the ISO 42001 standard, which is the artificial intelligence management standard, and that is a set of guardrails and activities that an organisation can put in place precisely to better govern AI systems. So there's certainly a role for that. And by the way, if you ask Australians, and if you ask people around the world, they don't want voluntary regulation for AI. They're asking for government or independent regulators to step in and manage this. So it's not just the view of eggheads or technocrats that this should be the case; it's the view of the general public that they don't trust industry to regulate themselves.
Sophie Farthing
Nick, can I draw on a question that Jackie has put into the chat? Which I think relates to that really interesting finding about how companies are thinking about risk. You know, I think what we are concerned about at HTI is pretty much encapsulated in the eating disorder chatbot example: the person who wasn't really considered when that company was rolling out a chatbot was perhaps the person who needed the help. So Jackie is asking, and I'll read it as it's well put: do companies need guidance, through AI assurance processes or through standards, to make sure that humans are front of mind when these AI systems are developed? And also, as Jackie has put there, it seems that they're being developed primarily by technical teams. So is there a lesson there in corporate governance for an interdisciplinary or multi-professional approach in the way that they roll out and adopt AI systems?
Nicholas Davis
Yeah, so it was really interesting, in reviewing this work and that data, which we sought to validate in different ways, to see some of the expert commentary that came back on that finding. So that's the finding Jackie's talking to, that diagram whereby the more experienced you are, the more bimodal the distribution of perceptions of risk is. The two primary explanations that came back were: well, maybe people are attuned to the risks and they are just using two different types of systems, so the systems themselves are accurately categorised into two groups. But a really strong suggestion that we have investigated a bit further is what you imply there, Jackie, which is that maybe there's a whole bunch of people that are just complacent, because they're technical teams that are invested in their current systems and their development, and they use them, so they view them as low risk. One commentator came back and said, actually, this could be an example of the Dunning-Kruger effect. That's where people with low levels of expertise or experience, in this case in human-centred AI risk, overestimate their ability or knowledge and therefore assign it lower risk. Look, it's something to dive into. And it would be an interesting one to kind of restart the clock now and look at systems that are being deployed now, with a higher level of awareness of risks, rather than backdating it. I certainly see the case at the moment with many organisations we work with, where one of the most common requests I get from data teams is: the data and analytics team will say to me, Nick, the next time you speak to senior management, can you just tell them these three things about our system, because they just don't get what we're trying to do. And often the purpose of them asking me to say that is either because there's a misapprehension about the system that means there's a higher risk perception at the senior levels than they think is warranted, or because there's an opportunity that's being overlooked. So I think the other aspect of that curve could actually just be failures in communication between various teams, which strips out some of the nuance of how these systems could go wrong, and also of the benefit that they can bring.
Sophie Farthing
We've got two questions, in different chat windows, about this regulatory conversation that we're certainly about to have in Australia. But I want to just pause on Katie's point, which is a really important one: what role can and should the social sector play in this? And I think what Katie's referring to, and I come from a civil society background before coming here, is all the analogue expertise and experience that those organisations and individuals can bring. And recognising that when you're doing service delivery, for example, resources and time are incredibly limited. So Katie's question is, where does civil society or the social sector best invest their time in the context of this conversation?
Nicholas Davis
Yeah, look, it's a great question. I think there's a whole plethora of opportunities to support here. But as Katie, you point out, and as we all realise, the resource question is a really live one. So one angle here is that there are groups within the social sector that already do, and need to continue to, hold organisations to account, whether they are government or business, against the kind of expectations that are built up both through regulation and through good practice. And this is particularly important for AI, where a lot of the harms may not be evident to people, because they are happening through decisions that are opaque, or they may be happening to people who have less voice. So advocacy organisations or support organisations, particularly on the legal side, in terms of human rights law centres, are actually critically important here, to be able to identify and represent people when things go wrong. And look, that's a role that Ed played for a long time at PIAC, and I know that many of you here have as well. So I think that's the first one. The second thing is that there are a lot of people who will find it hard to come up to speed with the level of governance that is needed for AI systems. And there will be investment in govtech, or governance tech, along the way that will be available to organisations. You know, regtech, you could call it, if it's about regulatory compliance technology. But making that available to organisations which are less resourced, so they can get over the hurdle of actually using these systems well, is going to be important as well. So supporting this market of bringing everyone up to safe use is not going to be inconsiderable, when you think about how often the biggest companies call for additional regulation knowing full well that it's anti-competitive, because they've got reams of lawyers and compliance experts, whereas many of their startup competitors don't, and find it hard to litigate or comply with those aspects. And organisations like Access Now, which Brett Solomon leads, provide critical support to NGOs in the case of cybersecurity attacks, etc; we're going to need similar supporting services in governance, remedy and remediation when it comes to AI systems. I think there's a big role there. And then third, a lot of training: I think there's a big role here for the social sector to play in kind of upskilling, and we need to upskill the social sector as well. So we need funding to support Australia's biggest nonprofit entities to be able to use AI to their own benefit, because otherwise we will see a further widening of the gap in capability between the private and nonprofit sectors. And as someone who spent the last 16 years in Europe, I find it really upsetting how weak Australia's nonprofit sector is and how few resources flow from the private sector, government and other groups into supporting those organisations in Australia, compared to the US and Europe.
Sophie Farthing
And we've got a couple of questions. I'll try and squish them together. But we've got questions about speed. I think in this conversation about regulation, we always
Sophie Farthing
go back to technology coming at us at such speed that regulation can't keep up. So N's question, which is pertinent at the moment, is: how do we work, from all the different perspectives we come from, at speed, to get the policies, regulations and practices in place? And how do we make sure that regulation can adapt? And I might squish a question in there, because I know part of the work that you and your team undertook in this was about looking at international regulatory trends, and Australia is in a good position because we've got some experiences to draw from overseas. But can you comment just on whether regulation can keep up with technology that's coming, or how regulation can be made so that it can adapt pretty readily to new tech that comes into play?
Nicholas Davis
Yeah. So I'm a big believer here in the idea that more haste means less speed. If we try and do knee-jerk, immediate reactions to every technological development or outcome, we're going to end up with really poor, conflicting and ineffective policy. That could be any one of three major failures: it could be impossible to implement; it could be ineffective in actually getting to the heart of the problem that you're trying to solve for in policy; and/or it could isolate Australia and our market and not be at all in harmony with practice elsewhere, for a country that really does import a lot of its technology services, and cloud services in particular, from overseas. And that would be a bad outcome for us as well. So I've got this sense, and maybe, Martin, you and I should take this away and talk for a little bit, but I understand the premise that technology moves fast and regulation moves slowly. But part of the point of regulation is to introduce stability and certainty, and give protections which are long-lasting and are a kind of foundation for innovation. And so we really don't want to be updating core regulation every year, you know, every week, etc. What we want is really thoughtful, broad-based, and in many cases outcome-based regulation that allows hundreds of new technologies to emerge every day, and new AI programmes to emerge every minute as we're currently seeing on Twitter, but nevertheless for them to avoid creating harms for all of us. And for me, that's a different problem from thinking purely about that speed mismatch. I don't have a particular answer to it, but I do worry that hand-wringing over speed itself might be the wrong framing. It might be that we should be hankering much more over nuance, and over being able for our laws to be thoughtfully designed, debated and implemented in ways that actually adapt and cover that regulation, rather than thinking about how quickly they themselves actually change. Again, I might open that up to Ed, because it is very much a regulation-based question which links to the other projects that Ed and Sophie are running.
Edward Santow
Look, I don't think I have anything else to add there. I might just let the flow continue. Okay.
Sophie Farthing
Thank you. One factor that Chris has just raised, and that came out in the report itself, is that there are laws that are in place. So can you talk a little bit about that? That was one thing that you found in the research, and certainly the background legal research that was done. And Chris is asking, you know, there is this principles-based legislation, so is what we should be talking about authoritative guidance, through the regulators that are currently working in these spaces and adapting themselves to the new environment?
Nicholas Davis
100%. Look, if you're a financial institution, you're governed, you're subject to APRA's regulation, right. You can look to the law, but you also have a wealth of authoritative guidance from APRA about how to set up and manage your governance systems. Now, that is all 'should', right, because it's flexible to what you need to do. But by goodness, if you diverge from a lot of that and something goes wrong, you know, the courts and the regulator will interpret this as you not having paid attention there. So I think Ed's call for an AI Safety Commissioner in the Human Rights and Technology report from 2021 is a really good way forward here, because it's not saying we need a new regulator at all. It's not a new regulatory authority. It's rather a commissioner that gives that authoritative guidance, which can then be taken into account by the regulators who have the power to enforce or to issue decisions against that regulation, knowing full well that organisations have had the chance to make sense of this with some really thoughtful, nuanced guidance. Guidance that can, as you point out, Chris, be adapted month by month as new systems emerge, and clarifications can be added to it. You know, we've just seen a new type of generative AI that melds the spoken word with music. Just be aware that that's captured by section 22 over here, so don't be fooled, it's still the same thing. That's often all that's needed, so that people don't think, oh gosh, we can do this
23.06.02 State of AI Governance in Australia
Friday, Jun 02, 2023
SUMMARY KEYWORDS
ai, risks, systems, organisations, regulation, governance, harms, sophie, report, australia, question, corporate governance, terms, senior executives, non executive directors, work, speak, talk, company, bit
SPEAKERS
Sophie Farthing, Nicholas Davis, Edward Santow
Sophie Farthing
Welcome, everyone. For those who don't know me, I'm Sophie Farthing. I'm head of the Policy Lab here at the Human Technology Institute. Before we kick off, I would like to acknowledge the Gadigal people of the Eora Nation, which is where I am joining this call from. And I appreciate everyone is possibly all over Australia and maybe even beyond, but I would like to acknowledge the Gadigal people and pay my respects to the elders past and present. And as well as acknowledging emerging leaders and elders. We have a jam packed hour.
Sophie Farthing
We are here today to talk about a very exciting report that HTI published this week on the state of AI governance in Australia.
Sophie Farthing
Just one housekeeping thing to note is that we are recording this session so just bear that in mind. And I will hand over now to our Co-Director, Professor Nicholas Davis to start us off, and give us the framework for what we're discussing today. Over to you, Nick.
Nicholas Davis
Thank you, Sophie, and welcome, everyone, I'm joining from Ngunnawal land down here in Canberra. And it's great to be with you all. It's amazing how literally just putting a link on LinkedIn can get so many people interested and engaged in AI governance, but it is a bit of the topic du jour. Today, we're really going to do only two things together. First of all, I'm going to have a chat with Sophie about some of the elements in the report, we thought we'd do that in a bit more of a conversational style than death by PowerPoint for 15 minutes or so. And then by the time we get to the half hour, and maybe even before, we'd love to start engaging you all in conversation about this, because you are all experts in this area from different perspectives as users of technology, as leaders in your organisations as non executive directors or senior executives, or as members of the media and commentators in this space. So we'd love for you to really push us and one another, particularly as to where all this goes. So we've got a lot of talk around risk and harms today around the shortfalls in corporate governance. But it's quite good to think about - what is next, how does this get taken forward. So I will have a couple of slides running along as Sophie and I talk, I'll stop that when we finish. There are a couple of ways that you can contribute directly in this session, you can throw your hand up if you're interested to jump in and speak. I think we'll leave it so that everyone can unmute themselves at the moment. But if for any reason, we end up getting 500 people on the call, we might go to a little bit higher level of moderation. But I think we're pretty good with this group. Second, there is the chat function. And I can say that there's already a bit of chat coming in. So please do feel free to ask questions or exchange. And then as well if you want to ask a question that gets kind of recorded and moderated. There's the Q&A function as well on your panel there in Zoom. Well, first, maybe before I pass back to Sophie on this, I will say that
Nicholas Davis
the work that we're doing here in the corporate governance programme is a three year programme that's supported by the Minderoo foundation. We're also incredibly grateful to our HDI advisory partners at Atlassian, KPMG and Gilbert and Tobin. I know that some of you are here today. So thank you for joining.
Nicholas Davis
But really, this is just the first phase of a longer conversation around the corporate governance of AI. And I think the word conversation is our trigger. So, Sophie, time for me to pass back to you for a chat.
Sophie Farthing
Thanks, Nick. So yeah, as Nick mentioned, this is a chat about the content of this report. And we are really hoping you will join us in this conversation about halfway through. So my job is to keep Nick to time. But Nick, can we start off just about the timing of this project? And this report this week is pretty incredible. I think we're all keenly aware of the kind of week we've had. So on Tuesday, we've had an open letter signed by AI experts from around the world, which was pretty alarming in terms of talking about extinction and human extinction and the risk of AI HCI. We published our report on Wednesday and yesterday of course, we had the Australian government open up a pretty wide ranging public consultation on what Australia needs to do to regulate AI. So briefly, Nick, can you tell us you know why this project and why now?
Nicholas Davis
Yeah, thanks, Sophie. Well, first, I guess it's important to recognise that we are really focused on the promise of AI as much as we are on the risks. This is really the Human Technology Institute taking a human centred approach to looking at these issues. And it's good timing in the respect that the launch happened kind of inconsequentially, or in coincidence with those other aspects. There's the launch, as you mentioned, but we've been working on this since September. And the reason why we focused on corporate governance here is partly in recognition that about 90% of all major research work in AI, and more than 95% of applications that you and I would encounter in day to day life are managed and delivered, you know, dreamt up and invested in by the private sector. So, we have this wealth of engagement and design, power, marketing power, and technical understanding that rests in the private sector. To that, you know, in given that that being the case, we really wanted to take a hard look at how authority management and governance worked inside private organisations as the key players that are actually using and deploying these systems. So that's kind of the first answer to the question. The second answer, the question is, we do know, and we're keenly aware that there's a couple of key gaps going on at the moment with corporate leaders. One is a general sense of a lack of awareness on current obligations, and even on the actual use of these systems. And the second is, as you say, this increasing set of calls for regulation, many of which had untethered from actual policy or legal experience in these areas. So, we're hoping to bring those things together through this work.
Sophie Farthing
Yeah, certainly the calls can be pretty alarming. And when you get into the detail of them, there's a lot of nuance in these discussions, which, and this report fills in a lot of those gaps. So, in terms of looking at risk, so we've heard for the last few years that there's a range of risks posed by AI, and we keep hearing of them. You know, we've got evidence of algorithmic bias, entrenching inequality, and discrimination, you know, alarming things on social media algorithms, undermining our democratic processes. And here in Australia, we've seen the pretty horrific impact of Robodebt, which is obviously less about machine learning and more about oversight and what needs what is effective oversight of automated systems. So, going back to the report you've published or we published this week, so what new ideas does this report contribute to our view of risk? And AI?
Nicholas Davis
Yeah, yeah, it was really, it is a really important and interesting part of, I guess, shaping the narrative around artificial intelligence is what we really mean when we say risk, and one of the key and critical distinctions that we drew. And I should, by the way, here, recognise a person who isn't able to be on the webinar, but is the lead author of this work, which is our colleague, Lauren Solomon, this is really, her deep insight was, unless you are really clear about distinguishing between a harm and a risk, you can use the word risk in ways that really take away the human being or take away the kind of irreversible damage that AI systems can do. And so I guess the first key thing that we do is really take that point to heart and draw a clear distinction between what is a harm to an individual or to a group that is, in many cases irreversible, and it's hard to provide compensation, versus what is a risk in terms of something perhaps financially quantified, potentially far in the future, and potentially occurring more to an organisation or a group, which kind of yeah, it dehumanises, in a way, what could happen. So that's the first thing is really focusing like many others have done in this in this space ethnographically on, there are real people involved when, when systems like the Robodebt system go wrong. I think the second big kind of movement for us was and I'll maybe I'll pull up a slide here. I'll come up to come back to those ones there, Sophie was, we did find that when people talk about risk and harm in AI systems, they generally provide a laundry list of a variety of different things that can go wrong or do go wrong or how people get harmed, and we found that talking to policymakers and particularly the corporate leaders that we engaged in this project. So the corporate leaders we defined as non-executive directors and senior executives, leading organisations, using or planning to use AI systems. So when speaking to them, we found that they didn't have an organising concept for how AI risks develop and what those components look like in terms of both the harms to individuals, and the risks to the organisations. So we spent a lot of time coming up with essentially a typology, which looks a little bit like this. The first one is, a lot of AI risks and harms float from when an AI system fails, it doesn't do what it's supposed to do. Completely different sorts of risks that can lead to similar harms, is when an AI system is used in malicious or misleading ways. But obviously, the intention, the failure points of those two sources of risk are quite different. And the third is thinking about the overuse, the inappropriate, the reckless use the downstream impacts of AI systems as externalities. And so by kind of stretching these out into different harm categories, you can see in that second top box, they're biased performance. That's where your algorithmic bias comes from, it's when the errors of an AI system are distributed in such a way as to harm a certain group of people and not others. And if that harm is distributed across a protected characteristic that is, you know, in breach of discrimination law in Australia, and so we kind of produce examples there. The same can be done for malicious or misleading systems. The Trivago case was an example of a misleading algorithmic system that produced really bad financial outcomes for consumers. So dark patterns, but you also have other sources in here. 
And then there are a lot of more society-wide and environment-wide impacts in the overuse category. But there's also just the use of AI systems when it's not warranted, and I think we're seeing a lot of that: systems that are not being used maliciously and are not failing, they're working as intended. But do you really need to capture everyone's licence plate entering your car park, match that up against a store card, and gather all that information in order to do the job that your store is doing? That's a really important question that currently isn't really covered by our law.
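As a minimal sketch of that typology (the class names and the example entry below are illustrative assumptions, not taken from the report), the three sources of risk can be recorded alongside the distinction between harm to people and risk to the organisation:

```python
# Illustrative sketch only: three sources of AI risk, each kept distinct from
# the human harm and the organisational risk it can produce.
from dataclasses import dataclass, field
from enum import Enum, auto


class RiskSource(Enum):
    SYSTEM_FAILURE = auto()               # the system doesn't do what it's supposed to
    MALICIOUS_OR_MISLEADING_USE = auto()  # e.g. dark patterns, deceptive algorithms
    OVERUSE_OR_EXTERNALITY = auto()       # reckless, unwarranted or downstream use


@dataclass
class HarmRecord:
    """Keeps the human impact separate from the organisational risk."""
    use_case: str
    source: RiskSource
    harm_to_people: str        # often irreversible and hard to compensate
    risk_to_organisation: str  # e.g. financial, regulatory, reputational
    affected_groups: list[str] = field(default_factory=list)


# Hypothetical example: biased performance arising from a system failure.
example = HarmRecord(
    use_case="Automated recruitment screening",
    source=RiskSource.SYSTEM_FAILURE,
    harm_to_people="Errors concentrated on one group, potentially unlawful discrimination",
    risk_to_organisation="Regulatory exposure and reputational damage",
    affected_groups=["applicants sharing a protected characteristic"],
)
print(example)
```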
Sophie Farthing
Obviously in the report, there are so many risks that organisations are grappling with. So can we get practical, which is what this report does incredibly well. So thinking about organisations, how can a generative AI system pose risks to an organisation? And how are companies governing those risks at the moment?
Nicholas Davis
Yeah, well, maybe I'll jump back a few slides to show the data we've gathered on where organisations are using AI, because I think it gives a good insight into where those risks come from with an example like generative AI. So if I roll back to here: we found that two-thirds of Australian organisations say they are using or planning to use AI in the coming year. Now, when we dive deeper into this, I could not find a single organisation that wasn't using AI when you think about how employees are actually going about their day-to-day business. We found that about half of the people I spoke to who said they were using generative AI at work had not told their bosses about it. And that stacks up; it's about the same order of magnitude as recent research from February this year, which shows that about 30% of professionals who report using generative AI at work haven't told their boss. And you can kind of understand why: when people stumble across something that feels like magic and saves them a lot of time, it's uncomfortable to tell your supervisor or manager, gosh, I actually spent a third of the time I used to spend on that task, because often the efficiency gets eaten up in other tasks in different ways, and you're not quite sure whether that's allowed or appropriate at that moment.

The second thing I'll say is that when we think about risk at the organisational level, you are often trading off your risk appetite: what you want to gain versus the risks you're incurring. And you can see here on the right-hand side of this diagram that the senior executives, in dark blue, and the non-executive directors had quite different perspectives on what they expected the benefits to be. The non-executive directors were really focused on customer experience, whereas the senior executives were really focused on business process efficiencies. So it's quite interesting to think about what risks you might take in different areas in order to get that upside. And then, just in terms of the data here, if you look across the top five uses our survey revealed, in terms of where AI was being applied at the organisational level rather than the individual employee level, three of the top five really touch important stakeholders. Customer service, marketing and sales, and human resources are all systems that can make really critical decisions about the individuals that interact with your company. And it was with that in view that we started to think: well, how do organisational risks in this area evolve? So I'll just step forward to this example here for generative AI, which I think illustrates really nicely the risks of using it in your organisation. About two weeks ago, one of the national organisations in the US that ran a telephone helpline for people with eating disorders introduced a bot called Tessa. They'd been testing it since February. But about two weeks ago they said, actually, we're going to fire 120 volunteers and let go of all our helpline staff, because Tessa can replace them. Now, what was terrible about that, for someone like me who has worked in other areas, is that the decision wasn't really about Tessa being better than the helpline staff.
It was that the unionisation of those employees was seen as a threat to the company's wage bill, and so they said, no, we're going to replace them with a bot. About a week or ten days later, they shut down the bot, because it just could not perform. It was producing really terrible outputs that put people at risk. And obviously, when you're talking about a helpline for people in crisis, that kind of edge is, I think, incredible to consider as a live example. The other thing I want to mention here is that examples like that show AI amplifying risks in three different ways. That Tessa example, first of all, just provided a terrible service. So that's bad for your commercials and bad for your operational efficiency: if the AI system doesn't perform as intended, you're going to lose business and be less efficacious, just because it's a worse product. Second, you're going to expose yourself to some pretty big regulatory risks as well, particularly if you get into any of those areas where there are duties on your business to perform at a certain level, or if the errors are distributed in ways that result in unlawful discrimination. And then you take a reputational hit, because those headlines look pretty bad; and of course these risks often co-occur, all three of them. And just to mention, Sophie, to finish up on this point, a really interesting finding of our report was that when you ask organisations and executives in general what they think about the risk of these types of things going wrong, people with less knowledge, who are just coming to the party with AI, tend to cluster their answers in the middle of the spectrum. It's a bell curve where most of the answers are low to moderate risk; the modal answers sit in the middle of the distribution. But once you start speaking to organisations and corporate leaders who have spent more time with AI systems, the distribution goes bimodal. You get quite a lot of people clustering in the very low area, because these are systems they're very used to, and maybe they're a bit complacent, or on the other hand the systems are genuinely low risk, like mapping and optimisation, not touching stakeholders and not core to the business. But at the other end of the spectrum, you see quite a big spike in people thinking, oh no, there is a whole bunch of systems here that I view as very high risk in our organisation. For people like us at HTI, who also work heavily in the policy space, that's a really interesting and positive finding, because it means risk-based regulation could work really well: you can separate out the different use cases and create a clear line between those systems which are lower risk and those which are higher risk. It also shows that recognition is evolving in the market.
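As a rough illustration of that pattern (the numbers below are invented for the sketch, not HTI survey data), the difference between a unimodal and a bimodal spread of risk ratings can be simulated like this:

```python
# Illustrative only: shows less-experienced ratings clustering in the middle,
# while experienced ratings split into low-risk and high-risk camps.
import random

random.seed(0)

def clamp(x):
    """Clamp a rating to the 1 (very low risk) .. 5 (very high risk) scale."""
    return min(5, max(1, round(x)))

novice_ratings = [clamp(random.gauss(2.5, 0.8)) for _ in range(200)]
experienced_ratings = (
    [clamp(random.gauss(1.3, 0.5)) for _ in range(120)]   # very-low-risk cluster
    + [clamp(random.gauss(4.5, 0.5)) for _ in range(80)]  # very-high-risk cluster
)

def histogram(label, ratings):
    print(label)
    for score in range(1, 6):
        count = ratings.count(score)
        print(f"  {score}: {'#' * (count // 5)} ({count})")

histogram("Less experienced (unimodal, low to moderate):", novice_ratings)
histogram("More experienced (bimodal, very low and very high):", experienced_ratings)
```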
Sophie Farthing
And of course, as head of the Policy Lab at HTI, this question of regulation is something I'm thinking about a lot. So, Nick, as I've mentioned before, and I'm sure it's at the front of a lot of people's minds on this call, there are all these calls for regulation and questions about what it might look like. Can you talk to us a little bit about how the report deals with these calls for regulation and what regulation might look like?
Nicholas Davis
Yeah. I think it's first important to recognise that despite a lot of chat about the risks, and awareness of the risks, there's been very little action in terms of organisations changing their behaviour and investing in governance systems to deal with these risks. That's a key finding of the report: essentially, corporate governance of AI is unsystematic, it's unstrategic, and it's unequal to the risks that are emerging. This slide is from some global McKinsey data, but it's entirely backed up by what we've been looking at as well. And I might come back to this governance approaches slide a little later if people are interested. But I'll take it to the point about existing obligations. The fear that we have, and that really crystallised during our interviews and workshops, is that when people hear public calls for regulation of AI, their assumption is that there is currently no regulation of AI. There's this kind of unstated sense of, well, if it's needed in the future, there must not be much there today. And while it is true that in Australia we don't have AI-specific laws, we do have a range of existing laws of general application, spanning a huge range of areas, that are directly applicable to how organisations should be managing and governing their AI systems. From a corporate governance perspective, the most important of those are the duties that apply directly to directors under sections 180 and 181 of the Corporations Act, as well as the common law fiduciary duties owed to the company. Those are due care and diligence, good faith and proper purpose; they're about the reasonable awareness, skill and capability that you bring as a director. And a lot of the non-executive directors we were speaking to hadn't really thought about their directors' duties in the way they might have with, say, cybersecurity, as starting to encompass taking care of what could go wrong with these kinds of critical risks. I will also say that AI system use is not just growing, it's also becoming more core to organisational business models. So it's not just about asking whether directors need to be aware that the company is using AI in recruitment; it's that the company is using AI in multiple areas, often mission-critical areas, that become very strategic, very important, and expose it to a whole range of risks which could engage those duties. But beyond the directors' duties and those Corporations Act duties, we present a number of areas in the report where there are really important specific legal obligations to look out for, because they are particularly pertinent to AI systems: consumer protection, particularly around misleading information and unfair dealing; cybersecurity, of course; anti-discrimination, which as I mentioned operates at multiple levels; duty of care and work health and safety, a really interesting one; and of course privacy and data use, given first that AI systems tend to draw on data sources from a wider variety of areas than your traditional IT system.
But also, as soon as you are using an AI system to solve problems that are directly consumer facing, you are almost by definition pulling in personal information, and sometimes sensitive information, and you are exposed to a higher burden of data management. If those systems are not yours, if they're not in your control but you're still responsible for them, you need to be really careful. Another thing we found on existing obligations is that both senior executives and company directors weren't really on top of how their organisation was using third-party services that deployed AI, and they were not across how the risks and legal obligations applied through those third-party services as well.
Nicholas Davis
Did we lose Sophie? Right. Well, in that case, she may be having some internet problems, so I might just take us to one final question that I know she was going to ask, which is: what can we actually do with all of this? It's really important that we give people a way forward out of this sense of 'gosh, people aren't doing enough'. It seems that corporate Australia isn't really far along the maturity curve in dealing with the risks and harms we can already see, and yet the explosion of use is opening up a big gap that presents commercial risks, regulatory risks and reputational risks. We came down to four actions that we thought were particularly pertinent. The first really comes out of the fact that directors and senior executives repeatedly told us that they didn't think their organisation had strategic expertise in artificial intelligence. Many of them said, look, we do have quite a good data and analytics group, we have some technical assets, some of which we borrow and some of which we outsource. But internally, across teams like procurement or HR, across senior management, and definitely on the board, we're not sure how to leverage this, what it means, where the risks are, or how it fits into our strategy. And only 10% of the people we surveyed or spoke to even had AI referenced in their strategy, or an AI strategy at all. So the first two actions are basically around skilling up on the strategic side of artificial intelligence, not becoming data scientists who know how to deploy, manage or operate it, but knowing how to decide when it's appropriate and how to govern it well internally, and then literally just having an AI strategy which sets out your risk tolerance and your risk appetite. That was critical for the board members; the biggest request from boards was, we don't see an AI strategy in our organisations, and we want it present to help us do our job as company directors. The third thing is that when we asked our survey group, we surveyed about 268 people, those who are currently using AI systems, how they governed those systems, about a third said they had some form of assessment or governance system in place. But when we dived into what those systems actually looked like, we found they were hugely diverse, fragmented and really unsystematic. The number one answer was, we don't have a system in place, and that's two-thirds of all organisations we spoke to. Of the ones that did have a governance system in place, a lot were using just an Excel spreadsheet to record risks, and this includes some of Australia's biggest corporations using a spreadsheet to record risk controls. Many, many organisations reported that they used a form of governance that I've named guru-led governance, which is where there's one person in the organisation whom everyone points to and says, you know, Sally, you know about AI, should we do this project or not? And that's remarkably common; it was even a big feature of how one of the world's biggest tech companies that we spoke to governs AI systems in its organisation.
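As a sketch of what a more systematic register might capture than an ad-hoc spreadsheet (the fields and example entry below are illustrative assumptions, not drawn from the report or any particular standard):

```python
# Illustrative only: a structured AI system register entry with a named owner,
# risk tier, relevant legal obligations and an escalation path.
from dataclasses import dataclass
from datetime import date


@dataclass
class AISystemRegisterEntry:
    system_name: str
    business_owner: str               # a named accountable owner, not just "the guru"
    purpose: str
    stakeholders_affected: list[str]
    risk_tier: str                    # e.g. "low", "medium", "high"
    legal_obligations: list[str]      # e.g. privacy, consumer law, anti-discrimination
    third_party_components: list[str]
    controls_in_place: list[str]
    last_review: date
    escalation_path: str              # who decides when something goes wrong


entry = AISystemRegisterEntry(
    system_name="Resume-screening model (vendor-hosted)",
    business_owner="Head of Talent Acquisition",
    purpose="Shortlist applicants for interview",
    stakeholders_affected=["job applicants", "hiring managers"],
    risk_tier="high",
    legal_obligations=["anti-discrimination law", "privacy and data use"],
    third_party_components=["external SaaS screening API"],
    controls_in_place=["bias testing before deployment", "human review of rejections"],
    last_review=date(2023, 6, 1),
    escalation_path="AI governance committee",
)
print(f"{entry.system_name}: tier={entry.risk_tier}, owner={entry.business_owner}")
```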
And then a lot of companies reported that they were sending AI plans through either IT processes that weren't suited to the specifics of AI, or legal and privacy teams for sign-off where those teams had no training and no real experience with these systems. So action three is really about starting to cut away at those deficiencies and getting something that's more integrated and fit for purpose. And then finally, as we pivot towards discussion with you all: we know from corollaries in other areas of governance and management, particularly work health and safety and financial services, that at the end of the day this really does come back to how people behave, even when they're not being watched and even when they don't have to fill out a compliance checklist. So having a really human-centred AI culture, where your frontline staff are trained and know when an error affecting a customer is actually a big deal that should be investigated and could be systemic, as opposed to just an unfortunate 'computer says no' outcome, is really important to inculcate as part of the way organisations work. Unfortunately, in some of the big tech-driven companies that we spoke to and work with, business model drivers are currently at odds with a lot of the human-centred AI approach that we'd like to see. That doesn't have to be the case here in Australia for companies using and deploying AI, and I had some really encouraging conversations, particularly with startups who were basing their organisation around AI-driven processes and platforms and whose first question was: how do we make this human-centred? How do we build an organisation around these platforms that is safe, inclusive and protects people, particularly marginalised people and people with less voice? So I'm encouraged by this, but we are at the very beginning of this journey. Now, I promised, maybe only tacitly, that we would move over to discussion at half past. Sophie, are you back with us?
Sophie Farthing
I am back. My apologies for dropping out. But yes, we do want to switch now, and we want to hear what you have to say. I've already got a couple of questions in the Q&A, so I might just draw everyone's attention to that. We've got a pretty big group now, so perhaps if people can pop their questions into the chat, we'll do our best to get to them all. So Chris has put in a question that I think relates to the point you were just making, Nick, about being encouraged by some of the feedback you'd had from organisations. Chris is, I guess, not so encouraged, because we've already had Australian federal government consultations; Chris pointed to the one last year, which a lot of people contributed to and which we haven't had a government response to yet. So Chris has commented that he feels like we're starting this conversation again. Can you reflect a bit on that, especially because you have been talking really practically, and what this report does is give a really practical day-to-day view of the regulatory framework and some of the gaps? Can you speak a little to what you think this latest government consultation will do to contribute and move this conversation along?
Nicholas Davis
Yeah. I think the current consultation is, first of all, an indication that the minister and the department are finally moving into a bit more of a policy focus rather than an industry focus in this area. To me it seems to be a signal that, unlike previous consultations, there are now the resources and the focus to move on this. And, Chris, you might say, well, that's a little bit belated, and it's not using the work that's already been done; I'm sure much of that will be rolled in. But I have been encouraged by the fact that the department has been much more proactive in reaching out and engaging on this. I might also ask Ed to comment, because I want to make sure we put this regulatory discussion in the frame of another project that we're working on, the future of AI regulation, which goes hand in hand with corporate governance. Ed, how do you view this recent consultation? Is it meaningful? Is it a signal of actual change?
Edward Santow
Oh look, I think it's positive. The federal government has not said anything really significant up until this point about what it wants to do by way of reform in AI, so this is the first major marker it has put down, and that's a good thing. But I also share some of the frustration of Chris and others that there are some really important reform processes that have set out clear, actionable reforms, and they are just sort of sitting on a shelf somewhere. I say that with the bruises of having worked at the Human Rights Commission, with Sophie Farthing and Lauren Perry and others, to deliver the Human Rights Commission's report, which has some really clear reforms, and there's also the privacy reform and others. So I think the government should do two things. One, it should do a clear audit of what it wants to take forward in terms of carefully considered reform that is already on its plate. And two, it should identify where the gaps are. One of the things I really like in the new discussion paper is that it acknowledges, if tacitly, that we need to move from high-level ethics principles to practice, and that's really exciting. We've seen, for example, here in New South Wales that the AI Assurance Framework, applied to government agencies, is designed to do just that. It's not perfect, but I think it is a really good first effort in that regard.
Nicholas Davis
Sophie, do you mind if I bounce straight into Charles's question in the chat? Charles wrote that he leads a government legal working group in the New South Wales Government, and asked about the role of government in legislating codes of practice and so on. I think we're firmly on the side, at least speaking for myself, that it is time for government to step in and create a series of quite firm guidelines that still allow organisations to do all the great innovation we want, but provide really positive rights and positive guardrails for systems that can go awry. Even if that regulation is just forced reflection and transparency, as most of the AI Assurance Framework is: basically saying that in order to deploy a system of a certain risk, you need to have done a review, you need to have thought about it, and you need to have registered it and gone through that process. In the facial recognition model law that Lauren Perry, Ed and I worked on and published last September, we proposed a model law for one subset of AI, facial recognition technologies, that would be risk-based and would clearly prohibit a certain set of uses. Not the underlying technology itself, but its use in certain cases: mass surveillance and public surveillance, and biometric facial analysis drawing characteristics for anything other than entertainment. So I certainly think there's a case for government to put those kinds of high-risk guardrails in place, but also to promote the use of mandatory instruments like assurance or, even where instruments aren't mandatory, to support the use of international standards that firmly encourage organisations, whether through the market or through legislation making standards mandatory, to put in place governance systems that are fit for purpose. Towards the end of this year we will see the ISO 42001 standard published, which is the artificial intelligence management system standard, and that is a set of guardrails and activities an organisation can put in place precisely to better govern AI systems. So there's certainly a role for that. And by the way, if you ask Australians, and if you ask people around the world, they don't want voluntary regulation for AI. They're asking for government or independent regulators to step in and manage this. So it's not just the view of eggheads or technocrats; it's the view of the general public that they don't trust industry to regulate themselves.
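As a toy sketch of how such a risk-based gate might operate (the categories and rules below are illustrative assumptions, not the model law's or the Assurance Framework's actual provisions):

```python
# Illustrative only: some uses are prohibited outright, higher-risk uses require
# a documented review and registration before deployment.
PROHIBITED_USES = {
    "mass surveillance",
    "biometric analysis of characteristics (non-entertainment)",
}

HIGH_RISK_USES = {
    "recruitment decisions",
    "customer-facing eligibility decisions",
}


def deployment_decision(use_case: str, review_completed: bool, registered: bool) -> str:
    """Return what a risk-based assurance process might require for a proposed use."""
    if use_case in PROHIBITED_USES:
        return "Prohibited: do not deploy."
    if use_case in HIGH_RISK_USES:
        if review_completed and registered:
            return "Permitted: high-risk use with completed review and registration."
        return "Blocked: complete an impact review and register the system first."
    return "Permitted: lower-risk use; apply standard governance."


print(deployment_decision("mass surveillance", review_completed=True, registered=True))
print(deployment_decision("recruitment decisions", review_completed=False, registered=False))
```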
Sophie Farthing
Nick, can I draw on a question that Jackie has put into the chat? It picks up that really interesting finding about how companies are thinking about risk. I think what we are concerned about at HTI is pretty much encapsulated in the eating disorder chatbot example: the person who wasn't really considered when that company was rolling out the chatbot was perhaps the person who needed the help. So Jackie is asking, and I'll read it out as it's well put: do companies need guidance, through AI assurance processes or through standards, to make sure that humans are front of mind when these AI systems are developed? And Jackie also notes that these systems seem to be developed primarily by technical teams. So is there a lesson there in corporate governance for an interdisciplinary or multi-professional approach to the way organisations roll out and adopt AI systems?
Nicholas Davis
Yeah, so it was really interesting, in reviewing this work and that data, which we sought to validate in different ways, to see some of the expert commentary that came back on that finding. That's the finding Jackie's talking to: the diagram whereby the more experienced you are, the more bimodal the distribution of risk perception is. The two primary explanations that came back were, first, maybe people are attuned to the risks and they are simply using two different types of systems, so the responses accurately categorise the systems into two groups. But a really strong suggestion, which we have investigated a bit further, is what you imply there, Jackie: maybe there's a whole bunch of people who are just complacent, because they're technical teams that are invested in their current systems and their development, and they use them, so they view them as low risk. One commentator came back and said this could actually be an example of the Dunning-Kruger effect, where people with low levels of expertise or experience, in this case in human-centred AI risk, overestimate their ability or knowledge and therefore assign a lower risk. Look, it's something to dive into, and it would be an interesting one to restart the clock on now, looking at systems being deployed with a higher level of awareness of risk, rather than backdating it. I certainly see the case at the moment with many organisations we work with, where one of the most common requests I get comes from data teams: the data and analytics team will say to me, Nick, the next time you speak to senior management, can you just tell them these three things about our system, because they just don't get what we're trying to do. And often the purpose of them asking me to say that is either that there's a misapprehension about the system, which means risk perception at the senior levels is higher than they think is warranted, or that there's an opportunity being overlooked. So I think the other aspect of that curve could actually be failures in communication between teams, which strips out some of the nuance of how these systems could go wrong, and also of the benefit they can bring.
Sophie Farthing
We've got two questions in different chat windows about this regulatory conversation that we're certainly about to have in Australia, but I want to pause first on Katie's point, which is a really important one: what role can and should the social sector play in this? I think what Katie's referring to, and I come from a civil society background myself, is all the analogue expertise and experience that those organisations and individuals can bring, while recognising that when you're doing service delivery, for example, resources and time are incredibly limited. So Katie's question is: where does civil society, or the social sector, best invest its time in the context of this conversation?
Nicholas Davis
Yeah, look, it's a great question. I think there's a whole plethora of opportunities to support here, but as you point out, Katie, and as we all realise, the resource question is a really live one. So one angle is that there are groups within the social sector that already do, and need to continue to, hold organisations to account, whether they are government or business, against the expectations built up through regulation or through good practice. This is particularly important for AI, where a lot of the harms may not be evident to people, because they are happening through decisions that are opaque, or they may be happening to people who have less voice. So advocacy and support organisations, particularly on the legal side, such as the human rights law centres, are critically important here to identify and represent when things go wrong. And that's a role that Ed played for a long time at PIAC, and I know many of you here have as well. The second thing is that a lot of organisations will find it hard to come up to speed with the level of governance that AI systems need, and there will be investment along the way in govtech, or governance tech; regtech, you could call it, if it's about regulatory compliance technology. Making that available to less-resourced organisations, so they can get over the hurdle of actually using these systems well, is going to be important too. So supporting this market of bringing everyone up to safe use is not going to be inconsiderable, when you think about how often the biggest companies call for additional regulation, knowing full well that it's anti-competitive, because they've got reams of lawyers and compliance experts, whereas many of their startup competitors don't, and find it hard to litigate or comply with those requirements. Organisations like Access Now, which Brett Solomon leads, provide critical support to NGOs in the case of cybersecurity attacks and the like; we're going to need similar supporting services in governance, remedy and remediation when it comes to AI systems. I think there's a big role there. And third, a lot of training: I think there's a big role here for the social sector to play in upskilling, and we need to upskill the social sector as well. We need funding to support Australia's biggest nonprofit entities to be able to use AI to their own benefit, because otherwise we will see a further widening of the capability gap between the private and nonprofit sectors. And as someone who spent the last 16 years in Europe, I find it really upsetting how weak Australia's nonprofit sector is, and how few resources flow from the private sector, government and other groups into supporting those organisations in Australia compared to the US and Europe.
Sophie Farthing
And we've got a couple of questions. I'll try and squish them together. But we've got questions about speed. I think in this conversation about regulation, we always
Sophie Farthing
go back to technology comes at us at such speed that regulation can't keep up. So N's question is pertinent at the moment is how do we work, as you know, from all the different perspectives we come from, how do we work at speed to get the policies, regulations and practices in place? And how do we make sure that regulation can adapt? And I might squish a question in there, because I know part of the work that you and your team undertook in this was about looking at international regulatory trends. Australia is in a good position, because we've got some experiences to draw from overseas. But can you comment just on has regulation keep up with technology that's coming, or how's regulation made so that it can adapt pretty readily to new tech that comes into play? Yeah.
Nicholas Davis
So I'm a big believer here in the idea that more haste means less speed. If we try to make knee-jerk, immediate reactions to every technological development or outcome, we're going to end up with really poor, conflicting and ineffective policy. That could mean any one of three major failures: it could be impossible to implement; it could be ineffective in actually getting to the heart of the problem you're trying to solve; or it could isolate Australia and our market and not be at all in harmony with international practice, which, for a country that really does import a lot of its technology services, and cloud services in particular, from overseas, would be a bad outcome as well. So I've got this sense, and maybe, Martin, you and I should take this away and talk about it for a bit: I understand the premise that technology moves fast and regulation moves slowly, but part of the point of regulation is to introduce stability and certainty, and to give protections which are long-lasting and a foundation for innovation. We really don't want to be updating core regulation every year, or every week. What we want is really thoughtful, broad-based and in many cases outcome-based regulation that allows hundreds of new technologies to emerge every day, and new AI programmes to emerge every minute, as we're currently seeing on Twitter, while still having them avoid creating harms for all of us. For me, that's a different problem from thinking purely about that speed mismatch. I don't have a particular answer to it, but I do worry that hand-wringing over speed itself might be the wrong framing. It might be that we should be hankering much more for nuance, and for our laws to be thoughtfully designed, debated and implemented in ways that actually adapt and cover new developments, rather than worrying about how quickly the laws themselves change. Again, I might open that up to Ed, because it is very much a regulatory question, which links to the other projects that Ed and Sophie are running.
Edward Santow
Look, I don't think I have anything else to add there. I might just let the flow continue. Okay.
Sophie Farthing
Thank you. One factor that Chris has just raised, and that came out in the report itself, is that there are laws already in place. Can you talk a little bit about that? It was one of the things you found in the research, and certainly in the background legal research that was done. And Chris is asking: given this is principles-based legislation, is what we should really be talking about authoritative guidance, through the regulators that are currently working in these spaces and adapting themselves to the new environment?
Nicholas Davis
Yeah, 100%. Look, if you're a financial institution, you're subject to APRA's regulation; you can look to the law, but you also have a wealth of authoritative guidance from APRA about how to set up and manage your governance systems. Now, that guidance is all 'should', right, because it's flexible to what you need to do. But by goodness, if you diverge from a lot of it and something goes wrong, the courts and the regulator will interpret that as you not paying attention. So I think Ed's call for an AI Safety Commissioner in the Human Rights and Technology report from 2021 is a really good way forward here, because it's not saying we need a new regulator at all. It's not a new regulatory authority; it's rather a commissioner that gives that authoritative guidance, which can then be taken into account by the regulators who have the power to enforce or to issue decisions, knowing full well that organisations have had the chance to make sense of it through thoughtful, nuanced guidance that, as you point out, Chris, can be adapted month by month as new systems emerge and clarifications are added. You know, we've just seen a new type of generative AI that melds the spoken word with music; the guidance can simply say, be aware that that's captured by section 22 over here, so don't be fooled, it's still the same thing. That's often all that's needed, so that people don't think, oh gosh, we can do this
How can a generative AI system pose risks to an organisation? And how are companies governing those risks at the moment?
Sophie Farthing, Head of Policy Lab
One of the key distinctions that we drew, our lead author Lauren Solomon's deep insight, was that unless you're really clear in distinguishing between a harm and a risk, you can use the word risk in ways that take away the human being, or take away the kind of irreversible damage that AI systems can do. The first key thing that we do is take that point to heart and draw a clear distinction between a harm to an individual or to a group, which is in many cases irreversible and hard to compensate, and a risk in the sense of something perhaps financially quantified, potentially far in the future, and potentially accruing more to an organisation or a group.
Professor Nicholas Davis, HTI Co-Director
We – the humans, not our machines – are responsible for how our organisations develop and use AI, so we need to take charge. That means that our systems of governance and oversight should enable us to ensure that AI is used safely and effectively.
Professor Edward Santow, HTI Co-Director