'What is AI' video series
HTI has partnered with CSIRO’s National Artificial Intelligence Centre to produce a series of training videos aimed at people who are currently using or looking to use AI in their organisation. These videos provide guidance on how they can make responsible, human-focussed decisions regarding AI technology.
This video series is designed to give a crash course in human-centred AI in easy-to-understand language, including why AI is so transformative, why governance matters, and how you can ensure that AI is used responsibly in your business. The aim of the videos is to equip decision makers with the strategic AI skills that they need to consider how new technologies can be designed, implemented and used in ways that embed human values.
This work is part of HTI’s Skills Lab, which was established to build Australia’s capability in the strategic skills associated with AI and other technologies, including the procurement, implementation and oversight of AI.
What is AI?
Artificial intelligence is core to how businesses operate today. It's the technology powering your phone, targeting your customers on social media, and helping you recruit. Whether you know it or not, your organisation relies on multiple AI systems every day, whether to operate more efficiently or to improve how it engages with customers.
The Human Technology Institute's research reveals that almost every Australian business relies on multiple AI systems today. You may only be aware of a fraction of the AI applications that your employees use at work, often without any official sign-off from management or IT. Some of these AI systems are valuable and low risk, for example, AI-powered navigation systems, but others can introduce a range of risks and challenges, from cybersecurity concerns to the threat of physical harm.
As AI becomes an essential part of doing business, every business leader needs to cultivate what we call a minimum viable understanding of AI. And this starts with understanding how AI systems work, and why managing them carefully is critical to your organisation's success.
AI is challenging to define, partly because our understanding of what counts as AI changes over time. When the field of AI began in the 1950s, AI systems tried to mimic how humans make decisions; these became known as expert systems. Thanks to massive increases in data and computing power, the last decade has seen the rise of machine learning. This is where digital systems apply algorithms to large historical data sets to learn deep patterns, which allows them to make predictions when applied to new situations.
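If you want to see what "applying algorithms to historical data" looks like in practice, the sketch below is purely illustrative. It assumes the open-source scikit-learn library and uses invented customer figures; none of it comes from HTI or CSIRO materials.

```python
# A minimal sketch of machine learning: an algorithm is fitted to historical
# records (here, invented customer data) and then used to predict outcomes for
# new cases it has never seen. Assumes scikit-learn is installed.
from sklearn.linear_model import LogisticRegression

# Hypothetical historical data: [monthly_spend, support_calls] per customer,
# and whether each customer eventually left (1) or stayed (0).
past_customers = [[120, 0], [45, 3], [80, 1], [30, 5], [150, 0], [25, 4]]
left = [0, 1, 0, 1, 0, 1]

model = LogisticRegression()
model.fit(past_customers, left)            # learn patterns from historical data

# Apply the learned patterns to a new situation: a customer the model has not seen.
new_customer = [[60, 2]]
print(model.predict(new_customer))         # predicted outcome (0 or 1)
print(model.predict_proba(new_customer))   # and the probabilities behind it
```

The point is simply that the "learning" is statistical: the model fits patterns in past examples and applies them to new cases, which is why the quality of the historical data matters so much.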
Most recently, generative AI has changed the way we think about the possibilities of AI systems. Applications like ChatGPT and DALL·E rely on models trained on huge amounts of data to produce fluent text, novel images, and even video from simple text prompts.
It's critical to remember that all AI systems are based on maths, not magic. Machine learning is underpinned by statistics, linear algebra, probability theory and calculus. While impressive, AI systems are powered by complex algorithms and vast amounts of computing power; they do not possess common sense, interpersonal skills or a true understanding of the world. They can and do fail in many different ways.
As a business leader, you can think of AI as a very broad collective term for digital computer systems that have three characteristics. First, AI systems do impressive things. They combine algorithms and data to do things we have traditionally expected only of humans, such as predicting outcomes, classifying complex information, optimising processes, or generating content. Many of the largest large language models are also remarkably flexible, able to perform many of these tasks through the same interface.
Second, AI systems tend not to be explicitly programmed. Neural networks in particular work by learning from data, often finding patterns and relationships that would be impossible for a human being to discern. They are therefore deeply influenced by the data on which they are trained, which can result in errors and biased outputs.
Third, AI systems tend to be unpredictable and opaque. They produce different outputs depending on how they've been trained and the input they are given, and it takes special effort to understand how and why they come to a particular conclusion or decision. All of this means that AI systems are more than just another IT application for your business. They offer huge promise, but also require special attention to manage safely and responsibly.
This video series is designed to give you a crash course in human-centred AI. In the videos that follow, we will cover why AI is so transformative, why governance matters, and how you can ensure that AI is used responsibly in your business.
AI is crucial for modern business operations, driving technologies like smartphones, social media targeting, and recruitment. While some AI systems, like navigation tools, pose minimal risk, others introduce cybersecurity and safety concerns. As AI becomes fundamental to business, leaders must develop a basic understanding of its workings and the importance of managing it carefully.
AI encompasses a broad range of digital systems that perform human-like tasks, learn from data without explicit programming, and can operate unpredictably and opaquely. These characteristics create risks that need careful management if AI's potential is to be harnessed safely and responsibly.
What is human-centred AI and why do you need it?
Australian organisations expect AI systems to lift productivity, boost efficiency and improve customer service. When AI systems perform well, the benefits are exciting. A study by Stanford professor Erik Brynjolfsson found that customer service agents using an AI assistant to support their work resolved 14% more problems per hour, while also making customers and employees happier.
Yet, in order to take advantage of these benefits, we need to take the necessary steps to make sure that AI systems are responsible. That's why it's essential that AI systems draw on the principles and philosophy of human-centred design.
Human Centred Design is a methodology that places people at its core. It's about designing objects, processes and systems in a way that responds to a deep understanding of human desires, contexts, capabilities and needs. Human Centred Design aims to create solutions that are transparent, ethical and adaptable.
The idea of having empathy for users and other people affected by an AI system is at the heart of Human Centred Design. But it doesn't stop there. Human Centred Design requires a collaborative approach that starts with a defined problem and seeks input from diverse perspectives, stakeholder representatives and multidisciplinary teams to co-create solutions. It also champions iterative design and prototyping, which means that solutions are tested and refined in repeated cycles that draw on user feedback to progressively improve the system.
If you take this approach, your AI system will have its best chance of being useful, appropriate and functional, and of meeting your organisational goals. Human Centred Design also gives your AI system the best chance of being responsible. That's because building empathy for the people your system engages with, from the ground up, will provide insights that help you comply with Australia's AI ethics framework.
Take, for example, the challenge of fairness in AI applications that make or recommend decisions. Human Centred Design can help teams understand whether your data sets are truly representative of the groups your system is meant to serve, helping to reduce bias.
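As a purely illustrative sketch of that representativeness question, the following Python snippet compares the make-up of a hypothetical training data set with the population a system is meant to serve. The column names, age bands and population figures are all invented for illustration; a real check would use your own data and demographic benchmarks.

```python
import pandas as pd

# Hypothetical training data: one row per past loan application used to train a model.
training_data = pd.DataFrame({
    "age_band": ["18-34", "18-34", "18-34", "35-54", "35-54", "55+"],
    "approved": [1, 0, 1, 1, 0, 1],
})

# Share of each group in the training data ...
observed = training_data["age_band"].value_counts(normalize=True)

# ... compared with the share of each group among the people the system is meant to
# serve (placeholder figures; in practice these might come from census or customer data).
expected = pd.Series({"18-34": 0.30, "35-54": 0.35, "55+": 0.35})

comparison = pd.DataFrame({"in_training_data": observed, "in_population": expected})
comparison["gap"] = comparison["in_training_data"] - comparison["in_population"]
print(comparison.round(2))  # large gaps flag groups that are under- or over-represented
```

A simple comparison like this won't prove a system is fair, but it surfaces the kind of representation gaps that a human-centred design process would then investigate with affected stakeholders.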
As a decision maker in your business, your choices will help shape the future of AI. Whether you're purchasing, customising or building an AI solution from scratch, you should ensure your teams use Human Centred Design to make it fit for purpose. Your organisation may well be struggling with enterprise software that is robust but challenging to navigate. A powerful AI system that doesn't take humans into account will not just frustrate users and leave money on the table; it could create significant risks for your business.
A good question to ask repeatedly: is this AI solution designed with our users, employees and stakeholders at heart?
In our next video, on the potential harms of AI systems, we'll explore what can go wrong if your system isn't human-centred.
To realise the benefits of AI, we need to take steps to ensure it is responsible and fit-for-purpose. That's why it's essential that AI systems draw on the principles and philosophy of human-centred design, a methodology that places people at its core. It's about designing objects, processes and systems in a way that responds to a deep understanding of human desires, contexts, capabilities and needs.
Managing the risks of AI
Why is AI so promising for business? A big part of the appeal of AI systems is that they tend to be more powerful and more flexible than other technology systems we're familiar with. This means that AI systems can tackle problems that other technology systems can't solve. The same technological features also make them less predictable and more difficult to understand than regular IT systems. This means that most businesses' IT governance approaches aren't robust enough to ensure that they are developing, buying or using AI systems responsibly.
Let's think about what you need to consider before using AI systems in your business. You already know it's important to ensure that an application is cyber secure, and that it isn't in breach of your privacy policy. But are you sure that your pre-trained AI system will continue to perform equally well a year from today? Are you confident that it won't produce dangerous outputs or cause harm? Are you certain that it isn't inadvertently breaching human rights?
Because AI systems are often used in ways that can have big impacts for your stakeholders, business leaders need to be across these kinds of questions. If not, you might find your organisation exposed to significant commercial, reputational and regulatory risks.
So let's look at the risks more closely. Commercial risks occur when poor AI system performance leads to extra cost. Ill-suited system design and security weaknesses can cost you time and money. For example, many AI systems are trained on data that becomes less relevant over time. If the training data isn't updated, the system will not be fit-for-purpose.
Reputational risks can be even more concerning. Only 34% of Australians say they are willing to trust AI systems. This means you need to be extra careful that your systems are designed and implemented ethically and responsibly. If a system is found to have biased results, or is found to mislead or manipulate, this can severely damage your brand. And if you're relying on automated decisions that aren't explained, you may be at risk of the same reputational damage.
Finally, AI systems can create regulatory risks. Australia has a range of laws covering consumer rights, employee safety and data privacy, to name a few, that apply to your use of AI in the same way they apply to any other product or process in your business. If you're not managing the risks associated with AI, you might mislead customers or endanger workers and be in breach of Australian law. And improper use of data could be in breach of privacy law, while a biased AI system could produce decisions that are in breach of discrimination law.
Commercial risk, reputational risk and regulatory risk often arise at the same time. Imagine that a group of doctors in a general practice implements an AI system that helps them diagnose illnesses and recommend treatment plans. Initially, the system seems to work well. It makes it far easier for doctors to access detailed patient histories, take notes and advise on treatment options, and it gives GPs more time to focus on connecting with patients. However, over time, a few doctors notice an unsettling trend. Despite the AI system not having access to records of patients' ethnicity, it appears to be systematically recommending a set of expensive private treatments to people of a certain ethnicity at far higher rates than to other groups.
This situation presents all three types of risk. By disproportionately recommending more expensive treatments in this way, the clinic risks driving away patients without private health insurance. When it becomes public knowledge that the clinic relies on an AI system that produces racially biased results, the clinic's reputation will be damaged. Lastly, this type of bias puts the clinic in breach of Australian discrimination law and medical ethics, exposing the doctors to claims of professional misconduct. An AI initiative that began as a technological advancement has become a costly mistake.
So how can you ensure that these risks don't materialise for your organisation? The human-centred approach is to start by focusing on harms and linking these to risks that may arise later. Harms are real, tangible, negative impacts on people. They can range from small inconveniences to serious, life-threatening consequences. Responsibly managing AI system risk means first thinking carefully through the potential harms that can result from the systems you use, then implementing strategies that avoid these harms. This is true whether you're using AI to analyse data, summarise research, draft legal contracts, operate machinery, recommend a product or respond to your customers online. After all, it's only once you understand the potential harms that you can develop controls to prevent them from occurring.
In the next video, we will explore how you can do this in ways that make sure your AI system delivers.
The same technological features that make AI systems so powerful also make them less predictable and more difficult to understand. This introduces new commercial, reputational and regulatory risks for organisations using AI, and can result in significant harms to their stakeholders. To ensure these risks don't materialise for your business, take a human-centred approach: start by identifying the potential harms, link them to the risks that may arise later, and put effective prevention and mitigation strategies in place.
Addressing AI system harms
The latest AI systems are very impressive. Large language models such as ChatGPT have ushered in a new era of AI systems that are not just generative, they are general, able to provide a fluent response to almost any question you ask. The fact that many generative AI applications are free means that anyone with internet access can do things that most people thought impossible only a year ago.
Generative AI might rule the headlines, but other forms of AI are also becoming increasingly accessible to organisations. It's now common for someone wanting to analyse data with the latest AI tools to do this in a few clicks on an MLOps platform. Once data is uploaded, the system automatically trains, tunes and ranks models without any coding whatsoever. As a result, more businesses across Australia are experimenting with AI applications as ways of solving problems that, until recently, were thought to be too hard or to require an entire team of data scientists and software engineers.
As we discussed in the last video, with this explosion of interest comes a range of new and expanded risks to organisations as well as potential harms to people. That's because organisations are using AI systems in ways that involve and impact people directly. Some of the fastest growing uses for AI systems are for customer service, for marketing and for recruitment.
To use AI systems responsibly, you need to understand potential harms. Identifying potential harms is referred to as an impact assessment. The negative impacts will depend on what the system is trying to achieve, how it is designed, where it is deployed and who encounters it. To avoid the harms you've identified and thereby manage the related risks to your business, you need to put in place appropriate controls.
HTI's research shows that the majority of potential harms of an AI system come from three sources.
First, an AI system can fail to perform as expected. For example, if an autonomous vehicle doesn't accurately identify objects on the road, it can crash and hurt someone. An algorithm that performs well for one group but poorly for another can end up unlawfully discriminating. A system that isn't robust could fail at a critical moment. Without appropriate security measures, your system could inadvertently release private data.
All of these are different categories of poor performance. An important question to ask is: what are all the ways that this AI system could fail, and what would be the consequences of such a failure?
Second, an AI system can be used in ways that deliberately harm people. For example, an employee might use a facial recognition system to invade someone's privacy. If your website claims to show customers the cheapest deals, but you instruct your recommender algorithm to prioritise more expensive offers, you may be misleading your customers and breaching consumer law. A targeted recruitment algorithm might purposely exclude older or younger people in breach of discrimination law. To assess this category of harm, ask yourself: how could someone use this system to intentionally cause financial, psychological or physical damage?
Third, even if AI systems perform well, and there's no intent to harm, they can still have negative impacts and create risk for your organisation. For example, a facial recognition system used in a publicly accessible place can limit people's rights to privacy. A fleet of self-driving vehicles might create congestion when empty. Energy-hungry algorithms used at scale could produce excess amounts of carbon. In many ways, this is the hardest category to address. Managing this category of harm requires you to think about the second-order impacts of your AI system, particularly when used at scale.
To understand this best, you might ask yourself: if this application ended up being used by everyone, everywhere, what issues might arise?
In the next video, find out what's required of you as a business leader when you're using AI in your business.
One way to identify potential harms is to conduct an “impact assessment.” To avoid the harms you’ve identified – and thereby manage the related risks to your business – you need to put in place appropriate controls.
Obligations and responsibilities of AI deployers
Businesses everywhere are racing to adopt AI systems. As your organisation implements AI, are you ready to navigate your ethical, legal and regulatory responsibilities? Whether you're a company director or senior business leader, or play a key role in developing, procuring or implementing AI systems, you should know your obligations under Australian and international law. If you don't, your business may be exposed to significant risks, and you might face individual consequences too.
Of course, managing risks and seeing opportunities require you to understand what AI systems are operating within your organisation and how they create value. This is fundamental to ensuring that your AI use is legally compliant.
So how do you know what obligations apply to you? First, it's important to remember that all the traditional laws that govern how your business operates also apply to AI systems. Your AI solution must therefore comply with the full range of consumer, privacy, anti-discrimination, workplace, intellectual property and any other laws that apply to your industry or products.
For example, if your AI system accidentally leaked customer information, you and your business could be in breach of privacy and cybersecurity laws. Misleading statements made by AI-driven advertising or promotions could violate consumer laws, while harms caused by an AI system in the workplace could trigger workplace safety issues.
Of course, additional rules will apply depending on the industry or sector you work in. For example, AI used in legal practice must not breach client privilege and confidentiality rules. If you're working in healthcare, AI systems must comply with patient safety and privacy laws. Moreover, if you are a company director, you personally have a duty under Section 180 of the Corporations Act that requires you to act with care and diligence. This includes ensuring that adequate governance systems exist to manage the risks created by your AI systems.
Second, international regulations can also impact how you operate your AI system, especially if you're doing business with overseas clients. A prime example is the European Union's General Data Protection Regulation, or GDPR, which protects the personal data of people in the EU no matter where in the world that data is processed. If your business interacts with people in the EU, your AI systems must also be GDPR compliant. As more countries explore and develop AI-specific regulations, you are increasingly likely to be exposed to international regulation.
Third, you should make sure you understand the expectations that your customers and other stakeholders have for you and your organisation. The more you engage with the people and communities who will be impacted by your AI systems, the better you will understand the concerns and expectations that may point to legal risk.
Working directly with those affected by your systems will also help to build trust and confidence in your organisation's use of AI. In some cases, it will be essential that you actually co-create AI systems with your customers. A useful reference for what Australians care about regarding how AI systems behave can be found in Australia's eight AI ethics principles.
So how can you ensure you're doing your duty as a director, executive or manager? Most importantly, if you're unsure of any of your legal obligations concerning AI systems, you should always seek legal advice, starting with your organisation's legal team. However, even the world's best legal counsel won't be able to help unless you can comprehensively explain to them the purpose of the system, the data it relies on and how it works in practice. This means that you or someone in your organisation must understand how the system works, and how it could cause harm. This is the case even if a third party provides your AI system.
As we will discuss in the next videos, you should also ensure that your organisation has implemented policies, guidelines and standards to help ensure you can meet your legal obligations and customer expectations.
Finally, ensuring that your AI systems are compliant is not a set and forget process. The way that AI systems are trained means that they can go out of date and become less effective. It's important to monitor the performance of AI systems to ensure they are working as intended.
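To make that monitoring idea concrete, here is a minimal, hypothetical sketch of one simple control: compare a system's recent accuracy against the level it achieved when it was approved, and flag it for human review if performance has drifted. The figures, threshold and helper function are invented for illustration, not taken from any particular governance standard.

```python
# Illustrative performance monitoring: flag an AI system for review when its
# recent accuracy drops too far below the baseline recorded at approval time.

BASELINE_ACCURACY = 0.92   # hypothetical accuracy measured when the system was approved
ALERT_THRESHOLD = 0.05     # hypothetical amount of degradation that triggers a review

def needs_review(recent_predictions, actual_outcomes):
    """Return True if accuracy on recent cases has dropped too far below baseline."""
    correct = sum(p == a for p, a in zip(recent_predictions, actual_outcomes))
    recent_accuracy = correct / len(actual_outcomes)
    return (BASELINE_ACCURACY - recent_accuracy) > ALERT_THRESHOLD

# Example: last month's predictions compared with what actually happened.
print(needs_review([1, 0, 1, 1, 0, 1, 0, 1], [1, 1, 1, 0, 0, 1, 1, 1]))  # True -> review
```

The specific metric and threshold will depend on your system and its risks; the point is that monitoring is an ongoing, defined process with a clear trigger for human intervention, not an ad hoc check.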
In the next video, you'll learn about the elements you will need to responsibly procure AI systems.
If you are involved in procuring or implementing AI systems, you need to understand what AI systems are operating in your organisation and how they create value, as well as your ethical, legal and regulatory obligations under Australian and international law.
Procuring AI
More organisations are opting to buy rather than build AI systems, hoping to tap into the latest, most advanced applications offered by technology partners. Given the nature of AI systems, buying a third-party solution requires new procurement strategies.
What do executives need to know to plan, source and manage AI systems in today's evolving technology landscape? One. Recognise that AI systems need additional care. Remember that AI systems differ from most other IT systems your business relies on. The special properties of AI applications mean that while existing IT risks and procurement considerations remain, there are a range of amplified and emerging risks that you and your suppliers should discuss.
For example, most software, including the most advanced cloud-hosted systems, relies on traditional, rules-based programming that can be understood and debugged efficiently. This is not the case for many AI applications, which tend to be less transparent to everyone, including your technology partner. To ensure that your customers and business are protected, additional care and controls are required.
Two. Have a clear problem statement. Before talking to suppliers, you should be absolutely clear on the problem you are trying to solve. It might be that an AI application is not the only, nor the best solution, or there may be an existing fix already being used within your organisation.
Three. Understand what the system does, and how it uses data. Remember that AI is maths, not magic. As impressive and complex as AI systems are, you should make sure that you or your colleagues have what we call a minimum viable understanding of what you might buy. For example, given the potential for AI systems to reveal confidential and personal data, make sure you understand the data that the AI application uses. This includes appreciating the quality and relevance of the data used to train the system, as well as how the data your employees and customers might enter into the system is managed.
Here's an important test. If a technology partner can't explain, in an understandable way, how the system works, and the data on which it relies, don't buy it.
Four. Do a structured impact and risk assessment. Make sure you appreciate the risks of the system in your context, for your stakeholders and for your purpose. For example, you should take the time to consider what would happen, and to whom, under three circumstances: if the system fails, if it is deliberately used to create harm, and if it is used in an inappropriate context. Of course, if you have any doubts about the legal implications of the risks you uncover, you should seek legal advice.
Five. Invest in upskilling diverse teams. Right now, very few people are experienced in how to buy, test and use AI systems. Getting the most out of AI and using it responsibly requires skilled operators. That means you should invest in training and skills development for those responsible for procuring, running and overseeing a prospective AI application. If your organisation is big enough to have separate procurement, IT and business teams, ensure that a diverse, multidisciplinary team is engaged throughout the procurement process. Having multiple perspectives, skills, abilities and experiences in your procurement project team will make you more likely to arrive at a truly robust, resilient and fit-for-purpose solution.
Six. Stay vigilant. Finally, remember that AI solutions are not set and forget projects. Make sure your supplier provides suitable ways to monitor, maintain and manage any AI system.
In the next video, you'll learn about good governance and how this relates to AI systems within your business.
With more organisations opting to buy rather than build their own AI systems, and given the nature of these systems, new procurement strategies are required. What do executives need to know to plan, source and manage the procurement of AI systems in today's evolving technology landscape?
What does good governance of AI systems look like?
What does good governance look like when it comes to AI systems?
Governance is the set of rules, systems and structures that help organisations make good decisions and maintain accountability to their stakeholders. Good governance is a critical driver of innovation. It's not just about compliance and risk management. It's about ensuring your organisation gets the most out of your investment by protecting stakeholders and practising strategic quality control.
Given that AI systems may offer both huge benefits and potential risks to business, AI governance should be a key topic for your board and a priority for you and your executive leadership team. Even though AI technologies are changing and improving on a daily basis, the principles of good governance remain the same. In fact, you can apply best practice principles of governance from other areas of your business.
One: incorporate AI into your business strategy. Ensure your use of AI systems is a core part of every element of your business strategy in terms of both reward and risk. This means deciding the most important problems that AI applications may be able to solve and the value they will deliver as a result. It also means deciding how you won't be using AI. For example, you and your board may not want to risk alienating customers by using their personal data for targeted sales.
Two: build a fit-for-purpose AI governance system. Good governance is not just an intention. It's a series of policies, processes and practices. This means your business should put in place a fit-for-purpose governance system, including clear responsibilities for oversight and accountability if things go wrong. Some of these policies and processes will determine steps people and organisations must take when dealing with AI solutions. For example, you might create a policy that states that a select group approves all AI systems before procurement. Other policies and processes will determine elements of the AI solution itself. For example, if you decide that all of your company data going into a supplier system must be encrypted, this would need to be reflected in your solution.
Three: build strategic AI skills. Make sure that your senior executives, operational teams and frontline employees all have appropriate training around the use and management of AI systems. Even if your organisation is lucky enough to have talented technical experts who understand AI, you will still need a wide array of team members who have a minimum viable understanding of both your strategic priorities for AI and its operational uses, supporting a culture of good governance. Recent Australian corporate experience tells us that good governance is heavily influenced by your corporate culture. Take a moment to reflect on your organisation's purpose and values. Does your proposed use of AI align with those? Consider your senior leadership. Do they exemplify best practice behaviour regarding the uptake and use of AI? Good culture is created by modelling the right behaviours from the very top of your organisation. You can reinforce good culture through open and honest communication, and by establishing training and coaching that rewards and reinforces good behaviour.
Remember, maintaining a strong culture will help you use AI in ways that create value for all stakeholders, including employees and customers; prioritise ethics and fairness; and place people at the centre of the experience. In the next video, we'll discuss why trustworthy AI systems are important.
Governance is the set of rules, systems and structures that help organisations make good decisions and maintain accountability to their stakeholders. Given that AI systems can offer both huge benefits and potential risks to your business, AI governance should be a key topic for your board and a priority for you and your executive leadership team. This video describes how to apply best practice principles of AI governance in your business.
Why are trustworthy AI systems important?
Why are trustworthy AI systems important?
Are your artificial intelligence systems trustworthy? An incredible one-third of Australians think that the potential harms of AI applications outweigh the benefits. As AI becomes increasingly important to how your organisation operates, if you want your customers to trust your business, they'll need to trust how and when you use AI.
When it comes to trustworthiness, a good place to start is by thinking about the AI-powered apps you use every day without much concern. These may include a ride-sharing app or an internet search service. What about them makes you comfortable when using them? Now consider a technology or system that you don't completely trust. What are the factors that make you feel suspicious, uncertain, or scared? When you were first exposed to a new piece of technology, you probably wondered things like: can I rely on this system to do what it promises? Will I, and my information, be safe if I use this? Does the company that offers this have my best interests at heart? What support do I have if things go wrong? Questions like these would have helped you figure out whether you could trust the system you were about to use. These are the same questions that people ask about AI systems every day.
One reason for such low levels of trust in AI today is that people feel they don't have the information or assurance they need to truly trust AI systems.
So what does it mean for an AI system to be trustworthy today? Key characteristics of trustworthy AI systems are that they're reliable; safe; secure and resilient; accountable and transparent; explainable and interpretable; privacy-enhanced; and fair, with harmful bias managed.
Let's break down what each of these characteristics means. A system is generally reliable if it performs as intended. An AI-enabled image generator that gives you poor-quality results would be unreliable, as it fails to consistently do what you expect it to do.
A system that is safe must not endanger human life, health, property or the environment. Some AI systems, for example, those used in air traffic control, require special attention. A failure to guarantee the safe operation of these systems would lead to disaster. In general, you should consider how your AI system could harm people should it fail, be used maliciously, or be used in the wrong context. Systems that are secure and resilient are able to operate as intended, even when subjected to stress.
For example, an AI system that processes personal data should not only be built to withstand cyber attacks; you should also anticipate how data could be exposed if someone makes an error while using it.
An AI system would be considered transparent when the people managing and interacting with it can access useful information about the AI system's functionality and outputs. For example, if an AI system is used to determine which customers are eligible for a bank loan, transparency could involve first informing customers that AI was being used to make the decision and providing an explanation about how the system uses individuals' data to determine their loan eligibility. For the system to be considered accountable and transparent, it would also allow a customer to question the outcome of the system's decision and feel supported if something has gone wrong.
We would consider an AI system to be explainable if the way the system works is represented clearly in a way that allows a human to understand how the output was created. Explainability, transparency and accountability often go together. If a system fails to function correctly, an affected user should know that this is the case. The organisation using it needs to be able to identify what went wrong, and everyone should know who was responsible for fixing the problem and supporting the user accordingly.
Privacy enhanced AI systems embed privacy at all levels of AI system design, development and implementation.
Finally, fair systems are ones that avoid harmful or unlawful bias against people. While fairness can be tricky to define, fair systems apply the definition that best suits the context in which the AI system operates. Managing harmful bias involves considering systemic bias, for example, prejudice against a particular group; computational or statistical bias, which often arises through algorithmic processes or from non-representative data sets; and human cognitive biases, which relate to how people perceive an AI system to work or function within a broader system of operations.
As a business leader, you will likely be responsible for procuring, managing, or overseeing multiple AI applications across your organisation. This means that you have a key role in building trustworthy AI systems and building confidence among the people who use or are affected by them. Luckily, you can simply put yourself in the shoes of people engaging with your AI applications, and look for opportunities to increase trustworthiness at every point of the system. Your ability to influence the purpose of an AI system, the way it operates, and how users experience it in the real world is critical to increasing trust and delivering value to your customers.
As AI becomes increasingly important to how your organisation operates, if you want your customers to trust your business, they'll need to trust how and when you use AI. Key characteristics of trustworthy AI systems are that they're reliable; safe; secure and resilient; accountable and transparent; explainable and interpretable; privacy-enhanced; and fair, with harmful bias managed.