Building Australia’s artificial intelligence capability
Artificial intelligence (AI) is reshaping our world – with profound social and economic implications.
We face a critical point as policy makers, ethicists, human rights activists, technologists and concerned citizens the world over grapple with the impact of technology on every facet of our lives.
We are implementing world-leading training and masterclasses on AI strategy and implementation.
Find out more: Powerful, accurate and fair: Preparing Australia for next generation artificial intelligence
Limitless potential
Use cases for AI are limited only by our imagination. Government and the private sector are rapidly expanding their use of AI; it is already used for quality control in manufacturing, supply chain optimisation, human resources, marketing, customer engagement, fraud detection, infrastructure maintenance, finance, legal services and many other areas. Recent research suggests that approximately half of all businesses use AI in at least one function.
Our economy could see a $2.2 trillion boost if Australian workers are supported effectively as workplaces take an accelerated path to using automation. But a successful transformation requires significant investment in our AI capability. Australia needs to build strategic knowledge and skills to deploy AI – and educational institutions have a critical role to play.
Technology doesn’t happen in a social vacuum
Social, political and economic inequality is just as present in the world of technological innovation as it is elsewhere. Negative impacts of emerging technologies – foreseen and unforeseen, visible and invisible – will often compound the effects of existing disadvantage or disparities.
Addressing these impacts allows us to progress towards a socially just and inclusive society. As a new market develops in Australia for education and training on AI, universities are uniquely placed to help achieve this. We can help fill the knowledge gap and ensure that human rights principles form the bedrock for the development and deployment of new technologies.
Human rights and ethics, already a core part of our education offerings and practices at UTS, are a critical foundation for promoting social justice and improving society as AI use increases.
AI education and training
Ed Santow, the former Australian Human Rights Commissioner, joins UTS from 1 September 2021 as Industry Professor — Responsible AI to create education and training courses for three different markets:
- Bespoke leadership development for senior government and private sector leaders
- Targeted training in AI-exposed sectors, such as financial services, to support good decisions about the development, procurement and use of AI
- General workplace training for employees in all sectors to gain an understanding of how AI is relevant to their work.
Why now? Regulation, reputation and risk
The ‘first generation’ of AI is at an end, marked by data-driven decision making that enhanced efficiency but also produced scandals like Cambridge Analytica and Robodebt, which exposed risks to fundamental human and consumer rights. Community trust in AI was radically eroded.
Now, we are moving into a more mature era of ‘second generation AI’—where accuracy, ethics and rights protections will be crucial in restoring community trust. This relies on getting three areas right:
- Regulation: AI has been affected by ‘regulatory lag’. Laws have not been effectively enforced, with too many issues ending up in the ‘too hard basket’. But this is changing: reform processes are currently underway, and stronger regulation in Australia is likely in the next 2–3 years.
- Reputation: Algorithmic bias has led to unfairness and even discrimination in areas ranging from banking to social welfare to policing, and there is deep community concern about certain uses of facial recognition technology. There is now heightened awareness of the reputational damage that can occur when AI goes wrong, and companies and organisations want to invest in protecting themselves against it.
- Commercial risk: Poor personal data handling and algorithmic bias do not only cause human rights problems; they can be costly, creating commercial risks for any company or government that fails to identify and address them.
These three factors mean that businesses, governments and workers need to grow their skills and understanding of AI, and especially its risks and opportunities.
How Australia and the world responds now will help set the path for how we live in an AI-powered society, including how we safeguard our economic prosperity and human rights.
Lead the way, ethically
UTS partnered with the Australian Human Rights Commission (AHRC) on a three-year project to explore the human rights implications of new technology.
An interdisciplinary group of researchers, academics and students generated important advice, feedback and recommendations on the issues of human rights and technology.
UTS has applied this interdisciplinary approach to the three courses we have designed for DFAT’s International Cyber and Critical Technology Engagement Strategy. The enterprise learning project includes: Foundations of Ethics and AI, Ethical AI Leadership and Foundations of Data Governance.
The demand for masterclasses and short-form learning in AI and ethics will continue to grow. As a leading technological university with ambitious social justice goals, UTS is excited to be part of this future.