A world first for ethical AI
Using artificial intelligence to make important workforce hiring decisions fair, accurate and equitable.
Companies, organisations and governments around the world face the enormous challenge of rebuilding their workforces at scale, with confidence that the technology they use to make important decisions is fair, accurate and equitable.
The Data Science Institute at UTS and our industry partner Reejig have delivered a world-first process in which the algorithms within an AI-driven workforce intelligence platform have been independently assessed against the key ethical criteria of transparency, privacy, bias and accountability.
Reejig, a leading workforce intelligence platform, partnered with Distinguished Professor Fang Chen, Executive Director of Data Science at UTS, to deliver the ‘non-biased talent shortlisting algorithm validation’ project, a pioneering independent validation of ethical AI.
Over two years, the research team led by Professor Chen developed, tested and iterated the ground-breaking assessment process before industry partners used it to confirm that the AI outputs are fit for purpose and deliver actionable results.
When you are talking about AI and workforce or HR data, you are dealing with sensitive information about real people, so building trust into that process is critical. Combined, AI and workforce data have the power to transform the way we think, engage and work. AI for good needs to be the standard, but until this project there has been no way to properly assess it.
Reejig CEO and co-founder Siobhan Savage said the benefits that data and AI are bringing to the professional workforce are phenomenal, but AI is not immune to bias in the data or in the algorithms. Until now, decision making has been hidden in a black box, with no clear, defensible, independent and objective validation demonstrating ethical AI.
Frameworks provide guidance, but we believe relying on them alone is like marking your own homework. Boards, organisations and decision makers are exposed to the real risk that they may unwittingly be causing harm or bias. Given what’s at stake, we were astounded that there was no independent assurance that the AI an organisation adopts is ethical and unbiased.
Mark Caine, Artificial Intelligence and Machine Learning Lead at the World Economic Forum, said we absolutely need to minimise the risk of AI to humanity, otherwise the public will lose trust in AI and its capability to do good. While there are more than 200 AI ethics frameworks and guidelines globally, few have been operationalised, and this is a milestone in bringing independently audited certification to an innovative AI product.
A key barrier to the adoption of AI, and thus to its potential to do good, has been lifted. This is significant for organisations that want to do the right thing and minimise risk to their customers, their stakeholders and their reputation.
Reejig™ uses big data and verified Artificial Intelligence (AI) to help organisations understand and analyse the skills and capabilities across their talent ecosystem. It connects existing HR systems, cleanses and aggregates talent data, and unifies data from across the entire enterprise. This, coupled with market, industry and competitor intelligence and skills mapping, is helping companies design the workforce of the future.
A key part of the Reejig™ workforce intelligence platform is the automated matching of candidates or employees to opportunities in a way that removes negative unconscious bias from the process and helps HR users explain why talent has been recommended, ensuring compliance with equal opportunity and employment law.
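To make the bias criterion concrete, the sketch below shows one common fairness check, the four-fifths (adverse impact) rule, applied to hypothetical shortlisting outcomes. It is purely illustrative: the data, group labels and threshold are assumptions for the example, and it does not represent the actual UTS assessment methodology or Reejig’s algorithms.

```python
# Illustrative sketch only: a simple adverse-impact (four-fifths rule) check of the
# kind an independent bias assessment of a shortlisting algorithm might include.
# The candidate data, group labels and 0.8 threshold are assumptions; this is not
# the UTS/Reejig validation process.
from collections import defaultdict

def selection_rates(candidates):
    """Return the shortlisting rate for each demographic group."""
    shortlisted = defaultdict(int)
    totals = defaultdict(int)
    for group, was_shortlisted in candidates:
        totals[group] += 1
        shortlisted[group] += int(was_shortlisted)
    return {g: shortlisted[g] / totals[g] for g in totals}

def adverse_impact_check(candidates, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times the highest rate."""
    rates = selection_rates(candidates)
    best = max(rates.values())
    return {g: (rate, rate / best >= threshold) for g, rate in rates.items()}

# Hypothetical shortlisting outcomes: (group label, shortlisted?)
outcomes = [("A", True), ("A", True), ("A", False),
            ("B", True), ("B", False), ("B", False)]

for group, (rate, passes) in adverse_impact_check(outcomes).items():
    print(f"group {group}: selection rate {rate:.2f}, passes four-fifths rule: {passes}")
```

Running the sketch on the hypothetical data flags group B, whose selection rate falls below 80 per cent of the highest group’s rate; a real assessment would combine checks like this with scrutiny of transparency, privacy and accountability, as described above.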