Shaping our Future Symposium – Breakout Activities
During small breakout discussions on Day 1 of HTI’s Shaping our Future Symposium, over 90 thought leaders shared their experiences to unpack the essential components of AI governance and explore the challenges of stakeholder engagement.
Breakout Activity 1: Essential components of AI governance
Building on the panel discussion of the essential components, and HTI’s AI Governance Snapshot 1, participants discussed which components of AI governance were getting the most attention and which components were the most challenging in organisations today. The most activity was reported in the development of principles and policies, and new governance structures (such as Councils or Committees). We heard that whilst organisations ‘love governance’ and have put in place principles, policies and governance structures, fewer have translated these into practical actions. Participants underscored the dangers of relying on policies without complementary transparency and accountability measures, and a baseline understanding of how and where AI is being used across organisations.
Policies without accountability are meaningless
The more challenging components of AI governance were highlighted as: stakeholder engagement; people, skills, values and culture; and monitoring, reporting and evaluation. Stakeholder engagement emerged as a neglected area that ‘wasn’t on the radar until fairly recently’ despite the need to hear community voices and ‘not bake in’ our past failures and biases.
How can you govern if you don’t know what you’re measuring?
Many organisations are experimenting with AI governance and are at different levels of maturity. Whilst some advocated that people should ‘just jump in and iterate as you go’, others were more cautious, arguing you should ‘wait and see in a changing landscape’. Either way, AI governance practices need to be flexible enough to keep pace with rapid developments. There is also clearly a strong appetite for more practical advice in this space, including peer-to-peer learning and guidance from regulators.
Breakout Activity 2: Impacted communities and stakeholder engagement
The second breakout activity explored the challenge of engaging stakeholders and assessing how AI systems affect impacted communities. Discussions built upon the prior panel discussion as well as AI Governance Snapshot 2.
In this challenging area of AI governance, participants captured their fears associated with engaging with impacted communities in a word cloud.
This set the scene for the small group discussions that delved deeper into the opportunity and uncomfortable truths of stakeholder engagement.
There needs to be governance mechanisms ‘to share truths that leadership doesn’t want to hear'
When asked what organisations can learn from engaging stakeholders potentially impacted by AI systems, participants reported the value in better understanding ‘what the actual problem is’ and what stakeholders really want from AI systems, as well as their experiences, expectations, and perspectives. Organisations can also discover the harms, hidden dangers and unintended consequences of their AI systems.
Stakeholder engagement ‘wasn’t on the radar until relatively recently'
Meaningful engagement was seen as a powerful opportunity for organisations to learn ‘what works well’, helping them improve their AI systems and respond to the needs of their customers, employees and community. However, it requires openness, transparency and ‘listening without judgement’.
When asked to name one uncomfortable truth about engaging stakeholders around AI that is not being said, participants reported:
Engaging stakeholders around AI is a time-consuming and resource-intensive process, often requiring significant investment in people, expertise and organisational support.
Those deploying AI systems can be anxious about what people will say and be ‘afraid they might bite’. There can be concern about being criticised or receiving feedback they don’t want to hear.
A need to keep asking ‘who are we missing?’, while recognising that ‘everyone is different’ and ‘you have to draw the line somewhere’ when broadening engagement - highlighting the challenge of ensuring representative consultation.
Concern that engagement could be tokenistic, used inappropriately (i.e. ‘ethics washing’), and is potentially even exploitative of disadvantaged groups.
The AI Corporate Governance team at HTI are developing a series of resources for participants and the broader AI governance community, building on the insights from these breakout activities. These include an AI Governance Snapshot Series, to be published throughout 2024, that will dive into the essential components of AI governance in more detail.