AI - who is responsible?
This week, in a submission to the government’s consultation paper on safe and responsible AI, we set out our thinking on how to approach the roll-out of AI in journalism and its potential impact on the public sphere. As Monica mentions above, newsrooms need to be alert to, if not alarmed about, the risks AI brings, and to protect their own interests as well as those of the public. Recent moves to license news content to AI developers carry both risk and opportunity: licensing public-interest content for exclusive use, for example, may serve to forestall copyright challenges or an expansion of the News Media Bargaining Scheme, as discussed this week by Evana and David.
GenAI opens up a much broader range of use cases – and deeper risks – than older tools, so it is important that responsibility is appropriately shared. News businesses are ultimately responsible for what they publish, and existing self- and co-regulatory frameworks provide a generally accepted, if imperfect, approach to holding them accountable. As part of this approach, the industry should be encouraged to review its codes and to develop AI-specific guidelines that ensure editorial processes are robust enough to deal with AI risk, including ensuring that journalists understand the capabilities and limitations of the tools they use. In turn, developers should be required to certify AI tools against a set of independent standards addressing the risks of propagating misinformation or biased data.
The risks AI poses to the broader information environment go beyond journalism to implicate digital platforms and their users. Digital platforms – which are, after all, among the biggest developers and users of AI tools – should be responsible for implementing safeguards against AI-assisted manipulation and, more broadly, for promoting a high-quality information ecosystem on their services. In our view, the government should consider the impacts of AI alongside its current focus on misinformation on digital platforms, as part of a holistic approach to the news and information environment. The potential impact of AI is of such a scale that a narrow or piecemeal approach is unlikely to be effective. Read our full submission here.
Michael Davis, CMT Research Fellow
This was featured in our Centre's fortnightly newsletter of 11 August - read it in full here and/or subscribe.