Truth and consequences
On Monday, X, formerly Twitter, was expelled from the Australian Code of Practice on Disinformation and Misinformation following a complaint by advocacy and research organisation Reset Australia. The complaint centred on X’s failure to supply a means for users to report electoral misinformation during the Voice to Parliament referendum, despite having a policy that disallows the posting of certain categories of misleading information during a referendum campaign.
X was ejected from the code after a decision by the code’s complaints subcommittee, which comprises only members independent of both code signatories and DIGI, the tech industry body that administers the code. The complaint alleged a breach of code provision 5.11, which states, ‘Signatories will implement and publish policies, procedures and appropriate guidelines that will enable users to report the types of behaviours and content that violates their policies under section 5.10.’
The decision turned solely on the absence of a means for users to report content that violates X’s own policies. Because the code is outcomes-oriented and voluntary, it is entirely up to signatories which policies they implement, as long as, under code outcome 1a, they contribute to reducing the risk of harms that may arise from the propagation of misinformation and disinformation.
In this sense, the case is a clear example of the intended focus of the proposed misinformation bill, namely platform policies and processes rather than individual content. By the same token, the consequences of the decision – X’s removal from the code – illustrate the fundamental purpose of the bill: to provide genuine accountability through a set of enforcement powers, including the ability to issue fines for non-compliance. Under the voluntary code, by contrast, there are no substantial consequences for X beyond another hit to its reputation, which seems of no consequence at all to its management. It’s a bit like expelling a student for truancy. Indeed, X failed to respond to Reset’s complaint and withdrew from the subcommittee meeting. This demonstrates contempt not merely for an established governance framework and the other platforms that participate in it, but also for the public – both in their role as users of the platform, and in their interest in a public sphere that enables the free and civil participation of all.
In her speech at the National Press Club last Wednesday, communications minister Michelle Rowland reiterated that the purpose of the misinformation bill is to ‘increase the transparency and accountability of platforms’, and noted that it is crucial to understand that the virality and connectedness of social media require a different regulatory approach. We agree, but as we’ve written recently in this newsletter and in our submission on the bill, there are weaknesses in the proposed regulatory approach. In particular, the concerns of many about government encroachment on freedom of expression, though in some cases overstated, are not without foundation.
A second weakness is that the government’s understanding of the framework itself seems flawed. Rowland said that the exemption for government information was intended to ensure that information such as disaster alerts ‘could not be removed by platforms under the new misinformation laws’. As things stand, platforms can remove government information, just as they can remove news content (as YouTube did, for example, with Sky News videos) or, for that matter, anything else they wish. Nothing in the current code or the proposed bill makes this otherwise. The bill makes no rules about what information platforms may or may not remove. It merely sets a scope for content that comes under the transparency and accountability provisions of an industry code, and ties ACMA powers to that same scope. To set rules about what information may or may not be removed would be an entirely different prospect from what the government has so far indicated, and one that would be very concerning. The regulator should have no role in policing the boundary between content considered to be misinformation and content that is not. That is a matter for platforms, but it is also a matter for which they must be accountable to the public, whose information they control.
Certainly, what platforms do with government information should be transparent and accountable. But for that to happen, such content must come within the scope of the legislation. As we argued in our submission, to maximise industry accountability while keeping ACMA powers within acceptable limits, the bill needs to decouple the scope of the code from the extent of ACMA powers. This would permit a broad scope, ensuring that all platform actions are accountable, while focusing ACMA power on ensuring that systems of accountability are in place. In addition, an independent mechanism for assessing platform content-moderation decisions would do much to improve the legitimacy of platform decision-making. An industry code could, for example, include a provision to ensure that government emergency information remains online. A failure to comply could be referred to the independent mechanism, and systematic non-compliance could then be addressed through ACMA enforcement powers.
Michael Davis, CMT Research Fellow