Telling rights from wrongs
The government’s announcement last week that it would not proceed with the misinformation bill overshadowed the Senate committee’s report, published the following day. Despite the bill being dead in the water, the report makes for curious reading. No member of the committee dissents in favour of adopting the bill, but it is clear that the main report was supported only by the three government members. The opposition, the crossbench parties and the independents all give dissenting reasons for recommending against it.
A focus of many of the comments on the right to free speech is whether opinion is captured by the bill’s definition of misinformation. Although it is not in the bill itself, the explanatory memorandum notes that the definition is ‘intended to include opinions, claims, commentary and invective’. The inclusion of these types of statements in the definition of misinformation attracted considerable attention in the submissions. The Australian Human Rights Commission (AHRC) warns that ‘considerable caution should be exercised before including opinions and commentary within the scope of “information” as this significantly broadens the potential reach of this legislation and increases the risk of it being used to censor legitimate debate about matters of public importance.’ And Professor Anne Twomey argued in her appearance that ‘you can’t prove that someone’s opinion is false; it’s an opinion’.
We can grant the immense difficulty of determining what is and isn’t misinformation, and agree with the AHRC that legislation should err on the side of protecting freedom of expression. We have argued as much in our own submissions and commentary on the bill and the Australian code of practice. But the objection to including opinions is a strange one. Indeed, if misinformation is not to include opinion, it is difficult to imagine what it should include. Being misinformed is, in its simplest sense, holding a false opinion about a factual matter. One might even say that an opinion is true just when it expresses a fact. That does not make it any less an opinion.
For example, my opinion might be that Joe Biden won the 2020 election. That opinion would be true. My uncle’s opinion might be that Trump won. That opinion would be false (at least on the available evidence). When a judge asks in court, ‘What are the facts of the matter?’, they mean, essentially, ‘What is true here?’. Witnesses are called to give their opinions about what those facts are. The judge weighs their evidence and makes a determination. This may involve deciding which opinions are reasonable, which may in turn involve a judgement about which rest on firmer evidence. The process is, of course, fallible. Indeed, as the Victorian Bar Association observes, science too advances through a fallible process of weighing evidence.
Perhaps what Twomey means is that some opinions are ‘mere opinions’: not founded in fact, but pure expressions of subjective commitment, taste or value, to be contrasted with ‘facts’. There are, of course, such statements of commitment or value (‘Ice cream is the best dessert’, ‘Liberal democracy is good’), and these are certainly not verifiable in the same way that opinions about factual matters are. But as the example above shows, only a subset of opinions are expressions of subjective commitment. In any case, the bill’s requirement that misinformation be ‘reasonably verifiable as false, misleading or deceptive’ excludes such statements of commitment or value.
As the Victorian Bar Association and other submitters argue, making these kinds of judgement is an immense burden, and not a feasible basis for enforceable regulation. That is precisely why the objective of the bill was to make platforms accountable for the systems they put in place to deal with misinformation – systems which, it is worth remembering, they already operate despite having no regulatory obligation to do so. Unfortunately, the bill failed to fully embrace a systems approach, leaving open the questions of who is to hold platforms accountable for their decisions about what counts as misinformation, and how. This is what ultimately led to its downfall.
Michael Davis, CMT Research Fellow