The global fight against misinformation
As the Australian Government ponders new legislation to provide ACMA with powers to tackle online misinformation and disinformation, it may look to the EU’s Digital Services Act (DSA) for inspiration. The Digital Services Act will generally apply from 17 February 2024. Some of its rules already apply to very large online platforms, which have presented their first reports under the DSA. These reports have yielded interesting information, for example about the language capabilities of content moderators and the expected error rates of the algorithmic filters used.
The DSA applies to all internet service providers offering their services to recipients within the European Union. The regulation combines safe-harbour provisions for internet service providers with harmonised due-diligence obligations. The DSA’s safe-harbour provisions already exist in EU law and are simply transferred from the eCommerce Directive to the Digital Services Act. In contrast, the due-diligence rules are new and untested. With respect to these due-diligence obligations, the DSA takes a pyramid approach: while some general provisions apply to all internet service providers, many of the due-diligence requirements apply only to host providers. Online platforms, a subcategory of host providers, are expected to comply with additional obligations. Finally, very large online platforms (VLOPs) and very large online search engines (VLOSEs) must implement a risk-management system. A service is deemed 'very large' if its active user base exceeds 45 million recipients, roughly 10% of the EU’s population.
As part of this risk-management system, VLOPs and VLOSEs are required to identify, analyse and assess any systemic risks arising from the design or operation of their service and its related systems. These risks include, but are not limited to, 'any actual or foreseeable negative effects on civic discourse and electoral processes and public security', in particular misinformation and disinformation. Providers are required to put in place reasonable, proportionate and effective mitigation measures which are tailored to the specific systemic risks. As misinformation and disinformation are often spread via advertisements, VLOPs and VLOSEs are also required to create a publicly accessible database of the advertisements shown to their users.
VLOPs and VLOSEs must prepare reports both on their risk assessment and on subsequent mitigation measures. They are also subject to a yearly independent audit to assess their compliance. Crucially, VLOPs and VLOSEs are required to provide access to their data systems. This access must be provided both to regulators for compliance monitoring and to vetted researchers for research that contributes to the detection, identification and understanding of the systemic risks of their services. Because VLOPs and VLOSEs are large corporations with seemingly unending funds, even authorities at EU level are under-staffed and underfunded by comparison. The DSA seeks to address this problem by having the European Commission levy a supervisory fee on such service providers.
The DSA also provides for some elements of co-regulation. It calls on the European Commission to encourage and facilitate the drawing up of voluntary codes of conduct to tackle different types of systemic risks. With regard to disinformation, an EU Code of Practice on Disinformation already exists. However, an assessment of its implementation and effectiveness undertaken by the EU Commission in 2021 showed mixed results. Moreover, social media company Twitter pulled out of the Code in May 2023. As another co-regulatory tool, the European Commission may initiate the drawing up of voluntary crisis protocols. It is envisioned that, in the event of a crisis such as a pandemic, these protocols could allow official information to be prominently displayed on the platforms.
The Digital Services Act has the overarching goal of fostering a safe and responsible online environment. It also covers many issues that, in Australia, are covered by the Online Safety Act. By comparison, the proposed Australian legislation to tackle online misinformation and disinformation is much more narrowly tailored. From an outsider’s perspective, the divide between misinformation and e-safety seems artificial, as the fight against misinformation is but one aspect of e-safety. From an Australian viewpoint, the DSA’s supervision and enforcement mechanisms might warrant a closer look. In particular, access to data for vetted researchers gives regulators, authorities and the public a much better understanding of misinformation and disinformation on online platforms than the reports and requests for information envisaged under the proposed Australian legislation. In addition, the imposition of a monitoring fee on larger platforms may be worthwhile even under the proposed co-regulatory scheme.
Ruth Janal - CMT Visiting Research Fellow