Nowhere to hide: Twitter 2.0
When Twitter submitted its transparency report under the EU Disinformation Code in February, the Commission publicly criticised it for being short of data, with no information on commitments to ‘empower the fact-checking community’. This was something of an understatement. The report contained very little information at all, with nothing supplied under the qualitative reporting elements or the quantitative service-level indicators introduced in June 2022. Instead, Twitter noted it would ‘engage with the relevant stakeholders as to the best way to provide details on Twitter’s compliance with the Digital Services Act’, and argued that reporting should account for platforms’ respective product and policy models, the risks they face, and the resources available to them.
It seems a bit rich to blame resourcing when Elon Musk has laid off more than 75% of Twitter’s staff, including heavy cuts to its trust and safety team. But, reading between the lines, perhaps it should have been less of a shock when, last Friday, Europe’s internal market commissioner, Thierry Breton, revealed Twitter had withdrawn from the Disinformation Code entirely. Breton also sounded a warning: ‘You can run, but you can’t hide’. Under the new Digital Services Act (DSA), companies designated as very large online platforms or very large search engines must undertake annual independent risk assessments and provide comprehensive data against agreed indicators every six months. Breaches attract penalties of up to 6% of global turnover. Twitter was designated alongside 18 other services on 25 April and has four months to comply.
The DSA gives little flexibility for different product and policy models, risks or resourcing, all of which look pretty shaky under ‘Twitter 2.0’, the ‘town square of the internet’. Soon after Musk’s takeover, the company signalled a shift in its content-moderation approach towards ‘Freedom of Speech, Not Freedom of Reach’ or, in policy speak, ‘de-amplification of violative content’. Since then, Twitter has cheerfully announced several related innovations, including its crowd-sourced fact-checking system, Community Notes (formerly Birdwatch), and the labelling of de-amplified posts. These might have promise as part of a comprehensive strategy to combat misinformation. But if such a strategy exists, it appears to be failing.
In part, this is directly attributable to Musk’s own reach – and behaviour – on the platform. Science Feedback has shown that misinformation superspreaders increased their reach significantly after Musk engaged with their posts, and the Institute for Strategic Dialogue found that Musk’s Twitter activity changed considerably after he bought the platform – from engaging mostly with his fans to interacting with right-wing accounts. But product and policy models are also to blame. Reset found that Musk’s decision to roll back controls on Kremlin-controlled media substantially increased the reach of Russian disinformation. And recent research in Australia has found evidence of coordinated manipulation boosting misinformation on the Voice.
Meanwhile, signatories to Australia’s code recently submitted their local transparency reports. It’s no surprise that Twitter’s is pretty threadbare: the company shut down its Australian office in January. Twitter hasn’t signalled an intention to quit Australia’s code, but with the government set to grant ACMA new powers to call platforms to account, there may soon be nowhere to hide here either.
Michael Davis, CMT Research Fellow
This article is from our fortnightly newsletter published on 2 June 2023.