Combatting misinformation and disinformation through more effective regulation

Alice Dawkins
Rys Farthing

Australia’s referendum on the Voice has highlighted the need for better regulation of social media platforms to prevent falsehoods from corroding our democratic discourse.

28 September 2023

The toxic effects of misinformation and disinformation on democracies over the past decade have been well documented. Misinformation and disinformation are more than merely a speech issue; they are phenomena unique to the networked digital world, involving specific actors, behaviours, content and distributive effects. With the production of misinformation and disinformation rising as a key campaigning tactic, attacks on electoral integrity have become particularly attractive to opportunistic actors. Attempts to sow voter mistrust in electoral processes and public institutions have sadly become an expected theme in the lead-up to elections and referenda.

Rapid and sustained research and advocacy efforts by journalists, researchers and civil society, documenting the societal harms caused by digitally enabled attacks on electoral integrity, have contributed to a suite of expanded platform policies and targeted public policy efforts over recent years. Expanded platform policies included specific efforts to safeguard electoral processes, such as civic integrity policies.

We are seeing this in action right now in Australia during the referendum on whether to enshrine an Aboriginal and Torres Strait Islander “Voice” in the Constitution. In August, the Australian Electoral Commissioner remarked that electoral misinformation and disinformation were at the highest levels the Commission had ever seen online. In September, Reuters reported how social media influencers have deliberately spread falsehoods about the referendum to their substantial audiences.

Australia’s current approach to managing misinformation and disinformation rests on co- and self-regulatory mechanisms. For example, the Australian Code of Practice on Disinformation and Misinformation – written and overseen by the industry group Digi – governs platforms’ responses to misinformation and disinformation content. Other aspects, such as the management of political advertising, are entirely self-regulated. Even the proposed Combatting Misinformation and Disinformation Bill, the Commonwealth Government’s attempt to develop a more rigorous solution, rests on the assumption that these co-regulatory codes and self-regulatory practices are working.

Our organisation, Reset.Tech, recently set out to look at how two small aspects of misinformation and disinformation management were working in practice, using the Voice referendum as a case study. The news wasn’t great.

First, we explored whether platforms responded to user reports of electoral misinformation on their platforms. We followed 99 pieces of electoral misinformation – such as claims that ballots had been stolen in Australian elections, that the Voice referendum would ask five questions, or that polling booths were invalid – and tracked what platforms claim they do under the Digi Code against what they actually do. Under the Digi Code, platforms must adopt policies and procedures to minimise the propagation of this kind of misinformation, including tools for people to report such content. Facebook says it will “reduce the prevalence” of this content, TikTok says it will remove it, and X (formerly Twitter) says it will label or remove it.

Yet we reported all 99 pieces of content and found very limited action as a result. Only ten pieces were labelled or removed; the rest remain visible. It appears that despite having the policies in place and the tools to report this content, in practice platforms take limited action. Based on our findings, these policies and tools have produced no meaningful change for the referendum.

Second, we explored how easily paid-for electoral misinformation could be placed on the platforms. We created 45 advertisements containing electoral misinformation – such as claims that the referendum would be held on 31 November (the actual date is 14 October, and November does not have 31 days), that votes could be cast via SMS, or that 16-year-olds could vote – and submitted them for approval to run on various platforms. To be clear, we did not actually run these ads; we simply submitted them for approval and deleted them before they ran. No one saw misinformation as a result of this experiment.

The Digi Code excludes misinformation in paid-for advertising, unless it is propagated by bots or other inauthentic processes. This leaves it largely up to individual platforms’ own policymaking processes to address, though many do report back to Digi on this in their annual transparency reports. In those reports, Facebook says that all political ads must be run by authenticated accounts, TikTok says it has strict policies in place to prohibit ads that contain deceptive or misleading claims, and X says it prohibits political advertising in Australia. This is not what we found. These platforms approved between 70 and 100 per cent of the misinformation ads we put forward about the Voice, without requiring authentication or other details. According to our research, even though each platform has policies in place, they are inadequately enforced, and the platforms’ annual reports to Digi may be glossing over real issues.

We do not believe these small experiments have uncovered mere “hiccups in the system”. We believe they highlight systemic failures. Despite having policies, processes and co-regulation in place, Australians do not always enjoy meaningful protections. This raises questions about the impact of this “light touch”, co-regulatory approach. As we have argued elsewhere, self- and co-regulation routinely fails Australian children and young people. It appears that it is now failing our democracy too.

While policy around misinformation and disinformation is clearly needed as a matter of urgency, Australia might be on the wrong track. If the Australian Code of Practice on Disinformation and Misinformation is falling short, the current proposal to embed it within the Combatting Misinformation and Disinformation Bill will ultimately fail too. This calls for a rethink.

A rethink might be timely. An additional opportunity is emerging as the Online Safety Act’s scheduled review is brought forward. Australia is in a unique window in which policymakers are considering how to improve our online safety frameworks at the same time as they are reconsidering our misinformation and disinformation regulations.

Australia is unusual in splitting safety and misinformation into separate legislation and regulations. Other jurisdictions take a more systemic, comprehensive approach to tackling risks in general, spanning both individual safety and societal harms such as misinformation and disinformation: the EU does so through its Digital Services Act. Canada, which is redrafting its proposed online safety bill, has moved away from an initial draft focused solely on content and individual harms towards a more systemic approach that aims to capture societal harms such as misinformation and disinformation. The UK’s Online Safety Bill also catalyses the UK’s process for tackling misinformation through its regulator, Ofcom.

Australia instead has a jigsaw of policy protections, some more effective than others. Our current regime provides:

  • Strong legislative protections from harmful content for individuals via the Online Safety Act.
  • Softer individual protections around safety provided by co-regulatory codes to meet the Basic Online Safety Expectations.
  • Softer protections from societal harms caused by misinformation and disinformation provided by a co-regulatory code.
  • Proposed stronger legislative protections, in the form of evidence-gathering and investigation powers for the Australian Communications and Media Authority (ACMA) around misinformation and disinformation, under the Combatting Misinformation and Disinformation Bill.

But there are many gaps in between.

Comprehensive, systemic regulation that places duties of care on platforms to mitigate risks, both individual and societal, might be our best path forward. Incrementalism can only get us so far when large-scale reforms are needed.

Alice Dawkins is the Executive Director at Reset.Tech Australia.

Dr Rys Farthing is Director at Reset.Tech Australia and Associate Investigator at the Centre for the Digital Child.

Reset.Tech Australia is an independent think tank and the Australian affiliate of the global Reset.Tech initiative. We accept no funding from the tech industry and are funded by trusts and foundations, including Reset Global, the Susan McKinnon Foundation and the Internet Society Foundation.

Image credit: Budding/Unsplash
