Three reasons why disinformation is so pervasive and what we can do about it

Donald Trump derided any critical news coverage as “fake news”, and his refusal to concede the 2020 presidential election culminated in the January 6, 2021 riot at the US Capitol.

For years, radio host Alex Jones denounced the parents of children slaughtered in the Sandy Hook school shooting in Newtown, Connecticut, as “crisis actors”. On August 5, 2022, a jury ordered him to pay more than US$49 million in damages to two families for defamation.

These are in no way isolated efforts to flood the world’s media with dishonest information or malicious content. Governments, organisations and individuals are spreading disinformation for profit or to gain a strategic advantage.

But why is there so much disinformation? And what can we do to protect ourselves?

Three far-reaching reasons

Three schools of thought have emerged to address this issue. The first suggests disinformation is so pervasive because distrust of traditional sources of authority, including the news media, keeps increasing. When people think the mainstream media is not holding industries and governments to account, they may be more likely to accept information that challenges conventional beliefs.

The second points to social media platforms’ focus on engagement, which often leads them to promote shocking claims that generate outrage, regardless of whether those claims are true. Indeed, studies show false information on social media spreads further, faster and deeper than true information, because it is more novel and surprising.

The third is the deliberate use of hostile disinformation tactics. Facebook estimates that during the 2016 US election, malicious content from the Russian Internet Research Agency, aimed at sowing division among American voters, reached some 126 million people in the US.

The many shades of disinformation

This crisis of information is usually framed in terms of the diffusion of false information, either intentionally (disinformation) or unwittingly (misinformation). However, this framing misses significant forms of propaganda, including techniques honed during the Cold War.

Most Russian influence efforts on Twitter did not involve communicating content that was “demonstrably false”. Instead, subtle, subversive propaganda was common and unrelenting: calling for the removal of American officials, purchasing divisive ads, and coordinating real-life protests.

Russian disinformation on Twitter involved calling for the removal of American officials and coordinating real-life protests.

Sadly, even misinformation spread unwittingly can have tragic consequences. In 2020, after Donald Trump’s false claim that hydroxychloroquine showed “very encouraging results” against COVID-19 spread rapidly on social media, several people in Nigeria died from overdoses.

Responses to propaganda and disinformation

So how have various entities addressed both mis- and disinformation?

The verdict in the Jones case is one example of how societies can counter disinformation. Being hauled into court and ordered by a jury of your peers to pay US$49 million in damages would lead most people to verify what they say before saying it.

Governments and corporations have also taken significant steps to mitigate disinformation. In the wake of Russia’s 2022 invasion of Ukraine, the EU banned the retransmission of Russia Today, the Russian state-controlled television network, which is now no longer available in Europe or Africa.

The EUvsDisinfo project has countered Russian propaganda and addressed “the Russian Federation’s ongoing disinformation campaigns affecting the European Union, its Member States, and countries in the shared neighbourhood” since 2015. In 2022, Google followed suit, launching its Russia-Ukraine ConflictMisinfo Dashboard, which lists dubious claims related to the invasion and fact-checks them.

Wikipedia as anti-propaganda?

Ordinary citizens have several avenues to counter disinformation as well. Information literacy is typically framed as an individual responsibility, but Swedish scholars Jutta Haider and Olof Sundin point out that “a shared sense of truth requires societal trust, especially institutional trust, at least as an anticipated ideal”.

How can we re-create a common sense of truth? Wikipedia – the freely accessible online encyclopedia where knowledge is collectively produced – is a good place to start.

Wikipedia has emerged as a compelling resource in fighting disinformation.

Wikipedia has community-enforced policies on neutrality and verifiability. Anyone can edit a Wikipedia page, but countless administrators, users and automated maintenance “bots” ensure these edits are as accurate as possible. Modifications and disputes about article content are archived on the site and visible to all: the editorial process is transparent. With the possible exception of obscure topics where very few editors are involved, misinformation is weeded out fast.

Education is key

As information consumers, some important steps we can take to protect ourselves from disinformation include seeking out and reading a wide variety of sources and not sharing dubious content. Schools are doing their part to spread this message.

Notable initiatives in Australia include Camberwell Grammar School in Canterbury, Victoria, where teachers have drawn on resources produced by ABC Education to teach their students how to identify credible news sources. And a University of Canberra pilot program based on Stanford University’s “lateral reading” principle is being trialled in three primary and secondary ACT schools this year. The program instructs participants to open another tab and check Wikipedia whenever they come across an unfamiliar or dubious claim. If the claim cannot be verified, move on.

Such information education needs to be complemented with an awareness of democratic norms and values. And it should also incorporate a better understanding of the importance of privacy: the more we share about ourselves, the more likely we are to be targeted by disinformation campaigns.

Though disinformation may continue and even prosper in certain corners, our best lines of defence are ensuring we read information from multiple, credible sources; utilise fact-checking services; and are more discerning about what we read and share.

To put it simply, don’t feed the trolls – or the platforms where they thrive.


By Mathieu O'Neil / Associate Professor of Communication, News and Media Research Centre, University of Canberra

Mathieu is Associate Professor of Communication at the University of Canberra’s News & Media Research Centre, where he leads the Critical Conversations Lab. Mathieu’s research focuses on the affordances of commons-based peer production such as free and open source software and Wikipedia. He is currently investigating how Wikipedia can assist in generating civic online reasoning and best fact-checking practice in three ACT schools. He also leads an international team researching the organisational and public policy implications of the production of free and open source software. Mathieu has played a key role in developing the field of peer production studies by founding and editing the peer-reviewed Journal of Peer Production (2011-2021), by editing the Handbook of Peer Production (Wiley-Blackwell Handbooks in Communication and Media, 2021), and by founding an international think tank, the Digital Commons Policy Council, in 2021.

Mathieu has also made significant contributions to the development of innovative online research methods and concepts, most recently thanks to an international project in which he is developing heuristics to detect online echo chambers. Mathieu is a founding member of the Virtual Observatory for the Study of Online Networks (VOSON), a world leader in web science and big data analytics located at the Australian National University. Mathieu’s research on the adoption of innovation and the diffusion of health misinformation in the online environment draws on analytical frameworks such as social network analysis, actor-network theory and the sociologies of fields and controversies. In 2020 Mathieu presented findings and made policy recommendations on two occasions to the Senate Select Committee on Foreign Interference through Social Media, and was invited to discuss the Australian Perspectives on Misinformation report on the ABC's Lateline with the iconic Phillip Adams.

Mathieu's research has been published in three books and in leading peer-reviewed journals such as Social Networks, Information, Communication & Society, Réseaux, New Media and Society, the International Journal of Communication, and Organization Studies, amongst others.

By Michael Jensen / Associate Professor, Institute for Governance and Policy Analysis, University of Canberra

Dr Michael J Jensen is Associate Professor at the Institute for Governance and Policy Analysis. He has a background in political communication and has published books with Cambridge University Press and Palgrave concerning online political behaviour. His work concerns the use of digital communication technologies in the development of new forms of political organization within political campaigning and protest movements.

(Source: theconversation.com; August 12, 2022; https://tinyurl.com/5n6kfwkx)
