Crowdsourcing Fact-Checking Undermines Democracy

Last month, Twitter unveiled Birdwatch, a fact-checking feature aimed at inoculating citizens against misinformation through a novel experiment in crowdsourcing. Twitter’s new initiative has a proactive and sparkling appeal, but there are prominent cracks in its foundation and in its ‘community-based’ approach to fact-checking.

Birdwatch aims to crowdsource the fact-checking process, taking cues from sites such as Reddit, Wikipedia, and Quora. During its pilot, 1,000 Twitter users will take their first step into the enigmatic world of crowdsourced fact-checking. They will become ‘Birdwatchers,’ armed with the ability to add notes or corrections to tweets. These notes are pooled into a voting system and rated by fellow Birdwatchers on how helpful they are in fact-checking the original tweets. The most popular notes are favoured by the algorithm, heightening their visibility.
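In rough outline, the mechanism resembles the minimal sketch below. To be clear, the Note class, the helpfulness-ratio score, and the rating threshold in rank_notes are illustrative assumptions made for this article, not Twitter’s actual Birdwatch data model or ranking algorithm.

```python
from dataclasses import dataclass


@dataclass
class Note:
    """A contributor's note attached to a tweet, plus the ratings it has received."""
    tweet_id: str
    author: str
    text: str
    helpful_votes: int = 0
    not_helpful_votes: int = 0

    def score(self) -> float:
        # Illustrative scoring: the share of raters who found the note helpful.
        total = self.helpful_votes + self.not_helpful_votes
        return self.helpful_votes / total if total else 0.0


def rank_notes(notes: list[Note], min_ratings: int = 5) -> list[Note]:
    """Surface the most 'helpful' notes first, ignoring barely rated ones."""
    eligible = [n for n in notes
                if n.helpful_votes + n.not_helpful_votes >= min_ratings]
    return sorted(eligible, key=lambda n: n.score(), reverse=True)
```

The essential design choice is that a note’s visibility is driven entirely by other users’ ratings, and that is precisely the surface the rest of this piece worries about.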

Since its announcement, Birdwatch has burst into the public consciousness. Some journalists are dazzled by the feature’s novelty and promise. For instance, Birdwatch could enable citizens to contribute to timely crackdowns on misinformation, which continues to circulate rapidly and alarm policymakers, journalists, academics, and citizens worldwide, particularly during the COVID-19 “Infodemic.”

But many journalists and traditional fact-checkers are not in thrall to Birdwatch. For instance, a report from the Poynter Institute’s International Fact-Checking Network diagnoses Birdwatch’s vulnerability to misleading fact-checks and forecasts contamination by polarization and partisanship. More bluntly, the Birdwatch feature may become the latest machine learning model to exemplify “garbage in, garbage out.”

Nevertheless, while the Poynter report underlines perhaps the most conspicuous cracks in Birdwatch, the ills of crowdsourced fact-checking run far deeper. In particular, Birdwatch, a paradigmatic example of the newfangled crowdsourcing of the fact-checking process, threatens to undermine democracy and become a new frontier for authoritarian sharp power.

Championed by scholars such as Christopher Walker and Jessica Ludwig, ‘sharp power’ is a catch-all term for authoritarian regimes’ use of digital technology in the service of their ideology and to disrupt democracies. Journalist Juan Pablo Cardenal and his colleagues put it more forcefully, noting that sharp power entails “piercing, penetrating, or perforating the political and information environments in the targeted countries.”

Authoritarian regimes leverage sharp power to inflame the information ecosystem in diverse and elaborate ways. An explosion of scholarship has demystified the overt and covert strategies, and the financial and organizational structures, that have helped authoritarian regimes muddy, distort, and polarize public opinion and public life.

Birdwatch is especially prone to incarnations of authoritarian sharp power. To begin, its crowdsourcing model essentially outsources trust to the Twitter universe, paving the way for it to be co-opted and weaponized in the service of ideology and for dissent and civic discourse to be choked off. Wikipedia and Reddit know this well. ‘Cyber troops’ in China, state-backed teams hired or recruited to distort public sentiment online, have notoriously and routinely tweaked Taiwan’s Wikipedia page, for instance recharacterizing Taiwan from ‘a state in East Asia’ to ‘a province in the People’s Republic of China.’

Another notable example is Russia’s Internet Research Agency spraying polarizing content and disinformation narratives across Reddit in the lead-up to the 2016 U.S. election. While crowdsourcing sites still have the power to retroactively remove content, the crowdsourcing model means that these sites are fighting a hydra. In this sense, Birdwatch’s proactive appeal is diminished by its own enabling of a cat-and-mouse game of catch-up with authoritarian sharp power. A troll army or brigade could masquerade as Birdwatchers, creating notes that deflect and dismiss dissent and dress up pro-regime propaganda as helpful corrections.

Indeed, academics have already exposed how regimes weaponize particular features of Big Tech platforms to attack dissenters and shore up their legitimacy. For instance, cyber troops in Indonesia routinely fabricate mass complaints and reports against the accounts of dissenting Facebook and Twitter users, such as critical journalists and activists. Researchers have also shown how authoritarian regimes deploy subtler, more calculated strategies to reframe discourse and repair their image, such as turning to micro-influencers on Instagram to promote policy positions.

By outsourcing trust to the Twitter ecosystem, the crowdsourcing model of fact-checking gives authoritarian regimes a new forum in which to undermine democracy. Bots and trolls could unleash a steady stream of unhelpful notes into Birdwatch, giving Twitter employees a major headache. Cyber troops could artificially inflate the scores of their own notes, contaminating the Birdwatch voting system and algorithm and ultimately chipping away at our already depleted civic discourse.
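To see why a purely vote-driven ranking is so easy to game, consider a toy continuation of the hypothetical sketch above, in which a coordinated brigade of accounts rates a misleading note as helpful. The accounts, notes, and numbers here are invented for illustration.

```python
# Toy scenario using the hypothetical Note and rank_notes sketch above:
# a genuine correction earns organic ratings, while a coordinated brigade
# of roughly 200 accounts marks a misleading note as "helpful."
genuine = Note("t1", "citizen_a", "Correction with sources",
               helpful_votes=40, not_helpful_votes=10)
brigaded = Note("t1", "troll_b", "Pro-regime framing dressed up as a fix",
                helpful_votes=200, not_helpful_votes=5)

for note in rank_notes([genuine, brigaded]):
    print(f"{note.author}: score={note.score():.2f}")
# troll_b: score=0.98
# citizen_a: score=0.80
# The brigaded note outranks the genuine one: a naive popularity score
# rewards whoever can mobilise the most accounts, not whoever is right.
```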

To be fair, Birdwatch is in its infancy, and the new fact-checking feature has yet to be tested against the weight of authoritarian sharp power. However, these potential forms of manipulation are well established in many other contexts and on virtually all Big Tech platforms. Ultimately, Birdwatch’s proneness to authoritarian manipulation should sound the alarm for policymakers, journalists, academics, and citizens alike. Sharp power already threatens democracy worldwide, eroding trust in institutions in longstanding democracies and sowing chaos, fear, and polarization in public life in the COVID-19 era.

Yet the potential for sharp power to afflict Birdwatch poses a more potent threat than its other incarnations. Unlike other forums and vectors of authoritarian discourse-framing, such as tweets, Instagram posts, and YouTube or TikTok videos, fact-checking carries more pronounced legitimacy and a distinct stamp of credibility. Against this backdrop, Birdwatch may offer regimes a new way to reconsolidate their legitimacy and promote the durability of their regimes.

Birdwatch’s patina of credibility may also serve as a heuristic or cue that ultimately makes citizens more susceptible to believing disinformation narratives should they percolate through the new feature. Moreover, many citizens and civil society groups living under such authoritarian regimes will likely be unable to contribute to Birdwatch and challenge the weaponization of this fact-checking arm of the platform.

For instance, there is a growing presence and enforcement of sweeping “anti-fake news” laws and ICT policies in countries like Cambodia, Taiwan, and Thailand, designed to cement social control and choke off dissent. The resulting likelihood of self-censorship may deter ordinary citizens from becoming Birdwatchers and challenging computational propaganda.

These sharp cracks in Twitter’s crowdsourcing approach to combating misinformation threaten to exacerbate, rather than dampen, the flow and cascades of misinformation on the platform. While refining this novel experiment, Birdwatch’s designers must keep in mind that the cure for misinformation should not be more harmful than the disease.