The Civil Serpent: On Meta’s Regulation of Free Speech

In December 2021, Rohingya refugees from Myanmar launched a $150 billion lawsuit against Meta Platforms, Inc. (Meta), formerly known as Facebook, over allegations that the social media giant failed to act against anti-Rohingya hate speech that contributed to the ongoing genocide in Myanmar. The Rohingya genocide is a series of persecutions and killings of the Muslim Rohingya people, perpetrated by the Myanmar military. The genocide has consisted of two phases to date: the first was a military crackdown that ran from October 2016 to January 2017, and the second has been ongoing since August 2017. The crisis forced over a million Rohingya to flee to other countries, with more than 700,000 fleeing to Bangladesh.

Whilst the outcome of the lawsuit is uncertain, it underscores a deeper issue: how should governments respond to the regulatory challenges presented by private companies that increasingly perform state-like functions?

Rohingya refugees have alleged, in their class-action complaint filed in California, that Meta’s failure to police content and the design of its platform contributed to real-world violence faced by the Rohingya community. The complaint alleges that Meta was ‘willing to trade the lives of the Rohingya people for better market penetration in a small country in south-east Asia.’ It adds: ‘In the end, there was so little for Facebook to gain from its continued presence in Myanmar, and the consequences for the Rohingya people could not have been more dire. Yet, in the face of this knowledge, and possessing the tools to stop it, it simply kept marching forward.’ The general thrust of these allegations is that Meta will prioritize its expansion into emerging markets over the well-being and safety of its users. When those users are particularly vulnerable and historically subject to persecution, this lack of oversight and negligence on Meta’s part can contribute, and has contributed, to horrific political and social consequences.

In a coordinated action, British lawyers also submitted a letter of notice to Meta’s London office. The letter makes a number of complaints. Firstly, it alleges that Meta’s algorithms ‘amplified hate speech against the Rohingya people’. Secondly, it outlines how Meta’s failure to invest in moderators and fact-checkers familiar with the political situation in Myanmar exacerbated the ongoing persecution of the Rohingya people: between 2010 and 2014, Meta employed only two Burmese speakers, and after 2014 it outsourced monitoring work to Accenture, which at the time employed only one. Finally, it claims that Meta failed to take down posts or delete accounts that incited violence against the Rohingya and did not ‘take appropriate and timely action’ despite warnings from numerous human rights charities, the media and the Rohingya people.

Some of the posts that Meta failed to address included comparisons of the Rohingya to dogs, maggots and rapists, and suggestions that they be fed to pigs, shot, or exterminated. Additionally, the class-action lawsuit directly references claims by whistleblower Frances Haugen, who leaked a cache of internal documents in 2021, that Meta does not police abusive content in countries where such speech is likely to cause the most harm, and instead prioritizes profits over safety.

Unfortunately for the claimants, in the United States, platforms such as Meta are protected from liability for user-posted content under a law known as Section 230 of the Communications Decency Act, which provides: ‘No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.’ The Rohingya complaint says it seeks to apply Myanmar law to the claims if Section 230 is raised as a defence, although it is difficult to see how a U.S. court would apply Myanmar law to a U.S. company domiciled in the state of California.

The Making of Meta’s Supreme Court

The lawsuit raises several broader questions about how best to regulate private companies. Private companies increasingly generate revenues that rival nation-state GDPs and exercise control over forums and discourse traditionally regulated by government. As a consequence, technology and social media companies have become powerful global actors and their corporate governance decisions now affect the rights and freedoms of billions of people.

Whilst national governments have spent the last two years grappling with the new economic and health challenges catalyzed by COVID-19, ‘net-states’ like Facebook, Microsoft, Amazon and others raced to build their empires, dramatically expanding the services and territories in their ‘jurisdiction.’ Facebook spent almost six billion dollars on Reliance Jio, a cellular network catering to the Indian market and, with that investment, brought many new users to its interface. Google’s parent company, Alphabet, spent a combined half-billion dollars on urban infrastructure, with investments in mobility outfit Lime and the city-building firm Sidewalk Infrastructure Partners, thereby increasing its physical presence and market penetration. And Microsoft has spent undisclosed sums on five separate investments over the last year, including on 5G infrastructure projects.

As these technology companies wield their influence over the proverbial ‘public square’ of discourse and discussion, they can now effectively deny constitutionally protected liberties such as the First Amendment right to free speech. Strikingly, private companies have failed to monitor their platforms appropriately and are subject to minimal regulation. In 2017, Twitter failed to meet the EU’s standards for removing hate speech on its platform, and a study found that Twitter took down less than 40% of what the European Commission deemed to be ‘hate speech’. In Meta’s specific case, it often makes operational decisions about whether certain content is allowed on its platform, de-platforming entire groups or conversely allowing hateful speech to continue unchecked. This approach was first brought to wider public attention following the Unite the Right rally in Charlottesville, which ultimately led to the death of Heather Heyer. Following the attack, Mark Zuckerberg released statements promising change and additional scrutiny. Unfortunately, the company’s internal policies remained entirely unchanged.

Although this latitude may seem innocuous, it lies in stark contrast with the restrictions and regulations imposed on governments. In a US context, the First Amendment right to free speech imposes very strict non-discrimination duties on government actors. In short, the government is not allowed to ban speech merely because it would like to remove that speech from a public forum. Instead, there is only a limited set of cases where it is allowed to do so (for example, where the speech incites violence), and not without additional public scrutiny (an individual or company is able to challenge restrictions through the judicial process). The justification for this strict limitation of government power is that the government has typically served as the regulator of the ‘speech marketplace’ or the ‘marketplace of ideas’. Attempts by the government to police anyone in this marketplace of ideas are therefore viewed with suspicion, since they may distort the market and harm citizens with legitimate viewpoints. Ideally, the government instead allows as much participation in the marketplace of ideas as possible, to ensure that ultimately the market itself differentiates between ‘good’ ideas and ‘bad’ ideas. In this sense, there can’t be an effective marketplace of ideas if people aren’t allowed to decide which ideas they want to associate with and which ideas they wish to decry.

This distinction between public and private regulation makes sense at a certain level of abstraction. But the world we live in now is not one where the government is the only overseer of the marketplace of ideas. The distinction has failed to mirror the current landscape of technology monopolies and global conglomerates that dictate new norms and emerging customs. Instead of people gathering in public squares or disseminating information via public radio and television, most news is consumed through private companies and on the international interface of the world wide web. As a consequence, private actors very often govern the marketplace of ideas themselves. Since the First Amendment only limits government actors, social media companies like Meta, Amazon, and Twitter retain total freedom to police speech however they wish on their platforms. The only regulation these platforms face is a very limited set of laws that prohibit unlawful speech, such as speech that includes child pornography, speech that violates copyright protections, and speech that incites violence.

Meta and Twitter are not government actors; they do not have an army, and one can leave them far more easily than one can leave the United States. But when it comes to the regulation of speech, all the concerns outlined above about government censorship, namely that it may limit diversity of expression, manipulate public opinion, or target dissident or heterodox voices, also apply to these sprawling private actors. And yet, under the current First Amendment rules there is no mechanism to protect against those harms. Additionally, as is now quite self-evident, these platforms are driven by perverse incentives that can promote harmful speech and dangerous misinformation, with real-world consequences.

In a representative democracy, the process of procedural governance confers legitimacy on the laws and policies produced. A piece of legislation or a court ruling commands compliance and carries authority because the decision is made by duly empowered representatives following a clear process. This is valuable because it separates the process of decision-making from the outcome reached, and provides the outcome with validity. Corporations claim they behave like governments because they want to invest their decisions with that sense of procedural legitimacy and signal to investors and consumers that their decision-making processes are transparent and dependable. However, in reality, corporations adopt these displays of procedural legitimacy to ward off further scrutiny. In 2018, Mr. Zuckerberg decided to create an oversight board he described as ‘almost like a Supreme Court’ to adjudicate the boundaries of acceptable speech in a private Meta forum. However, for almost two years following this announcement, Mr. Zuckerberg did not set up the oversight board. Further, the board itself can only make recommendations rather than implement change or force Meta’s hand. In this sense, the show of procedural legitimacy is shallow since, as the lawsuit makes clear, it is not underpinned by a change in internal policy. Corporations may be people, but they are not polities. Their executives are not elected representatives, the rules they choose to follow are not laws, and legitimacy cannot be borrowed to justify decisions contrary to the public interest. As Mr. Zuckerberg himself observed in The New Yorker: ‘maybe there are some calls that just aren’t good for Meta to make by itself’.

Possible Solutions

One response to the problems outlined above would be to increase regulation. Section 230 could be repealed, making companies subject to a host of external oversight from researchers, public bodies and Congress itself. However, the suggestion that regulation is the main driver of corporate governance is misleading from the outset. It distracts from the power technology companies have in setting norms and standards themselves.

Through their business models and innovations, they develop rules on speech, access to information and competition that create self-perpetuating norms. Indeed, if these executives want change, there is no need to wait for government regulation to begin with. As regulators of the domains they govern, nothing stops them from proactively aligning their terms of use with human rights, democratic principles and the rule of law. Additionally, there are legitimate concerns regarding the institutional capability of organizations and public bodies such as Congress to monitor the constant stream of social media posts on multiple platforms and to understand the new technological capabilities that these companies develop.

In light of this, one alternative would be to accept Meta’s net-state status and insist that technology companies take responsibility for their own ‘territories.’ Meta could adopt a number of different approaches. Firstly, it could anchor its terms of use and oversight standards in the rule of law and democratic principles. Secondly, it could allow independent scrutiny from researchers, regulators and democratic representatives alike, since credible accountability would need to be independently evaluated.

Finally, and perhaps most radically, it could give its employees and customers more of a direct say as prime ‘constituents’. If CEOs are serious about their state-like powers, they should perhaps be forced to consider their consumers as akin to citizens. To this end, there have been proposals to create a kind of regulatory agency that would collaborate with the platforms on developing policies. This might mean more democratic structures of governance inside these platforms and a sustainable structure of oversight.

Perhaps most controversially, Justice Clarence Thomas has suggested that technology platforms should be treated as ‘common carriers’ and regulated as public utilities. A common carrier is a person or company that transports goods or people for the general public and is responsible for any loss of the goods during transport.

However, practically, it is difficult to see how a common carrier regime would work. Common carrier laws, which in practice prevent private actors from excluding almost any speech, work well when applied to companies whose primary job is moving speech from one place to another. But social media companies do a lot more than that. One of the primary benefits they provide to their users is moderating content, whether to facilitate conversation or to use algorithms to promote the most engaging information to a wider audience. Common carrier obligations would make it difficult for the companies to perform any of these services, and this solution may, in fact, exacerbate the issues.

The most tenable solution may be to lift the Section 230 protection for companies domiciled in the United States. This would strongly incentivize technology platforms to adopt any number of the suggestions outlined above in order to avoid being successfully sued.

In 2017, Germany passed the Network Enforcement Act (NetzDG), a law restricting online incitements to violence and hate speech. This express obligation reasonably placed the onus on private companies to build into their technology governance models of oversight that actually ensure the removal of hateful or inciting speech. In Meta’s case, this may mean, in practice, investing in natural language processing artificial intelligence that better scans posts for violence, or hiring more analysts to look at Facebook Groups where hateful speech foments.
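To make that concrete, the sketch below shows, under stated assumptions, what an automated first-pass screen over posts might look like. It is a minimal illustration, not a description of Meta’s actual systems: it assumes the open-source Hugging Face transformers library and the publicly available unitary/toxic-bert toxicity classifier, and the example posts, score threshold, and flag_for_review handler are hypothetical stand-ins.

```python
# A minimal sketch of automated first-pass screening for violent or hateful
# posts. Assumes the open-source Hugging Face `transformers` library and the
# publicly available `unitary/toxic-bert` classifier; the posts, threshold,
# and review handler below are hypothetical stand-ins for illustration only.
from transformers import pipeline

# Load a pretrained toxicity classifier (downloads the model on first use).
classifier = pipeline("text-classification", model="unitary/toxic-bert")


def flag_for_review(post: str, label: str, score: float) -> None:
    # Placeholder: a real system would queue the post for human moderators
    # with the relevant language and political-context expertise.
    print(f"FLAGGED [{label}, score={score:.2f}]: {post!r}")


def screen_posts(posts: list[str], threshold: float = 0.8) -> None:
    """Classify each post and flag likely policy violations for review."""
    for post, result in zip(posts, classifier(posts)):
        # The pipeline returns the top-scoring label for each post; label
        # names ("toxic", "threat", ...) follow the model card.
        if result["score"] >= threshold:
            flag_for_review(post, result["label"], result["score"])


# Hypothetical usage with placeholder posts:
screen_posts(["an innocuous example post", "a second example post"])
```

Note the design choice implied here: the classifier serves only as triage, surfacing candidates for human review rather than removing content automatically, which is precisely where investment in language- and context-competent moderators becomes indispensable.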

Conclusion

Regulation that divides private and public organizations with a bright-line distinction has failed to reflect the new position and reach of social media companies.

Instead of treating these companies as unwieldy and subjecting them to more ineffective regulation, it would be better to acknowledge that they have cannibalized functions traditionally performed by states. In doing so, vulnerable peoples, such as the Rohingya Muslims, may gain a more direct and tenable complaint against a company that has failed to protect them and has instead exacerbated violence against their people.

Mannat Malhi is a trainee solicitor at Sidley Austin with an undergraduate degree in law from Oxford and a Master’s in Law from Harvard University.