Spread across six floors of an office building in the middle of a traffic-clogged city, thousands of employees stare at their computer screens, scanning Facebook, Twitter, and YouTube. This is a content moderation center, contracted by many major internet companies to remove posts and communications that do not meet a company’s guidelines.
On May 26th, this content moderation machinery was turned on U.S. President Donald Trump’s own Twitter account, when the platform appended fact-check labels to his tweets. Two days later, President Trump responded by signing the Executive Order on Preventing Online Censorship, a largely symbolic order that began a new federal foray into uncharted territory. By seeking to publicly defund online platforms engaged in content moderation and to limit their legal protections, President Trump started a heated debate that culminated in the questioning of several social media executives by the Senate Judiciary Committee on November 17th. Some senators, like Ed Markey of Massachusetts, called out companies “not for… taking down too many posts, but for leaving up too many dangerous ones,” while others expressed concerns about mass censorship and the end of free speech. However, one truth is clear: social media companies are taking an active role in moderating our online platforms without government regulation. Although many free-speech advocates are wary of government control, an increased role for the U.S. federal government in content moderation could strengthen our democracy.
Governments are uniquely positioned to protect vulnerable parties. In the U.S., this is clearly demonstrated by federal laws prohibiting discrimination on the basis of race, religion, gender identity, and sexual orientation. Related bodies of law address libel, slander, and defamation, and some democracies have extended these protections to prohibit hate speech. Yet, as technology companies have revolutionized our public forums, modern democracies must decide how to extend these protections into the online world. According to a report from the National Endowment for Democracy, EU democracies have taken the lead as regulators of internet content moderation. Germany began this movement with its Network Enforcement Act, in force since January 1st, 2018, a strict hate speech law that requires internet and social media companies to remove content violating existing speech laws within 24 hours of its posting. Austria went even further, proposing the Federal Act on Care and Responsibility on the Internet, a law that would ban anonymity on the internet by requiring users to register their legal names and addresses. The EU followed this legislation with a series of wide-reaching data protection and copyright acts (including the controversial Article 13), which make digital media companies legally responsible for both the material they host and the data of their users. By taking internet regulation back into their own hands, European democracies have clarified the relationship between online and offline speech regulation, exerted influence over internet speech regulation beyond their borders, and hindered the ability of disinformation, internet trolls, and polarization to hijack their political discourse.
In comparison, the U.S. passed Section 230 of the Communications Decency Act in 1996, largely turning the ability to regulate online speech over to internet companies. This act provided internet companies a metaphorical sword and shield: the ability to remove and manipulate any content the company delivers, and the legal protection to wield or sheathe that sword at the company’s will. With this decision, the U.S. government largely relinquished its role in regulating specific internet content. As detailed in the MIT Technology Review, this has led to a myriad of issues. Each internet company has its own policies and processes for moderating content, many of which it is not required to disclose. Each year, the number of interventions to remove content increases. This has led to a series of high-profile abuses of power, the most prominent being the near-blanket removal from many online platforms of content regarding possible ties between the Biden family and Ukrainian energy executives preceding the 2020 U.S. elections.
This privately run content moderation, or lack thereof, has accelerated democratic erosion. In “Can American democracy survive social-media censorship?”, Israeli journalist Eric Mandel argues that social media moderation has made the definition of free speech “flexible” and contributed to polarization and disinformation in the U.S. Mandel concludes his article by imploring the U.S. government to step in to defend free speech. The magnification of polarization by online platforms was clearly demonstrated in a comprehensive report on disinformation in the 2018 Brazilian elections. The report concluded that as baseline political polarization grew among voters, hyperpolarization and disinformation began to appear on the internet, and it details how tech companies struggled to adapt to disinformation campaigns, especially on newer platforms such as WhatsApp. During the 2018 elections, the Brazilian government stepped in to regulate online content, both by strengthening data protection laws and by using electoral courts to debunk and combat disinformation campaigns. Although it is unclear how successful these efforts were, it appears government involvement curtailed the damage to at least some degree. Furthermore, research conducted by Yale political scientist Milan Svolik concluded that polarized voters are more willing to trade off democratic principles for partisan interests. Svolik surveyed a variety of democracies and consistently found that support for anti-democratic candidates increases when an electorate is sharply divided. Internet companies have also mismanaged online political ads in a manner that poses a serious threat to democracy. A report by the Stanford Cyber Policy Center asserted that online platforms have failed to require the level of transparency mandatory in other media, and most maintain separate policies for political advertising. Political advertising has a significant effect on voters’ preferences, which, when unregulated, can be detrimental to American democracy, as evidenced by the 2016 Russian online disinformation campaigns.
Unfortunately, leaving internet speech unregulated also leads to democratic erosion. This is illustrated by Texas A&M communications professor Jennifer Mercieca’s essay “Dangerous Demagogues and Weaponized Communication,” which describes the censoring and de-platforming of popular Texas conspiracy theorist Alex Jones. Mercieca labels Jones a “dangerous demagogue” who engages in weaponized communication: rhetoric used as “an aggressive means to gain compliance and avoid accountability,” often simply talking over others instead of seeking dialogue. She concludes that allowing weaponized communication to continue unchecked, as it has predominantly on the internet, is inherently dangerous to democracy and needs to be regulated.
Many authoritarian governments have not been as susceptible to polarization and disinformation campaigns, owing to their tight regulation of speech and content. According to a report by the Center for International Media Assistance, China’s Great Firewall prevented disinformation and polarization from seizing political discourse and helped lift the country into the ranks of global information superpowers. These policies have also had a moderating effect on the global internet: campaigns designed to influence Chinese politics are often self-censored by content creators. This totalitarian control carries extremely negative side effects, such as the suppression of political dissent against the ruling Communist Party. By no means am I advocating totalitarian internet control; I am simply arguing that some regulation may maximize the benefits while minimizing the consequences.
Turning content moderation over to the government comes with its own set of flaws, such as posing a future threat to political dissent. As internet content moderation becomes a political issue, its implementation may stray from its pro-democracy intentions. History has shown us that politicians, especially when one party holds power, are willing to redraw voting districts to maintain and cement their authority. In his essay “Stealth Authoritarianism,” law professor Ozan Varol details how libel laws have been used by politicians to silence political dissent. Adding content moderation laws to the government toolset could open pathways for similar abuse, not to mention bureaucratic bloat and red tape that would likely slow innovation, handing an advantage to those who aim to work around the rules. The use of cutting-edge technologies like AI and machine learning to determine which content should be suppressed is becoming commonplace, but it is unclear whether the government has the infrastructure to innovate upon these advanced technologies. Other concerns include data privacy and the ease of circumventing legislation. Some question the right of an individual country’s government to regulate a global resource like the internet and argue that free internet access should be a basic human right. These many flaws force us to consider whether private industry could better address these issues.
Both major-party figureheads, President Trump and President-elect Biden, have expressed serious interest in repealing Section 230. On December 8th, 2020, President Trump threatened to veto a national defense bill unless it addressed Section 230. Regardless, the U.S. faces a tough decision about whether to increase its role in online content moderation. This change may not revolutionize our political discourse, but it may help us get a handle on our information and communication in order to combat the deep-seated polarization plaguing our democracy.
Hi Patrick, I thought your stance on government-moderated social media was interesting and quite different from some of the perspectives I have been seeing lately. While I definitely recognize the harm that unmonitored communication can have on social platforms, I wonder what you think about the limitations and boundaries of a government monitor. At this moment, I do not think the U.S. government has the capacity to moderate social media platforms directly, so it would have to either enforce strict guidelines about what the moderation centers currently in operation should be censoring, or contract a private agency (or an AI!) to do all of the moderation of social media under its guidelines. Again, I am not sure either of these options would be feasible any time soon, and they are not full-fledged ideas, but what do you think a plan for government moderation would look like?
I definitely agree that there are serious limitations and that there really is no perfect solution to this problem. The central issue I am trying to focus on is accountability. In the current setup, internet companies are not accountable for their choices in content moderation and are not required to provide any transparency or explanation. Instituting a governmental system is at least one way to introduce accountability into content moderation: government-run moderation is easily checked by the judiciary, the vote, and other institutional mechanisms. Although this is not the only way to introduce accountability and transparency into content moderation, I’m starting to believe it is the most likely. I am very hesitant about the idea of “outsourcing” this job to a contractor because it introduces a lot of ethical issues; I worry about content moderation becoming a lucrative business, in which a government contractor might receive an incentive like money or influence for taking down certain posts. A government plan would be slow and would often involve passing contentious legislation and guidelines. However, I think changing the American system of content moderation is one of the few issues currently uniting Republicans and Democrats. Furthermore, I would prefer a less effective content moderation system to an over-zealous one.
Hey Patrick. I really enjoyed your essay. I’m definitely worried about how susceptible social media platforms have been to disinformation campaigns, and about their influence on polarization. I’d be surprised if companies aren’t forced to accelerate the creation of effective regulatory measures, but I certainly think it’s strange that the purview of content regulation falls solely upon internet companies. At the very least, I hope that governments across the board start codifying ground rules for internet communications. That being said, I was wondering what rules you have in mind, if any at all?
Although governments currently have broad powers to regulate the internet (through bodies like the Federal Communications Commission in the U.S.), I was surprised by exactly how much freedom internet companies have been given to regulate and moderate content. Based on my research, there is not much stopping an employee at a tech company from waking up one day, deciding to change some of the current guidelines on their platform, and then removing millions of posts that were completely permissible the previous day. One of the first rules that I think needs to be instituted is a standardization of content moderation regulations across platforms, specifically for topics like political advertising.
The internet is such a fascinating topic because it is a new frontier that democracies are having to grapple with, especially amid an influx of antitrust lawsuits against major corporations like Google and Facebook. Your comparison of government involvement in internet content across several global democracies shows how our democracy can be strengthened rather than eroded through government content moderation. I found your deep dive into how Russian disinformation and the current lack of regulation can erode our democracy very interesting and critical in this time of polarization. I definitely agree with your point that unregulated online content furthers political polarization and is dangerous to the functioning of our democracy. It will be interesting to see how (and when) moderation will be implemented, especially whether it will be a strictly governmental endeavor or one contracted to the private sector. I think free speech will continue to be the main argument of those opposed to government involvement with the internet; however, when it comes to protection against malign foreign forces and discrimination, moderation will only strengthen our democracy.
I definitely agree with you. One of the most fascinating (and scary) things is how fast this new frontier has leap-frogged all other means of communication to become the central battleground of society. When you consider that fact alongside how slow and inefficient democracies have been at regulating it, you begin to really fear for the future of democracy. This struck me most when data coming out of the recent Cambridge Analytica scandal (watch The Great Hack on Netflix) showed that these new overarching tools of the internet and social media can be used to socially engineer outcomes, such as elections, with remarkable accuracy, effectiveness, and local precision.
The internet has become a platform for voters to express their opinions and debate others with opposing viewpoints. A large issue within these debates is the increased use of hate speech. While I believe posts like these should be censored, the government needs to ensure that any censorship does not violate free speech protections. Social media websites should be in charge of monitoring their posts and deleting what goes against their policies. It would be difficult for the government to monitor all social media platforms, and it would not be a great use of its time. The use of the internet as a political platform is not going away any time soon, so companies should be prepared to update their policies and guidelines for users.
You make a very interesting argument. Internet forums have become so central to our society’s public discourse that I am not completely comfortable with the idea of a small group of individuals having complete and total control over them. I would also like to push back on the idea of self-regulation. In general, I really dislike the idea that companies motivated by profit should also be put in the position of regulating themselves; we saw this backfire recently in the aerospace industry with the Boeing 737 Max. You get into some serious ethical dilemmas when a programmer at Google is left to decide whether to tighten the company’s guidelines to remove viral hate speech that may be directly benefiting its bottom line.
Hi Patrick, I really enjoyed reading your article. One of the most relevant headlines in the last few days has been the Federal Trade Commission and 40 states suing Facebook over antitrust concerns stemming from the company’s acquisitions of Instagram and WhatsApp in recent years. Although unlikely, if Facebook were successfully broken up in the near future, along with a host of other Big Tech companies, moderating internet content across even more platforms would probably become an even bigger headache for the federal government, were it ever to take on that initiative. The government’s lack of moderating infrastructure already raises questions about how much will be spent on the issue and how committed current and future administrations will be to tackling it, especially with other domestic crises facing the United States today. Yet the fact that Facebook holds a monopolistic advantage over many sources of information-sharing is equally grim, not to mention the scale of Google in this regard. Further, I’d be interested to see survey or focus group results gauging how people might react to increased federal government content moderation.
This is a very interesting point. Personally, I think many tech companies have been in intentional violation of antitrust laws for many years. However, even if they were broken up, I don’t think it would be hard for the U.S. government to say: meet these guidelines and comply with these federal content moderation laws, or be held legally liable. The biggest card the government has up its sleeve is expanding what companies can be held liable for in order to push them in a certain direction. For example, it would not be hard for the federal government to pass legislation requiring all internet companies to accompany any flagged or removed post with a report telling the author which guideline or section the post violated, and to maintain an appeals system the company could be held to in court. In this way, the government might be able to shift some of the burden onto large tech corporations.
Hi Patrick,
I picked your article solely because you chose to explore the importance of Section 230 of Title 47 of the U.S. Code. I think it is a double-edged sword when we discuss the ramifications of amending the statute. Currently, the system is protected from government overreach in censoring or moderating speech, because companies retain legal liability protection while allowing forms of grey-area speech. Even so, nearly all platforms have content restrictions that prohibit and punish expression of an extreme or threatening nature; this has arguably been a success of the market without direct government action. However, we have come to a moment where we must either remove the liability protection, or do that AND allow the government to enforce some kind of penalty for a platform’s breach of the statute. Coincidentally, I believe the best mechanism for that penalty would be a bounty-hunter approach similar to the one the State of Texas took in its anti-abortion legislation, which allows citizens to sue a provider of an abortion, or someone aiding in the process, and be rewarded $10,000 for doing so. Similarly, the FCC could be authorized through an amended Communications Decency Act to penalize companies and award bounties to citizens who bring suits against platform-based expression that is deemed incendiary, false, misleading, etc., and is not flagged or labeled as federal legislation prescribes, as you mention. This prevents the government, and more so the president through his or her control of the FCC, from deliberately censoring expression, and allows citizens to enforce the statute by proving their case in court. This limits the possibilities of Stealth Authoritarianism (Varol, 2015), which you note. Now more than ever, we need government action that is effective and upheld through the force of the American people.