In 2004, in a small dorm room at Harvard University, Facebook was born. A platform initially designed for American college students to network and meet one another has grown to host nearly 3 billion monthly users worldwide in a little over 16 years. With such exponential growth, unforeseen issues are inevitable. It can be hard to attribute a lack of foresight to anything more than naïveté, but nonetheless, hate speech has been one of the most dogged issues Facebook has tried to address. Hate speech is a huge problem in the United States, but globally it poses an even greater threat because of Facebook’s inability to moderate its platform. In Myanmar, as of August 2018, Facebook did not have a “single employee in the country of some 50 million people.” Instead, its content moderation is outsourced to a separate firm through a covert operation called ‘Project Honey Badger.’ Project Honey Badger is responsible for moderation across many Asian countries, and in Myanmar it reportedly had just 60 people (only a handful of whom actually speak Burmese) reviewing hate speech reports for more than 18 million active users. As a consequence, hate speech wreaks havoc in Myanmar and in many other countries around the world.

This blog post will argue that in countries with deep ethnic divisions and weak political institutions, Facebook, by providing a nearly unmoderated platform for the spread of hate speech, drives ethnic violence that directly contributes to democratic backsliding. It will review three case studies, Sri Lanka, Myanmar, and Ethiopia, to contextualize how Facebook drives ethnic division and violence, and then employ the work of several scholars to connect ethnic violence with democratic backsliding.
In Sri Lanka in 2018, a false rumor originating on Facebook claimed that the Muslim minority was planning to distribute sterilization pills to wipe out the Sinhalese majority. In Ampara, a customer at a Muslim-owned restaurant began yelling at and harassing a worker about something he had found in his food, to which the worker replied in broken Sinhalese: “I don’t know, yes, we put?” The interaction was recorded on video and posted to Facebook as supposed proof that the Muslims were indeed distributing sterilization pills to wipe out the Sinhalese. The rumor was, of course, false, but the man was beaten by a mob, his shop destroyed, and the local mosque burned. Critically, “Facebook’s newsfeed played a central role in nearly every step from rumor to killing.”
In Myanmar from the mid-2010s through 2018, Facebook was a key instrument in the ethnic cleansing and genocide perpetrated against the Rohingya Muslim minority, a campaign largely driven by military officials. These officials set up seemingly innocuous pages devoted to “Burmese pop stars, models, and other celebrities” and then used them to spread toxic disinformation. The main goal of the posts was to “generate widespread feelings of vulnerability and fear that could be salved only by the military’s protection.” Facebook’s own commissioned report found that the platform was instrumental in “foment[ing] division and incit[ing] offline violence.” This carefully worded corporate admission means, in plain terms, that Facebook’s content moderation policies, or lack thereof, fueled a violent ethnic cleansing on a massive scale.
In Ethiopia, a popular singer, Hachalu Hundessa, was assassinated in June 2020 after a disinformation campaign on Facebook alleged that Hundessa “abandoned his Oromo roots in siding with Prime Minister Abiy Ahmed.” The assassination sparked days of violence that left hundreds dead, with ethnic minorities suffering the most damage. As in the previous cases, “the bloodshed was supercharged by the almost-instant and widespread sharing of hate speech and incitement to violence by Facebook…Mobs destroyed and burned property. They lynched, beheaded, and dismembered their victims.” As in Myanmar, the government in Ethiopia was also involved in the dissemination of hate speech.
David Waldner and Ellen Lust engage substantively with the driving forces behind democratic backsliding in their article “Unwelcome Change: Coming to Terms with Backsliding.” In it, they outline six key theories to explain backsliding, one of which is the theory of social structure and political coalitions. This theory treats the actual divides between ethnic groups as a source of democratic instability. The basic idea is that in pluralistic societies, ethnic group loyalty can trump national loyalty, leading politicians to appeal directly to members of their own ethnic group in a process known as outbidding. This process leads directly to “increased ethnic chauvinism, ethnic polarization, the breakdown of democratic institutions, and possibly interethnic political violence.”
This dynamic of political coalitions driving backsliding is only exacerbated by the presence of a platform that drives deeper wedges between groups, and Facebook is particularly ill-equipped to address these problems in many of the countries where the coalitional dynamic is at play. It is a mutually reinforcing cycle: the governments are often unable to engage with Facebook to prevent the spread of hate speech, but even if they were, Facebook is incapable of keeping up with content produced in languages unfamiliar to its employees. The population suffers as a result, just as Waldner and Lust’s thesis predicts. Ethnic wedges are driven deeper by tensions on Facebook, and, as a result, both ordinary users and those in power who have a political incentive to stoke tension can do real-world damage and drive democratic erosion.
When Facebook grows in countries that have relatively weak institutions and ethnic in-fighting, the problem of democratic backsliding is compounded. Daron Acemoglu and James Robinson, in their book Economic Origins of Dictatorship and Democracy, contend that political institutions are critical for democracy because they not only protect current democratic outcomes but can also be used by the polity to ensure the allocation of future power. The implications of this argument for a nation stricken with ethnic violence driven by Facebook are dire. Central to Acemoglu and Robinson’s thesis is the ability of the polity to organize and shape the design of democratic institutions to serve its benefit. In such a nation, not only is the polity divided along ethnic lines, but its organizational capacity is severely limited. A country consumed by violence cannot come together to redefine institutions that, in many cases, are already failing to serve their purpose, and democratic erosion follows.
Determining a viable solution to this problem is nearly impossible. The astronomical growth of Facebook and the initial naïveté of its team are partially to blame. A platform barely over 16 years old, initially developed for college students to network, now drives genocide around the world and has denied culpability almost every step of the way. Expecting the platform to evolve and step up to the challenge has proven futile. In the United States and European countries, citizens have turned to their governments for action, also to little avail; in countries where the government itself is invested in the ethnic violence, turning to the state offers little hope. So, barring the creation of some third-party regulatory entity (highly unlikely), the problem will likely go unsolved unless sufficient pressure is placed on either governments or Facebook to step up. Until then, Facebook will continue to contribute to the erosion of democracy around the world, especially in the most vulnerable countries.
This is a very interesting (albeit alarming) article; it’s intriguing to see how a platform meant to create connections and foster organization is used in the modern day to deepen divisions and incite violence. You say that “the astronomical growth of Facebook and the naïveté of its team is partially to blame” for the modern dangers this platform poses, but is there something the original team could have done differently to stop the spread of hate speech and misinformation that couldn’t be instilled today almost as effectively (such as increased and diversified hiring to monitor hate speech, or stricter rules about misinformation campaigns)? If Facebook hadn’t become the spawning ground for dangerous rhetoric, wouldn’t another platform have taken its place (after all, hate speech and misinformation aren’t at all limited to Facebook but have the potential to spread to most social media platforms)? In an age when social media drives so much personal and group interaction, I think you’ve posed a very critical point, but it would be interesting to continue this investigation and see how your observations extend to other social media platforms.
This is an excellent take on the issue of Facebook and democracy. I was unaware of some of the extremely alarming examples you mentioned, and I would venture to guess that many others are in the same boat. Greater awareness of Facebook’s impact in various countries needs to be brought about, because it is clear that it can be downright dangerous. You highlighted a lot of important points, such as the fact that Facebook’s exponential growth was bound to lead to unintended consequences. This is a key assertion to make in our age of rapidly advancing technology. Facebook is a great example of how well-intended things can gradually become problematic and dangerous. You wrapped this up nicely by tying it back to how the platform was originally intended for college students, and it is now completely different. I am very interested to see how countries handle these serious problems in the future.
Bernal, I really appreciated how this post discussed the relationship between Facebook and the countries you selected that are experiencing significant ethnic in-group fighting. Before reading this piece, I was less familiar with the cases of Myanmar, Ethiopia, and Sri Lanka and more familiar with Facebook’s effects in the United States, so this blog was very helpful for me! In the US, the media has highlighted the role of Facebook (as well as Twitter and YouTube) in the proliferation of disinformation, misinformation, and hate speech. As you probably know, Facebook and other social media companies have in recent years begun to “crack down” on misinformation, disinformation, and hate speech. However, as you perceptively point out, this crackdown is not administered equally in every country. At the end of this blog, you hypothesize that hate speech on Facebook’s platform is likely to remain unmoderated unless either a third-party regulatory body is created or the necessary pressure is placed upon governments to enact change. You end on a bit of a dystopian note, stating that until one of the two solutions is enacted, “Facebook will continue to contribute to the erosion of democracy.” Although I agree, I also think that Facebook has contributed positively to the enablement of democracy (specifically in the United States), mainly through its ability to keep voters informed and connected to their elected officials and representatives, and to mobilize individuals to take action. Do you see these same enablements of democracy taking place in Myanmar, Ethiopia, or Sri Lanka?
Bernal, I think you raise a really interesting point about the danger of Facebook’s lack of content moderation, and the cases you draw on highlight the severity, even deadliness, of the consequences. While I have read about the disinformation campaigns on Facebook surrounding both the 2016 and 2020 United States elections and the company’s repeated refusal to take a harder stance against fake news and misinformation, I was not aware of the role the platform played in the violent killings and ethnic cleansing agendas in Sri Lanka and Myanmar. The examples you cited reveal the frankly terrifying reality that social media platforms now play a major role in politics, for better or for worse, and can be used to fan the flames of polarization both at the political level and within the mass public. I was especially alarmed by the case in Ethiopia, where a famous singer was murdered and enraged mobs then went on to terrorize and kill other citizens. While much of the digital age has been plagued by rampant hate speech and anonymous death threats, this marked something much more dangerous for me: people breaking from the relative security of inciting violence behind their screens and going out into the real world to engage in the violence themselves. I wonder if you expect to see the same type of violence and mob behavior taking place in the United States, since Facebook still has not cracked down on the hate speech and disinformation furthering polarization after the election.
You make a very powerful point here. We are oftentimes so focused on the way Facebook contributes to the spread of misinformation and impacts polarization in the U.S. that we fail to consider that it is an international platform whose accountability severely drops off abroad. Over the past several weeks we have become used to seeing posts on Twitter and Instagram flagged for “disputed election fraud claims” and pop-ups that take us to educational sources on the U.S. election. However, for the most part, elections abroad carry on without contentious election posts being flagged. This is particularly alarming because even with the cases in Ethiopia, Myanmar, and Sri Lanka that clearly point to social media being a source of political and ethnic violence, little has been done (at least to my knowledge) to prevent the dissemination of misinformation (specifically on platforms like Twitter) in countries plagued by polarization and ethnic divisions. It makes me question why these efforts have not been extended to countries other than the U.S. Is this due to a lack of resources, company unwillingness to change, or a lack of collective action? Though Donald Trump’s use of Twitter as a platform to challenge election results is certainly unique, governments abroad continue to make disputed claims. For example, Russia’s official Twitter account continues to post that Crimea is a part of Russia despite the hotly disputed annexation and the ongoing Ukrainian crisis. Leaving these posts unflagged lends them a platform and does a disservice to the people affected by the false claims. Why hasn’t Twitter extended its content flagging to misinformation abroad? And what, if anything, could make it change this practice? On top of that, there is also the challenging situation of information spread in countries with state-owned news media. Short of social media platforms developing their own research and news teams, how could Twitter and Facebook hope to combat the spread of divisive misinformation in countries that don’t seem to have any reliable sources of information? Is it even possible to determine what is and is not true in those scenarios? And finally, is it ethical for a social media company to be making those calls?
What an interesting article, Bernal. I had no idea about the role of social media in the tension and unrest in other countries. Something I find especially intriguing is that you draw a link between the lack of employees and oversight capacity in these less advanced democracies and the perpetuation of political and ethnic violence. But even in an “advanced” democracy like the US, Facebook’s ability to police its platform seems lacking. There are numerous white supremacist and neo-Nazi groups that have found a home on Facebook and use the platform to radicalize members into extremists willing to carry out violence. We can see that in the 2019 El Paso shooting, where ethnically driven violence similar to that of Myanmar was committed by a gunman whose beliefs were traced back to the social media he consumed. This really emphasizes the point you made toward the end of the piece: how much can we expect the platform to evolve? It seems like platforms almost need to take on a governance role, given their immense involvement in the political sphere. We all know that is rather unreasonable, which leaves us wondering what can be done to curb the negative real-world effects that social media produces.
Bernal, I really enjoyed reading this post; it highlights a lot of the ways that our rapidly digitalizing world is creating new challenges in completely unrelated areas, such as governance and, of course, democratic erosion.
I absolutely agree with the idea that Facebook allows for “outbidding,” which feeds polarization. I want to add that the more complex social media technology gets, the more dangerous these platforms will become. For one, algorithms and machine learning have gotten incredibly sophisticated and can infer political alignment quite precisely from otherwise unrelated activities. This tends to feed polarization by creating an echo chamber of content, where a person who believes falsehoods sees more and more reinforcing content, leading them, through confirmation bias, to think these false theories are true.
The international lens is also pretty interesting. The problem is of course mostly discussed in the context of the US and the 2016 election, but I think Facebook makes gatekeeping more difficult across the entire world. It is likely very difficult to root out extremist propaganda nowadays, because fringe commentators can usually find their audience online. I’m thinking here of creators like Alex Jones, who might have found it impossible to take off without widespread access to their content. This idea of gatekeeping is discussed a lot in the Levitsky and Ziblatt readings, which I think go along well with the argument you’re making about polarization.
Bernal, you make a strong argument supported by compelling, interesting, and solid examples. We oftentimes get bogged down in discussing how Facebook is negatively impacting Americans and fail to take into account how harmful it can be internationally, especially in places where there is less accountability. Hate speech and misinformation are dangerous polarizing forces that, as you have pointed out, can cost lives. I know many people who have recently decided to delete Facebook altogether because of the very problem you mention at the end of your piece: there is really no viable solution. I wonder if a potential way to decrease Facebook’s contribution to the erosion of democracy would be to counter it with the spread of useful information. For example, in the US, despite the fact that Facebook could be divisive and spread misinformation, the platform was able to flag posts that might not be true or might need fact-checking, as well as provide users with useful information about voter registration and other mechanisms that allowed them to be informed voters. This was provided by the platform itself, regardless of what people’s “friends” were posting. I wonder if it is possible for the platform to do this on a more widespread basis, so that even if it provides a platform for hate at some points, it is also doing some good. While this doesn’t necessarily solve the problem, it seems somewhat more viable than the alternatives, namely Facebook transitioning to a more governance-oriented role (I think the platform having more power is the last thing people want). Additionally, you note that Facebook has fewer employees in weaker democracies; do you see an outreach project to hire more employees in these areas as a potential fix? I worry that if Facebook were to go, another platform would simply take its place, so reforming the current platform seems like a better option than eliminating it altogether. I think this will be an interesting issue to follow going forward.