We’ve all seen it – fake social media accounts blasting blatant disinformation and propaganda. Your uncle Bill retweets them and your cousin Sally shares their links on Facebook, but they don’t know any better, right? Surely you, a savvy consumer of social media, can navigate this novel landscape of algorithms and disinformation. After all, you grew up with Instagram and Twitter, you’re educated, you can spot the fakes…right? Maybe not. The networks of so-called ‘bots’ on social media are pervasive and sophisticated, designed to target any type of consumer and directly promote a message that consumers will identify with. The technology is not inherently evil; it’s the reason you found that great outfit you’re wearing to that concert it told you about. That doesn’t seem all too bad. The trouble is that this same technology can be co-opted to deliver more than a convenient shopping experience.
Social media bots play a central role in social and political discourse around the world. There is a good chance your own beliefs have been altered by bots and their activity without your knowledge. To understand how bots are attacking your subconscious, one must first understand how they work. Social media bots simulate real human users across various platforms. They adopt the personality and appearance of the target audience. In the political space, bots often pose as activists or members of marginalized groups that represent a certain political belief or agenda in order to ingratiate themselves with that community. Once this role is assumed, the bots can systematically push narratives through their posts and shared content. Beyond inventing their own personas to emulate what one may perceive as a trusted source, bots have also been seen impersonating real people in order to legitimize their messaging. Again, you may be thinking to yourself ‘but these fakes are easy to spot!’ That doesn’t stop their efficacy. If 10,000 fake accounts promote the same disinformation, each account may only need to fool one social media user into sharing a link and suddenly it’s a trending topic. If this is targeted correctly, social media algorithms will promote whatever topic is receiving the highest volume of traffic and push it to a broader base of consumers. As a message spreads across social media, it can find its way to legitimate media outlets and, in turn, to millions of consumers who trust the information they are receiving.
The implications of this reality within the political sphere are vast. If a targeted network of fake accounts is able to create and disseminate disinformation on social media and mainstream media platforms, it is able to alter the political discourse. Honest political discourse is a pillar of democracy, and debate is a cornerstone of the democratic process – a vital tool used to persuade voters on political issues and affiliation. Strong debate and discourse are the method through which we formulate our political ideology. Now, we struggle to agree on fact itself as misinformation and disinformation have become deeply rooted within various political ideologies. At no time in history has disinformation been so easily spread and so readily consumed. The social media bubbles we have encased ourselves in are fertile breeding grounds for the spread of ‘fake news’ that affirms and radicalizes our existing ideologies. Now that a majority of Americans consume news from social media, the audience for fake news has never been so large and its implications never so serious. The extent of social media news consumption is so vast that manipulation of that news has real-world effects and may even play a role in altering the outcome of democratic elections.
What is to be done about this clear and ever-growing threat to the sanctity of our democratic process? Many argue social media corporations must bear the burden of erasing these fake accounts from their platforms and enforcing stricter rules against the spread of false information. Twitter is reportedly erasing ten accounts a second but is still plagued with thousands upon thousands of active bot accounts. Warning labels and fact-checking icons have found their way onto many platforms, while updated terms of service redefine what information is permissible. However, it cannot all be policed. Social media is, after all, a medium for debate and the spread of ideas. Too much restriction may do as much harm as it does good. There is one thing that every social media user can do to help solve this epidemic of fake accounts – educate themselves on the signs of malicious behavior on social media. There are many resources online that are a perfect first step in this education process, one of which can be found here. Once you have taken these vital first steps, test your knowledge with the Spot the Troll quiz and begin using these tools to identify and avoid fake accounts online. Social media is arguably the most important medium for the exchange of social and political discourse and ideas, so being able to separate the real users from the fake ones will serve every user well as they navigate the space and contribute to debate and discussion. We can all play a role in the neutralization of bot accounts and their incursions into our most closely held beliefs.
I really enjoyed your post, and this use of social media has always intrigued me. We tend to use social media as a tool of our own, rarely acknowledging that it is not neutral in this way. But the closer you examine it, the more obvious it becomes that built into the design of our apps is technology meant to keep you scrolling, addicted, and engaged – and it just so happens that, in the political arena, the content that best drives that engagement is usually inflammatory, divisive, and not always accurate.
This is one use of technology and social media, so my question would be: who is using it? You use the word “targeting” a lot, and this is so true of bots. I’d be curious to know more about who does the targeting; is it companies, governments, individuals? Yes, social media companies can and should be held accountable for restricting bots’ ability to shape the information environment, but who do we hold accountable for these willful acts of misinformation?
Hi Tanner! You’ve written such an intriguing post and your use of rhetoric and appeal is truly outstanding. The issue of social media has become one of the most prevalent matters, especially within America’s democracy. What was created as a platform to connect and share has shifted into a highly politicized entity manipulating democratic institutions without the majority of citizens even being aware.
While I feel as though combating misinformation and disinformation must be a joint effort between consumers and corporations, there is a deeply interconnected relationship between the parties that makes it hard to discern who is truly responsible for the inflammatory dissemination and projection of radical thoughts. Is it the consumer’s interaction with said tweets? Or is it the corporation’s allowance of such tweets onto the social media site in the first place?
Not to mention, I think social media has outsmarted its population, directly engaging in manipulative tactics to reinforce instability and polarization amongst the masses. A part of me cringed at the suggestion of the “Spot the Troll” quiz because, although I acknowledge the intention, that is also a way for social media entities to gain more information on citizens and, in turn, fine-tune their manipulation to become even more subtle. Almost everyone’s personal information, even mine and yours, has been compromised at the will of greedy hegemonic giants. It seems as though we’ve fallen down social media’s rabbit hole, and it continues to keep us down. Considering that social media shapes every little thing down to our next Google search, how do we combat aggressive social media tendencies without even engaging with the platforms in the first place? Is that even possible considering the dependency on social media in this day and age?
My apologies Amanda, I meant to post as a comment to Tanner lol. Great comment though!
(Sorry for messing up your comment section lol)
I thoroughly enjoyed your insight on this form of social media usage. As someone who uses various forms of social media, such as TikTok and Twitter, to obtain a majority of my political news, it has become quite difficult to feel as informed as I want to when there exist so many fake accounts. With these platforms all aggressively designed to keep you scrolling and engaged every second you are on the app, it is incredibly hard not to absorb all of this information as fact when you lack other points of reference. As you imply throughout your post, this downside of social media is only heightened in the realm of political content. So, of course, it negatively affects politics outside of our phone screens as we begin to form beliefs about government based on quite possibly entirely wrong information.
I also want to touch on social media’s contribution to our country’s ever-growing problem of political polarization. I recently read a study that concluded that, to my surprise and many others’, increased exposure to opposing views actually worsens the situation — much like constant exposure to views that align with your own. Therefore, even if there were a practical way to combat political bots, I am unsure what lasting effect that could have.
In the United States, the last thirty years have seen a growing rise in misinformation. This is a great post highlighting this novel phenomenon in the political realm. What’s also great about the piece is that you show that Americans really do know there is a ton of misinformation out there, yet it still occurs. Nearly 95% of Americans find misinformation to be a problem, with nearly 81% saying it’s a major problem, in a 2021 Pearson Institute survey [1]. In another survey, the Brookings Institution found that nearly 57% of Americans believe they have seen some form of “fake news” [2]. Despite this, only 23% of Americans have admitted to sharing fabricated stories, whether intentionally or not [3]. In these contrasting surveys, a cognitive dissonance can be found concerning misinformation in the United States. While most Americans believe that misinformation is rampant, they accept no responsibility for spreading “fake news.” What is more, most Americans believe that misinformation is spread through social media, as you’ve pointed out. This isn’t even a partisan revelation, with 73% of Democrats and 79% of Republicans believing that social media companies bear a great deal of responsibility for its proliferation [4].
So where is the disconnect? If Americans know there is fake news on social media, then how are we so susceptible to it? I think a part of the answer is in your conclusion: social media information bubbles reaffirm content that carries a semblance of truth. If I see multiple posts on Facebook, and then Twitter, and then TikTok that are all telling me the same things, I might be more likely to believe them. However, I think there is a larger “Venn diagram” at play here. While the social media algorithms are certainly pushing people into these bubbles, I believe we should broaden the scope to include partisan news as part of these reaffirming misinformation bubbles.
For the most part, even in the digital age, where there have been major disruptions to traditional news media, Americans still prefer to get their information from news sources [5]. While the format preference has changed from television to digital devices, news websites are still the top source for obtaining information, with television coming in a close second [6]. Social media as a news source doesn’t even account for more than 15% of utilized content [7]. Considering these facts, how misinformation prevails in the United States could be part of a greater media misinformation ecosystem. I am more likely to believe something on social media because Fox News or CNN has leaned into or confirmed it for me. Because news media is at the forefront of disseminating information to Americans, misinformation should also be evaluated through how it exists, how it is dispelled, and how it is spread within the United States’ news media environment. Maybe misinformation forms online via some social media platform, or starts on a less reputable news website or somewhere else, and then spreads from user to user via algorithms that create internet echo chambers. Then people who read it on social media might feel something is true because they heard something similar on Fox News, their cousin shared a similar story on Facebook, and now a stranger on Twitter is reverberating it.
Overall, you are spot on with the assessment, it was a lovely read and I enjoyed responding to it.
[1] “The American Public Views the Spread of Misinformation as a Major ….” 8 Oct. 2021, https://apnorc.org/projects/the-american-public-views-the-spread-of-misinformation-as-a-major-problem/. Accessed 30 Apr. 2022.
[2] “Brookings survey finds 57 percent say they have seen fake news ….” 23 Oct. 2018, https://www.brookings.edu/blog/techtank/2018/10/23/brookings-survey-finds-57-percent-say-they-have-seen-fake-news-during-2018-elections-and-19-percent-believe-it-has-influenced-their-vote/. Accessed 30 Apr. 2022.
[3] “Many Americans Believe Fake News Is Sowing Confusion.” 15 Dec. 2016, https://www.pewresearch.org/journalism/2016/12/15/many-americans-believe-fake-news-is-sowing-confusion/. Accessed 30 Apr. 2022.
[4] “Americans agree misinformation is a problem, poll shows | AP News.” 8 Oct. 2021, https://apnews.com/article/coronavirus-pandemic-technology-business-health-misinformation-fbe9d09024d7b92e1600e411d5f931dd. Accessed 30 Apr. 2022.
[5] “More than eight-in-ten Americans get news from digital devices.” 12 Jan. 2021, https://www.pewresearch.org/fact-tank/2021/01/12/more-than-eight-in-ten-americans-get-news-from-digital-devices/. Accessed 30 Apr. 2022.
[6] Ibid.
[7] Ibid.
I really liked the topic you chose for your blog post! I am currently in a political propaganda class, and you did a great job of connecting the manipulative nature of propaganda and its toxic effects that eat away at our democratic institutions. Social media offers unprecedented access to information that can spread like wildfire. It is important to note the sleeper effect when talking about how bots can so easily influence public opinion and attitude changes. The sleeper effect holds that when someone absorbs propaganda and later finds that the source is unreliable, the receiver still retains the propaganda’s message (source: notes from propaganda class). This applies to bots because someone can see a post from a bot and not immediately realize it is fabricated outrage. Even after they look into the credibility of the source material and deem it unreliable, the message still makes an imprint on their mind. Bots can even undermine the democratic process by creating “astroturf” campaigns. These are political or social movements that look like grassroots movements but, being inorganic, are fabricated, often through the use of social media bots. They can have massive effects on public opinion and clamp down on people’s ability to think freely. The dangers that bots and fabricated public opinion pose to our democratic system cannot be overstated. A well-informed electorate is key to a democracy’s longevity, and your blog perfectly outlines how that is currently under attack.
Hey Tanner!
I totally relate to this article. All too often I see people I know and love posting news that just isn’t true, and I correct them because they don’t know any better. However, I have also been on their side of it and been the one who doesn’t know any better. Bots are impossible to avoid, and some are relatively harmless while others are completely malicious. Ads are targeted scarily well, and you’re right – this in itself isn’t inherently concerning, but using this technology for other purposes definitely is.
This reminds me a lot of the documentary we watched about Cambridge Analytica and how they used their illegally obtained data points to sway elections in favor of right-wing candidates. As we speak, Roe v. Wade is being overturned by justices who were appointed by Donald Trump. Since the 2016 election was so incredibly close and came down to the wire in terms of votes, the efforts used to sway voters to the right may have had such a great impact that they are still shaping the course of our nation today. If Donald Trump had not been elected, he would not have served in office for four years and would not have been able to appoint the Supreme Court justices who are making all these decisions that are completely backwards for our democracy. Maybe I’m just looking for someone to blame here, but maybe it all came down to these bots.
The internet is so vast that monitoring every corner of it is difficult, so tackling the dissemination of fake information and fake accounts online seems nearly impossible. However, given the amount of harm they are causing, the social media platforms in question may not be able to entirely shoulder the burden of policing these accounts. Something needs to be done.
Hi Tanner! I really enjoyed this piece, and it slightly intersects with a topic with which I grappled in my first blog post related to social media. I have one big thought about your post. I know that fake news, misinformation and disinformation have become hot topics within the United States since the 2016 elections, but I do wonder about whether or not these issues on social media can really be identified as causes of democratic erosion. Though Russia’s use of bots (as well as real humans) to spread disinformation and misinformation during the 2016 election and into the present has undeniably contributed to the chaotic, often vitriolic political discourse in the United States, I am personally worried less about Russian infiltration and more about Americans’ vulnerability to it.
Why is it that so many Americans accept, seek out, or contribute to divisive political content? Consider YouTube’s “rabbit hole” problem as a parallel example. When a young, perhaps listless or disenfranchised man in the United States spirals into the YouTube rabbit hole, absorbing increasingly extreme right-wing content with each click, what is it that makes him susceptible to being influenced by that content? I do not have direct research to back this up, but I could see Americans’ engagement with inflammatory political content, whether it is made by Russian bots or produced by fellow citizens, as evidence of the persistence of underlying animosities and divides within American society. There is so much discourse around Russia’s strategic attempts to undermine American political discourse, but we forget that this strategy was selected for a reason: it targets one of our most glaring vulnerabilities. These bots are designed to capitalize upon our preexisting culture of blind partisanship and largely unreconciled resentment, rooted in our history of racism, sexism, and settler colonialism.