Political disinformation is one of the greatest threats facing the United States today. It not only leads some to question their faith in our hallowed institutions; it threatens the very fabric of our democracy. This threat has weakened our nation and divided our people in new ways, so it is crucial that we recognize and work to overcome it. Our Great American Experiment is in danger, and it is up to each of us to do our part to combat this threat. Although many factors help explain the rise in political disinformation since the 2016 presidential election cycle, the role that social media companies play in this issue is paramount in combatting it.
Social media companies such as Facebook, TikTok, Instagram, and Twitter (now known as "X") have each played a unique role in the unfolding crisis of political disinformation. These companies face the challenging task of keeping freedom of speech intact while combatting the problems that political disinformation creates on their platforms. This task requires dedicated resources as well as an educated workforce that can quickly develop solutions. So far, unfortunately, social media companies have found the task insurmountable. New and innovative solutions are being implemented, but more must be done to combat this dangerous threat. We must call upon these companies, as well as our government, to intervene and fight these disinformation trends.
Political disinformation can be broadcast on social media in various ways: through stories, posts, live streams, and even direct messages. With such a wide array of outreach opportunities, social media companies are struggling to monitor each of these sections of their platforms. The algorithms that social media companies use to boost engagement and increase the odds that users return to the platform are harmful. These algorithms often reinforce viewers' opinions by showing material related to what a user likes and spends the most time watching. The algorithm might work well for a user who often watches soccer videos, because as that user scrolls, the platform will likely show more soccer clips. The problem arises when the topic being presented is more controversial, such as political opinions or outright disinformation. One example is the 2020 election, when many individuals, including the former President, pushed a narrative denying the election's outcome. Users who engaged with election disinformation content were shown more of it, further reinforcing their false reality. This content did more than reinforce a falsified narrative; it fed the deadly January 6th insurrection, in which Ashli Babbitt was shot and several others later died by suicide or from other complications linked to the attack. These actions led Twitter, at the time, to ban Donald Trump's account over the violations he had committed. The insurrection should have prompted calls for social media companies to enact policies that safeguard against future violent events.
Social media companies must be held accountable for correcting the algorithms they use, speeding up the removal of posts that violate user guidelines, and banning accounts more quickly when appropriate.
Various individuals and groups seek to sow political disinformation in the minds of Americans each and every day. These groups include foreign actors; one example is Russia's influence on the 2016 presidential election. Russian bot farms created thousands of accounts throughout the 2016 presidential campaign cycle with the goal of disrupting and influencing the outcome of the election. These accounts often remain intact for months or years before being taken down. One way to combat such bot accounts is to enhance security features, such as requiring cell phone verification or other identifiable information. If these measures were implemented, such accounts could be banned and removed far more easily. Renée DiResta stated that, "The hard truth is that the problem of disinformation campaigns will never be fixed; it's a constantly evolving arms race. But it can — and must — be managed. This will require that social media platforms, independent researchers and the government work together as partners in the fight. We cannot rely on — nor should we place the full burden on — the social media platforms themselves." This furthers the idea that increased collaboration among these groups is needed to manage political disinformation. The solution to this issue is complex, but a step in the right direction is to strengthen the mitigation strategies already in place within these companies.
There are those who believe that combatting political disinformation means the end of freedom of speech on social media platforms. In reality, this could not be further from the truth. Calling upon social media companies to enact policies that create real change and mitigate the harmful impact of political disinformation on their platforms is both levelheaded and necessary. Those who view the call for new and enhanced policies as a step toward censorship are incorrect in thinking that the goal is to silence those voicing their political opinions. There is a difference between political opinion and the outright denial of political reality. Politicians such as former President Trump have undermined efforts to combat political disinformation because doing so does not serve their agenda. Political disinformation can lead to democratic erosion, which is especially notable today when analyzing Donald Trump: he has championed a disinformation campaign and denied reality regarding the 2020 election.
Hyperlinks to the material I used in my draft blog post:
- https://www.nytimes.com/2018/12/17/opinion/russia-report-disinformation.html
- https://www.theguardian.com/books/2019/jul/27/the-disinformation-age-a-revolution-in-propaganda
- https://www.factcheck.org/2021/11/how-many-died-as-a-result-of-capitol-riot/
- https://www.sipa.columbia.edu/news/study-confirms-influence-russian-internet-trolls-2016-election#:~:text=The%20Internet%20Research%20Agency%2C%20a,information%20during%20the%202016%20election.