It’s common knowledge that countries like China, North Korea, and Russia restrict internet access to keep outside ideas out and to keep rebellion in check. They will use any excuse to monitor speech, restrict it, shut it down, and build massive firewalls that control information and people. Now A.I. can be added to the list of tools used to put and keep authoritarians in power, and China is leading that charge.
Artificial Intelligence (A.I.) is a new and developing tool slowly creeping into everyday life. We often see it criticized in headlines about the arts, careers, and homework. One of its most controversial uses is facial recognition. China is known for surveilling individuals as part of its system of control, and A.I. is enhancing that system. A.I. can identify and track individuals as they move about, meaning the government could know exactly where people are at all times through face recognition technology. China has a surveillance network unlike any other, with more than half a billion surveillance cameras (Harvard Magazine 2022). That technology is now being expanded into commercial use through government contracts, which only increases the concern. Citizens become too afraid of “acting suspicious” to voice their opinions, let alone stand up to the government. The likelihood of uprisings, or even open disapproval of the government, is slim to none, especially considering that authorities want to use this technology to predict individuals’ movements in order to prevent crimes. In practice that boils down to profiling individuals, or silencing them by convicting them of crimes they never committed. There will be no privacy or freedom as this technology continues to advance.
China isn’t stopping within its borders; it wants to push its beliefs about privacy, internet access, and security onto any country or corporation that will listen. According to Freedom House, China has hosted informational sessions on managing digital information. It has also made quite clear that it expects international companies operating in China to abide by Chinese standards of accessibility and internet security even when working outside China. A question worth asking is: if you were a citizen in China, would you even know to rebel? Would you know there are other people questioning the government, other places where you are not watched all the time? Would you know there are leaders who align more with your values than with what you have been told is the best way? These are the freedoms taken away from individuals by media control and monitoring, tools used to keep current leaders in power. China’s demand for control outside its borders strengthens its authoritarian hold on its own citizens and chips away at democracy around the world.
Stephen Hall, in his 2023 book “The Authoritarian International,” explains well how authoritarian tactics are shared and spread. Regions tend to follow similar government structures, and when one country shifts, its neighbors start to follow. Because this poses a huge threat to authoritarian regimes, each regime’s grip on power matters to the authoritarian leaders around it. Those leaders also share their tactics with one another. When one authoritarian government recognizes the internet as a liability and regulates it heavily, others do the same. When they see the power of A.I. and its ability to quell and discourage rebellion, others will follow. This is exactly what we are seeing with China. It is creating a blueprint for other authoritarian leaders to follow.
Hall, Stephen. The Authoritarian International: Tracing How Authoritarian Regimes Learn in the Post-Soviet Space. Cambridge University Press, 2023.
“The Rise of Digital Authoritarianism: Fake News, Data Collection and the Challenge to Democracy.” Freedom House, freedomhouse.org/article/rise-digital-authoritarianism-fake-news-data-collection-and-challenge-democracy. Accessed 26 Oct. 2023.
“Authoritarian Regimes’ AI Innovation Advantage.” Harvard Magazine, 18 Apr. 2022, www.harvardmagazine.com/2022/04/right-now-authoritarian-regimes-artificial-intelligence.
Artificial Intelligence and Authoritarian Governments
Written by: Miesha Acevedo
Comments
Hi Ceirra, I found this article fascinating and terrifying at the same time. I recently explored some of China’s tactics in eroding democracy and erasing free speech, and this perspective on wiping out free speech and stifling citizens’ voices further proves that AI could be the beginning of a sort of dystopian future. Another part of your article that I found striking was that there are more than half a billion security cameras in China.
Unfortunately, as I have begun to compare prime examples of democratic erosion to the current state of China, it is not difficult to predict how bad things might end up there in terms of human rights and freedom of expression.
This blog post sheds light on the integration of artificial intelligence (AI) in authoritarian governments, notably exemplified by China. The piece emphasizes the extensive surveillance facilitated by AI, particularly through facial recognition technology, creating a pervasive atmosphere of fear and self-censorship among citizens. The escalation of AI for predictive purposes, framed as crime prevention, raises alarming questions about privacy and individual freedoms.
Acevedo’s portrayal of AI as the silent enforcer in authoritarian regimes resonates strongly and raises a profound concern about the long-term societal impact. The idea that citizens in these oppressive settings may find themselves constantly monitored not by human watchdogs but by technology itself—lampposts, computers, and other unassuming elements—adds a layer of complexity. The insidious nature of AI-driven surveillance, as discussed, goes beyond the fear of a neighbor reporting dissent; it extends to an omnipresent, automated system that could manipulate or misinterpret activities, leading to arbitrary convictions and stifling of voices. Moreover, the continuous and pervasive surveillance by AI, especially in the context of China’s extensive camera network, not only instills fear in citizens but also molds behavior. The insidious nature of this surveillance means that future generations, growing up under the watchful eyes of AI, might internalize self-censorship and conformity as the norm. Adding this consideration to the omnipresence of AI monitoring creates a scenario where individuals become desensitized to constant scrutiny, leading them to self-correct their behavior to align with the authoritarian mold, even when not directly under surveillance.
This desensitization of individuality and non-conforming behavior also poses a significant challenge to the concept of rebellion and dissent in societies. Suppose the authoritarian regime, through its insidious use of AI monitoring, succeeds in suppressing any form of opposition or deviation. In that case, the youth and subsequent generations may lack the framework to even conceive of rebellion, as noted by Acevedo. The normalization of surveillance could result in a society where citizens, conditioned from a young age, no longer question or challenge authority. The absence of a culture of rebellion, coupled with the fear instilled by AI surveillance, makes it difficult for individuals to learn about alternative perspectives or consider the possibility of dissent. It ruins any concept of democratic freedom and liberty. In essence, the expanding role of AI in authoritarian tactics, as exemplified by China, poses a formidable threat to individual liberties, creating a society perpetually under scrutiny with dire implications for dissent and democratic ideals. The triumph of AI monitoring in this context not only stifles current opposition but also hinders the organic development of a generation capable of questioning and challenging oppressive regimes. Overall, this was a fascinating read, and I really enjoyed how pertinent it is to start questioning AI use and its ramifications as a tool in the modern authoritarian playbook.
I got a bit carried away with my comment above, so here is a more succinct summary of what I meant.
Acevedo’s portrayal of AI as a silent enforcer in authoritarian regimes raises concerns about a society where continuous surveillance leads to desensitization and self-correction. The normalization of AI monitoring may mold future generations to automatically conform to the regime’s stringent standards and control, making it difficult for rebellion and dissenting ideas and opinions to come to fruition. If AI-driven surveillance continues to triumph, the lack of dissent and the absence of a culture of rebellion could produce a generation incapable of questioning or challenging authoritarian norms, which would be disastrous for democracies.
Hi Miesha,
I really enjoyed your blog. It talks about how authoritarian governments, like China’s, are using Artificial Intelligence (AI) more and more to keep their people and information under control. China’s widespread surveillance, which includes technology that can recognize faces, raises concerns about freedom and privacy. AI could silence criticism by keeping an eye on people and deterring them from saying what they think, which runs against democratic values.
China’s impact goes beyond its borders. It pressures businesses around the world to follow its rules, which is bad for democracy globally. It’s interesting that you ask whether people would even be aware of other points of view or of the chance to rebel; I think it is a good way to engage the audience. Stephen Hall’s idea that authoritarian strategies work like a blueprint shows how these governments are linked and how dangerous they are to democracy. Your blog makes it clear how important it is to understand how AI affects democracy and freedom around the world. We need to be aware of, and deal with, the possible effects of AI in authoritarian settings.
Hi Miesha,
Great post! I think identifying future technologies and policies that might affect democratic erosion is just as important as recognizing the threats we face today. AI is certainly one of those tools that poses a threat to self-expression. You highlighted the use of AI that might suppress media and opposing points of view because of the potential for immediate and indiscriminate censorship and punishment. The dissemination of information freely in societies is key to a functioning democracy as it offers alternatives to individuals. However, in a society like China, AI serves to further stifle these alternatives. It may also act as a more serious buffer to external data that is already highly limited in the population. I wonder if stealth authoritarianism would benefit from this technology. The ability to erode civil society groups that might try to consolidate differing preferences by having AI identify meeting places and outreach programs is a scary prospect.
I found this article very interesting; it describes a threat of A.I. to democracies that I had not known of before. I think the combination of private technological progress with authoritarian movements and states that you explore here is one of the scariest aspects of artificial intelligence, especially in an era of democratic erosion. I wonder how accessible these forms of surveillance are going to become, and how invested China will be in spreading the accessibility and implementation of A.I.-assisted surveillance. I also wonder whether, in the coming years, there will be any new international standards or research on its effects in response to this threat to democracy.
The main threat of A.I. to democracy that I previously knew of was its ability to create falsified letters and advocacy from foreign stakeholders to representatives in order to fabricate public opinion. I think that this concept, combined with surveillance, threatens citizens’ ability to dissent and to have their voices heard without fear of being silenced or punished.
Amazing post! I learned a lot!
I see how this article speculates that A.I. may be helping authoritarian regimes gain more control over their people. I would suggest, though, that A.I. also offers the people a strong advantage against these governments. It would be easier to claim a citizen was being catfished than to prove they are actually guilty of a “crime.” In the kinds of governments where the concept of crime is much vaguer, I think the people could benefit overall. That assumes, of course, that the government would be willing to accept it. Otherwise, people could pin crimes on other individuals through catfishing, and the government would have no alternative but to prosecute the accused. Overall, I thought the execution of this article was very interesting and the headline captivating. A.I. is an unprecedented development that governments all around the world are having to navigate. I think people saw it coming, but not to this extent and not so early. It is everywhere; even big companies like Disney and Amazon are putting it to work. In the wrong hands there is so much potential for harm; however, the benefits could be exponential. Imagine a world where no one really had to work or worry about where their next meal was coming from. What if humans created a totally self-sufficient system of A.I.? It would be the end of poverty and hunger as we know them today. In terms of authoritarian regimes and A.I., I think those governments are going to tighten their grip on it because it is honestly something out of their control. If it benefits the people, though, I think it is a good thing. In any scenario, A.I. is most definitely here to stay. There is no doubt about it, and the best thing we can do now is figure out where to draw the line.