Artificial Intelligence and the Future of Political Propaganda

Authors

  • Mohammed Kabeer Garba, PhD Scholar, ECOWAS Parliament, Abuja, Nigeria

DOI:

https://doi.org/10.55559/fgr.v1i4.27

Keywords:

Artificial Intelligence, Political Propaganda, Disinformation, Democracy, Deepfakes, Algorithmic Bias

Abstract

This paper examines how Artificial Intelligence (AI) is reshaping political propaganda by analyzing how emerging technologies are transforming political communication, misinformation, and democratic processes. It discusses the use of AI technologies, including deepfakes, chatbots, and algorithmic targeting, to disseminate persuasive and often misleading political content. Based on a qualitative research design and content analysis of case studies such as the 2016 United States elections and China's digital propaganda strategies, the study identifies key trends in AI-enabled manipulation. The findings suggest that AI makes propaganda hyper-personalized, scalable, and covert, undermining established norms of transparency and accountability in democratic discourse. The paper argues that while AI can amplify misinformation, it can also support the detection and countering of propaganda. It concludes with policy recommendations centred on regulation, ethical AI development, and digital literacy to protect democratic integrity in the era of algorithmic influence.
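The abstract notes that AI can also help identify propaganda. As a purely illustrative sketch, and not part of the paper's methodology, the snippet below shows how a basic machine-learning text classifier might flag propaganda-like posts; the example texts, labels, and library choices are assumptions made for demonstration only.

```python
# Illustrative sketch (hypothetical data): a minimal text classifier of the
# kind the abstract alludes to when noting that AI can also detect propaganda.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: 1 = propaganda-like, 0 = ordinary political speech.
texts = [
    "Share now!!! The election is rigged and they are hiding the proof",
    "Patriots, flood the comments before the traitors delete this",
    "The committee will debate the budget amendment on Tuesday",
    "Turnout in the northern districts rose by three percent this cycle",
]
labels = [1, 1, 0, 0]

# TF-IDF features feeding a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Score an unseen post: estimated probability that it is propaganda-like.
post = "They rigged it again, share before this gets taken down!!!"
print(model.predict_proba([post])[0][1])
```

A deployed system would rely on far richer signals than text alone, such as posting cadence, account networks, and media forensics for synthetic video, but the sketch illustrates the basic classification approach.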

Published on:

28-12-2025

How to Cite

Garba, M. K. (2025). Artificial Intelligence and the Future of Political Propaganda. Frontiers in Global Research, 1(4), 24-29. https://doi.org/10.55559/fgr.v1i4.27
