AI & Elections Across the World, 2024: Highlighting Indian Experiences


A close examination of the reported incidents shows that a significant volume of deepfake content has been produced in this election year. AI has been put to use this election season for a variety of purposes, including campaigning, endorsements, and the resurrection of historical figures to increase turnout and vote totals. With over 70 countries going to the polls this year, it is intriguing to observe the emergence of a new technology more than ten years after the rise of social media.

AI platforms and tools have been widely utilised to produce fake information that has deceived consumers. Various actors have dragged well-known leaders into such fabricated content, and political parties and politicians have themselves utilised the technology to produce material depicting these juxtapositions. Deepfake content targeting female candidates' bodies has damaged their reputations, which speaks volumes about the "end-use" of the technology.



It is clear that these forms of content have changed how candidates reach out to voters and have revolutionised campaigning by making it quicker and more efficient, even though it is difficult to quantify their exact impact on individual or party campaigns or on overall electoral outcomes. At the same time, they have provided ample space for the production and dissemination of false information, particularly deepfakes. This year's elections have generated a great deal of anxiety about AI taking over, a concern that has dominated conversations among psephologists, academics, researchers, and administrators.


Objectives

1. To collate and analyse cases across the world where AI has been used in elections and campaigning.


2. To assess the impact of AI-generated content (deepfakes and synthetic media) on elections and campaigns.


3. To explore the ways to counter AI-generated risks in elections and campaigns.


4. To understand the impact of AI-generated content on various stakeholders (political candidates, voters, electoral bodies, media and others).


Governments across the world are deliberating on possible ways to address the rapid dissemination of AI-generated content on social media platforms. Despite the various measures these governments have taken, progress has been slow and often ineffective. Even social media platforms, despite releasing their ‘action-packed manifestos’, are unable to take charge when the information keeps spreading online at lightning speed.


A robust mechanism remains necessary to regulate the use of this technology at every level: development, deployment, and use by creators to generate AI-based content. End users of social media platforms who share content mindlessly, without verifying its authenticity, also bear responsibility for curtailing the spread of deceptive AI-generated content aimed at misleading voters and users.
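As a purely illustrative aside on that last point, the sketch below shows one minimal check an end user could run before forwarding an image: inspecting the file's embedded EXIF metadata for provenance hints. It is a rough sketch only, assuming Python with the Pillow library; the file name is hypothetical, and missing or present metadata is a weak signal at best, not proof that an image is authentic or fabricated.

from PIL import Image
from PIL.ExifTags import TAGS

def inspect_metadata(path: str) -> None:
    # Open the image and report its basic properties.
    with Image.open(path) as img:
        print(f"Format: {img.format}, size: {img.size}")
        exif = img.getexif()
        if not exif:
            # Many AI-generated images carry no metadata, but genuine images
            # may also have it stripped; treat this result as inconclusive.
            print("No EXIF metadata found; provenance unverified.")
            return
        # Print each readable tag (camera model, software, timestamps, etc.).
        for tag_id, value in exif.items():
            tag_name = TAGS.get(tag_id, tag_id)
            print(f"{tag_name}: {value}")

if __name__ == "__main__":
    # Hypothetical file name, used only for illustration.
    inspect_metadata("forwarded_image.jpg")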
