Where does the buck stop with Generative AI-ing elections?

AI and Elections
Published: November 30, 2023

The world has quietly moved on from the protests by India's leading wrestlers, who accused MP Brij Bhushan, a member of the ruling party in India's parliament, of sexual assault. The use of manipulated images to discredit the wrestlers' allegations caught considerable media attention both within India and abroad. Unrestricted access to advanced generative tools makes it easier to mount such discrediting campaigns, a concern that is particularly relevant to India's electoral politics with the 2024 general election around the corner.

India has already seen multiple high-profile cases of manipulated deepfake content in the electoral sphere. Deepfake videos were used for political campaigning in the previous general election, held in 2019, and a manipulated audio clip caused quite an uproar in the state of Tamil Nadu. Electoral politics is becoming more closely tied to generative AI technology than the public may realise.

Globally, there has been little movement on ways to identify synthetically generated content or to assign liability for its consequences. The ease of access to generative AI tools through the entrepreneurial advances of OpenAI, Stability AI, and others accelerates the potential challenge to the Indian polity and democratic process, as it has to the Pope.

Where does India stand? 
Indian electoral campaigning already has its movers and shakers fighting on the WhatsApp battleground. AI-generated electoral content might not flood WhatsApp groups immediately, though: the capability of these tools in Indian languages is subpar at best. Even so, Indian jugaad, or creative improvisation, may outpace the imagination of a political elite that appears to be waiting for a catastrophe to strike.

Content generated cheaply through generative AI tools takes many shapes and forms. From smear campaigns against politicians in the fray to interference by external actors, upcoming elections will continue to see manipulated content flooding the virtual hallways of our social media platforms. Global research has shown that women and people of minority identities are often subjected to targeted online violence when they participate in the electoral process as candidates. Discrediting a woman as she makes space for herself has been a global pastime for too long to be ignored in the face of AI tools that make this process easier.

There are less than eight months to go before the 2024 General Elections in India. It is a pivotal election on all fronts. Domestic narratives on social, economic, and political realities seek a promised land. The outcome carries considerable geopolitical weight, from the regional balance of power to the economic consequences of who wields power in New Delhi. It also forms part of a string of significant elections worldwide that could severely strain the lowest common denominator of the global liberal order.


Critical decisions from multiple stakeholders 
Multiple vital stakeholders will have to take decisions that can shape democratic processes worldwide as AI-generated content takes up more space in our public lives. Domestically, India seems uninterested in joining the AI regulation race, which is reflected in the lack of attention to governing use-cases in critical areas such as elections. Specific deepfake regulation is not in the cards, according to the Minister of State for Electronics and Information Technology. The Information Technology Act, 2000, the umbrella legislation for all things digital, is set to be revamped, likely when Parliament convenes for the winter session. That leaves the future in limbo, as the proposed substitute, the Digital India Act, is not robust enough to deal with challenges such as the use of deepfaked content in elections.

Decisions might also have to be made in the corporate boardrooms of OpenAI, Google, Meta, Hugging Face, and others on how builders of foundation models want to approach the use and possible misuse of their products. These decisions could fall squarely under 'content moderation'.

Trickier content moderation challenges await social media platforms, which will soon have to settle on both technological measures and resolution practices: they will have to decide how they deal with synthetic electoral and campaign content. Experience shows that they have rarely sprung into responsible action until a major US election has been in the offing. Perhaps the Indian experience will be better this time, since the US presidential election closely follows the Indian general election.

Finally, the Election Commission of India, the independent body that oversees elections and enforces the model code of conduct, will have critical decisions to make. A recently passed bill that alters the requirements for appointments to the Election Commission of India could dilute the independence of this constitutional body. Threats to its form and function will only make its decisions harder to implement, and may leave some decisions untaken altogether.

The stakes are high, and so is the number of stakeholders. One can only hope that too many cooks do not spoil the broth. Creative solutions arising from dialogue among the stakeholders, and an effective plan to deal with the challenges, are the need of the hour.

Creative solutions must emerge, but it is essential not to fall for utopian, technosolutionist proposals. The challenge at hand - synthetic and manipulated content - can hamper democratic processes through fraud and deception. The answers we evolve should not lose sight of the socially intertwined nature of the challenge, or of the layers added to the puzzle as women and people from marginalised identities of religion, caste, and more try to enter the fray to bring positive disruption to the status quo.