Artificial Intelligence
AI in the super election year - revolution or familiar threat in a new guise?
Germany is facing elections. The first signs of an election campaign conducted using new methods are already visible. Just recently, an AI-generated video of Friedrich Merz, shared by SPD member of parliament Bengt Bergt, hit the headlines. This means that the debate on the use of artificial intelligence, which has been going on internationally since the beginning of the year, has also gained momentum in Germany.
The German election campaign can therefore be seen as part of a global trend. This year, elections were held in 70 countries, accompanied by concerns about how AI affects the integrity of democratic processes. Despite numerous discussions, reliable data has so far been lacking. To fill this gap, the Friedrich Naumann Foundation for Freedom teamed up with the German Council on Foreign Relations (DGAP) in a joint project to investigate the influence of AI on elections. The Foundation took a closer look at India, Mexico and South Africa. The DGAP examined the elections in France and the USA as well as the regional elections in Thuringia, Saxony and Brandenburg.
No revolution but a new head of the Hydra
Although AI did not directly influence the outcome of the elections examined, the three countries studied by the Foundation show clear trends and risks. First, existing risks can be amplified. AI-generated material makes content harder to verify: false information spreads more quickly, while voters struggle to distinguish facts from fakes. This creates a confusing information space and threatens trust in democratic processes. India offered numerous examples. A deepfake of Bollywood star Ranveer Singh, in which he appeared to criticize Prime Minister Modi, went viral, distorting the political positioning of celebrities, which also played a major role in the US elections. A news anchor was also targeted: an AI-generated video with a cloned voice of Sudhir Chaudhary predicted victory for Mahabal Mishra, the Aam Aadmi Party (AAP) candidate from West Delhi, with fake exit-poll graphics shown in the background, an attack on confidence in the integrity of the election. Neither video was labeled. AI-generated deepfakes and disinformation that are not recognizable as such impair the electorate's ability to make informed decisions.
AI-powered disinformation also deepens polarization along existing fault lines. This is not a new phenomenon: while AI can produce more content faster, the way such content spreads via social media has not fundamentally changed. In societies where tensions are already high, however, disinformation that is even harder to detect can have devastating effects. This was evident in the USA during Hurricanes Helene and Milton. Some of the disinformation targeted the US Federal Emergency Management Agency (FEMA): Elon Musk, owner of X, claimed that FEMA was blocking aid deliveries to the affected areas, and rumors spread that FEMA would confiscate land from survivors who requested assistance. This misinformation ran along existing political fault lines (Republicans vs. Democrats) and eroded trust in government agencies to the point that delivering vital disaster relief sometimes became difficult.
Liar's Dividend - AI as an excuse for missteps
The increasing spread of deepfakes makes it easier to dismiss genuine content as fake. Our authors observed one example in Mexico: an audio recording circulating on social media purported to capture Mayor Paola Angón offering money in exchange for support for her re-election bid. She denied the allegations and claimed the recording was a deepfake; deepfake detection tools, however, rated it as 95% likely authentic. This makes accountability difficult and undermines transparency in political processes, precisely because people tend to believe what already matches their opinions. Politicians can thus escape scandals and retain their voters more easily.
Targeted manipulation through microtargeting
In countries such as India, it became clear how easily voter data can be used for personalized disinformation. India's ruling Bharatiya Janata Party (BJP) used the Saral app extensively to extend and consolidate its reach. With over 2.9 million downloads on Google Play, the app was described by the head of the party's IT and social media department as an "election-deciding machine". Originally designed to digitize data and communication with party members, it expanded its focus to collecting voter data at the booth level, the smallest voting unit of 700 to 800 people.
To encourage registration, party members organized door-to-door visits and local camps, assisted with app registration, and promoted government programs at the same time. As an incentive, the app offered party members the chance to meet Modi in person if they met their registration targets. Voters could personalize their profiles with Modi quotes and pictures, and the app informed them about party events and opportunities to get involved. Once a voter was registered, the app collected extensive personal data, including mobile number, address, age, gender, religion, caste, constituency, voter number, and professional and educational details. Since caste plays a central role in Indian society, the BJP used this data to develop targeted strategies for specific caste groups. It also categorized voters by caste, religion and social status, drawing among other things on electricity bills as an indicator of socio-economic standing.
Deepfakes attack women in particular
According to the State of Deepfakes Report 2023, around 98% of deepfakes on the internet are non-consensual intimate fakes, and over 99% of these feature women. This promotes gender-based violence and can push women out of the public sphere, with long-term consequences for democratic participation. A prominent victim this year was Taylor Swift: in January 2024, X was flooded with manipulated and AI-generated intimate and explicit images of her. Although this violated the platform's guidelines, X reacted very late, in some cases deleting content only after 17 hours. The authors from Mexico documented the dissemination of deepfake nudes of two female candidates. An AI-manipulated image of Senator Andrea Chávez, showing her face on the half-naked body of another person, was distributed even after the election was over. This underlines the ongoing misuse of AI for gender-specific political attacks.
What does this mean for Germany and the Bundestag elections?
Germany is not immune to these developments either. As the policy paper by the Friedrich Naumann Foundation for Freedom shows, AI is not a completely new threat, but it does exacerbate existing challenges such as disinformation and social polarization. Trust in the media and a high level of media literacy among the population are therefore essential to counteract these negative influences. Building both, however, remains a long-term task for politics and civil society in Germany.
We must expect disinformation campaigns, especially from Russia, to make increasing use of AI in order to divide society more effectively. This makes it all the more important that democratic forces do not misuse generative AI themselves. Even if labeled AI content is legitimate in election campaigns, we must ask whether it nonetheless contributes to flooding the information space and stirring up uncertainty in the long term. After all, if voters have to carefully check whether an image, video or audio recording is genuine even when it comes from trustworthy politicians, trust in our institutions weakens. What is more, such content can be reproduced without the AI label, making the information space less transparent. Last but not least, this also makes Germany more susceptible to foreign disinformation campaigns.
Protecting democracy in the digital age
The policy papers make it clear that AI is a double-edged sword. On the one hand, it offers opportunities for innovation and inclusion, especially for democratic processes. But it also brings with it considerable risks. These are not new, but AI is getting better. So we need to get better too. In addition to increased media literacy, we need clear ethical standards and a political framework to prevent the misuse of AI. This is the only way to ensure that technological innovation strengthens democracy instead of jeopardizing it.