Deepfakes and the Election Cycle 

Former President Donald Trump shared an AI-generated image of Taylor Swift endorsing his candidacy. What role is AI playing in the upcoming elections? 

Former President Donald Trump recently shared an image on his Truth Social platform that falsely suggested pop superstar Taylor Swift had endorsed him for president. The image, likely generated with artificial intelligence, shows Swift dressed as Uncle Sam with the caption, "Taylor Wants You To Vote For Donald Trump." The misleading post, which Trump amplified to his 7.6 million followers, highlights growing concern about AI and deepfake technology in the current election cycle.

Deepfakes, digitally altered images and videos that create the impression someone said or did something they never did, are increasingly being used to spread misinformation. Experts worry that as the 2024 election between Trump and Democratic nominee Kamala Harris heats up, the use of AI-generated content could escalate.

While social media platforms such as Facebook and X have policies against altered and manipulated media, they often struggle to enforce them as AI-generated content becomes more pervasive. Instead of removing such posts, the platforms usually fact-check them or attach labels and context to misleading content. Exceptions for satirical content make it even easier for fake images to circulate online.

The Swift image was reportedly created from a mix of real and manipulated images, a particularly insidious tactic for spreading falsehoods. Combined with an already polarized political environment, this blending of real and fake material could make misinformation even more damaging.

AI tools such as ChatGPT, developed by OpenAI, have drawn widespread attention for benefits like information-gathering and time management. But generative AI can also produce images that serve as powerful propaganda tools.

Social media companies have faced increasing scrutiny and have developed guidelines to moderate content, including bans on hate speech and tools for users to report inappropriate posts. Lawmakers are also working on legislation that would force platforms to remove deceptive deepfakes.

Social media companies, however, are under pressure. With ad revenue tied to user engagement, critics question how committed these platforms really are to tackling misleading content, since controversial posts often attract high engagement; the content that drives the most interaction is typically the most inflammatory.

The image Trump shared is just one example of how AI-generated content is becoming a tool for misinformation this election cycle. While some platforms, such as Meta and TikTok, have stepped up efforts to clearly label AI-generated content, others, including X and Truth Social, have been less clear about their strategies.

With the stakes high and the potential for AI-generated disinformation to influence voters, the challenge for social media companies, lawmakers, and the public is how to navigate this new era of political propaganda. 
