In our last One Thing for the year, we review a recent report by the UK Centre for Emerging Technology and Security (CETaS) on the impact of AI-driven ‘hostile influence operations’ on elections (by Stockwell, Hughes, Swatton, Zhang, Hall and Kieran).

What happened in the US presidential election?

While there was widespread concern going into 2024 that AI would disrupt elections globally, earlier CETaS studies found “no evidence that AI-enabled disinformation had measurably altered an election result in jurisdictions ranging from the UK and the European Union to Taiwan and India”.

However, CETaS’s analysis of the recent US presidential election “identified far higher – albeit still relatively moderate – volumes of AI-enabled viral disinformation in the US campaign cycle than in elections earlier in the year”.

First, the most common type of AI-enabled misinformation was smear campaigns, in which a deepfake video, image or audio clip was used to implicate political candidates in controversial but fabricated activities and statements. In one video, Vice President Kamala Harris was depicted making derogatory comments about former President Donald Trump. In another, a Secret Service agent (cast as a ‘representative of the Deep State’) was depicted smirking during the assassination attempt on Trump.

Second, there were several high-profile voter-targeting efforts that used automated social media bots, most of which carried “the hallmarks of foreign hostile interference – including links to Russia and China”. The Russian-linked bots focused on misinformation about the Ukraine war, while the Chinese-linked bots amplified conspiracy theories around the Trump assassination attempts. Interestingly, the Chinese-linked campaigns also used polls on different campaign issues to harvest demographic information on American voters, which could be used for future, better-targeted misinformation campaigns.

Third, there were high-profile instances of real-world events being misattributed as fake AI-generated news, both reflecting and compounding levels of public mistrust. For example, there was the misleading claim that President Biden’s TV address announcing his withdrawal from the race was itself fake, supposedly evidenced by his unnatural skin colour.

Fourth, AI-generated content blurred the lines between spoofs or parodies and harmful or misleading content, most of which appears to have been generated domestically. While the originator of the AI-generated content may have labelled it as a parody, others reposted it without that label: for example, Elon Musk’s reposting of a fake video of Kamala Harris making discriminatory comments. Even where a deepfake video is labelled as a parody, it may still propagate misleading information about a political candidate, such as the image below circulated by Donald Trump:

Fifth, social media networks affiliated with Russia and Iran spread AI-based disinformation on campaign issues using fake US news sources. To masquerade as domestic US news outlets, these sources sometimes used an actor as the presenter and sometimes an AI-generated image and voice.

Lastly, there were fabricated celebrity endorsements of political candidates (for example, purported endorsements of Donald Trump by Taylor Swift and Martin Luther King), which appear to have been mainly created by domestic users.

What impact did AI have on the US presidential election?

While a recent survey showed that 48% of US respondents felt influenced by political deepfakes in deciding whom to vote for, data-tracking analysis commissioned by CETaS of three of the most high-profile AI-generated deepfakes (including the Harris parody and the smiling Secret Service agent) painted a more complex, nuanced picture.

First, the spread of the deepfakes was driven less by AI chatbots than by human influencers who strongly supported the candidates potentially benefitting from the deepfake:

  • The top ten influential users sharing the Harris parody video were all prominent US figures who displayed right-leaning political views, had shared misleading content previously and showed no telltale signs of being bots.

  • The Harris parody video reached its viral peak after Elon Musk reshared it, following his public endorsement of Trump.

Second, deepfakes often peaked virally after mainstream news media picked them up, potentially illustrating ‘the law of unintended consequences’. Of the top ten influential users amplifying the smiling Secret Service agent video, three were US news providers and one was a US fact-checking organisation. While the media clearly reported the videos as deepfakes, the coverage appears to have had the unintended consequence of breathing more life into their dissemination.

However, CETaS cautioned that going viral is not the same as having political impact: that is, changing the outcome of elections. CETaS expressed concern that deepfakes served to reinforce polarisation: “rather than swaying large numbers of undecided voters, such disinformation more likely consolidated pre-existing beliefs – including discriminatory views of women”.

Also, while this time around misinformation was AI-generated but human-disseminated, CETaS found some early signs of the much wider power AI may have in future elections:

  • In one campaign, an AI-enhanced software package was used to create multiple fake user profiles on X, which could then generate posts and even repost, like and comment on the posts of other bots in the network.

  • AI-generated misinformation was also directed at ‘down ballot’ candidates below Harris and Trump, right down to the local level. This reflects the low cost and minimal technical skill required to generate misinformation through AI.

What can be done?

CETaS made plenty of recommendations (spoiler alert: no quick fix here).

  • The current UK electoral law prohibition on ‘a false statement of fact’ about a candidate’s ‘personal character or conduct’ should be extended to cover disinformation/misinformation more broadly, including about a party’s policies.

  • Industry bodies should provide mainstream media with better guidance on when and how to report disinformation/misinformation to minimise the risks of inadvertently propagating it, such as by not providing a link to the harmful content.

  • Along the lines of Canada’s Critical Election Incident Public Protocol, a committee of senior officials should have the power and responsibility to notify the public of serious disinformation/misinformation. To minimise the risk that this process could itself interfere with an election, there should be a high threshold for issuing these public warnings.

  • While CETaS expressed caution about the free speech impacts of laws broadly prohibiting deepfakes, such as those in Singapore, South Korea and Brazil, it recommended that UK electoral laws should at least require political advertisements incorporating digitally altered images to embed provenance information detailing how the image was edited and by whom.

  • Along the lines of Finland’s approach, digital literacy and critical thinking programmes should be made mandatory in primary and secondary schools and widely available to adults on a voluntary basis.

  • As professional and commercial fact-checking teams cannot keep up, readily accessible online tools should be provided for community-based fact-checking: a recent study showed decentralised fact-checking reduced misinformation/disinformation by 62%.

  • To reverse the trend of social media platforms reducing third-party access to usage data, new legislation should mandate access for trusted researchers so they can study misinformation/disinformation trends and the effectiveness of social media platforms in addressing them.

  • While there needs to be more investment in detecting inauthentic content, such measures will never completely stem the tide, and the most effective measure may be to target the owners of bot accounts (rather than the accounts themselves) through de-monetisation strategies. Along the lines of the EU’s Code of Practice on Disinformation, this would involve defining a set of behaviours associated with disinformation operators who use bot accounts, together with self-regulatory and reporting measures to address those operators once identified.

  • To rebuild trust, ‘good’ content should carry digital provenance markings, such as watermarking. Along the lines of the rules to be issued by the US Office of Management and Budget, all government information should carry provenance markings.

Conclusion

Australia is currently struggling to limit the impact of ‘old technology’ like money on our politics. We head into an election in 2025 with few, if any, of the measures which CETaS identifies as needed to safeguard democracy against the tide of AI misinformation/disinformation.