In 2018, the world was shocked to learn that British political consulting firm Cambridge Analytica had harvested the personal data of at least 50 million Facebook users without their consent and used it to influence elections in the United States and abroad.
An undercover investigation by Channel 4 News captured footage of the firm’s then-CEO, Alexander Nix, suggesting the firm had no qualms about deliberately misleading the public to support its political clients, saying:
“It sounds a dreadful thing to say, but these are things that don’t necessarily need to be true. As long as they’re believed.”
The scandal was a wake-up call about the dangers of both social media and big data, as well as how fragile democracy can be in the face of the rapid technological change being experienced globally.
How does artificial intelligence (AI) fit into this picture? Could it also be used to influence elections and threaten the integrity of democracies worldwide?
According to Trish McCluskey, associate professor at Deakin University, and many others, the answer is an emphatic yes.
The Pentagon’s chief digital and AI officer Craig Martell warns that generative AI language models like #ChatGPT could become the “perfect tool” for #disinformation. They lack context and people take their words as fact. #AI #cybersecurity pic.twitter.com/pPCHY2zKJH
— Realtime Global Data Intelligence Platform (@KIDataApp) May 5, 2023
McCluskey told Cointelegraph that large language models such as OpenAI’s ChatGPT “can generate indistinguishable content from human-written text,” which can contribute to disinformation campaigns or the dissemination of fake news online.
Among other examples of how AI could threaten democracies, McCluskey highlighted its capacity to produce deepfakes, fabricated videos of public figures such as presidential candidates that can be used to manipulate public opinion.

While it is still generally easy to tell when a video is a deepfake, the technology is advancing rapidly and may soon become indistinguishable from reality.
For example, in a deepfake video of former FTX CEO Sam Bankman-Fried that linked to a phishing website, the lips were often out of sync with the words, leaving viewers with the sense that something was not quite right.
Over the weekend, a verified account posing as FTX founder SBF posted dozens of copies of this deepfake video offering FTX users “compensation for the loss” in a phishing scam designed to drain their crypto wallets pic.twitter.com/3KoAPRJsya
— Jason Koebler (@jason_koebler) November 21, 2022
Gary Marcus, an AI entrepreneur and co-author of the book Rebooting AI: Building Artificial Intelligence We Can Trust, agreed with McCluskey’s assessment, telling Cointelegraph that in the short term, the single most significant risk posed by AI is:
“The threat of massive, automated, plausible misinformation overwhelming democracy.”
A 2021 peer-reviewed paper by researchers Noémi Bontridder and Yves Poullet titled “The role of artificial intelligence in disinformation” also highlighted AI systems’ ability to contribute to disinformation and suggested they do so in two ways:
“First, they [AI] can be leveraged by malicious stakeholders in order to manipulate individuals in a particularly effective manner and at a huge scale. Secondly, they directly amplify the spread of such content.”
Additionally, today’s AI systems are only as good as the data fed into them, which can sometimes result in biased responses that influence users’ opinions.
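The data-bias problem can be illustrated with a deliberately skewed toy model. The corpus, labels and function below are entirely hypothetical, not drawn from any real AI system; the point is simply that a model trained only on one-sided examples reproduces that one-sidedness in its output.

```python
# Toy illustration (hypothetical data, not a real AI system): a naive
# word-frequency "opinion" model trained on a skewed corpus will
# reproduce the skew of its training data.
from collections import Counter

# Training corpus that mentions "policy A" only in positive contexts
# and "policy B" only in negative ones.
corpus = [
    ("policy A is great", "positive"),
    ("policy A works well", "positive"),
    ("policy B is a failure", "negative"),
    ("policy B caused problems", "negative"),
]

# Count how often each word co-occurs with each label.
counts = {"positive": Counter(), "negative": Counter()}
for text, label in corpus:
    counts[label].update(text.split())

def label_for(word: str) -> str:
    """Return the label a word was most associated with in training."""
    pos, neg = counts["positive"][word], counts["negative"][word]
    return "positive" if pos >= neg else "negative"

# The model has learned the bias of its data, not any real-world truth.
print(label_for("A"))  # positive
print(label_for("B"))  # negative
```

A larger language model trained on a skewed slice of the internet exhibits the same failure mode, just less visibly.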
Classic, liberal AI bias. #AI #SnapchatAI #GenerativeAI #ArtificialIntelligence (note: I don’t vote in elections. This was an idea I had to see how programmers designed this AI to respond in politics.) pic.twitter.com/hhP2v2pFHg
— Dorian Tapias (@CrypticStyle) May 10, 2023
How to mitigate the risks
While it is clear that AI has the potential to threaten democracy and elections around the world, it is worth noting that AI can also play a positive role in strengthening democracy and combating disinformation.
For example, McCluskey stated that AI could be “used to detect and flag disinformation, to facilitate fact-checking, to monitor election integrity,” as well as educate and engage citizens in democratic processes.
“The key,” McCluskey adds, “is to ensure that AI technologies are developed and used responsibly, with appropriate regulations and safeguards in place.”
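The kind of flagging McCluskey describes can be sketched with a toy rule-based filter. Production systems use machine-learning classifiers rather than fixed patterns, and the patterns, posts and function below are hypothetical; the sketch only shows the flag-for-human-review workflow, where nothing is removed automatically.

```python
# Minimal sketch of automated disinformation flagging (hypothetical
# rules, not a production fact-checking system): posts matching
# previously debunked claims are flagged for human review.
import re

# Hypothetical patterns tied to claims fact-checkers have debunked.
DEBUNKED_PATTERNS = [
    re.compile(r"election .* rigged", re.IGNORECASE),
    re.compile(r"miracle cure", re.IGNORECASE),
]

def flag_for_review(post: str) -> bool:
    """Return True if the post matches a known debunked claim."""
    return any(p.search(post) for p in DEBUNKED_PATTERNS)

posts = [
    "The election was clearly rigged, share before it's deleted!",
    "Turnout figures were published by the electoral commission today.",
]
flagged = [p for p in posts if flag_for_review(p)]
print(flagged)  # only the first post is flagged
```

Routing matches to human reviewers rather than deleting them outright is one way to limit the over-moderation risk that Bontridder and Poullet raise later in this article.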
An example of regulations that can help mitigate AI’s ability to produce and disseminate disinformation is the European Union’s Digital Services Act (DSA).
When the DSA fully comes into effect, large online platforms like Twitter and Facebook will be required to meet a list of obligations intended to minimize disinformation, among other things, or face fines of up to 6% of their annual turnover.
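To put that penalty in concrete terms, a short calculation with an assumed turnover figure (the EUR 10 billion below is hypothetical, not any platform’s actual turnover):

```python
# Hypothetical figures: the maximum DSA fine is 6% of annual turnover.
annual_turnover_eur = 10_000_000_000  # assumed EUR 10 billion turnover
max_fine_eur = 0.06 * annual_turnover_eur
print(f"Maximum fine: EUR {max_fine_eur:,.0f}")  # EUR 600,000,000
```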
The DSA also introduces increased transparency requirements for these online platforms, requiring them to disclose how they recommend content to users — often done using AI algorithms — as well as how they moderate content.
Bontridder and Poullet noted that firms are increasingly using AI to moderate content, which they suggested may be “particularly problematic,” as AI has the potential to over-moderate and impinge on free speech.
The DSA applies only to operations in the European Union, however; McCluskey noted that because disinformation is a global phenomenon, international cooperation would be necessary to regulate AI and combat it worldwide.
McCluskey suggested this could occur via “international agreements on AI ethics, standards for data privacy, or joint efforts to track and combat disinformation campaigns.”
Ultimately, McCluskey said that “combating the risk of AI contributing to disinformation will require a multifaceted approach,” involving “government regulation, self-regulation by tech companies, international cooperation, public education, technological solutions, media literacy and ongoing research.”