Some New Hampshire voters received a deepfake robocall impersonating President Biden. Find out who created the Joe Biden deepfake and the ramifications of unethical AI.
Source of Joe Biden deepfake revealed after election interference
In the final hours before the New Hampshire primary, an insidious plot unfolded: artificial intelligence technology was used to mimic President Joe Biden’s voice in a robocall. This unprecedented attempt at election interference has brought the intersection of AI, deepfakes, and disinformation into sharp focus, raising concerns about the potential consequences for democratic processes. Let’s dive into everything you need to know about the deepfake scam involving a robocall from US President Joe Biden, including who was involved.
What is the Joe Biden deepfake scam?
On Monday, January 22, 2024, the New Hampshire attorney general’s office said it was investigating reports of an apparent robocall that used artificial intelligence to mimic President Joe Biden’s voice and discourage voters in the state from coming to the polls.
This scam aimed to discourage voters from participating in the state’s primary election. The AI-generated Biden voice urged voters to “save” their votes for the general election in November, falsely claiming that voting in the primary would benefit Republicans and facilitate the re-election of Donald Trump.
Joe Biden deepfake caller ID
The fake Biden robocall’s caller ID made it appear as though the call came from Kathy Sullivan, a former state Democratic Party chair who helps run Granite for America, a super PAC campaigning for voters to write in Biden’s name to show their support for the President. Biden himself wasn’t campaigning in New Hampshire and wasn’t on the ballot, since New Hampshire had lost its lead-off position in the Democratic presidential primary calendar.
“This call links back to my personal cell phone number without my permission,” Sullivan said in a statement. “It is outright election interference, and clearly an attempt to harass me and other New Hampshire voters who are planning to write-in Joe Biden on Tuesday.”
Joe Biden deepfake scam audio
In the fake robocall audio obtained by NBC News, you can hear a voice that sounds eerily similar to Biden’s say his favorite catchphrase, “What a bunch of malarkey.” The message goes on to state, “It’s important that you save your vote for the November election,” and “Voting this Tuesday only enables the Republicans in their quest to elect Donald Trump again. Your vote makes a difference in November, not this Tuesday.”
After the recording surfaced, White House press secretary Karine Jean-Pierre informed the public that the message was AI-generated disinformation and did not come from the Biden campaign.
In a statement, Biden’s campaign manager, Julie Chavez Rodriguez, said the campaign is “actively discussing additional actions to take immediately.”
What should you do if you received a Joe Biden robocall?
The recorded message appeared to be an illegal attempt to spread misinformation and disrupt voting. While it is unclear how many voters received the call, at least a dozen people reported it. Attorney General John Formella said anyone who received such a call “should disregard the contents of this message entirely.”
Source of the Joe Biden robocall deepfake revealed
According to Bloomberg, the voice-fraud detection company Pindrop Security Inc. determined that the Biden deepfake robocall was not made by the Biden campaign but rather by a scammer using ElevenLabs, an AI voice startup that offers voice cloning capabilities.
ElevenLabs’ safety statement allows voice cloning “for certain non-commercial purposes if you don’t impact the person’s privacy or economic interests,” including “private study, non-commercial research, education, caricature, parody, satire, artistic and political speech contributing to public debates, quotation,” as well as “criticism and review.” Nevertheless, ElevenLabs banned the user who created the Biden deepfake robocall.
“We are dedicated to preventing the misuse of audio AI tools and take any incidents of misuse extremely seriously,” ElevenLabs told Bloomberg.
Deepfake vs. AI Voice Generator vs. Voice Cloning
As AI voice generation becomes more popular, it’s essential to understand the terms deepfake, AI voice generator, and voice cloning, as well as the difference between their ethical and unethical uses.
Deepfake
Deepfake refers to the use of artificial intelligence and machine learning techniques to create or manipulate audio and visual content, often replacing someone’s likeness with another person’s in a realistic manner.
- Ethical deepfake use: Deepfake technology can be ethically used for entertainment, artistic expression, and education, where the intent is not to deceive or harm others.
- Unethical deepfake use: When deepfake technology is exploited to create misleading content and false narratives or to impersonate individuals for malicious purposes (such as spreading misinformation or creating fake news), it raises serious ethical concerns.
AI Voice Generator
AI voice generators use machine learning algorithms to synthesize human-like speech. They are often trained on large datasets of recorded voices to mimic human tones and inflections. A short code example follows the list below.
- Ethical AI voice generator use: AI voice generators can be used ethically for various purposes, including accessibility for individuals with speech impairments, language learning, entertainment, or voice overs.
- Unethical AI voice generator use: Misusing AI voice generation to create fake audio recordings with malicious intent, such as forging someone’s voice to impersonate them for fraudulent activities or to spread misinformation, is considered unethical.
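To make the distinction concrete, here is a minimal sketch of ethical, clearly disclosed speech synthesis in Python using the open-source pyttsx3 library. The library is chosen purely for illustration; it speaks through the operating system’s built-in voices rather than a cloned or machine-learned celebrity voice, and it is not affiliated with any service mentioned in this article.

```python
# Minimal text-to-speech sketch using the open-source pyttsx3 library.
# It uses the operating system's built-in voices -- no real person's
# voice is cloned, and the spoken message discloses that it is synthetic.
import pyttsx3

engine = pyttsx3.init()              # start the local speech engine
engine.setProperty("rate", 160)      # speaking speed in words per minute
engine.setProperty("volume", 0.9)    # volume from 0.0 to 1.0

engine.say("This message was generated by a computer, not a real person.")
engine.runAndWait()                  # block until the audio finishes playing
```

Notice that the synthesized message identifies itself as computer-generated; disclosing when a voice is AI-generated is one of the ethical practices discussed later in this article.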
Voice cloning
Voice cloning involves creating a synthetic copy of a person’s voice, typically using recordings of their speech as training data for a machine learning model.
- Ethical voice cloning use: Voice cloning can be used ethically for purposes such as creating personalized voice assistants, preserving the voices of individuals for future generations with consent, or aiding individuals with speech disorders.
- Unethical voice cloning use: Unauthorized voice cloning for deceptive activities, such as creating fake audio recordings to impersonate someone without their consent, can be highly unethical and may lead to privacy violations and potential misuse.
Misuse of Joe Biden’s AI voice
The misuse of Joe Biden’s voice through AI-generated robocalls is a stark example of the potential for AI technology to be weaponized in attempts to suppress voting and spread disinformation. The consequences of misusing AI to replicate political figures’ voices are far-reaching. This incident foreshadows a future where AI-generated voices could be used to spread false narratives, manipulate public opinion, and undermine the democratic process. The potential impact on trust in political discourse and the integrity of elections is a cause for serious concern.
As AI technology advances, the risk of such incidents occurring increases, with digital forensics experts like Hany Farid emphasizing the need for vigilance in addressing these challenges.
Voters should be on AI alert this election season
As the 2024 presidential election approaches, voters must remain vigilant against the misuse of AI technology in attempts to sway public opinion. AI-generated disinformation could become a prevalent issue in political campaigns, necessitating increased awareness among the electorate.
“We have been concerned that generative AI would be weaponized in the upcoming election and we are seeing what is surely a sign of things to come,” Hany Farid, an expert in digital forensics who reviewed the Biden robocall recording, said in an interview.
How to use voice AI ethically
In the wake of this incident, it is crucial to consider the ethical applications of voice AI. While AI voice technology offers numerous possibilities, including voice overs, there is a responsibility to use it ethically. This includes clearly indicating when AI-generated voices are being used, avoiding malicious intent, and respecting privacy and consent.
AI laws and AI regulations are coming
As technology continues to advance, lawmakers, AI developers, election officials, and the public must work together to establish safeguards, regulations, and ethical standards to protect the democratic process from such abuses. The integrity of elections, a cornerstone of American democracy, depends on our ability to adapt and respond to emerging challenges regarding the use of AI. In fact, Congress is already discussing AI protections. At a recent Senate hearing, Senator Richard Blumenthal even demonstrated the technology by playing a deepfake of his own voice, created with AI voice cloning software and reading a script written by OpenAI’s ChatGPT. The Federal Election Commission has also begun a process to potentially regulate AI-generated deepfakes in 2024 political campaign ads.
Speechify Voice Over Studio: #1 ethical AI voice platform
Speechify Voice Over Studio is the leading ethical AI voice platform that sets the gold standard in synthetic speech technology. With over 200 lifelike text to speech voices available across multiple languages and accents, Speechify Voice Over Studio empowers users to create dynamic and engaging content effortlessly.
The user-friendly platform features easy-to-use AI audio editing tools that provide granular control over each word, enabling users to fine-tune pitch, pronunciation, tone, and more. Speechify Voice Over Studio also prioritizes ethical considerations by offering voice cloning solely for your own voice, ensuring explicit consent and mitigating the risk of misuse.
Speechify Voice Over Studio is the ideal choice for crafting compelling voice overs for various applications, including social media content, video games, audiobooks, podcasts, and more. Elevate your content creation experience and try Speechify Voice Over Studio for free today.
FAQ
What is an example of unethical AI voice cloning?
An unethical example of AI voice cloning might involve fabricating a convincing audio recording of Elon Musk making false statements about sensitive geopolitical matters, such as Washington’s involvement in affairs in Israel. This deceptive use of voice cloning could exploit public trust, manipulate public perception, and sow discord by attributing misleading statements to an influential figure in order to advance a particular agenda.
Does the GOP want to regulate AI?
According to a Fox News poll, “Republicans are less convinced than Democrats that the federal government needs to impose regulations on artificial intelligence systems and are even more skeptical on whether the government is up to the task.”
Did Taylor Swift have a deepfake made of her?
There have been several unethical deepfakes made of Taylor Swift, including photos and music, created without her consent. According to reports, she is considering taking legal action against the creators of the deepfakes.
Did New Hampshire voters receive a Biden deepfake robocall telling them not to vote?
Yes, a scammer used ElevenLabs’ AI voice cloning technology to create a Biden deepfake in an attempt to convince voters not to write in Biden’s name during the New Hampshire primary.