Elon Musk’s repost of a Kamala Harris deepfake shows he’s no free speech warrior

Last Friday night, Elon Musk, the Donald Trump-supporting owner of X, reposted a deepfake video of presumptive Democratic presidential nominee Kamala Harris in which the vice president says she's a "diversity hire" and doesn't know "the first thing about running the country." The video's creator confirmed to the Associated Press that he used an AI voice-synthesis tool to manipulate the audio from a real Harris political ad.

People who create and post this kind of thing often claim it's meant as parody or satire, forms of political speech that are protected. And indeed, the creator of the faked Harris video labeled it "Kamala Harris Campaign Ad PARODY," but Musk's repost didn't include that text. After people began pointing out that the repost violated X's community guidelines, Musk hid behind the "parody" defense, even though the version he reposted stripped out that very label.

Not many reasonable people would believe that the voice in the ad was Harris's. She'd never say those things. But when the owner of a major social platform ignores his own community guidelines and reposts an unlabeled AI deepfake to his millions of followers, it sends a message: Deepfakes are okay, and community guidelines are optional.

And don’t expect regulators to help combat this element of the misinformation war. “It will be extremely difficult to regulate AI to the point where we can avoid videos like the recent one of Vice President Harris that was shared on X,” says John Powers, a media and design professor at Quinnipiac University. “[T]here should be a stronger push for social media literacy to be introduced to young students as a way to ensure that the next generation is diligent in examining information and news as it comes across their social feeds.”

Musk’s repost is particularly galling when you remember the reasons he gave for buying Twitter in the first place. Twitter had been considered the “town square” for open discussion, especially on political topics. Musk thought it was a place dominated by a “woke” mindset, and intolerant of “conservative” viewpoints, as evidenced by the network’s 2022 ban of the right-wing Christian parody site Babylon Bee. He said in one TED interview that he wanted to make Twitter a “platform for free speech around the globe” and called free speech a “societal imperative for a functioning democracy.”

Is the Harris deepfake Musk's idea of "free speech"? The man understands AI; he owns and oversees a multibillion-dollar AI company (xAI). He willingly posted disinformation to his 190 million followers and, when challenged, doubled down with a rather flimsy "parody" defense. Musk now denies he ever said he would give the Trump campaign $45 million per month, but he's still using his X platform to campaign for Trump, including with Trump's preferred currency: BS.

Deepfakes aren't the only way AI could seriously impact an election. Brand marketers and political strategists have long been enticed by the possibility of creating ads that are tailor-fit to a single target individual rather than a whole demographic segment. Such an ad might reflect an understanding of the individual's demographic information, their voting record, and signals from their social media activity about their politics and the key issues they care about. It could also reflect an understanding of an individual's "psychographic" profile, based on their levels of agreeableness, neuroticism, openness to new experiences, extroversion, and conscientiousness (the so-called "big five" personality traits).
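To make that concrete, here is a minimal sketch of what such a per-voter record might look like as a data structure. Every field name and value is a hypothetical illustration of the kind of information described above, not a real campaign schema:

```python
from dataclasses import dataclass, field

@dataclass
class VoterProfile:
    """Hypothetical per-voter record combining demographics,
    inferred issue interests, and "big five" personality scores."""
    voter_id: str
    age: int
    region: str
    party_registration: str
    # Key issues inferred from social media activity
    key_issues: list[str] = field(default_factory=list)
    # "Big five" traits, each scored 0.0-1.0 (assumed scale)
    agreeableness: float = 0.5
    neuroticism: float = 0.5
    openness: float = 0.5
    extroversion: float = 0.5
    conscientiousness: float = 0.5

# A fabricated example record of the kind an ad system might hold
profile = VoterProfile(
    voter_id="v-001",
    age=42,
    region="suburban Midwest",
    party_registration="independent",
    key_issues=["health care costs", "local manufacturing jobs"],
    neuroticism=0.7,
    openness=0.3,
)
```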

Researchers showed in 2013 that a person's social "likes" could suggest their "big five" levels. Cambridge Analytica, which worked for the Trump campaign in 2016, believed those levels could accurately telegraph the political issues people were sensitive to (though it's unclear whether that strategy had any effect on the election).

But if such a personality "graph" could be built in a database, it might be possible to render each individual voter profile as a template. That templated information could be fed to a large language model (LLM) as a prompt, and the LLM could be prompted to generate ad copy or a fundraising letter touching on the political issues the campaign believes will trigger a response from that voter. Rapid-response videos might even be generated that refute a claim against a candidate by hitting the hot-button issues or political sensitivities of a specific individual.
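A minimal sketch of that templating step might look like the following. The prompt wording, the field names, and the tone-selection rule are all placeholders; a real system would hand the resulting prompt to whatever LLM interface the campaign uses, a call omitted here because it varies by provider:

```python
def build_ad_prompt(profile: dict) -> str:
    """Turn a (hypothetical) voter profile into an LLM prompt
    asking for ad copy tailored to that individual."""
    issues = ", ".join(profile["key_issues"])
    # Crude psychographic steering: pick a tone from one trait score.
    tone = "reassuring" if profile["neuroticism"] > 0.6 else "upbeat"
    return (
        f"Write a short political ad aimed at a {profile['age']}-year-old "
        f"{profile['party_registration']} voter in the {profile['region']}. "
        f"Use a {tone} tone and focus on these issues: {issues}."
    )

profile = {
    "age": 42,
    "region": "suburban Midwest",
    "party_registration": "independent",
    "key_issues": ["health care costs", "local manufacturing jobs"],
    "neuroticism": 0.7,
}

# The generated prompt would then be sent to an LLM to produce
# the individually tailored ad copy the paragraph above describes.
print(build_ad_prompt(profile))
```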
