February 26, 2024
Meet the UC Berkeley professor tracking election deepfakes


Not in recent history has a technology come along with more potential to harm society than deepfakes.

The manipulative, insidious AI-generated content is already being weaponized in politics and will be pervasive in the upcoming U.S. presidential election, as well as in Senate and House races.

As regulators scramble to control the technology, incredibly realistic deepfakes are being used to smear candidates, sway public opinion and manipulate voter turnout. Meanwhile, some candidates have turned to generative AI to bolster their campaigns, in attempts that have sometimes backfired.

University of California, Berkeley’s School of Information Professor Hany Farid has had enough of all this. He has launched a project dedicated to tracking deepfakes throughout the 2024 presidential campaign. 



“My hope is that by casting a light on this content, we raise awareness among the media and public — and we signal to those creating this content that we are watching, and we will find you,” Farid told VentureBeat. 

From Biden in fatigues to DeSantis lamenting challenging Trump

In its most recent entry (Jan. 30), Farid’s site provides three images of President Joe Biden in fatigues sitting at what looks to be a military command center. 

Source: https://farid.berkeley.edu/deepfakes2024election/

However, the post points out, “There are tell-tale signs of misinformed objects on the table, and our geometric analysis of the ceiling tiles reveals a physically inconsistent vanishing point.” 

The “misinformed objects” include randomly placed computer mice and a jumble of indistinguishable equipment at the center. 
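The geometric check the post alludes to rests on a basic fact of projective geometry: in a real photo, image lines that are parallel in the scene, such as ceiling-tile edges, should all converge at a single vanishing point, while generated images often violate this. The idea can be sketched in a few lines of Python (a minimal illustration with made-up coordinates, not Farid's actual tooling):

```python
import numpy as np

def line_through(p, q):
    """Homogeneous line through two image points (x, y)."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def intersection(l1, l2):
    """Intersection of two homogeneous lines, or None if parallel in the image."""
    v = np.cross(l1, l2)
    if abs(v[2]) < 1e-9:
        return None
    return v[:2] / v[2]

# Two edges of hypothetical ceiling tiles, traced in image coordinates.
# In an authentic photo, all such edges should converge at one vanishing point.
edge_a = line_through((0, 0), (100, 50))
edge_b = line_through((0, 100), (100, 50))
vp = intersection(edge_a, edge_b)
```

A third tile edge that meets the others far from this common point would be exactly the kind of "physically inconsistent vanishing point" the analysis flags.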

The site also references the now infamous deepfake robocalls impersonating Biden ahead of the New Hampshire primary. These urged voters not to participate and said that “Voting this Tuesday only enables the Republicans in their quest to elect former President Donald Trump again. Your vote makes a difference in November, not this Tuesday.” 

It remains unclear who is behind the calls, but Farid points out that the quality of the voice is “quite low” and has an odd-sounding cadence. 

Another post calls out the “fairly crude mouth motion” and audio quality in a deepfake of Ron DeSantis saying “I never should have challenged President Trump, the greatest president of my lifetime.” 

The site also breaks down a six-photo montage of Trump embracing former Chief Medical Advisor Anthony Fauci. The images contained physical inconsistencies such as a “nonsensical” White House logo and misshapen stars on the American flag. Furthermore, the site points out, the shape of Trump’s ear is inconsistent with several real reference images. 

Farid noted that “With respect to elections here in the U.S., it doesn’t take a lot to swing an entire national election — thousands of votes in a select number of counties in a few swing states can move an entire election.” 

Anything can be fake; nothing has to be real 

Over recent months, many other widespread deepfakes have depicted Trump being tackled by a half-dozen police officers; Ukrainian President Volodymyr Zelenskyy calling for his soldiers to lay down their weapons and return to their families; and U.S. Vice President Kamala Harris seemingly rambling and inebriated at an event at Howard University. 

The harmful technology has already been used to tamper with elections in Turkey and Bangladesh, with countless more likely targets to come, and some candidates, including Rep. Dean Phillips of Minnesota and Miami Mayor Francis Suarez, have used deepfakes to engage with voters. 

“I have seen for the past few years a rise in the sophistication of deepfakes and their misuse,” said Farid. “This year feels like a tipping point, where billions will vote around the world and the technology to manipulate and distort reality is emerging out of its infancy.” 

Beyond their impact on voters, deepfakes can be used as shields when people are recorded breaking the law or saying or doing something inappropriate. 

“They can deny reality by claiming it is fake,” he said, noting that this so-called “Liar’s Dividend” has already been used by Trump and Elon Musk. 

“When we enter a world where anything can be fake,” Farid said, “nothing has to be real.”

Stop, think, check your biases

Research has shown that humans can detect deepfake videos only a little more than half the time and phony audio 73% of the time.

Deepfakes are becoming ever more dangerous because images, audio and video created by AI are increasingly realistic, Farid noted. Also, doctored materials are quickly spread throughout social media and can go viral in minutes. 

“A year ago we saw primarily image-based deepfakes that were fairly obviously fake,” said Farid. “Today we are seeing more audio/video deepfakes that are more sophisticated and believable.”

Because the technology is evolving so quickly, it is difficult to call out “specific artifacts” that will continue to be useful over time in spotting deepfakes, Farid noted. 

“My best advice is to stop getting news from social media — this is not what it was designed for,” he said. “If you must spend time on social media, please slow down, think before you share/like, check your biases and confirmation bias and understand that when you share false information, you are part of the problem.”

Telltale deepfake signs to look out for

Others offer more concrete and specific guidance for spotting deepfakes. 

The Northwestern University project Detect Fakes, for one, offers a test where users can gauge how good they are at spotting phonies. 

The MIT Media Lab, meanwhile, offers several tips, including: 

  • Paying attention to faces, as high-end manipulations are “almost always facial transformations.”
  • Looking out for cheeks and foreheads that are “too smooth or too wrinkly,” and checking whether the “agedness of the skin” is similar to that of the hair and eyes, as deepfakes can be “incongruent on some dimensions.”
  • Noting the eyes and eyebrows, and shadows that appear where they shouldn’t be; deepfakes can’t always represent natural physics. 
  • Looking at whether glasses have too much glare, none at all, or if glare changes when the person moves. 
  • Paying attention to facial hair (or lack thereof) and whether it looks real. While deepfakes may add or remove mustaches, sideburns or beards, those transformations aren’t always fully natural. 
  • Looking at the way the person blinks (too much or not at all) and the way their lips move, as some deepfakes are based on lip-syncing. 
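The blinking cue was also the basis of one of the earliest automated detectors: track an "eye aspect ratio" (EAR) across video frames and flag subjects who blink too rarely. A rough sketch, assuming eye landmarks have already been extracted by a face-landmark library (the six-point ordering follows the common 68-point layout):

```python
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """Ratio of eye height to width; it drops sharply when the eye closes.

    `eye` is a (6, 2) array of landmark points ordered as in the common
    68-point layout: [left corner, upper-left, upper-right, right corner,
    lower-right, lower-left].
    """
    vertical = np.linalg.norm(eye[1] - eye[5]) + np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

def blink_count(ear_per_frame, threshold: float = 0.21) -> int:
    """Count blinks as downward crossings of the EAR threshold."""
    below = [ear < threshold for ear in ear_per_frame]
    return sum(cur and not prev for prev, cur in zip(below, below[1:]))
```

A clip whose subject never crosses the blink threshold over many seconds of footage is suspicious, though modern generators have largely learned to blink, which is why, as Farid notes, no single artifact stays reliable for long.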

Think you’ve spotted a deepfake related to the U.S. elections? Contact Farid.

