
As concern over deepfakes shifts to politics, detection software tries to keep up


Fake faceswap videos haven't overrun the internet or started a world war yet, but programmers are working hard to improve detection tools as concern shifts to the potential use of such clips for political propaganda.

It's been over a year since Reddit shut down its most popular deepfake subreddit, r/deepfakes, and government entities and the media continue to wring their hands over the evolution of AI-assisted technology that enables people to make extremely realistic videos of anyone, famous or not, doing basically anything.

As the 2020 presidential election cycle gets underway, with fresh worries about hacking and further attempts by foreign actors to interfere in elections, concern is shifting from revenge porn and celebrity exploitation to politically motivated faceswap videos. These clips could be used in misinformation campaigns or even in broader efforts to destabilize governments.

And while some experts believe that the threat isn’t quite as dire as the media reaction suggests, that hasn’t stopped others from doing their best to keep software that detects deepfakes up to date with the evolving technology that makes faceswap videos look more and more real.

The emergence of deepfakes

When deepfake video began attracting widespread notice in early 2018, the concern from experts and the media was immediate: They sounded the alarm about the technology's possible negative effects. As free software for creating deepfakes became more widely available, shared through platforms like Reddit and GitHub, social sites were flooded with fake pornographic videos made using the technology, with users typically putting the faces of celebrity women like Gal Gadot and Scarlett Johansson onto the bodies of adult film actors.

Worry about the creation of fake revenge porn spread as it became clear that the software could be used to insert a former partner's face into a pornographic video. Bad actors could use deepfake technology to control a partner, ex, or enemy by blackmailing them or releasing the video to the internet.

Reddit reacted by banning the r/deepfakes subreddit, a popular forum for videos created with the emerging software. Ultimately, it wasn't the general idea of faceswapping that prompted the ban but rather the use of that technology to create fake, non-consensual faceswapped porn.

[Image: The banning of the r/deepfakes subreddit made waves in early 2018. Credit: Reddit]

In a statement on the banning, reps for Reddit said, "This subreddit was banned due to a violation of our content policy, specifically our policy against involuntary pornography."

Another subreddit, r/FakeApp, dedicated to a widely available program that allowed users to easily make these videos, was also banned.

But even as platforms like Reddit fought off those pornographic deepfakes, concern has since turned to the potential trouble that politically themed deepfakes could unleash.

Concern over political uses

While there hasn't yet been a specific instance of a political faceswap video leading to large-scale instability, the mere potential has officials on high alert. For example, a fake video could be weaponized by making a world leader appear to say something politically inflammatory, meant to prompt a response or sow chaos. It's enough of a concern that the U.S. Department of Defense has cranked up its own monitoring of deepfake videos as they pertain to government officials.


With deepfakes proliferating, and given that President Trump so readily yells "fake news!" about reports he doesn't like, what's to stop him from claiming a real video, like, say, the pee tape, is fake? He's already gone down that road in relation to voice manipulation with regard to the infamous Access Hollywood tape.

He, and the White House, have also perpetuated the spread of altered videos. Though not a deepfake, Trump recently shared a video of House Speaker (and Trump foil) Nancy Pelosi that was simply slowed down enough to make Pelosi appear to slur her speech. That video, quickly debunked, was still spread to Trump's 60 million-plus Twitter followers.

This follows a November 2018 incident in which White House Press Secretary Sarah Sanders shared a video altered by the notorious conspiracy site InfoWars. The clip made it appear as if CNN reporter Jim Acosta had a more physical reaction to a White House staffer than he actually did.

If they’ll fall for these videos, it’s scary to think how easily they’d be duped by a high-quality deepfake.

Perhaps the easiest way to think about the potential consequences of political deepfakes is in relation to recent issues with Facebook's WhatsApp, a messaging app on which the viral spread of rumors has snowballed into real-life violence. Imagine if a convincing political deepfake were to go viral the way the WhatsApp rumors that sparked mob violence did.

Still finding a home on Reddit

Perhaps the best known example of these sorts of politically-tinged deepfakes is one co-produced by Buzzfeed and actor/director Jordan Peele. Using video of Barack Obama and Peele’s uncanny imitation of the former president, the outlet created a believable video of Obama saying things he’s never said in an effort to spread awareness about these types of clips.

But other examples proliferate in more predictable corners of the web, namely Reddit. While the r/deepfakes subreddit was banned, other, tamer forums have popped up, like r/GIFFakes and r/SFWdeepfakes, where user-created deepfakes that stay within Reddit's Terms of Service (i.e., no porn) are shared.

Most are of the sillier variety, often inserting leaders like, say, Donald Trump, into famous movies.

But there are a few floating around that reflect more concerted attempts to create convincing political deepfakes.


And there is actual evidence of a group trying to leverage a Trump deepfake for a political ad. The sp.a, a Belgian social democratic party, used a fake Trump video in an attempt to garner signatures for a climate change-related petition. When posted to Twitter on the party's account, it was accompanied by a message that translated to, "Trump has a message for all Belgians."

The video owns up to being a fake when Trump is shown saying, "We all know climate change is fake, just like this video." But, as Buzzfeed notes, that admission literally gets lost in translation.

"However, that's not translated into Dutch in the subtitles and the volume drops sharply at the beginning of that sentence, so it's hard to make out. There would be no way for a viewer who's watching the video without volume to know it's fake from the text."

While many of these examples came from a simple skim of Reddit, there are plenty of darker corners of the web (4chan, for example) where these kinds of videos could proliferate. With just the right boost, they could easily leap to other platforms and reach a wide and naive audience.

So there’s a real need for detection tools, especially ones that can keep up with the ever-evolving technology used to create these videos.

In the blink of an eye

There's at least one telltale sign that users can look for when trying to determine whether a faceswap video is real: blinking. A 2018 study, published on Cornell's arXiv preprint server, focused on how poorly blinking is represented in deepfake videos, a consequence of the lack of available videos or photographs showing the subject with their eyes closed.

As Phys.org noted:

Healthy adult humans blink somewhere between every 2 and 10 seconds, and a single blink takes between one-tenth and four-tenths of a second. That's what would be normal to see in a video of a person talking. But it's not what happens in many deepfake videos.

You can see what the researchers are talking about by comparing an authentic video with a deepfake of the same subject: the fake tends to blink far less often, if at all.
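To make that heuristic concrete, here is a minimal sketch of blink-rate screening in Python. It assumes you have already extracted the six standard eye-landmark points per frame (for example, with a 68-point facial landmark detector); the eye-aspect-ratio approach and every threshold shown are illustrative assumptions, not the study's exact method.

```python
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: six (x, y) landmarks around one eye, ordered as in the
    common 68-point facial landmark layout."""
    a = np.linalg.norm(eye[1] - eye[5])  # vertical span, inner pair
    b = np.linalg.norm(eye[2] - eye[4])  # vertical span, outer pair
    c = np.linalg.norm(eye[0] - eye[3])  # horizontal span
    return (a + b) / (2.0 * c)

def blink_rate(ear_per_frame, fps: float, closed_thresh: float = 0.2) -> float:
    """Count blinks in a per-frame eye-aspect-ratio series; return blinks/sec."""
    if len(ear_per_frame) == 0:
        return 0.0
    blinks, closed = 0, False
    for ear in ear_per_frame:
        if ear < closed_thresh and not closed:
            blinks += 1       # eye just closed: start of a blink
            closed = True
        elif ear >= closed_thresh:
            closed = False    # eye reopened
    return blinks / (len(ear_per_frame) / fps)

# Healthy adults blink roughly every 2 to 10 seconds (about 0.1 to 0.5
# blinks per second), so a clip whose rate falls far below that range
# deserves a closer look. The 0.05 floor is an illustrative choice.
def looks_suspicious(ear_per_frame, fps: float, min_rate: float = 0.05) -> bool:
    return blink_rate(ear_per_frame, fps) < min_rate
```

Of course, forgers can defeat this particular check by training on footage that includes closed eyes, which is exactly why detection tools have to keep evolving.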

Elsewhere, Facebook, which has faced a mountain of criticism for the way fake news proliferates on the platform, is using its own machine learning tool to detect fake videos and working with its fact-checking partners, including the Associated Press and Snopes, to examine potential fake photos and videos that get flagged.

Of course, the system is only as good as its software tool -- if a deepfake video doesn't get flagged, it doesn't get to the fact checkers -- but it's a step in the right direction.
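As a rough illustration of that flag-then-review pattern, here is a hedged sketch; the function names, the scoring model, and the 0.7 threshold are hypothetical stand-ins, not Facebook's actual system.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class ReviewQueue:
    """Stand-in for the hand-off to human fact-checkers."""
    pending: List[str] = field(default_factory=list)

    def submit(self, video_id: str) -> None:
        self.pending.append(video_id)

def triage(video_id: str,
           fake_score: Callable[[str], float],  # hypothetical learned model
           queue: ReviewQueue,
           flag_thresh: float = 0.7) -> bool:
    """Route a video to human review only if the model flags it.

    The weakness noted above is visible here: a deepfake the model
    scores below the threshold never reaches a fact-checker at all.
    """
    if fake_score(video_id) >= flag_thresh:
        queue.submit(video_id)
        return True
    return False
```

The design trade-off is plain: a lower threshold sends more real videos to overworked human reviewers, while a higher one lets more fakes sail through untouched.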

Fighting back with detection tools

There are experts and groups making great strides in the detection arena. One of those is Matthias Niessner of Germany's Technical University of Munich. Niessner is part of a team that's been studying a large data set of manipulated videos and images to develop detection tools. On March 14, 2019, his group released a "faceforensics benchmark" where, he told Mashable via email, "people can test their approaches on various forgery methods in an objective measure."

In other words, testers can use the benchmark to see how accurately various detection software flags several types of manipulated videos, including deepfakes and clips made with facial reenactment software like Face2Face; the benchmark also includes pristine, unmanipulated videos as a control. So far, the results are promising.

For example, the Xception (FaceForensics++) network, the detection tool Niessner helped develop, had an overall 78.3 percent success rate at detection, and an 88.2 percent success rate on deepfakes specifically. While he acknowledged that there's still plenty of room to improve, Niessner told me, "It also gives you a measure of how good the fakes are."
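For a sense of how a detector like this is typically wired up, here is a minimal sketch of frame-level classification in the spirit of FaceForensics++. The model choice via the timm library, the preprocessing, and the frame-sampling rate are my assumptions, and the two-way classifier head would need fine-tuning on labeled real and fake faces before its scores meant anything.

```python
import cv2                      # frame extraction
import timm                     # pretrained CNN backbones
import torch
from torchvision import transforms

# Xception backbone with a freshly initialized 2-way (real/fake) head.
# It would have to be fine-tuned on a labeled corpus such as the
# FaceForensics++ data before the scores were meaningful.
model = timm.create_model("xception", pretrained=True, num_classes=2)
model.eval()

preprocess = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((299, 299)),          # Xception's native input size
    transforms.ToTensor(),
    transforms.Normalize([0.5] * 3, [0.5] * 3),
])

def fake_probability(video_path: str, every_n: int = 30) -> float:
    """Average the model's 'fake' score over every Nth frame."""
    cap = cv2.VideoCapture(video_path)
    scores, i = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % every_n == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # OpenCV is BGR
            x = preprocess(rgb).unsqueeze(0)
            with torch.no_grad():
                probs = torch.softmax(model(x), dim=1)
            scores.append(probs[0, 1].item())  # index 1 = 'fake' class
        i += 1
    cap.release()
    return sum(scores) / len(scores) if scores else 0.0
```

Published pipelines also crop to the face region before classifying rather than feeding in whole frames, which is a big part of why face-specific benchmarks like this one matter.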

There's also the issue of awareness among internet users: Most have probably never heard of deepfakes, much less ways to detect them. Talking to Digital Trends in 2018, Niessner suggested a fix: "Ideally, the goal would be to integrate our A.I. algorithms into a browser or social media plugin. Essentially, the algorithm [will run] in the background, and if it identifies an image or video as manipulated it would give the user a warning."

If such software can be disseminated widely -- and if detection tool developers can keep pace with the evolution of deepfake videos -- there is at least hope of giving users the tools to stay informed and to slow the viral spread of deepfakes.

How worried should we be?

Some experts and people in the media, though, think the concern around deepfakes is exaggerated, and that the real worry should be propaganda and false or misleading news of all kinds, not just video.

Over at The Verge, Russell Brandom makes a salient point: the use of deepfakes as political propaganda hasn't panned out in proportion to the attention and concern it has received over the last year. Noting that such videos would likely trip filters like those described above, he argues that the trolls behind misinformation campaigns recognized that fake news articles playing into the preexisting beliefs of their targets would be more effective.

Brandom points to the widely circulated false 2016 claim that Pope Francis endorsed Donald Trump as an example.

"It was widely shared and completely false, the perfect example of fake news run amok. But the fake story offered no real evidence for the claim, just a cursory article on an otherwise unknown website. It wasn’t damaging because it was convincing; people just wanted to believe it. If you already think that Donald Trump is leading America toward the path of Christ, it won’t take much to convince you that the Pope thinks so, too. If you’re skeptical, a doctored video of a papal address probably won’t change your mind."

Developer Alan Zucconi shares the view that, when it comes to misleading or fake news, deepfakes aren't even necessary.

Using Pizzagate as an example, Zucconi illustrates how easy it is for people who lack a certain level of internet education to be “preyed upon by people who make propaganda and propaganda doesn’t need to be that convoluted.”

Echoing Brandom’s points, Zucconi points out that if a person is likely to believe a deepfake video, they’re already susceptible to other forms of false information. “It’s a mindset rather than video itself,” he says.

To that end, he points out that it’s far cheaper and simpler to spread conspiracies using internet forums and text: “Making a realistic deepfake video requires weeks of work for a single video. And we can’t even do fake audio properly yet. But making a single video is so expensive that the return you’ll have is not really much.”

Zucconi also stresses that it's easier for those spreading propaganda and conspiracies to present a real video out of context than to create a fake one. The doctored Pelosi video is a good example: all the creator had to do was slow the video down a smidge to create the desired effect, and Trump bought it.

That at least one major social media platform -- Facebook -- refused to take the video down only shows how hard that particular fight remains.

“It's the post-truth era. Which means to me that if you see a video, it’s not about whether the video is fake or not,” he tells me. “It's about whether the video is used to support something that the video was supposed to support or not.”

If anything, he’s worried that discussions of deepfakes will lead to some people claiming that a video of them isn’t real when, in fact, it is: “I think that it gives more people the chance of saying, ‘this video wasn't true, it wasn’t me.’”

Given that, as I mentioned before, Trump has already tested these waters by distancing himself from the Access Hollywood tape, Zucconi's point is well taken.

Even if the concern about these videos is overblown, though, the lack of public education surrounding deepfakes remains a worry, and the ability of detection software to keep pace is vital.

As Aviv Ovadya warned Buzzfeed in early 2018, “It doesn’t have to be perfect — just good enough to make the enemy think something happened that it provokes a knee-jerk and reckless response of retaliation.”

And as long as that education lags and the chance of these videos sowing mistrust remains, then the work being done on filters continues to be an essential part of the battle against misinformation, with white hat developers racing to stay ahead of the more nefarious elements of the internet hell-bent on causing chaos.


