More about Deepfake


Deepfake definition

A deepfake is a fake video or audio recording that looks and sounds just like the real thing. Once the bailiwick of Hollywood special effects studios and intelligence agencies producing propaganda, like the CIA or GCHQ’s JTRIG directorate, today anyone can download deepfake software and create convincing fake videos in their spare time.

It would be just as easy to create a deepfake of an emergency alert warning that an attack was imminent, or to disrupt a close election by releasing a fake video or audio recording of one of the candidates days before voting starts.

How dangerous are deepfakes?

This makes a lot of people nervous. So much so that Marco Rubio, the 2016 presidential candidate, has called them the modern equivalent of nuclear weapons. “In the old days,” he told an audience in Washington, “if you wanted to threaten the United States, you needed 10 aircraft carriers, and nuclear weapons, and long-range missiles. Today, you just need access to our internet system, to our banking system, to our electrical grid and infrastructure. And increasingly, all you need is the ability to produce a very realistic fake video that could undermine our elections. That could throw our country into tremendous crisis internally and weaken us deeply.”

Political hyperbole skewed by frustrated ambition, or are deepfakes really a bigger threat than nuclear weapons?

“As dangerous as nuclear bombs? I don’t think so,” Tim Hwang tells CSO. “Certainly the demonstrations that we’ve seen are disturbing. They’re concerning and they raise a lot of questions, but I’m skeptical they change the game in a way that a lot of people are suggesting.”

How deepfakes work

Seeing is believing, the old saw has it, but the truth is that believing is seeing. Human beings seek out information that supports what they want to believe and ignore the rest.

Hacking that human tendency gives malicious actors a lot of power. We see this already with disinformation (so-called “fake news”), where deliberate falsehoods spread under the guise of truth. By the time fact checkers start howling in protest, it’s too late, and #PizzaGate is a thing.

Deepfakes exploit this human tendency using generative adversarial networks (GANs), in which two machine learning (ML) models duke it out. One ML model trains on a data set and then creates video forgeries, while the other attempts to detect the forgeries. The forger keeps creating fakes until the other ML model can no longer detect the forgery. The larger the set of training data, the easier it is for the forger to create a believable deepfake. This is why videos of former presidents and Hollywood celebrities have been used so frequently in this early, first generation of deepfakes: there’s a ton of publicly available video footage to train the forger.
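To make the forger-versus-detector loop concrete, here is a minimal, illustrative GAN sketch in PyTorch. It trains on randomly generated stand-in “frames” rather than real footage, and the network sizes, image size, and hyperparameters are placeholder assumptions for illustration only, not anything a real deepfake tool uses.

```python
# Minimal GAN sketch (illustrative only): a "forger" (generator) and a
# "detector" (discriminator) trained against each other, as described above.
# Image size, network widths, and the "real" data are placeholder assumptions.
import torch
import torch.nn as nn

LATENT, IMG = 64, 28 * 28  # noise size and flattened "frame" size (assumed)

generator = nn.Sequential(            # the forger: noise -> fake frame
    nn.Linear(LATENT, 256), nn.ReLU(),
    nn.Linear(256, IMG), nn.Tanh(),
)
discriminator = nn.Sequential(        # the detector: frame -> real/fake score
    nn.Linear(IMG, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)

loss = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real_data = torch.rand(512, IMG) * 2 - 1   # stand-in for real training footage

for step in range(1000):
    real = real_data[torch.randint(0, 512, (64,))]
    fake = generator(torch.randn(64, LATENT))

    # 1. Train the detector to tell real frames from forged ones.
    opt_d.zero_grad()
    d_loss = loss(discriminator(real), torch.ones(64, 1)) + \
             loss(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_d.step()

    # 2. Train the forger to produce frames the detector labels "real".
    opt_g.zero_grad()
    g_loss = loss(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()
```

The adversarial structure is the whole point: as the detector improves, the forger is forced to produce more convincing fakes, which is exactly why more training footage of a target makes the end result more believable.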

Shallow fakes are a problem, too

It turns out that low-tech doctored videos can be just as effective a form of disinformation as deepfakes, as the controversy surrounding the doctored video of President Trump’s confrontation with CNN reporter Jim Acosta at a November press conference makes clear. The original video clearly shows a female White House intern attempting to take the microphone from Acosta, but subsequent editing made it look like the CNN reporter attacked the intern.

The incident shows that doctored videos are an easy way to damage an opponent’s reputation. Unlike deepfakes, however, where machine learning puts words in people’s mouths, low-tech doctored video hews close enough to reality that it blurs the line between true and false.

FUD (fear, uncertainty and doubt) is familiar to folks working in the security trenches, and deploying that FUD as a weapon at scale can severely damage a business as well as an individual. Defending against FUD attacks is very difficult. Once the doubt has been sown that Acosta manhandled a female White House intern, a non-trivial portion of viewers will never forget that detail and will suspect it might be true.

Who’s wagging whom?

David Mamet’s wickedly funny 1997 film Wag the Dog satirized a president running for re-election who fakes a war using special effects to cover up a scandal. The film was prophetic: the ability to “fake TV news” has been around for a while, and it is now in the hands of pretty much every laptop owner on the planet.

GANs, of course, have many uses other than making fake videos and putting words in politicians’ mouths. GANs are a big leap forward in what’s known as “unsupervised learning,” in which ML models teach themselves. This holds great promise in improving self-driving vehicles’ ability to recognize pedestrians and bicyclists, and in making voice-activated digital assistants like Alexa and Siri more conversational. Some herald GANs as the rise of “AI imagination.”

FakeApp is available for ordinary users to download so they can start creating their own deepfakes right away. Using the app isn’t super-easy, but a moderately geeky user should have no trouble, as Kevin Roose demonstrated for the New York Times earlier this year.

That said, there are so many other forms of effective disinformation that focusing on playing “Whack-a-Mole” with deepfakes is the wrong strategy, Hwang tells CSO. “I think that even in the present it turns out there are lots of cheap ways that don’t require deep learning or machine learning to deceive and shape public opinion.”

For instance, taking a video of people beating someone up in the street and then creating a false narrative around it, perhaps claiming that the attackers are immigrants to the U.S., doesn’t require a fancy ML algorithm, just a believable false narrative and a video that fits.

How to detect deepfakes

Detecting deepfakes is a hard problem. Crudely made deepfakes can, of course, be spotted by the naked eye. Other signs that machines can pick up on include a lack of eye blinking or shadows that look wrong. GANs that generate deepfakes are getting better all the time, and soon we will have to rely on digital forensics to detect deepfakes — if we can, in fact, detect them at all.
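As a rough illustration of the eye-blinking signal mentioned above, the sketch below computes the eye aspect ratio from per-frame eye landmarks and flags clips whose blink rate looks implausibly low. It assumes you already have six landmarks per eye from some face-landmark detector; the 0.2 threshold and the expected human blink rate are illustrative assumptions, not validated forensic values.

```python
# Heuristic blink check (illustrative): a very low blink rate over a long clip
# can be one weak signal that a face was synthesized. Assumes per-frame eye
# landmarks (6 points per eye) already extracted by some face-landmark detector.
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: (6, 2) array of (x, y) landmark points around one eye.
    The ratio drops toward 0 when the eye closes."""
    v1 = np.linalg.norm(eye[1] - eye[5])   # vertical distances
    v2 = np.linalg.norm(eye[2] - eye[4])
    h = np.linalg.norm(eye[0] - eye[3])    # horizontal distance
    return (v1 + v2) / (2.0 * h)

def blink_rate(ear_per_frame, fps, closed_thresh=0.2):
    """Count blinks as dips below a threshold; return blinks per minute.
    The 0.2 threshold is a common starting point, not a forensic standard."""
    ears = np.asarray(ear_per_frame)
    closed = ears < closed_thresh
    blinks = np.count_nonzero(closed[1:] & ~closed[:-1])  # open -> closed transitions
    minutes = len(ears) / fps / 60.0
    return blinks / minutes if minutes > 0 else 0.0

# Example: humans typically blink roughly 15-20 times per minute; a rate near
# zero over a long clip is suspicious, but far from proof of a deepfake.
fake_ears = 0.3 + 0.01 * np.random.randn(3000)   # simulated clip: eyes never close
print(blink_rate(fake_ears, fps=30))
```

A heuristic like this is only one weak signal among many, which is why the harder forensic work described below matters.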

This is such a hard problem that DARPA is throwing money at researchers to find better ways to authenticate video. However, because GANs can themselves be trained to learn how to evade such forensics, it’s not clear that this is a battle we can win.

“Theoretically, if you gave a GAN all the techniques we know to detect it, it could pass all of those techniques,” David Gunning, the DARPA program manager in charge of the project, told MIT Technology Review. “We don’t know if there’s a limit. It’s unclear.”

If we are unable to detect fake videos, we may soon be forced to distrust everything we see and hear, critics warn. The internet now mediates every aspect of our lives, and an inability to trust anything we see could lead to an “end of truth.” This threatens not only faith in our political system but, over the longer term, our faith in shared objective reality. If we can’t agree on what is real and what is not, alarmists lament, how can we possibly debate policy issues?

 

How do we regulate deepfakes?

Are deepfakes legal? It’s a difficult question with no easy answer. There’s the First Amendment to consider, but also intellectual property law and privacy law.

However, when it comes to political speech that is not of an abusive nature, the lines get blurry. The First Amendment protects the right of a politician to lie to people. It protects the right to publish wrong information, by accident or on purpose.

Early examples of deepfakes (from the 1920s)

Think fake news videos, of the political deepfake variety, are a new thing under the sun? Think again. Faking newsreel footage is nearly as old as the newsreel itself.

At a time when film could take weeks to cross an ocean, filmmakers would dramatize earthquakes or fires with tiny sets to make the news more lifelike. In the 1920s, sending black-and-white photographs over transoceanic cables was the latest rage, and filmmakers would use those genuine photographs as the basis for their staged scenes of destruction.

It was different by the 1930s, when audiences of these dramatizations tended to assume they were watching genuine footage.

 
