15/05/2021

DSRN Blogs


Advancement in deepfake technology


Realistic videos of people doing things that never really happened have become shockingly easy to create. A deepfake used to take weeks of intense computer work; now, with just a photo and an iPhone app, you can create a video of any face saying, or singing, whatever you want. As these tools spread to smartphones, online harassment and disinformation may get much worse. We aren’t ready for what happens next, and now is the time to put in some guardrails.

The technology to create these “deepfakes” has arrived on smartphones. It’s simple, fun … and also troubling.

The past few months have brought advances in this controversial technology. A few years ago, deepfake videos, named after the “deep learning” artificial intelligence used to generate faces, required a Hollywood studio or at least a seriously powerful computer. Then around 2020 came apps, like one called Reface, that let you map your own face onto a clip of a celebrity.

Now with a single source photo and zero technical expertise, an iPhone app called Avatarify lets you control the face of another person like a puppet. Using your phone’s selfie camera, whatever you do with your own face happens on theirs. Avatarify doesn’t make videos as sophisticated as the professional Tom Cruise fakes that have been circulating on TikTok, but it has been downloaded more than 6 million times since February alone.

Another app for iPhone and Android devices called Wombo turns a straight-on photo into a funny lip-sync music video. It generated 100 million clips just in its first two weeks.

And MyHeritage, a genealogy website, lets anyone use deepfake tech to bring old still photos to life. Upload a shot of a long-lost relative or friend, and it produces a remarkably convincing short video of them looking around and smiling. Even the little wrinkles around the eyes look real. MyHeritage calls the feature “Deep Nostalgia,” and users have reanimated more than 65 million photos in the past four weeks.

These deepfakes may not fool everyone, but it’s still a cultural tipping point we aren’t ready for. Forget laws to keep fakes from running amok; we hardly even have social norms for this stuff.

All three of the latest free services say they’re mostly being used for positive purposes: satire, entertainment and historical re-creations. The problem is, we already know there are plenty of bad uses for deepfakes, too.

“It’s all very cute when we do this with grandpa’s pictures,” says Michigan State University responsible-AI professor Anjana Susarla. “But you can take anyone’s picture from social media and make manipulated images of them. That’s what’s concerning.”

“You must make sure that the audience is aware this is synthetic media,” says Gil Perry, the CEO of D-ID, the tech company that powers MyHeritage’s deepfakes. “We have to set the guidelines, the frameworks and the policies for the world to know what is good and what is bad.”

The technology to digitally alter still images, such as Adobe’s Photoshop editing software, has been around for decades. But deepfake videos pose new problems: they can be weaponized, particularly against women, to create humiliating, nonconsensual fake videos.


In early March, a woman in Bucks County, Pa., was arrested on allegations she sent her daughter’s cheerleading coaches fake photos and video of her rivals to try to get them kicked off the squad. Police say she used deepfake tech to manipulate photos of three girls on the Victory Vipers squad to make them look like they were taking drugs.

“There’s potential harm to the viewer. There’s harm to the subject of the thing. And then there’s a broader harm to society in undermining trust,” says Deborah Johnson, emeritus professor of applied ethics at the University of Virginia.

Social networks say deepfakes haven’t been a major source of problematic content. We shouldn’t wait for them to become one.

It’s probably not realistic to think deepfake tech could be successfully banned. One 2019 effort in Congress to forbid some uses of the technology faltered.

But we can insist on some guardrails from these consumer apps and services, the app stores promoting them and the social networks making the videos popular. And we can start talking about when it is and isn’t okay to make deepfakes — including when that involves reanimating grandpa.

Installing guardrails

Avatarify’s creator, Ali Aliev, a former Samsung engineer in Moscow, says he’s also concerned that deepfakes could be misused. But he doesn’t believe his current app will cause problems. “I think the technology is not that good at this point,” he told me.

That doesn’t put me at ease. “They will become that good,” says Mutale Nkonde, CEO of the nonprofit AI For the People and a fellow at Stanford University. The way AI systems learn from being trained on new images, she says, “it’s not going to take very long for those deepfakes to be really, really convincing.”


Avatarify’s terms of service say it can’t be used in hateful or obscene ways, but the app has no system to check. Moreover, the app itself doesn’t limit what you can make people say or do. “We didn’t limit it because we are looking for use cases — and they are mainly for fun,” Aliev says. “If we are too preventive then we could miss something.”

Hany Farid, a computer science professor at the University of California at Berkeley, says he has heard that move-fast-and-break-things ethos before from companies like Facebook. “If your technology is going to lead to harm — and it’s reasonable to foresee that harm — I think you have to be held liable,” he says.

What guardrails might mitigate harm?

Wombo’s CEO Ben-Zion Benkhin says deepfake app makers should be “very careful” about giving people the power to control what comes out of other people’s mouths. His app is limited to deepfake animations from a curated collection of music videos with head and lip movements recorded by actors. “You’re not able to pick something that’s super offensive or that could be misconstrued,” Benkhin says.

MyHeritage won’t let you add lip motion or voices to its videos at all. However, it broke its own rule by using its tech to produce an advertisement featuring a fake Abraham Lincoln.

There are also privacy concerns about sharing faces with an app, a lesson we learned from 2019’s controversial FaceApp, a Russian service that needed access to your photos to use AI to make faces look old. Avatarify (also Russian) says it doesn’t ever receive your photos because it works entirely on the phone, but Wombo and MyHeritage do take your photos to process them in the cloud.

App stores that distribute this technology could be doing a lot more to set standards. Apple removed Avatarify from its China App Store, saying it violated unspecified Chinese law. But the app is available in the United States and elsewhere. Apple says it doesn’t have specific rules for deepfake apps aside from general prohibitions on defamatory, discriminatory or mean-spirited content.

Labels or watermarks that make it clear when you’re looking at a deepfake could help, too. All three of these services include visible watermarks, though Avatarify removes them with a $2.50-per-week premium subscription.

Even better would be hidden watermarks in video files that might be harder to remove and could help identify fakes. All three creators say they think that’s a good idea, but somebody needs to develop the standards.
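To make that idea concrete, here is a minimal sketch in Python of how a hidden mark could be written into, and read back out of, a single video frame. Everything in it is illustrative: the “SYNTHETIC” label, the file name and the simple least-significant-bit scheme are stand-ins, and a real standard would need a mark robust enough to survive compression and editing. But it shows why a shared standard matters: whoever embeds the mark and whoever checks for it have to agree on the exact scheme.

```python
# Illustrative only: hide a short label in the least-significant bits
# of a frame's pixels, then read it back out. A production watermark
# would need to survive re-encoding; a plain LSB mark does not.
import numpy as np
from PIL import Image

MARK = b"SYNTHETIC"  # hypothetical label a standards body might define

def embed_mark(frame: np.ndarray, mark: bytes = MARK) -> np.ndarray:
    """Write each bit of `mark` into the LSBs of the frame's first pixels."""
    bits = np.unpackbits(np.frombuffer(mark, dtype=np.uint8))
    flat = frame.reshape(-1).copy()
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits  # clear LSB, set our bit
    return flat.reshape(frame.shape)

def extract_mark(frame: np.ndarray, length: int = len(MARK)) -> bytes:
    """Read `length` bytes back out of the LSBs of the first pixels."""
    bits = frame.reshape(-1)[: length * 8] & 1
    return np.packbits(bits).tobytes()

if __name__ == "__main__":
    # "frame.png" stands in for one exported frame of a generated clip.
    frame = np.asarray(Image.open("frame.png").convert("RGB"))
    marked = embed_mark(frame)
    print(extract_mark(marked))  # b'SYNTHETIC'
```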

Social networks and deepfakes

Social networks, too, will play a key role in making sure deepfakes aren’t used for ill. Their policies generally treat deepfakes like other content that misinforms or could lead to people getting hurt. Facebook and Instagram’s policy is to remove “manipulated media,” though it has an exception for parodies. TikTok’s policy is to remove “digital forgeries” that mislead and cause harm to the subject of the video or society, such as inaccurate health information. YouTube’s “deceptive practices” policy prohibits technically manipulated content that misleads and may pose a serious risk.

But it’s not clear how good a job the social networks can do enforcing their policies when the volume of deepfakes skyrockets. What if, say, a student makes a mean joke deepfake of his math teacher, and the principal doesn’t immediately understand it’s a fake? All the companies say they’ll continue to evaluate their approaches.

One idea: Social networks could bolster guardrails by making a practice out of automatically labeling deepfakes — a use for those hidden watermarks — even if it’s not immediately obvious they’re causing harm. Facebook and Google have been investing in technology to identify them.
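If a watermark standard like the hypothetical one sketched earlier existed, that label check could be cheap. A platform’s upload pipeline might run the detector on a sampled frame and attach a label automatically rather than wait for user reports. Again, this is only a sketch, reusing the made-up extract_mark and MARK from above:

```python
# Hypothetical moderation hook: label synthetic media at upload
# time instead of blocking it. Reuses extract_mark and MARK from
# the earlier sketch.
def label_upload(sample_frame) -> dict:
    if extract_mark(sample_frame) == MARK:
        return {"label": "Synthetic media", "show_banner": True}
    return {}  # no mark found; other detection methods would still apply
```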

New norms

Whatever steps the industry and government take, deepfakes are also where personal tech meets personal ethics.

You might not think twice about taking or posting a photo of someone else. But making a deepfake of them is different. You’re turning them into a puppet.

“Deepfakes play with identity and agency, because you can take over someone else. You can make them do something that they’ve never done before,” says Wombo’s Benkhin.

Nkonde, who has two teenagers, says families need to talk about norms around this sort of media. “I think our norm should be ask people if you have their permission,” she says.

But that might be easier said than done. Creating a video is a free-speech right. And getting permission isn’t even always practical: One major use of the latest apps is to surprise a friend.

Permission to create a deepfake also isn’t entirely the point; the content you create matters, too.

“If someone in my family wants to take my childhood picture and make this video, then I would be comfortable with it in the context of a family event,” Susarla says. “But if that person is showing it outside an immediate family circle, that would make it a very uncomfortable proposition.”

The Internet is great at taking things out of context. Once a video is online, you cannot control how it is used or interpreted.