We often encounter videos of an event, such as an incident of police brutality, and we can generally trust that the event happened as shown in the video. But that may soon change, thanks to the arrival of so-called deepfakes: videos that use machine learning technology to depict a real person saying and doing things they never said or did.
This technology poses a specific threat to marginalized communities. If deepfakes cause society to move away from the current "seeing is believing" paradigm for video footage, that shift may disproportionately harm individuals whose stories society is already less likely to believe. The proliferation of video technology, in the form of footage recorded by bystanders and body cameras, has fueled a reckoning with police violence in the United States.
But in a world of pervasive, compelling deepfakes, the burden of proof to verify the authenticity of videos may shift onto the videographer, a development that might further undermine attempts to seek justice for police violence. To counter deepfakes, high-tech tools meant to increase trust in videos are in development. But these technologies, though well-intentioned, could end up being used to discredit already marginalized voices.
What makes video so powerful?
Why does it spur crowds to take to the streets and lawyers to present it in trials? It's because seeing is believing. Shot from angles different from the officers' point of view, bystander footage paints a fuller picture of what happened. Two people (on a jury, say, or watching a viral video online) might interpret a video in two different ways. But they have generally been able to take for granted that the footage is a true, accurate record of something that really happened.
That might not be the case for much longer. It's now possible to use AI to generate highly realistic "deepfake" videos showing real people saying and doing things they never said or did, like the recent viral TikTok videos depicting an ersatz Tom Cruise. You can also find realistic headshots of people who don't exist at all on the creatively named website thispersondoesnotexist.com. (There's even a cat version.)
Deepfake verification technology
Photo and video verification technology holds promise for confirming what's real in the age of "fake news." But it also causes concern. In a society where guilty verdicts for police officers remain elusive despite ample video evidence, is even more technology the answer? Or will it simply reinforce existing inequities?
The "ambitious goal" of adding verification technology to smartphone chipsets necessarily entails increasing the cost of production. Once such phones start to come onto the market, they will be more expensive than lower-end devices that lack this functionality, and not everyone will be able to afford them. Black Americans and poor Americans have lower rates of smartphone ownership than white Americans and high earners, and are more likely to have a "dumb" phone. (The same pattern holds with respect to educational attainment and urban versus rural residence.) Unless and until verification technology is baked into even the most affordable phones, it risks replicating existing disparities in digital access.
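To make the underlying idea concrete: capture-time verification schemes of the kind discussed above generally work by computing a cryptographic fingerprint of the footage the moment it is recorded, so that any later alteration is detectable. The sketch below is a minimal illustration using Python's standard library and a hypothetical device key; a real chipset implementation would use public-key signatures anchored in secure hardware, not a shared secret.

```python
import hashlib
import hmac

# Hypothetical device-specific secret key (an assumption for illustration).
# In an actual phone, this material would live in tamper-resistant hardware.
DEVICE_KEY = b"example-device-key"

def sign_video(video_bytes: bytes) -> str:
    """Produce a tamper-evident signature over the footage at capture time."""
    return hmac.new(DEVICE_KEY, video_bytes, hashlib.sha256).hexdigest()

def verify_video(video_bytes: bytes, signature: str) -> bool:
    """Check that the footage still matches the signature made at capture."""
    expected = hmac.new(DEVICE_KEY, video_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

footage = b"raw video frames..."
sig = sign_video(footage)
print(verify_video(footage, sig))         # True: footage unchanged
print(verify_video(footage + b"x", sig))  # False: footage was altered
```

The point of the sketch is the asymmetry it creates: footage from a phone with this capability carries proof of integrity, while footage from a phone without it carries none, which is exactly the access gap the paragraph above describes.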
That has implications for police accountability and, by extension, for Black lives. Primed by societal concerns about deepfakes and "fake news," juries may start expecting high-tech proof that a video is real, which may lead them to doubt the veracity of bystander videos of police brutality captured on lower-end phones that lack verification technology. Extrapolating from current trends in phone ownership, such bystanders are more likely to be members of marginalized racial and socioeconomic groups. These are the very people who, as witnesses in court, already face an uphill battle to be found credible by juries. That bias has long outlived the 19th-century rules that explicitly barred Black (and other non-white) people from testifying for or against white people on the grounds that their race rendered them inherently unreliable witnesses.
In short, skepticism of "unverified" phone videos may compound existing prejudices against the owners of those phones. This will matter less in situations where a diverse group of eyewitnesses records a police brutality incident on a variety of devices. But if there is only one bystander witness to the scene, the type of phone they own could prove significant.
The advent of mobile devices empowered Black Americans to force a national reckoning with police brutality. Ubiquitous, pocket-sized video recorders allow ordinary bystanders to document the epidemic of police violence. And since seeing is believing, those videos make it harder for others to keep denying that the problem exists. Yet even with the evidence thrust under their noses, juries keep acquitting police officers who kill Black people. Derek Chauvin's conviction this week is an exception to recent history: between 2005 and 2019, only 35 of the 104 law enforcement officers charged with murder or manslaughter in connection with an on-duty shooting faced punishment.
The fight against fake videos will complicate the fight for Black lives. Unless it is equally available to everyone, video verification technology may not help the movement for police accountability, and may even set it back. Technological guarantees of videos' trustworthiness will make little difference if they are accessible only to the privileged.