Picture this: You click on a PornHub clip and see your face appear in a pornographic video. The clip looks extremely realistic but, of course, you know it’s not real. Would your friends and family believe you, though? They won’t, and why should they? It looks real!
As we look back on the past year and consider what will affect us most in the years to come, one word stops us from flipping the page: ‘DEEPFAKE.’
Deepfake could be the end of truth.
So, what exactly is deepfake?
Deepfake is a technology pioneered in 2014 by Ian Goodfellow, a Ph.D. student who now works at Apple. Most deepfake technology is based on generative adversarial networks (GANs).
GANs were introduced in a paper by Ian Goodfellow and other researchers at the University of Montreal, including Yoshua Bengio, in 2014.
Deepfake videos have already found applications in politics, entertainment, and the arts. This dangerously deceptive technology is increasingly being used to undermine political opponents. For example, a deepfake video of Matteo Renzi, former prime minister of Italy, was broadcast on private channel Canale 5’s Italian satirical news show Striscia la notizia (“The News Slither” in English).
The video shows Renzi making an obscene arm gesture, sneering at a technician, and gurning at an off-screen audience member. The performance sent Italian Twitter into a frenzy, with many of Renzi’s critics expressing outrage at the ex-prime minister’s notorious diatribe.
The video is the latest in a series of examples of how pictures and footage of high-profile figures are being misused in AI-generated, hyper-realistic videos designed to fool humans, and of how those videos have started to affect the real world.
Back home, we saw a virally spread edit of Mark Zuckerberg giving a sinister speech about the power of Facebook and how the platform owns its users.
In entertainment, Scarlett Johansson remains a frequent subject of deepfake porn. Johansson has expressed concern about the phenomenon, describing the internet as a “vast wormhole of darkness that eats itself.”
These GANs still have a long way to go before they offer hyper-realistic results. What’s worrying is that GAN technology is no longer restricted to those with supercomputers. And given the number of selfies an average person takes in a lifetime, almost anyone could become a victim of deepfakes.
Deepfake: When Seeing Isn’t Believing
Samsung’s AI Center recently released research sharing the science behind deepfake technology. A GAN pits two neural networks against each other: a generator that creates images and a discriminator that classifies them as real or fake. Each network tries to outwit the other, and in the process the generator’s output grows steadily more convincing. A small clip of a person, or even something as little as a single image, can help a well-trained GAN expertly craft a deepfake.
“Crucially, the system is able to initialize the parameters of both the generator and the discriminator in a person-specific way, so that training can be based on just a few images and done quickly, despite the need to tune tens of millions of parameters,” said the researchers behind the paper. “We show that such an approach is able to learn highly realistic and personalized talking head models of new people and even portrait paintings.”
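The adversarial loop described above can be sketched in a few lines of code. The toy model below is an illustrative assumption, not the researchers’ actual system: it uses 1-D numbers instead of images, a linear generator, and a logistic-regression discriminator, but the training dynamic is the same — the discriminator learns to tell real samples from fakes while the generator learns to fool it.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def train_gan(steps=2000, batch=32, lr=0.05):
    # Generator: fake = a*z + b, trying to mimic "real" data ~ N(4, 1.25)
    a, b = 1.0, 0.0
    # Discriminator: d(x) = sigmoid(w*x + c), scores samples as real (1) or fake (0)
    w, c = 0.1, 0.0
    for _ in range(steps):
        real = rng.normal(4.0, 1.25, batch)
        z = rng.normal(0.0, 1.0, batch)
        fake = a * z + b

        # Discriminator step: push d(real) toward 1 and d(fake) toward 0
        d_real = sigmoid(w * real + c)
        d_fake = sigmoid(w * fake + c)
        grad_w = np.mean(-(1 - d_real) * real + d_fake * fake)
        grad_c = np.mean(-(1 - d_real) + d_fake)
        w -= lr * grad_w
        c -= lr * grad_c

        # Generator step: adjust (a, b) so the discriminator scores fakes as real
        z = rng.normal(0.0, 1.0, batch)
        fake = a * z + b
        d_fake = sigmoid(w * fake + c)
        grad_a = np.mean(-(1 - d_fake) * w * z)
        grad_b = np.mean(-(1 - d_fake) * w)
        a -= lr * grad_a
        b -= lr * grad_b
    return a, b

a, b = train_gan()
fake_mean = b  # mean of a*z + b for z ~ N(0, 1)
print(f"generator output is now centered near {fake_mean:.2f}")
```

Starting from fakes centered at 0, the generator drifts toward the real distribution’s mean of 4 — the same pressure that, at image scale and with deep networks, produces photorealistic faces.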
With over 30 nations actively engaged in cyberwar at any given time, the global cost of cybercrime and espionage has reached $600 billion. One of the deepest concerns about this technology is that the probability of it being misused to influence elections is alarmingly high.
We’re not talking about lip-syncing videos. Lawmakers and intelligence officials worry that deepfakes could become a minefield of fake videos: Elizabeth Warren advocating a complete ban on vaccines, Bernie Sanders denigrating people of color, Donald Trump admitting to corrupt deals with the Middle East.
Iran, Russia, and China each have their own reasons for uniting against the United States. The Kremlin is particularly adept at developing tools that infringe upon the freedom of speech of U.S. citizens. In the 2016 elections, Russia went so far as to create fake propaganda tools. If such malicious plans to disrupt democracy continue, we may well see the full exploitation of deepfake technology.
So, what is being done to combat deepfakes?
Not so long ago, the U.S. House of Representatives’ Intelligence Committee sent a letter to Google, Twitter, and Facebook asking how these sites planned to fight deepfakes in the upcoming election. The inquiry was hastened after the Director of National Intelligence provided an alarming report on deepfake technology.
Institutions like DARPA and researchers at the Max Planck Institute for Informatics, Carnegie Mellon, Stanford University, and the University of Washington are also experimenting with deepfake technology. Some of these organizations are looking at ways to put GAN technology to good use, and also how to fight it.
By feeding algorithms both real videos and deepfakes, researchers hope to solve this leg of the problem. Surprisingly, part of the proposed solution isn’t more tech. Researchers at the University of Oregon Institute of Neuroscience think that “a mouse model, given the powerful genetic and electrophysiological tools for probing neural circuits available for them, has the potential to powerfully augment a mechanistic understanding of phonetic perception.”
Does this mean that mice could help the next-generation of algorithms detect AI-generated media? We don’t know yet.
The problem with the fight is that deepfake technology is advancing at a rate where it’s hurting the basic foundation of truth and democracy. It’s difficult to bring back the trust we have in technology once we reach a point where it’s impossible to tell what is real and what is fake.
In October, California Governor Gavin Newsom signed two new pieces of legislation designed to fight deepfakes. The first, Assembly Bill 602 (AB-602), gives victims of synthetic pornographic deepfakes the right to sue the videos’ creators. The second, Assembly Bill 730 (AB-730), makes it illegal for anyone in California to share deepfakes of a political candidate within 60 days of an election. This law applies to the distribution of media designed to undermine a political opponent in order to swing voters.
The new legislation does not take effect until next year. It also does not apply to content that carries a disclaimer saying it has been manipulated, or to videos considered parody or satire.
The American Civil Liberties Union of California had urged Governor Gavin Newsom to veto the law.
“Despite the author’s good intentions, this bill will not solve the problem of deceptive political videos; it will only result in voter confusion, malicious litigation, and repression of free speech,” Kevin Baker, the organization’s legislative director, wrote in a letter to Newsom.
The new bills may hold deepfake creators accountable for their actions, but that alone will not be enough to discourage bad actors from creating deepfakes. Nor will it stop deepfakes from reaching all corners of the internet. Once a clip is online, it is hard to take down or to stop that piece of misinformation from spreading far and wide.