It’s 1995 and the internet is about to change the world, for better or worse. Fast forward to 2018, and the viral spread of fake videos, images and news stories online has become a threat to the digital generation. Facebook, YouTube, Twitter and other social networking sites are the biggest victims (or mediums?) of this unethical trend of online fakery.
To combat this sensitive issue, almost every major tech company has made the subject a priority on this year’s agenda. They have laid out extensive plans to detect fake images and videos, validate the authenticity of content, and remove fake profiles.
Yet these efforts seem minimal compared to the internet’s profound reach across every corner of the world. Millions of pieces of content are uploaded and shared on online platforms every day, and billions of users scroll through them daily.
It’s high time people had access to advanced technology that alerts them the moment a fake video starts playing on their devices. This is the need of the hour to safeguard global citizens’ faith in the internet.
Creating Fake Videos Is Child’s Play
Google engineer Supasorn Suwajanakorn screened multiple examples of deepfake videos at a TED Talk in April this year. He shared strikingly convincing virtual videos of ex-US Presidents George W. Bush and Barack Obama, and of a Holocaust survivor, delivering lines they never actually spoke.
For instance, take this line from Bush: “It’s a difficult bill to pass, because there’s a lot of moving parts, and the legislative processes can be ugly.” The other comes from Obama: “To help families refinance their homes, to invest in things like high-tech manufacturing, clean energy and the infrastructure that creates good new jobs.”
Impressive, yet none of it is real. Fascinated by the power of real human interactions, Suwajanakorn developed a machine learning tool that can build a model of a person, living or dead, using nothing but their photos and videos.
He explained the process: “We introduce a new technique that can reconstruct a high-detailed 3D face model from any image without ever 3D-scanning the person.” The Google engineer collected multiple photos of a single individual and, using a neural network, converted them into a fake video driven by audio input. “I let a computer watch 14 hours of pure Obama video, and synthesized him talking.”
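At a high level, the approach Suwajanakorn describes is a pipeline: reconstruct a 3D face model from ordinary photos, learn mouth motion from hours of footage, then render the model speaking new audio. The sketch below is purely illustrative; every function name and return value is a hypothetical placeholder invented for this example, not code from his actual system:

```python
# Illustrative sketch of the talking-head synthesis pipeline described above.
# All functions are hypothetical stand-ins, not Suwajanakorn's real code.

def reconstruct_3d_face(photos):
    """Fit a 3D face model to ordinary photos -- no 3D scan of the person."""
    # The real system fits facial geometry and texture to each image.
    return {"geometry": "face-mesh", "texture": "skin-map", "sources": len(photos)}

def learn_mouth_motion(video_hours):
    """Train a network on hours of footage to map audio to mouth shapes."""
    # Suwajanakorn trained on 14 hours of Obama video.
    return {"trained_on_hours": video_hours}

def synthesize(face_model, motion_model, audio):
    """Render the reconstructed face speaking the given audio."""
    return f"video of {face_model['sources']}-photo model saying: {audio}"

face = reconstruct_3d_face(["photo1.jpg", "photo2.jpg", "photo3.jpg"])
motion = learn_mouth_motion(video_hours=14)
clip = synthesize(face, motion, "a line the subject never actually said")
print(clip)
```

The point of the pipeline is the input requirement: nothing beyond publicly available photos and video of the target is needed.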
Beaming as the video played on the TED stage, he added, “So what you see here are controllable models of people I built from their internet photos. Now, if you transfer the motion from the input video, we can actually drive the entire party.”
However, like any technology, machine-learning-generated video cuts both ways: it has positive applications, and it can be misused.
Counterattacking the Fake Video Culture
Suwajanakorn is well aware of the damage this video-manipulation technology could do to humanity. That is why he is now working with the AI Foundation to build another application, “Reality Defender”, to spot deepfake videos.
Reality Defender is a fake-video detection tool in the form of a “web-browser plug-in that can flag potentially fake content automatically, right in the browser”.
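In spirit, such a plug-in wraps a fake-video classifier and flags any content whose score passes a confidence threshold. The sketch below is purely illustrative: the scoring function, the URLs, and the threshold are all invented for this example and have nothing to do with Reality Defender’s actual implementation:

```python
# Hypothetical flag-if-fake logic, illustrating the plug-in idea only.

def fake_score(video_url):
    """Invented stand-in for a classifier that scores how synthetic a clip looks."""
    # A real detector would analyse the video frames; here we use canned scores.
    known_scores = {
        "https://example.com/synthetic-speech.mp4": 0.93,
        "https://example.com/press-conference.mp4": 0.04,
    }
    return known_scores.get(video_url, 0.5)

def flag_if_fake(video_url, threshold=0.8):
    """Mimic the plug-in behaviour: flag content scoring above the threshold."""
    score = fake_score(video_url)
    return {"url": video_url, "score": score, "flagged": score >= threshold}

result = flag_if_fake("https://example.com/synthetic-speech.mp4")
print(result["flagged"])  # → True
```

The design choice worth noting is that flagging happens automatically at view time, in the browser, rather than requiring the viewer to actively run a check.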
At the same time, the Google engineer urged the developer community to make “virtual video creation” technology difficult, risky and cost-prohibitive.
“There is a long way to go before we can effectively model people,” said Suwajanakorn in his TED Talk. “It’s very important that we make everyone aware of what’s currently possible so we can have the right assumption and be critical about what we see.”