From celebrity pornography to problematic politicians, a new face-swapping technology is rippling through digital media, fuelling the plague of fake news.
A “deepfake” is a fake but convincing video made using artificial-intelligence software. It relies on deep learning: artificial neural networks, layers of interconnected nodes that run computations on input data, let users essentially ‘train’ an algorithm to produce a convincing blend of an original video and newly introduced footage.
Utilising Google image searches, stock images, and YouTube videos, this AI software can effortlessly manipulate videos. Common examples range from Gal Gadot’s face appearing in pornography to influential public figures apparently saying things that tarnish their reputations.
Digital experts have expressed concern over the difficulty of distinguishing real videos from fakes, the rate at which the technology is improving, and the growing demand for it.
The New York Times recently referred to deepfakes as “one of the newest forms of digital media manipulation” and “one of the most obviously mischief-prone”, going on to note the technology’s potential to “smear politicians, create counterfeit revenge porn or frame people for crimes”.
Researcher Aviv Ovadya has even gone so far as to predict that “such technology could be used to manipulate diplomacy, and even goad countries into making decisions based on fake information”.
U.S. Senator Mark Warner recently stated: “this ultimately begs the question — how do you maintain trust in a digital-based economy when you may not be able to believe your own eyes anymore?”
We spoke to Tom White, an advocate for the disclosure of media altered with AI or similar techniques, and Senior Lecturer in Media Design at Victoria University of Wellington. Tom has created multiple tools for manipulating media, such as Toposketch, a sketch-based interface for generating animations, and believes that one of his students may, in 2016, have created the first ‘deepfake’ video (to his knowledge).
“Deepfakes introduce two types of threats: the most immediate is the danger that someone will use the technology to spread specific false information. However, perhaps more damaging over time is the threat caused by the general erosion of credibility of recorded media in general - which can be equally used to claim real events never happened, for example: in the US Trump has begun claiming that the Access Hollywood tape is a fake,” says Tom.
“It is difficult to know what media being consumed online is real, so it is useful to be suspicious and to encourage others to be as well. Since in many contexts (like research) there is no rationale to hide the fact that the media is fake, I have proposed (and use) a ‘FakeMark’ indicator be added to signal that the media has been altered.”
“It is becoming increasingly easy for the average person to add DeepFake face swapping to videos. In fact, the technology has not changed much in the past year, but this has become an issue in recent months because the tools to make these types of videos have made the techniques more accessible. For now, the image quality on the videos is still relatively low - there are not yet any HD DeepFake videos being created.”