The current state of deepfakes: A call for increased regulation, comprehensive AI education, and tech intervention

Published on April 15, 2020

As House Intelligence Committee Chairman Adam Schiff recently declared at a congressional hearing, “Advances in AI and machine learning have led to the emergence of advanced, doctored types of media—so-called ‘deepfakes’—that enable malicious actors to foment chaos, division, or crises. And they have the capacity to disrupt entire campaigns, including that of the Presidency.” According to the BBC, criminals have already used deepfake audio to impersonate CEOs and push through fraudulent wire transfers.

Despite its real potential for harm, deepfake technology is merely an advanced version of existing technologies, such as Adobe Photoshop and video editing software. Every new technology brings a host of detractors who emphasize its potential for misuse. It’s important to remember that the technology itself is agnostic; it can be used to help or to harm. David Doermann, director of the Artificial Intelligence Institute at the University at Buffalo, says, “There’s nothing fundamentally wrong, or evil, about the underlying technology; like basic image and desktop editors, deepfakes [are] only a tool. And there are a lot more positive aspects of generative networks than there are negative ones.” That said, deepfake audio and video do give cause for concern, especially when they are combined with social media platforms.

When deepfake video is shared on social media, the files are shrunk and compressed, making it far more difficult to detect that the footage isn’t authentic. Attribution data and trace evidence are destroyed even as the synthetic media spreads like wildfire. This can have serious consequences. Considering the possibility of a deepfake video going viral the night before an IPO, BU Law Professor Danielle Citron laments, “The market will respond far faster than we can debunk it.” The proliferation of deepfake media will also lead to what Citron calls “the liar’s dividend”: as people begin to doubt whether any content is real, liars can dismiss genuine recordings of things they actually said as fake. Addressing this trust decay, Citron says, “We’ve already seen the ‘liar’s dividend’ happen in practice from the highest of the bully pulpits. So, I think we’ve got a real problem on our hands.”
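To make the compression problem concrete, here is a minimal sketch, not a forensic tool, of how platform-style recompression erases the fine pixel detail that many detection methods depend on. It assumes Pillow and NumPy are installed, and the input file name is a hypothetical placeholder:

```python
# A rough illustration, not a forensic tool: re-encoding a frame the way a
# social platform might shows how recompression destroys the fine pixel
# detail many deepfake detectors rely on. Assumes Pillow and NumPy;
# "frame.png" is a hypothetical input file.
import io

import numpy as np
from PIL import Image

original = Image.open("frame.png").convert("RGB")

for quality in (95, 75, 50, 25):
    # Re-encode at decreasing JPEG quality, as a sharing platform might.
    buffer = io.BytesIO()
    original.save(buffer, format="JPEG", quality=quality)
    recompressed = Image.open(buffer).convert("RGB")

    # Mean absolute per-pixel change: a crude proxy for how much fine
    # detail (including manipulation artifacts) the recompression erased.
    diff = np.abs(
        np.asarray(original, dtype=np.float32)
        - np.asarray(recompressed, dtype=np.float32)
    ).mean()
    print(f"JPEG quality {quality}: mean per-pixel change = {diff:.2f}")
```

The per-pixel change grows as the quality setting drops, which is exactly the detail a forensic tool would have wanted to inspect.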

Software companies are leading the fight

In December 2019, Facebook, along with Microsoft, the Partnership on AI, and a group of academics, launched the Deepfake Detection Challenge, offering $10m in prizes to contributors who can help develop deepfake detection software. The contest wraps up in 2020. A handful of software companies are focusing on media manipulation detection as well, including Deeptrace Labs in Amsterdam, ZeroFox in Baltimore, and Truepic in San Diego. The Department of Defense’s Defense Advanced Research Projects Agency (DARPA) is also working vigilantly in the space through its Media Forensics (MediFor) program.

Enterprises can protect themselves from the imminent threat of deepfake audio and video

Part of the solution will be comprehensive AI education. Although legislators may eventually require search engines and social media companies to identify and watermark synthetic media, we need to educate our own employees about deepfake technologies as well. Deepfake audio, in particular, is becoming increasingly prevalent, so it’s important for employees to be wary of any phone call that doesn’t sound quite right; such calls could be social engineering attempts. As for deepfake videos, they can often be identified by blurriness and changes in skin tone around the perimeter of the facial region, as well as by unnatural movements and lighting. A simple sketch of that perimeter-blur check appears below.
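As an illustration only, the following is a minimal sketch of the perimeter-blur heuristic described above, assuming OpenCV and NumPy are available; the input file name and the 0.5 sharpness ratio are hypothetical placeholder values, and a real screening pipeline would pair such heuristics with purpose-built detection models:

```python
# A minimal sketch of the perimeter-blur heuristic, not a production
# detector. Assumes OpenCV (opencv-python) and NumPy; "still.jpg" and the
# 0.5 sharpness ratio are hypothetical placeholders.
import cv2
import numpy as np

frame = cv2.imread("still.jpg")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Locate faces with the Haar cascade that ships with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
    # Take a rectangle slightly larger than the face box, so the difference
    # between the two is the "perimeter band" where blending artifacts appear.
    pad = max(8, w // 8)
    y0, y1 = max(0, y - pad), min(gray.shape[0], y + h + pad)
    x0, x1 = max(0, x - pad), min(gray.shape[1], x + w + pad)

    # Variance of the Laplacian is a cheap sharpness proxy: low variance
    # means little fine detail, i.e. blur.
    lap = cv2.Laplacian(gray[y0:y1, x0:x1], cv2.CV_64F)
    band = np.ones(lap.shape, dtype=bool)
    band[y - y0:y - y0 + h, x - x0:x - x0 + w] = False  # mask out the face itself

    face_sharpness = lap[~band].var()  # detail inside the face
    band_sharpness = lap[band].var()   # detail in the ring around it

    # A blended face boundary tends to be noticeably softer than the face
    # interior; flag large gaps for a human to review.
    if band_sharpness < 0.5 * face_sharpness:
        print(f"Face at ({x}, {y}): blurry perimeter, worth a closer look")
```

The design choice here is deliberate: variance of the Laplacian is a cheap, well-known sharpness proxy, so a script like this can triage suspicious frames for human review without any machine learning infrastructure.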
