The digital tools for manipulating images, sounds, and other information—already used for such things as making movies—are increasingly being democratized and, with that, misused, said panelists at the Information Technology & Innovation Foundation’s March 12 forum, “Responding to the Deepfakes Challenge.” But calls to regulate “deepfake” technologies that create deceptive sounds and images could have unintended negative consequences, they cautioned.
Michael Clauser of Access Partnership distinguished between cheap fakes, which are easily manipulated media, and deepfakes, which are produced by AI systems known as generative adversarial networks (GANs): one network generates fakes while a second tries to detect them, and each improves by competing against the other.
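Neither speaker walked through code, but the adversarial loop behind a GAN can be sketched in a few dozen lines. The toy example below (a hypothetical PyTorch sketch; the tiny networks and one-dimensional “data” are chosen purely for illustration) learns to mimic a simple number distribution rather than faces, yet the generator-versus-detector dynamic is the same one that powers deepfakes:

```python
# Toy GAN: a generator learns to produce fakes, a discriminator learns to
# detect them, and each improves by competing with the other.
import torch
import torch.nn as nn

def real_batch(n=64):
    # "Real" data: samples from a Gaussian centered at 4.0.
    return torch.randn(n, 1) * 1.5 + 4.0

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(),
                              nn.Linear(16, 1), nn.Sigmoid())
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # 1) Train the discriminator to score real samples 1 and fakes 0.
    real, fake = real_batch(), generator(torch.randn(64, 8)).detach()
    d_loss = (bce(discriminator(real), torch.ones(64, 1)) +
              bce(discriminator(fake), torch.zeros(64, 1)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Train the generator to make the discriminator score its fakes as real.
    fake = generator(torch.randn(64, 8))
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# After training, generated samples should cluster near the real mean of 4.0.
print(generator(torch.randn(1000, 8)).mean().item())
```

This mutual escalation is also why, as Clauser notes below, detecting deepfakes tends to require the same class of technology that creates them.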
These capabilities resemble movies’ special-effects and editing technologies, noted Ben Sheffner of the Motion Picture Association. One of the first misuses of the technology to arise was faked pornographic photos of famous actresses, for which victims are seeking legislative remedies. But to avoid First Amendment problems and harm to legitimate moviemaking, any such remedies should be narrowly targeted, he argued.
AI-generated deepfakes include not just manipulated video but also faked audio and “synthetic text” designed to spoof articles and YouTube comments, for instance, said Lindsay Gorman of the Alliance for Securing Democracy.
One example of audio fakery is technology that can take a recording of a sentence you’ve spoken and recreate your voice saying something else, Sheffner said. This technology will only get more sophisticated and easier to use, he added.
Deepfake technologies are being democratized so that the average person on the street can use them through apps developed in China, Clauser said, and the pace of innovation means kids will eventually be able to use them.
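The panel didn’t demonstrate any tools, but the accessibility point is easy to appreciate: open-source voice-cloning libraries now exist that need only a short speech sample and a few lines of code. The sketch below uses the Coqui TTS library as one illustrative example (the file names are placeholders, and the exact model name may vary by library version):

```python
# Illustrative voice-cloning sketch using the open-source Coqui TTS library
# (pip install TTS). "my_voice_sample.wav" is a placeholder for a few seconds
# of recorded speech from the voice to be imitated.
from TTS.api import TTS

tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")  # a voice-cloning model
tts.tts_to_file(
    text="Words the speaker never actually said.",
    speaker_wav="my_voice_sample.wav",   # voice to imitate
    language="en",
    file_path="cloned_output.wav",       # synthesized result
)
```

That so little effort is required is precisely what worries the panelists.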
These technologies pose a threat to democracy beyond election security, Gorman said, as citizens become less able to tell whether something is real and politicians become less able to refute the fakes. Trust will be broken. But should the technologies be regulated by governments?
Saleela Salahuddin, Facebook’s cybersecurity policy lead, outlined several of Facebook’s initiatives to address the deepfake challenge, including media partnerships such as one with Reuters to create an online educational tool that journalists globally can use to identify fakes. Facebook also employs third-party fact checkers and has established community standards to reject manipulated videos (other than parody and satire), she said.
Industry leaders are trying to educate policymakers about the dangers of regulating legitimate uses of the AI technologies that enable deepfakes, said Sheffner. First, victims of deepfakes already have recourse through existing laws, such as those covering defamation, fraud, infliction of emotional distress, false advertising, copyright infringement, and right of publicity. Second, the Supreme Court has already struck down attempts to regulate false speech, as it did with the Stolen Valor Act, which criminalized false claims about military honors.
Regulating the technology’s developers would also inhibit innovation for good, such as in cancer detection, Clauser said. And, ironically, it takes the same class of technology that creates deepfakes to identify and fight them, he observed.
Labeling faked media on platforms such as Twitter is another nonlegal remedy that would also protect free speech, Gorman said. One example she cited is a video of Democratic presidential candidate Joseph Biden that was edited to make it seem as though he was endorsing Donald Trump; it had been viewed 5 million times before Twitter labeled it as manipulated media.
Educating students about sources and fact checking will help, but media literacy alone won’t be enough to combat deepfakes, Gorman said.
Facebook has launched a million-dollar Deepfake Detection Challenge (ending March 31) to develop tools to detect manipulated content, and the winner will make the technology open source, Salahuddin said.
View event video or learn more at “Responding to the Deepfakes Challenge.”
Cindy Wagner is consulting editor for AAI Foresight and editor of Foresight Signals. Send feedback for this post to CynthiaGWagner@gmail.com.