Altering video footage has been possible for decades, but doing so has historically taken time, professional skills and a lot of money. Developments in areas like AI have now made it possible for anyone to cheaply create a convincing fake video, including those who might use it for malicious purposes. This reality, and some high-profile news stories that we've covered in this newsletter,
have brought deepfakes to the forefront of political conversations —
and the latest debate surrounds what to do about them.
The obvious answer: create technology that can automatically detect forgeries. Unfortunately, reliable detection is still well out of reach. And until the tech exists, legislation may be the last line of defense. New laws, or the lack of them, will shape how deepfakes spread and who is held accountable, and they're something we should all keep an eye on.
Lawmakers including Sen. Ben Sasse
(R-Neb.), Sen. Mark Warner (D-Va.) and House Intelligence Chairman Adam Schiff (D-Calif.) have looked into deepfake legislation, with Sasse even introducing a bill to criminalize the malicious creation and distribution of deepfakes. Their main argument is that deepfakes pose a national security risk and, with the right fake threats, could even lead to war. Not to mention they can hurt the reputations of business and government leaders, celebrities
and everyday people. The proposed legislation includes fines or jail time for individual deepfake creators, and penalties for distributors like Facebook — but only if they know they're distributing a deepfake.
Those who oppose legislation, including law professors Danielle Citron and Mary Anne Franks, think regulation could do more harm than good. They worry legislation could scare platforms into taking down everything that's reported as a deepfake — likely deleting legitimate posts in the process. Others, like David Greene, civil liberties director at the Electronic Frontier Foundation, believe civil liberties could be at risk. He worries that making malicious deepfakes a federal crime could endanger protected speech like parody videos. People often use deepfake technology to create comedy like the above video, but the lines between harmless fun and harmful intent are often blurry.
Until a deepfake detector is proven possible (check out the people working on it in today's Deep Take), regulation will be the only viable option. Given the gravity of the issue and its potential impact on communicators, this is something we should all be watching closely.