By Dustin Niles
One of the most drastic changes of the digital age is how digital media has reshaped our democracy and revolutionized how we talk about politics. Suddenly there are global platforms on which world events are discussed, and often even take place. With such a dominating influence, how “reality” is asserted on these platforms has been hotly contested. Photography is an especially powerful medium for asserting reality. After all, it’s a photo, so it has to be real, right? Yet with digital tools, photos can be anything but. Tools to manipulate photos have been around almost as long as the medium itself (Farid, n.d.). Digital manipulation is most closely associated with Adobe Photoshop, which was first developed in 1988 and remains the dominant image editing application today (Allen, 2019). But digital image manipulation has come a long way since then, especially with regard to video.
Recently, political coverage has been haunted by the emergence of “deep fakes”: digital videos altered with deep learning software to be disturbingly realistic, and which are becoming easier and easier for amateurs to make (Roose, 2018). Deep fakes have been put to a variety of nefarious purposes, but one of the most concerning is depicting political leaders saying things they never said (Agarwal et al., 2019). A manipulated video of Nancy Pelosi that appeared to show her slurring her words was viewed more than 3 million times (Harwell, 2019). With the 2020 election fast approaching, such videos could have disastrous consequences if used recklessly.
As deep fake technology becomes more advanced, researchers are finding it harder and harder to debunk deep fakes and expose them to the public (Harwell, 2019). When researchers debunk a video, they necessarily reveal the clues that led them to that conclusion, handing video makers exactly the information they need to make the next fake more difficult to detect (Harwell, 2019). “We are outgunned,” said Hany Farid, a computer-science professor and digital-forensics expert at the University of California at Berkeley. “The number of people working on the video-synthesis side, as opposed to the detector side, is 100 to 1” (Harwell, 2019).
Very recently, social media platforms have taken steps to ban deep fake videos on their sites. Facebook, for instance, published an official blog post on January 6th announcing that it would now remove videos that have “been edited or synthesized – beyond adjustments for clarity or quality – in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say. [Or are] the product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic” (Bickert, 2020). Instagram has taken a similar approach, announcing that it would allow third-party fact-checkers to vet posts on the platform: “When content has been rated as false or partly false by a third-party fact-checker, we reduce its distribution by removing it from Explore and hashtag pages. In addition, it will be labeled so people can better decide for themselves what to read, trust, and share. When these labels are applied, they will appear to everyone around the world viewing that content – in feed, profile, stories, and direct messages” (Instagram, 2019).
While it remains to be seen how effective these measures will be, the fact that social media platforms and video experts are watching is promising. That comfort, however, rests on the assumption that deep fake videos will remain detectable. If they don’t, they could undermine our collective sense of reality one video at a time.
Images define not only our sense of the outside world but our sense of ourselves as well. In the next blog post, I’ll discuss how the manipulated images we see affect body image and other aspects of self-perception.
Agarwal, S., Farid, H., Gu, Y., He, M., Nagano, K., & Li, H. (2019). Protecting world leaders against deep fakes. Retrieved from https://farid.berkeley.edu/downloads/publications/cvpr19/cvpr19a.pdf
Allen, J. (2019, Nov. 13). What is Photoshop? Lifewire. Retrieved from https://www.lifewire.com/what-is-photoshop-4688397
Bickert, M. (2020, Jan. 6). Enforcing against manipulated media. Facebook Newsroom. Retrieved from https://about.fb.com/news/2020/01/enforcing-against-manipulated-media/
Farid, H. (n.d.). Photo tampering throughout history [PDF file]. Retrieved from https://web.archive.org/web/20150908155915/http://www.cc.gatech.edu/~beki/cs4001/history.pdf
Harwell, D. (2019, June 12). Top AI researchers race to detect ‘deepfake’ videos: ‘We are outgunned.’ The Washington Post. Retrieved from https://www.washingtonpost.com/technology/2019/06/12/top-ai-researchers-race-detect-deepfake-videos-we-are-outgunned/
Instagram. (2019, Dec. 16). Combatting misinformation on Instagram. Instagram Press. Retrieved from https://instagram-press.com/blog/2019/12/16/combatting-misinformation-on-instagram/
Roose, K. (2018, March 4). Here come the fake videos, too. The New York Times. Retrieved from https://www.nytimes.com/2018/03/04/technology/fake-videos-deepfakes.html