Seeing is no longer believing

Cheng Xi Tsou
2 min read · Dec 7, 2020

In an increasingly digital world, we gather information by reading text, watching videos, listening to audiobooks, and more. While it is easy to lie on the internet or Photoshop an image, faking a video is much harder to pull off. In recent years, however, advances in machine learning and artificial intelligence have enabled a whole new level of potential deception: synthesized videos known as deepfakes.

A well-known example is a deepfake video in which Barack Obama appears to warn the public about the dangers of enemies being able to make anyone say anything at any time, then proceeds to insult Donald Trump. Of course, the speaker is in fact Jordan Peele, with Obama's face deepfaked over his own. While this may seem harmless in an entertaining context, the potential for deepfakes to cause harm is enormous. Imagine if the clip of Obama blatantly insulting Trump were taken out of context, or if a political figure were deepfaked into something worse, such as committing a crime; the repercussions could be irreparable.

In her paper "Deepfakes and the Epistemic Backstop," Regina Rini describes video recording as an "epistemic backstop" for testimony. When we know we are being recorded, we tend to speak with more sincerity and honesty, which makes our testimony stronger. Recordings can also be made unobtrusively, providing a way to check testimony about past events; there are many instances where a public figure has been forced to walk back a false claim when video evidence showed otherwise.

With the rising prevalence of deepfakes, that epistemic backstop no longer seems as credible as it once was. As with any verbal or written testimony, the possibility of falsification exists, even if it is harder to pull off. But while a falsified verbal testimony can be cross-checked with relative ease, a high-quality deepfake can be hard to spot. The mere existence of the technology also gives individuals an out: they can dismiss seemingly incriminating video evidence as a "deepfake," and there is no easy way to prove otherwise.
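The "hard to spot" problem is exactly what automated deepfake detectors try to solve, most commonly by classifying individual video frames as real or synthetic and aggregating the scores. As a rough illustration, here is a minimal Python sketch of that frame-level approach. The checkpoint name deepfake_detector.pt is hypothetical (real detectors are trained on labeled datasets such as FaceForensics++), and a production system would also crop faces and aggregate scores far more carefully; this is a sketch of the idea, not a working detector.

```python
import cv2
import torch
import torch.nn as nn
from torchvision import models, transforms

# A standard image classifier re-headed for a binary real/fake decision.
# "deepfake_detector.pt" is a hypothetical checkpoint; real detectors are
# trained on labeled datasets such as FaceForensics++.
model = models.resnet18(pretrained=False)
model.fc = nn.Linear(model.fc.in_features, 2)  # [real, fake] logits
model.load_state_dict(torch.load("deepfake_detector.pt", map_location="cpu"))
model.eval()

preprocess = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def fake_probability(video_path: str, sample_every: int = 30) -> float:
    """Average the per-frame probability of 'fake' over sampled frames."""
    cap = cv2.VideoCapture(video_path)
    probs, i = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % sample_every == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # OpenCV loads BGR
            x = preprocess(rgb).unsqueeze(0)
            with torch.no_grad():
                logits = model(x)
            probs.append(torch.softmax(logits, dim=1)[0, 1].item())
        i += 1
    cap.release()
    return sum(probs) / len(probs) if probs else 0.0
```

Even with a scheme like this, detection is an arms race: each new generation of synthesis models is trained, in effect, to fool the previous generation of classifiers.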

A dramatic example comes from Gabon, a country in Central Africa. In late 2018, President Ali Bongo, who had not been seen in public for months following an illness, released a video address:

(The relevant statement begins at 3:48 in the video.)

Bongo's political opponents declared the video a deepfake, citing its odd appearance as evidence that something was wrong with the president. Within a week, the military attempted a coup.

As machine learning and artificial intelligence grow more advanced, the ability to create high-quality deepfakes is outpacing the technology to detect them. This raises the question: can videos still be used as a reliable source of testimonial evidence?
