What are deepfakes? Deepfakes are synthetic media in which a person in an existing image or video is replaced with someone else’s likeness. While the act of creating fake content is not new, deepfakes leverage powerful machine learning and artificial intelligence techniques to manipulate or generate visual and audio content that can deceive easily and is increasingly difficult to detect.
Deepfakes have only recently come into the spotlight, but they’ve been around for longer than you may realize. The technology behind them can be used for a wide range of applications that aren’t necessarily connected to fake pornography.
So, what are deepfakes? What makes them different from other kinds of fakes? And how can we detect them? Here’s everything you need to know about deepfakes so you can better understand this technology and its many uses.
An Introduction to Deepfakes
The idea of creating a deepfake—an artificial intelligence-generated image or video that appears incredibly realistic—may seem more like science fiction than reality. But advances in technology have made it possible to create not just realistic-looking images but also videos that are almost indistinguishable from reality, with relative ease and at low cost.
How do deepfakes work? The term comes from the underlying technology of deep learning, a form of AI. Deep learning algorithms, which teach themselves to solve problems when given large sets of data, are used to swap faces in video and other digital content to produce realistic-looking fake media.
In summary, this technology uses AI techniques such as deep learning and deep neural networks to create fake content that looks disturbingly real.
If you have never seen a deepfake video, watch the widely circulated clip of Barack Obama, or the one of Mark Zuckerberg bragging about having total control of billions of people’s stolen data.
How Does it Work?
There are many ways to create a deepfake, but the most commonly cited technique uses deep neural networks built around autoencoders to perform a face swap.
To start, you need a target video to serve as the base of the deepfake. You also need a large collection of footage of the person whose face you want to insert into that base video.
The autoencoder is a deep learning AI program that studies the video clips to understand what the person looks like from a variety of angles and environmental conditions, and then maps that person onto the individual in the target video by finding common features.
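The shared-encoder, per-identity-decoder layout described above can be illustrated with a deliberately tiny Python sketch. Everything here is invented for illustration—the "faces" are short random vectors, the networks are single linear layers, and all sizes and learning rates are toy values. Real face-swap systems use deep convolutional networks trained on thousands of cropped face images, but the swap trick at the end is the same: encode identity A, then decode with identity B's decoder.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, LATENT = 16, 4  # toy sizes; real systems use full face images

# One shared encoder plus one decoder per identity -- the classic
# face-swap autoencoder layout: the encoder learns features common to
# both faces, and each decoder learns to reconstruct one identity.
W_enc = rng.normal(scale=0.1, size=(LATENT, DIM))
W_dec_a = rng.normal(scale=0.1, size=(DIM, LATENT))
W_dec_b = rng.normal(scale=0.1, size=(DIM, LATENT))

def train_step(x, W_dec, lr=0.005):
    """One gradient step on squared reconstruction error for one 'face'."""
    global W_enc
    z = W_enc @ x                        # compress to shared latent features
    err = W_dec @ z - x                  # reconstruction residual
    g_dec = np.outer(err, z)             # gradient w.r.t. this decoder
    g_enc = np.outer(W_dec.T @ err, x)   # gradient w.r.t. shared encoder
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc

# Toy "faces": each identity is a fixed vector observed with a little noise.
face_a = rng.normal(size=DIM)
face_b = rng.normal(size=DIM)
for _ in range(300):
    train_step(face_a + 0.05 * rng.normal(size=DIM), W_dec_a)
    train_step(face_b + 0.05 * rng.normal(size=DIM), W_dec_b)

# The swap itself: encode identity A, decode with identity B's decoder.
swapped = W_dec_b @ (W_enc @ face_a)
print(swapped.shape)
```

Because both identities pass through the same encoder, the latent features it learns are the ones the faces share (pose, expression), which is what lets a decoder trained only on B render B's likeness in A's pose.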
Another machine learning technique, Generative Adversarial Networks (GANs), is also used to detect and improve any flaws in the deepfake: a discriminator network learns to spot generated frames, which pushes the generator to produce more convincing ones.
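The adversarial idea can be sketched in a deliberately minimal form: a one-dimensional "generator" learns to produce numbers that a "discriminator" cannot distinguish from real samples. All of the data, parameters, and learning rates below are invented toy values; real deepfake GANs pit deep image networks against each other in exactly this alternating fashion.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(t):
    t = np.clip(t, -60.0, 60.0)  # avoid overflow in exp
    return 1.0 / (1.0 + np.exp(-t))

# "Real" data is just numbers near 4.0; the generator starts out
# producing numbers near 0.0 and must learn to mimic the real ones.
REAL_MEAN, BATCH, LR = 4.0, 64, 0.05
a, b = 1.0, 0.0      # generator params: fake = a * noise + b
w, c = 0.0, 0.0      # discriminator params: D(x) = sigmoid(w * x + c)

for _ in range(2000):
    real = rng.normal(REAL_MEAN, 0.5, BATCH)
    fake = a * rng.normal(size=BATCH) + b

    # --- discriminator step: label real as 1, fake as 0 ---
    s_real, s_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    gw = np.mean(-(1 - s_real) * real) + np.mean(s_fake * fake)
    gc = np.mean(-(1 - s_real)) + np.mean(s_fake)
    w -= LR * gw
    c -= LR * gc

    # --- generator step: push fakes toward D(x)=1 (non-saturating loss) ---
    z = rng.normal(size=BATCH)
    fake = a * z + b
    d_fake = -(1 - sigmoid(w * fake + c)) * w   # dLoss/dfake
    a -= LR * np.mean(d_fake * z)
    b -= LR * np.mean(d_fake)

fake_mean = float(np.mean(a * rng.normal(size=10_000) + b))
print(f"generated mean: {fake_mean:.2f}  (real mean is {REAL_MEAN})")
```

The generator never sees the real data directly; it only gets the discriminator's gradient, which is exactly how a GAN "detects and improves flaws" in a deepfake pipeline.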
Plenty of software for this is available on GitHub. Some of it is just for fun, while some is far more likely to be used maliciously. A Business Insider report lists the Chinese app Zao, DeepFaceLab, FaceSwap, and DeepNude, a now-removed app that generated fake nude images of women.
Can You Detect Deepfake Videos?
We already know that deepfake porn is a thing, and now we’re starting to hear about deepfake news. Deepfake audio is a reality as well. So how can you detect these types of fraudulent videos in your feed?
There is no absolute “yes” or “no” answer to the question of whether deepfakes can be detected. A study conducted at the University of Sydney suggests that even though people may not be able to consciously tell which faces are real and which are fake, their neural activity can tell the difference.
According to a press release by the University of Sydney, using behavior and brain imaging techniques including Electroencephalography (EEG), the study concludes that:
When looking at participants’ brain activity, the University of Sydney researchers found deepfakes could be identified 54 percent of the time. However, when participants were asked to verbally identify the deepfakes, they could only do this 37 percent of the time.
There are some deepfake detectors, but they are not yet very reliable. Experts fear that deepfakes will become far more sophisticated and harder to detect as the technology develops. But we can hope that as deepfake technology advances, detection methods will improve alongside it.
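Why do detectors stay well short of 100%? A toy experiment makes the point: if the telltale "artifact scores" of real and fake frames overlap, even the best possible threshold cannot separate them cleanly. The scores below are synthetic numbers invented purely for illustration, not the output of any real detector.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000
# Hypothetical one-number "artifact score" per video frame: fake frames
# score slightly higher on average, but the two distributions overlap
# heavily, which is exactly why automated detectors stay imperfect.
real_scores = rng.normal(loc=0.0, scale=1.0, size=n)
fake_scores = rng.normal(loc=1.0, scale=1.0, size=n)

# Simplest possible detector: threshold halfway between the class means.
threshold = (real_scores.mean() + fake_scores.mean()) / 2
correct = np.sum(real_scores < threshold) + np.sum(fake_scores >= threshold)
accuracy = correct / (2 * n)
print(f"toy detector accuracy: {accuracy:.2f}")
```

With this much overlap the detector lands around 70% accuracy no matter how the threshold is tuned; shrinking the overlap (better forensic features) is the only way to do better, which is why the detection arms race tracks the generation arms race.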
What are the Potential Risks?
This technology originated in entertainment but now threatens to have far-reaching consequences. The most talked-about problem is the spread of fake pornography.
In September 2019, the AI firm Deeptrace found 15,000 deepfake videos online. A staggering 96% of them were pornographic, and 99% of those mapped faces of female celebrities onto porn stars.
Now that we know what deepfakes are, it is easy to see how far-reaching the consequences of this technology could be for governments, elections, and democracy around the world through the spread of fake news. And these are not merely potential risks; they have already materialized.
Some research suggests that in 2016, a Russian troll farm deployed over 50,000 bots on Twitter, using deepfakes as profile pictures, to try to influence the outcome of the US presidential election. By one estimate, they may have boosted Donald Trump’s votes by over three percent.
More recently, a deepfake video of Volodymyr Zelensky urging his troops to surrender to Russian forces surfaced on social media.
A deepfake of Ukrainian President Volodymyr Zelensky calling on his soldiers to lay down their weapons was reportedly uploaded to a hacked Ukrainian news website today, per @Shayan86 pic.twitter.com/tXLrYECGY4
— Mikael Thalen (@MikaelThalen) March 16, 2022
What about deepfake audio? Deepfake audio can now be created using deep learning algorithms with just a few hours (or in some cases, minutes) of recordings of the person whose voice is being cloned. Once a model of the voice exists, that person can be made to say anything.
In one September 2019 incident, an employee transferred $240,000 to thieves impersonating the company’s CEO after receiving a deepfake phone call.
The risks of deepfakes are enormous, and the public should equip themselves with in-depth knowledge to guard against this threat. With the ever-increasing use of social media, any one of us can become the target of a malicious actor.