What are Deepfakes: Everything You Need To Know In 2023
Synopsis
- Deepfake technology has gained significant attention in recent years, allowing users to create realistic and often convincing videos by superimposing someone’s face onto another person’s body.
- Deepfake technology uses powerful neural networks to analyze and synthesize data, enabling the software to learn from existing images or videos and generate convincing fake content.
- While deepfake technology has both positive and negative implications, it is essential to approach it responsibly and ethically.
What Are Deepfakes?
Deepfakes are highly realistic and often deceptive digital media, such as videos, images, or audio recordings, that are created or manipulated using advanced artificial intelligence techniques, particularly deep learning algorithms. The term “deepfake” is derived from “deep learning” and “fake.”
Deepfake technology utilizes deep neural networks, a type of artificial neural network with multiple layers, to analyze and synthesize large amounts of data. By training on extensive datasets of images, videos, or audio samples, these algorithms can learn to generate or modify content that appears convincingly real.
The most common application of deepfakes involves swapping faces or altering the appearance and actions of individuals in existing media. For example, using deepfake algorithms, one can superimpose the face of one person onto the body of another in a video, making it appear as though the target person is saying or doing things they never actually did. Deepfakes can also be used to change facial expressions, mimic voices, or generate entirely new content featuring individuals who never participated in the original media.
Audio can now be deepfaked as well, using Natural Language Processing (NLP), a component of Artificial Intelligence (AI). This makes it possible to create “voice clones” of famous celebrities and public figures. You might have heard or seen President Barack Obama calling President Donald Trump a “complete dipshit”, or Facebook CEO Mark Zuckerberg bragging about having “total control of billions of people’s stolen data”. If you have seen these, you have seen a deepfake.
What are the majority of deepfakes used for?
Many of them are pornographic, and much of the rest is face-swap spoofing. The AI firm Deeptrace found between 12,000 and 15,000 deepfake videos online in September 2019 alone. Its investigation found that close to 96% were pornographic, and almost 99% of those mapped the faces of female celebrities from around the world onto porn performers. The technology has since been packaged into simple software that lets unskilled people create deepfake photos and videos and spread them across the internet, pushing the problem beyond the celebrity domain and fueling revenge porn.
As Danielle Citron, a professor of law at Boston University, puts it: “Deepfake technology is being weaponized against women.” Beyond the porn there’s plenty of spoof, satire and mischief.
History of Deepfakes
The term “deepfake” first came into public consciousness around 2017, when an anonymous person on Reddit who went by the handle “Deepfakes” started the discussion forum “r/deepfakes.” The forum was devoted to videos featuring the faces of Hollywood actresses on the bodies of adult film stars. Similar forms of deepfake pornography quickly moved from fringe online sectors to more easily accessible and mainstream platforms.
According to The Guardian, deepfakes first emerged on the internet in 2017, when “a Reddit user of the same name posted manipulated porn clips on the site. The videos swapped the faces of celebrities – Gal Gadot, Taylor Swift, Scarlett Johansson and others – on to porn performers.”
In their March 2021 report, the threat intelligence company Sensity noted that “defamatory, derogatory, and pornographic fake videos account for 93%” of all deepfakes, most of which target women.
Technologies Required To Produce Deepfakes
Creating deepfakes is becoming easier, and the results more accurate and more widespread, as the following technologies are developed and enhanced:
- Generative adversarial network (GAN) technology, which pits a generator algorithm against a discriminator algorithm, underpins much of today's deepfake content.
- Convolutional neural networks analyze patterns in visual data. CNNs are used for facial recognition and movement tracking.
- Autoencoders are a neural network technology that identifies the relevant attributes of a target, such as facial expressions and body movements, and then imposes these attributes onto the source video (a minimal architecture sketch follows this list).
- Natural language processing is used to create deepfake audio. NLP algorithms analyze the attributes of a target’s speech and then generate original text using those attributes.
- High-performance computing provides the substantial processing power that training and rendering deepfakes require.
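To make the autoencoder idea concrete, here is a minimal sketch in PyTorch of the shared-encoder, two-decoder architecture behind classic face-swap tools. The layer sizes, the 64x64 input resolution, and the variable names are illustrative assumptions, not a reference to any particular product.

```python
# Minimal sketch of the shared-encoder / two-decoder autoencoder behind classic
# face-swap deepfakes. Sizes are illustrative assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )
    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

# One shared encoder learns identity-agnostic face structure (pose, expression);
# each decoder learns to render one specific person.
encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()

# Training (not shown): decoder_a reconstructs person A's faces, decoder_b person B's.
# Swapping: encode a frame of person A, then decode it with decoder_b, which renders
# person B's face with person A's pose and expression.
frame_of_a = torch.rand(1, 3, 64, 64)   # stand-in for a preprocessed face crop
swapped = decoder_b(encoder(frame_of_a))
```

The key design choice is that both decoders share one encoder, so the latent code captures pose and expression while each decoder specializes in rendering one identity.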
According to the U.S. Department of Homeland Security’s “Increasing Threat of Deepfake Identities” report, several tools are commonly used to generate deepfakes in a matter of seconds. Those tools include Deep Art Effects, Deepswap, Deep Video Portraits, FaceApp, FaceMagic, MyHeritage, Wav2Lip, Wombo and Zao.
How Are Deepfake Images & Videos Created?
Creating a deepfake video takes a few complex steps, each driven by AI algorithms.
While deepfakes can be created using a range of sophisticated techniques, one prevalent method is generative adversarial networks (GANs). GANs consist of two neural networks—the generator and the discriminator—that work in tandem. The generator creates the deepfake content, and the discriminator evaluates and provides feedback on the realism of the generated content. Through an iterative process, the generator aims to create deepfakes that are increasingly convincing and difficult to differentiate from genuine media.
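As a rough illustration of that adversarial loop, the sketch below trains a toy generator and discriminator in PyTorch. The network sizes, the batch size, and the use of random tensors in place of a real face dataset are simplifying assumptions, not a production recipe.

```python
# Toy GAN training loop: the generator learns to fool the discriminator,
# the discriminator learns to tell real from fake.
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 28 * 28   # illustrative sizes

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),      # produces a fake "image" vector in [-1, 1]
)
discriminator = nn.Sequential(
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),           # probability the input is real
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.rand(32, image_dim) * 2 - 1   # placeholder for a batch of real images
    fake = generator(torch.randn(32, latent_dim))

    # Discriminator step: score real images as 1 and fakes as 0.
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: make fakes that the discriminator scores as real.
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```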
Creating deepfake images and videos typically involves a complex process that combines machine learning techniques and image manipulation. Here’s a general overview of the steps involved in creating deepfakes:
- Data Collection: To create a deepfake, a large amount of data is required. This includes gathering video footage or images of the target person (the person whose face will be superimposed) and the source person (the person whose face will be used to replace the target person’s face).
- Data Preparation: The collected data needs to be preprocessed to extract and align facial features. This involves detecting and tracking facial landmarks such as eyes, nose, and mouth in both the target and source videos/images. Techniques like facial landmark detection and face tracking are employed to identify and align key points on the face.
- Model Training: Deepfake creation heavily relies on deep learning models, particularly Generative Adversarial Networks (GANs). The concept was originally developed by Ian Goodfellow and his colleagues in June 2014. GANs consist of two components: a generator and a discriminator. Both the generator and the discriminator are neural networks. The generator output is connected directly to the discriminator input. The generator attempts to create realistic fake images, while the discriminator tries to distinguish between real and fake images.
- The generator is trained using the aligned facial data to generate synthetic face images that resemble the source person’s face.
- The discriminator is simultaneously trained to classify between real and fake images.
- The training process involves iteratively refining the generator and discriminator models until the generator becomes proficient at producing convincing deepfake images.
- Face Swapping: Once the models are trained, the face swapping process begins. The target person’s face is replaced with the synthetic face generated by the generator model, while preserving the original facial expressions and movements. Advanced techniques like warping and blending are used to seamlessly blend the synthetic face onto the target person’s face (a minimal detection-and-blending sketch follows this list).
- Post-processing: After the face swap, additional post-processing steps may be performed to enhance the quality and realism of the deepfake. This may involve adjusting colors, lighting, or other image characteristics to ensure consistency throughout the video or image sequence.
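The OpenCV sketch below illustrates only the classical detection, warping and blending portion of the face-swapping step: it detects a face in each image, resizes the source crop, and blends it into the target with Poisson blending. In a real deepfake the pasted crop would instead be a frame produced by the trained generator, and the file names here are hypothetical.

```python
# Toy face-swap: detect a face in each image, resize the source crop,
# and blend it into the target frame (requires the opencv-python package).
import cv2
import numpy as np

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def first_face(image):
    """Return (x, y, w, h) of the largest detected face."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        raise ValueError("no face found")
    return max(faces, key=lambda f: f[2] * f[3])

source = cv2.imread("source_face.jpg")    # face to transplant (hypothetical file)
target = cv2.imread("target_frame.jpg")   # frame receiving the face (hypothetical file)

sx, sy, sw, sh = first_face(source)
tx, ty, tw, th = first_face(target)

# Warp (here: simply resize) the source face crop to the target face size.
face_crop = cv2.resize(source[sy:sy + sh, sx:sx + sw], (tw, th))

# Elliptical mask so the blend ignores the crop's corners.
mask = np.zeros((th, tw), dtype=np.uint8)
cv2.ellipse(mask, (tw // 2, th // 2), (tw // 2, th // 2), 0, 0, 360, 255, -1)

# Poisson (seamless) blending matches lighting and color at the seam.
center = (tx + tw // 2, ty + th // 2)
output = cv2.seamlessClone(face_crop, target, mask, center, cv2.NORMAL_CLONE)
cv2.imwrite("swapped.jpg", output)
```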
It is essential to note that the creation and distribution of deepfakes can have serious ethical and legal implications. Deepfakes have the potential to be used for harmful purposes, including spreading misinformation, non-consensual adult content, or cyberbullying. Responsible use and ethical considerations should always be a priority when working with this technology.
Risks of Deepfake Technology
Deepfake technology poses several risks and potential negative consequences, including:
- Misinformation and Disinformation: Deepfakes can be used to create highly convincing fake videos or audio recordings, making it difficult for viewers to discern between real and fake content. This can lead to the spread of misinformation and disinformation, potentially causing significant harm to individuals, organizations, or society at large.
- Political Manipulation: Deepfakes have the potential to disrupt political processes by creating fake videos or speeches of politicians, thereby manipulating public opinion or inciting social unrest. Such manipulation can undermine trust in democratic institutions and elections.
- Reputation Damage: Deepfakes can be used to damage the reputation of individuals or organizations by creating fake videos or images that appear to show them engaging in inappropriate or illegal activities. This can have severe personal, professional, and financial consequences for the targeted individuals.
- Privacy Invasion: Deepfake technology can be used to generate realistic fake intimate or compromising videos by superimposing someone’s face onto explicit content. This raises concerns about personal privacy and consent, as individuals can be falsely depicted in sensitive and compromising situations.
- Fraud and Scams: Deepfakes can be utilized to deceive individuals into believing they are interacting with someone they trust, such as a family member, a colleague, or a financial institution representative. This can lead to social engineering attacks, identity theft, or financial fraud.
- Implications for Journalism and Trust: Deepfakes can undermine trust in media and journalism as the authenticity of videos and audio recordings becomes increasingly questionable. It becomes more challenging to verify the veracity of information, which can erode public trust in legitimate news sources.
- Cybersecurity Concerns: The development and proliferation of deepfake technology may result in increased cyber threats. For example, malicious actors could use deepfakes as a tool for social engineering, phishing attacks, or to manipulate biometric authentication systems.
- Legal and Ethical Challenges: Deepfakes present complex legal and ethical challenges. Laws and regulations may struggle to keep pace with the rapid advancements in this technology, making it challenging to address the harms caused by deepfakes effectively. Questions regarding consent, intellectual property rights, and freedom of expression also arise.
To mitigate these risks, efforts are being made to develop deepfake detection and authentication technologies, raise awareness about deepfake threats, and establish legal frameworks to address the misuse of deepfakes.
Are deepfakes legal?
Deepfakes are generally legal, and there is little law enforcement can do about them, despite the serious threats they pose. Deepfakes are only illegal if they violate existing laws, such as those against child pornography, defamation or hate speech.
Three states have laws concerning deepfakes. According to Police Chief Magazine, Texas bans deepfakes that aim to influence elections, Virginia bans the dissemination of deepfake pornography, and California has laws against the use of political deepfakes within 60 days of an election and nonconsensual deepfake pornography.
Laws against deepfakes remain scarce largely because most people are unaware of the new technology, its uses and its dangers. As a result, victims receive little protection under the law in most deepfake cases.
Methods for detecting deepfakes
There are several best practices for detecting deepfake attacks. The following are signs of possible deepfake content:
- Unusual or awkward facial positioning.
- Unnatural facial or body movement.
- Unnatural coloring.
- Videos that look odd when zoomed in or magnified.
- Inconsistent audio.
- People who don’t blink, or who blink unnaturally (a simple blink-rate check is sketched below).
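As one example of how the blinking sign can be checked automatically, the sketch below counts blinks from a per-frame eye-aspect-ratio signal. It assumes some facial-landmark detector already supplies six points around each eye per frame; the threshold and blink-rate baseline are rough assumptions, not calibrated values.

```python
# Blink-rate heuristic: eyes that never close in a long clip are a red flag.
import math

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmark points around one eye, corner-to-corner order."""
    p1, p2, p3, p4, p5, p6 = eye
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    # Vertical lid openings divided by the horizontal eye width.
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

def count_blinks(ear_per_frame, closed_threshold=0.2):
    """Count open -> closed transitions in a sequence of per-frame EAR values."""
    blinks, was_open = 0, False
    for ear in ear_per_frame:
        if ear >= closed_threshold:
            was_open = True
        elif was_open:            # eye just closed after being open: one blink
            blinks += 1
            was_open = False
    return blinks

def looks_suspicious(ear_per_frame, fps, min_blinks_per_minute=5):
    """Humans blink roughly 15-20 times a minute; far fewer is suspicious."""
    minutes = len(ear_per_frame) / (fps * 60.0)
    return count_blinks(ear_per_frame) < min_blinks_per_minute * minutes
```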
In textual deepfakes, there are a few indicators:
- Misspellings.
- Sentences that don’t flow naturally.
- Suspicious source email addresses.
- Phrasing that doesn’t match the supposed sender.
- Out-of-context messages that aren’t relevant to any discussion, event or issue.
However, AI is steadily overcoming some of these indicators; newer tools, for example, can generate natural-looking blinking.
How to defend yourself against deepfakes
Defending against deepfakes can be challenging, but there are several steps you can take to protect yourself and mitigate the risks associated with this technology. Here are some strategies to consider:
- Awareness and Vigilance: Stay informed about the existence and advancements of deepfake technology. Be cautious when consuming media, especially if it seems suspicious or too good to be true. Develop a critical eye and question the authenticity of content that appears questionable.
- Verify Sources: Always verify the credibility of the sources of information, particularly when it comes to sensitive or controversial topics. Rely on trusted news outlets, official statements, and reputable websites. Cross-check information from multiple reliable sources to ensure accuracy.
- Be Mindful of Privacy: Limit the amount of personal information you share online, including photos and videos. Review your social media privacy settings and be selective about who can access your content. The less personal data available, the harder it becomes for someone to create convincing deepfakes.
- Strengthen Online Security: Protect your online accounts by using strong and unique passwords, enabling two-factor authentication, and regularly updating your software and applications. This reduces the risk of unauthorized access to your personal data, which could be used to create deepfakes.
- Educate Yourself and Others: Learn about deepfake detection techniques and educate others about the risks associated with this technology. Stay informed about advancements in deepfake detection tools and technologies to better recognize and report suspicious content.
- Use Digital Watermarks: Consider adding digital watermarks to your images and videos. Watermarks can help establish the authenticity of your content and make it more difficult for others to misuse or manipulate your media (a minimal watermarking sketch follows this list).
- Report and Flag Deepfakes: If you come across a deepfake, report it to the appropriate platform or website hosting the content. Many social media platforms have mechanisms in place to report misleading or harmful content. By flagging deepfakes, you contribute to the collective effort of reducing their spread.
- Support Research and Development: Encourage and support ongoing research and development of deepfake detection technologies. These advancements can aid in quickly identifying and mitigating the impact of deepfakes.
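As a simple illustration of the watermarking suggestion above, the sketch below stamps a semi-transparent visible label onto an image with Pillow before it is shared. Robust or invisible watermarking requires dedicated schemes, and the file names and label text here are hypothetical.

```python
# Visible watermark: overlay semi-transparent text near the bottom-right corner.
from PIL import Image, ImageDraw, ImageFont

def add_watermark(in_path, out_path, text="(c) my channel"):
    base = Image.open(in_path).convert("RGBA")
    overlay = Image.new("RGBA", base.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)
    font = ImageFont.load_default()

    # Measure the text and position it near the bottom-right corner.
    margin = 10
    text_w, text_h = draw.textbbox((0, 0), text, font=font)[2:]
    position = (base.width - text_w - margin, base.height - text_h - margin)
    draw.text(position, text, fill=(255, 255, 255, 160), font=font)

    Image.alpha_composite(base, overlay).convert("RGB").save(out_path)

add_watermark("my_photo.jpg", "my_photo_watermarked.jpg")
```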
It’s important to note that the battle against deepfakes requires a collective effort from technology developers, social media platforms, governments, and individuals. By staying informed, being cautious, and actively engaging in responsible online behavior, you can better defend yourself against the potential risks associated with deepfakes.