Deepfake refers to artificial intelligence-based technology that can convincingly replicate audio and visual content. Advances in deepfake technology over the past five years have been rapid. Combining genuine and fabricated data to create fakes is not a new concept, but neural networks and deep learning have allowed researchers to automate the procedure and apply it to visual and audio media.
Once easily spotted by the naked eye thanks to their poor quality, fakes can now be quite challenging to detect. The emergence of open-source software and the declining cost of storing and processing data make matters worse, and together they make deepfakes a potentially devastating technology.
How realistic can it get?
In July 2021, fans released a deepfake video featuring Morgan Freeman discussing the perception of reality.
It’s not Morgan Freeman, but it sure looks like him! There are no glaring video artifacts, and the facial animation and hair look great. It’s an expertly crafted deepfake that exemplifies how simple it is to fool our sense of reality today.
Where’s the harm in that?
Pornography was the first and most visible industry to embrace deepfakes. They initially affected famous people, but soon became a concern for everyone. Speculation covered a wide range of potential threats, including but not limited to bullying at school, fake phone calls requesting money transfers, blackmail of company executives, and industrial espionage. Initially considered a theoretical danger, deepfake abuse is now a genuine concern.
In 2019, we saw the first confirmed incident of such a cyberattack targeting a business: con artists robbed a British power provider using voice-changing software. In a second incident involving a voice deepfake, thieves stole from a bank manager in the UAE in 2020. Scammers have progressed from sending emails and creating fake social media profiles to employing voice deepfakes in their attacks.
As a result, we now have a new form of cyber fraud to worry about: deepfake fraud. It can be used as an adjunct to more common social engineering techniques for purposes such as spreading false information, blackmail, and spying.
Cybercriminals have reportedly utilized deepfakes to gain interviews for remote jobs, and the FBI has issued a warning to human resources professionals. Deepfakes can be made using publicly available internet photos to fool human resources personnel into thinking they are interviewing a real candidate. This might give them access to sensitive company information or even allow them to release malware into company systems. This form of fraud is a possible threat to any company.
And those are simply the most surface-level uses of deepfake fraud. It is common knowledge that hackers are always finding novel applications for existing attack vectors.
How serious is the threat?
That all sounds very eerie. But is it really that bad? In practice, not quite, at least not yet: it takes a lot of time and money to make a convincing deepfake.
First, a large amount of data is required to create a deepfake; the more varied the data set, the more realistic the result. For still images, this means convincing forgeries require source material shot from a variety of camera angles, in different lighting conditions, and with a range of subject expressions. Furthermore, a fake photo typically needs to be touched up by hand (automation isn’t very useful in this case).
Second, creating a convincing fake requires specialized software, a lot of computational power, and a hefty budget. Making a deepfake on a home computer using free software will produce ridiculous-looking results.
Deepfake Zoom calls complicate the already complicated procedure further. In this case, the attackers need to produce a deepfake “online,” in real time, while maintaining a high-quality image free of artifacts. While some programs can generate a deepfake video stream in real time, they can only produce a digital clone of a pre-programmed individual, not an entirely new fake identity. And because there are so many pictures of famous actors online, they are almost always the first choice of subject.
In other words, a deepfake attack is now feasible, but the cost of such deception remains prohibitive. Deepfake fraud is possible, yet only a small number of attackers have the resources to pull it off (particularly when it comes to high-quality fakes).
Of course, that’s no excuse to kick back and relax; technology advances rapidly, so the threat could mushroom in the next few years. Attempts have already been made to create deepfakes with popular contemporary generative models such as Stable Diffusion. In addition to swapping out individuals’ faces, this type of model also permits the substitution of virtually any other visual element in the image.
Measures to Prevent the Effects of Deepfakes
How safe are you and your company from deepfake scams? There is no simple solution, unfortunately. We can only try to mitigate the danger.
Deepfake fraud, like other forms of social engineering, targets human victims, and the human element has traditionally been the most vulnerable part of any organization’s defenses. Explaining this new threat to your coworkers, demonstrating and publicly analyzing a few incidents, and perhaps even illustrating where to look to recognize a deepfake are all good first steps in preparing your workforce for such attacks.
What you should be on the lookout for in the picture: double eyebrows; unnaturally smooth faces, hair, and skin; a lack of emotion; awkward placement of facial features; unnatural eye movement; and unnatural facial expressions and movements.
It’s also an excellent time to beef up your security measures in general. Use multi-factor authentication whenever private information is being transmitted, and consider anomaly detection tools that let you spot and react to out-of-the-ordinary user actions.
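To make the anomaly-detection idea concrete, here is a minimal sketch of one such rule in Python: flag events that deviate sharply from a historical baseline using a simple z-score. The function name, sample data, and threshold are illustrative assumptions, not taken from any particular product.

```python
from statistics import mean, stdev

def flag_anomalies(history, new_events, threshold=3.0):
    """Flag events that deviate more than `threshold` standard
    deviations from the historical baseline (a simple z-score rule)."""
    mu = mean(history)
    sigma = stdev(history)
    return [e for e in new_events if abs(e - mu) / sigma > threshold]

# Example: typical wire-transfer amounts vs. a sudden outlier
# of the kind a voice-deepfake scam might request.
baseline = [950, 1020, 980, 1100, 990, 1050, 1010, 970]
suspicious = flag_anomalies(baseline, [1000, 25000])  # only 25000 is flagged
```

Real tools model far richer signals (login times, devices, approval chains), but the principle is the same: establish a baseline of normal behavior and surface deviations for human review.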
In addition, the same machine learning techniques that enable deepfake creation can be used to combat deepfake fraud. The deepfake-detection technologies built by social media giants such as Twitter and Facebook are not publicly available, but their existence shows that experts in the cybersecurity industry recognize the gravity of the deepfake issue and are developing and enhancing solutions to counter it.
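As a toy illustration of the idea (not any vendor's actual detector), the sketch below trains a logistic-regression classifier on a single made-up feature: low "texture variance" standing in for the over-smoothed look of many fakes. Real detectors use deep networks over raw pixels, but the training loop follows the same principle of learning to separate genuine from generated material.

```python
import math, random

def train(samples, labels, lr=0.1, epochs=2000):
    """Fit a one-feature logistic regression with plain SGD."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            p = 1 / (1 + math.exp(-(w * x + b)))  # predicted P(fake)
            w -= lr * (p - y) * x
            b -= lr * (p - y)
    return w, b

def predict(w, b, x):
    """Return True if the sample is classified as fake."""
    return 1 / (1 + math.exp(-(w * x + b))) > 0.5

random.seed(0)
# Synthetic data: genuine frames have higher texture variance (label 0),
# over-smoothed deepfake frames have lower variance (label 1).
real = [random.gauss(0.8, 0.1) for _ in range(50)]
fake = [random.gauss(0.3, 0.1) for _ in range(50)]
w, b = train(real + fake, [0] * 50 + [1] * 50)
```

After training, a very smooth frame (feature value 0.25) is flagged as fake, while a richly textured one (0.85) is not. The hard part in practice is not the classifier but finding features, or learning representations, that today's generators cannot yet imitate.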