Top 3 deepfake threat scenarios users face in 2023


The number of deepfake videos online is increasing at an annual rate of 900%, according to the World Economic Forum (WEF). Numerous deepfake fraud cases have hit major headlines, with reports relating to harassment, revenge and crypto scams. Kaspersky researchers are shedding light on the top three fraud schemes using deepfakes that users should look out for.

The use of neural networks and deep learning (hence ‘deepfake’) allows users from all over the world to draw on images, video and audio material to create realistic videos of a person in which their face or body has been digitally altered so that they appear to be someone else. These manipulated videos and images are frequently used for malicious purposes to spread false information.

Financial fraud

Deepfakes can be used for social engineering, where criminals impersonate celebrities with fabricated footage to bait victims into their scams. For example, an artificially created video of Elon Musk promising high returns from a dubious cryptocurrency investment scheme went viral last year, causing users to lose money. To create deepfakes like this one, scammers use footage of celebrities or splice together old videos, then launch live streams on social media platforms promising to double any cryptocurrency payment sent to them.

Pornographic deepfakes

Another use for deepfakes is to violate an individual’s privacy. Deepfake videos can be created by superimposing a person’s face onto a pornographic video, causing harm and distress. In one case, deepfake videos of several celebrities surfaced online, showing their faces superimposed onto the bodies of pornographic actresses in explicit scenes. In such cases, the victims’ reputations are harmed and their rights violated.

Business risks

Deepfakes are also used to target businesses, for crimes such as extortion of company managers, blackmail and industrial espionage. In one known case, cybercriminals deceived a bank manager in the UAE and stole $35 million using a voice deepfake – just a short recording of an employee’s boss’s voice was enough to generate a convincing fake. In another case, scammers tried to fool Binance, the largest cryptocurrency platform. A Binance executive was surprised to start receiving “thank you!” messages about a Zoom meeting he had never attended. Using his publicly available images, the attackers managed to generate a deepfake and successfully deploy it in an online meeting, speaking on the executive’s behalf.

In general, scammers who exploit deepfakes aim at disinformation and manipulation of public opinion, blackmail, or even espionage. According to an FBI warning, HR managers are already on alert for the use of deepfakes by candidates applying for remote work. In the case of Binance, attackers used images of real people from the Internet to create deepfakes, and were even able to add these people’s photos to resumes. If they manage to trick HR managers in this way and later receive an offer, they can go on to steal employer data.

While the number of deepfakes is increasing, they remain an expensive type of fraud that requires a large budget. Earlier research by Kaspersky revealed the cost of deepfakes on the darknet. If an ordinary user finds software on the Internet and tries to make a deepfake, the result will be unrealistic and obvious to the human eye. Few people will fall for a poor-quality deepfake: they will notice lags in facial expressions or blurring around the shape of the chin.

When cybercriminals are preparing for an attack, they therefore need a large amount of data: photos, videos and audio of the person they want to impersonate. Different angles, lighting conditions and facial expressions all play a big role in the final quality. Modern computing power and software are necessary for the result to be realistic. All this demands a huge amount of resources and is available only to a small number of cybercriminals. So, despite the dangers a deepfake can pose, it is still an extremely rare threat that only a small number of buyers will be able to afford – after all, the price for one minute of a deepfake can start from 20,000 US dollars.

“One of the most serious threats that deepfakes pose to business is not always the theft of corporate data. Sometimes reputational risks can have very severe consequences. Imagine a video is published in which your executive (apparently) makes polarising statements on sensitive issues. For corporations, this can quickly lead to a crash in share prices. However, although the risks of such a threat are extremely high, the chance that you will be attacked in this way remains extremely low, due to the cost of creating deepfakes and the fact that few attackers are able to create a high-quality one,” comments Dmitry Anikin, a senior security expert at Kaspersky. “What you can do today is be aware of the key characteristics of deepfake videos to look out for, and keep a sceptical attitude to voicemail and videos you receive. Also, ensure your employees understand what a deepfake is and how to recognise one: for instance, jerky movement, shifts in skin tone, strange blinking or no blinking at all, and so on.”

Continuous monitoring of darknet resources provides valuable insights into the deepfake industry, allowing researchers to track the latest trends and activities of threat actors in this space. By monitoring the darknet, researchers can uncover new tools, services, and marketplaces used for the creation and distribution of deepfakes. This type of monitoring is a critical component of deepfake research, and helps improve our understanding of the evolving threat landscape. Kaspersky’s Digital Footprint Intelligence service includes this type of monitoring to help its customers stay ahead of the curve when it comes to deepfake-related threats.

Learn more about the deepfake industry on Kaspersky Daily.

To be protected from threats related to deepfakes, Kaspersky recommends:

  • Check the cybersecurity practices in place in your organisation – not only in the form of software, but also in terms of developed IT skills. Use Kaspersky Threat Intelligence to get ahead of the current threat landscape.
  • Boost the corporate “human firewall”: ensure employees understand what deepfakes are, how they work, and the challenges they can pose. Run ongoing awareness and education drives teaching employees how to spot a deepfake. The Kaspersky Automated Security Awareness Platform helps employees stay up to date with the most recent threats and increases digital literacy levels.
  • Use good quality news sources. Information illiteracy remains a crucial enabler for the proliferation of deepfakes.
  • Have good protocols like ‘trust but verify.’ A skeptical attitude to voicemail and videos will not guarantee people will never be deceived, but it can help avoid many of the most common traps.
  • Be aware of the key characteristics of deepfake videos to look out for to avoid becoming a victim: jerky movement, shifts in lighting from one frame to the next, shifts in skin tone, strange blinking or no blinking at all, lips poorly synced with speech, digital artifacts in the image, and video intentionally encoded at low quality or shot in poor lighting.
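The blinking anomalies mentioned above can even be checked programmatically. The sketch below is illustrative only (not Kaspersky’s method): it computes the widely used eye aspect ratio (EAR) from six eye landmark points and flags footage whose blink rate falls outside a typical human range. The landmark input, the 0.2 closed-eye threshold and the 5–40 blinks-per-minute bounds are all assumptions for the example; real pipelines would obtain landmarks from a face-tracking library and tune the thresholds.

```python
import math


def eye_aspect_ratio(pts):
    """Eye aspect ratio (EAR) from six (x, y) eye landmarks p1..p6:
    EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|).
    EAR drops sharply when the eye closes."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    p1, p2, p3, p4, p5, p6 = pts
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))


def count_blinks(ear_series, closed_thresh=0.2):
    """Count blinks as open-to-closed transitions of the EAR signal.
    The 0.2 threshold is a common illustrative default, not a constant."""
    blinks = 0
    was_open = True
    for ear in ear_series:
        if was_open and ear < closed_thresh:
            blinks += 1
            was_open = False
        elif ear >= closed_thresh:
            was_open = True
    return blinks


def blink_rate_suspicious(ear_series, fps, min_per_min=5, max_per_min=40):
    """Flag footage whose blink rate falls outside a roughly typical
    human range (the 5-40 blinks/min bounds are assumptions)."""
    minutes = len(ear_series) / fps / 60.0
    rate = count_blinks(ear_series) / minutes
    return rate < min_per_min or rate > max_per_min
```

A one-minute clip with no blinks at all, or with implausibly frequent blinking, would be flagged for closer inspection; such heuristics are a triage aid, not proof of manipulation.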
Provided by Kaspersky SA
