Only 21% of employees in South Africa could tell a deepfake from a real image


According to the Kaspersky Business Digitization survey¹, just under half of the employees surveyed in South Africa (42%) said they could tell a deepfake from a real image. In a test², however, only 21% could actually distinguish a real image from an AI-generated one. This leaves organisations vulnerable to such scams, as cybercriminals can use generative AI imagery in several ways for illegal activities, including creating fake videos or images to defraud individuals or organisations.

For instance, cybercriminals can create a fake video of a CEO requesting a wire transfer or authorising a payment in order to steal corporate funds. They can also fabricate compromising videos or images of individuals to extort money or information from them, or use deepfakes to spread false information and manipulate public opinion. Notably, 55% of employees surveyed in South Africa believe their company can lose money because of deepfakes.

“Even though many employees claimed that they could spot a deepfake, our research showed that only half of them could actually do it. It is quite common for users to overestimate their digital skills; for organisations this means vulnerabilities in their human firewall and potential cyber risks – to infrastructure, funds, and products,” comments Dmitry Anikin, Senior Data Scientist at Kaspersky. “Continuous monitoring of Dark Web resources provides valuable insights into the deepfake industry, allowing researchers to track the latest trends and activities of threat actors in this space. This monitoring is a critical component of deepfake research which helps to improve our understanding of the evolving threat landscape. Kaspersky’s Digital Footprint Intelligence service includes such monitoring to help its customers stay ahead of the curve when it comes to deepfake-related threats.”

To be protected from threats related to deepfakes, Kaspersky recommends:

Review the cybersecurity practices in place in your organisation – not only the software deployed, but also the IT skills your people have developed. Use Kaspersky Threat Intelligence to stay ahead of the current threat landscape.

Boost the corporate “human firewall”: ensure employees understand what deepfakes are, how they work, and the challenges they pose. Run ongoing awareness and education drives that teach employees how to spot a deepfake. The Kaspersky Automated Security Awareness Platform helps employees stay up to date with the most recent threats and increases digital literacy levels.

Use good quality news sources. Information illiteracy remains a crucial enabler for the proliferation of deepfakes.

Have good protocols like ‘trust but verify.’ A skeptical attitude to voicemail and videos will not guarantee people will never be deceived, but it can help avoid many of the most common traps.

Be aware of the key characteristics of deepfake videos to look out for to avoid becoming a victim: jerky movement, shifts in lighting from one frame to the next, shifts in skin tone, strange blinking or no blinking at all, lips poorly synched with speech, digital artifacts in the image, and video deliberately encoded down to low quality with poor lighting.
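One of the cues above – abrupt lighting shifts between frames – can be illustrated with a very simple heuristic. The sketch below is illustrative only and is not a Kaspersky tool: the function name `flag_lighting_shifts`, the `threshold` value, and the assumption that per-frame mean brightness values have already been extracted (e.g. by a video library) are all hypothetical choices for the example.

```python
def flag_lighting_shifts(frame_brightness, threshold=20.0):
    """Return indices of frames whose mean brightness jumps abruptly
    relative to the previous frame.

    A genuine video usually changes lighting gradually; a large
    frame-to-frame jump is one crude signal worth a closer look.
    Note: this is a toy heuristic, not a deepfake detector.
    """
    suspicious = []
    for i in range(1, len(frame_brightness)):
        if abs(frame_brightness[i] - frame_brightness[i - 1]) > threshold:
            suspicious.append(i)
    return suspicious


# Example: brightness is stable except for a sudden jump at frame 2.
print(flag_lighting_shifts([100.0, 102.0, 150.0, 149.0]))  # → [2]
```

A real detection pipeline would combine many such signals (blink rate, lip-sync, compression artifacts) with trained models; the point here is only that several of the visual cues listed above can, in principle, be measured rather than judged by eye alone.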

References:

¹2,000 employees across SMBs and enterprises were surveyed in the Middle East, Türkiye and Africa region in 2023.

²First, respondents were asked whether they could distinguish a deepfake from a real image. They were then shown two images taken from videos featuring a popular American actor, one of which came from a deepfake video, and were asked to indicate which image was real and which was fake.

Article Provided
