Things aren’t always as they seem. As artificial intelligence (AI) technology has advanced, individuals have exploited it to distort reality, creating synthetic images and videos of everyone from Tom Cruise and Mark Zuckerberg to President Obama. While many of these uses are innocuous, other applications, like deepfake phishing, are far more nefarious.
A wave of threat actors is exploiting AI to generate synthetic audio, image and video content designed to impersonate trusted individuals, such as CEOs and other executives, and trick employees into handing over sensitive information or transferring funds.
Yet most organizations simply aren’t prepared to address these types of threats. Back in 2021, Gartner analyst Darin Stewart wrote a blog post warning that “while companies are scrambling to defend against ransomware attacks, they are doing nothing to prepare for an imminent onslaught of synthetic media.”
With AI rapidly advancing, and providers like OpenAI democratizing access to machine learning via new tools like ChatGPT, organizations can’t afford to ignore the social engineering threat posed by deepfakes. If they do, they will leave themselves vulnerable to data breaches.
The state of deepfake phishing in 2022 and beyond
While deepfake technology remains in its infancy, it’s growing in popularity. Cybercriminals are already starting to experiment with it to launch attacks on unsuspecting users and organizations.
According to the World Economic Forum (WEF), the number of deepfake videos online is increasing at an annual rate of 900%. At the same time, VMware finds that two out of three defenders report seeing malicious deepfakes used as part of an attack, a 13% increase from last year.
These attacks can be devastatingly effective. For instance, in 2021, cybercriminals used AI voice cloning to impersonate the CEO of a large company and tricked the organization’s bank manager into transferring $35 million to another account to complete an “acquisition.”
A similar incident occurred in 2019, when a fraudster phoned the CEO of a UK energy firm and used AI voice cloning to impersonate the chief executive of the firm’s German parent company, requesting an urgent transfer of $243,000 to a Hungarian supplier.
Many analysts predict that the uptick in deepfake phishing will only continue, and that the false content produced by threat actors will only become more sophisticated and convincing.
“As deepfake technology matures, [attacks using deepfakes] are expected to become more common and expand into newer scams,” said KPMG analyst Akhilesh Tuteja.
“They are increasingly becoming indistinguishable from reality. It was easy to [spot] deepfake videos two years ago, as they had a clunky [movement] quality and … the faked person never seemed to blink. But it’s becoming harder and harder to distinguish [them] now,” Tuteja said.
Tuteja suggests that security leaders need to prepare for fraudsters using synthetic images and video to bypass authentication systems, such as biometric logins.
How deepfakes mimic individuals and may bypass biometric authentication
To execute a deepfake phishing attack, hackers use AI and machine learning to process a range of content, including images, videos and audio clips. With this data, they create a digital imitation of an individual.
“Bad actors can easily make autoencoders — a kind of advanced neural network — to watch videos, study images, and listen to recordings of individuals to mimic that individual’s physical attributes,” said David Mahdi, a CSO and CISO advisor at Sectigo.
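To make the mechanics concrete, below is a minimal sketch of the shared-encoder, per-identity-decoder autoencoder pattern that underpins most face-swap deepfakes. PyTorch and all dimensions here are our own illustrative assumptions (the article names no framework); this is a conceptual toy, not a working deepfake pipeline.

```python
# Conceptual sketch: one shared encoder learns a compact face representation;
# one decoder is trained per identity. At inference, swapping decoders renders
# person A's expressions onto person B's face. All sizes are toy values.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, latent_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self, latent_dim=128):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity

faces_a = torch.rand(8, 3, 64, 64)             # stand-in for cropped video frames
recon = decoder_a(encoder(faces_a))
loss = nn.functional.mse_loss(recon, faces_a)  # train each pair to reconstruct
```

The more interview footage and media appearances an attacker can scrape, the better the reconstruction, which is part of why public-facing executives make attractive targets.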
One of the best-known examples of this approach occurred earlier this year, when hackers generated a deepfake hologram of Patrick Hillmann, the chief communications officer at Binance, by taking content from past interviews and media appearances.
With this approach, threat actors can not only mimic an individual’s physical attributes to fool human users via social engineering, but also bypass biometric authentication solutions.
For this reason, Gartner analyst Avivah Litan recommends organizations “don’t rely on biometric certification for user authentication applications unless it uses effective deepfake detection that assures user liveness and legitimacy.”
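What might “effective deepfake detection that assures user liveness” look like in practice? One common pattern is challenge-response: the server demands a random, time-boxed action that a pre-rendered deepfake cannot anticipate. The Python sketch below is a hypothetical illustration; score_action stands in for whatever vision model actually grades the captured video, and the threshold and timeout are uncalibrated placeholders.

```python
import secrets
import time

CHALLENGES = ("blink twice", "turn your head to the left", "read these four digits aloud")

def score_action(video: bytes, challenge: str) -> float:
    """Hypothetical hook: a real system would run a vision/audio model here
    and return how well the recording matches the requested action (0 to 1)."""
    return 0.0  # placeholder so the sketch runs; always fails

def liveness_check(capture, timeout_s: float = 10.0) -> bool:
    """Issue a random challenge and accept only a prompt, matching response."""
    challenge = secrets.choice(CHALLENGES)
    issued = time.monotonic()
    video = capture(challenge)  # prompt the user and record the reply
    on_time = time.monotonic() - issued <= timeout_s
    return on_time and score_action(video, challenge) > 0.9

# A replayed or pre-rendered clip cannot react to a fresh challenge:
print(liveness_check(lambda challenge: b""))  # -> False
```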
Litan also notes that detecting these types of attacks is likely to become more difficult over time as the underlying AI advances and learns to create more convincing audio and visual representations.
“Deepfake detection is a losing proposition, because the deepfakes created by the generative network are evaluated by a discriminative network,” Litan said, explaining that the generator aims to create content that fools the discriminator, while the discriminator continually improves at detecting artificial content.
The problem is that as the discriminator’s accuracy increases, cybercriminals can apply insights from this to the generator to produce content that’s harder to detect.
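This arms race is easiest to see in code. The sketch below shows one training step of a toy GAN in PyTorch (again an assumed framework, with illustrative dimensions). The key line is the generator’s loss, which is computed through the discriminator: every improvement in detection becomes, immediately and automatically, a training signal for producing harder-to-detect output.

```python
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(32, data_dim)  # stand-in for genuine media samples

# Discriminator step: learn to score real as 1 and generated as 0.
fake = G(torch.randn(32, latent_dim)).detach()
d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake), torch.zeros(32, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: the loss flows *through* the discriminator, so a better
# detector directly teaches the generator how to evade it.
fake = G(torch.randn(32, latent_dim))
g_loss = bce(D(fake), torch.ones(32, 1))  # "make D call my fakes real"
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```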
The role of security awareness training
One of the simplest ways that organizations can address deepfake phishing is through the use of security awareness training. While no amount of training will prevent all employees from ever being taken in by a highly sophisticated phishing attempt, it can decrease the likelihood of security incidents and breaches.
“The best way to address deepfake phishing is to integrate this threat into security awareness training. Just as users are taught to avoid clicking on web links, they should receive similar training about deepfake phishing,” said ESG Global analyst John Oltsik.
Part of that training should include a process to report phishing attempts to the security team.
In terms of training content, the FBI suggests that users can learn to identify deepfake spear phishing and social engineering attacks by looking out for visual indicators such as distortion, warping or inconsistencies in images and video.
Teaching users how to identify common red flags, such as eyes placed at identical positions across multiple images (a hallmark of GAN-generated faces), or syncing problems between lip movement and audio, can help prevent them from falling prey to a skilled attacker.
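Some of these red flags can be screened for programmatically as well. GAN face generators tend to place the eyes at nearly identical pixel coordinates in every image they produce, so unusually low variance in eye position across a persona’s profile photos is worth escalating. The sketch below assumes eye centers have already been extracted by some upstream landmark detector, and the threshold is illustrative rather than calibrated.

```python
from statistics import pstdev

def eye_position_spread(eye_centers: list[tuple[float, float]]) -> float:
    """Largest std. dev. of eye x/y coordinates across same-sized face crops."""
    xs = [x for x, _ in eye_centers]
    ys = [y for _, y in eye_centers]
    return max(pstdev(xs), pstdev(ys))

# Left-eye centers from five "different" profile photos of one persona,
# each normalized to a 256x256 face crop (synthetic example values).
left_eyes = [(88.1, 102.0), (88.3, 101.8), (88.0, 102.1), (88.2, 101.9), (88.1, 102.0)]

if eye_position_spread(left_eyes) < 1.0:  # near-identical placement
    print("Red flag: eye positions are suspiciously consistent across images")
```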
Fighting adversarial AI with defensive AI
Organizations can also attempt to address deepfake phishing using AI. Generative adversarial networks (GANs), a type of deep learning model, can produce synthetic datasets and generate mock social engineering attacks.
“A strong CISO can rely on AI tools, for example, to detect fakes. Organizations can also use GANs to generate possible types of cyberattacks that criminals have not yet deployed, and devise ways to counteract them before they occur,” said Liz Grennan, expert associate partner at McKinsey.
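One way to read Grennan’s suggestion in practice: treat a generator’s output as labeled training data for a detector, so the defense sees attack variants before they circulate in the wild. Below is a minimal sketch along those lines, with toy models and dimensions of our own choosing.

```python
import torch
import torch.nn as nn

# G produces synthetic "attack" feature vectors; a separate classifier is
# trained on a mix of real benign samples (label 0) and generated attacks
# (label 1), so it learns from variants not yet seen in the wild.
latent_dim, feat_dim = 16, 64
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, feat_dim))
clf = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(clf.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

benign = torch.randn(64, feat_dim)  # stand-in for real, benign samples
with torch.no_grad():
    synthetic_attacks = G(torch.randn(64, latent_dim))  # augmentation data

x = torch.cat([benign, synthetic_attacks])
y = torch.cat([torch.zeros(64, 1), torch.ones(64, 1)])
loss = bce(clf(x), y)  # train the detector on real plus synthetic data
opt.zero_grad(); loss.backward(); opt.step()
```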
However, organizations that take these paths need to be prepared to put the time in, as cybercriminals can also use these capabilities to innovate new attack types.
“Of course, criminals can use GANs to create new attacks, so it’s up to businesses to stay one step ahead,” Grennan said.
Above all, enterprises need to be prepared. Organizations that don’t take the threat of deepfake phishing seriously will leave themselves vulnerable to a threat vector that has the potential to explode in popularity as AI becomes democratized and more accessible to malicious entities.