Why deepfake phishing is a disaster waiting to happen

Everything is not always what it seems. As artificial intelligence (AI) technology has progressed, individuals have exploited it to distort reality. They have created synthetic images and videos of everyone from Tom Cruise and Mark Zuckerberg to President Obama. While many of these use cases are harmless, other applications, such as deepfake phishing, are much more harmful.

A wave of threat actors is using AI to generate synthetic audio, image and video content designed to impersonate trusted individuals, such as CEOs and other executives, and trick employees into handing over sensitive information.

Yet most organizations are simply not prepared to deal with these kinds of threats. In 2021, Gartner analyst Darin Stewart wrote a blog post warning that “while companies scramble to defend against ransomware attacks, they are doing nothing to prepare for an imminent attack from synthetic media.”

With AI evolving rapidly and providers like OpenAI democratizing access to AI and machine learning through new tools like ChatGPT, organizations cannot afford to ignore the social engineering threat of deepfakes. If they do, they will make themselves vulnerable to data breaches.

The state of deepfake phishing in 2022 and beyond

Although deepfake technology is still in its infancy, it is growing in popularity. Cybercriminals are already starting to experiment with it to launch attacks against unsuspecting users and organizations.

According to the World Economic Forum (WEF), the number of deepfake videos online is increasing by 900% annually. At the same time, VMware notes that two in three defenders report seeing malicious deepfakes as part of an attack, a 13% increase from last year.

These attacks can be devastatingly effective. For example, in 2021, cybercriminals used AI voice cloning to impersonate the CEO of a large company and tricked the organization’s bank manager into transferring $35 million to another account to complete an “acquisition.”

A similar incident occurred in 2019, when a fraudster called the CEO of a UK energy firm, using AI to impersonate the chief executive of the company’s German parent company, and requested an urgent wire transfer of $243,000 to a Hungarian supplier.

Many analysts predict that the rise in deepfake phishing will continue, and that the fake content produced by threat actors will only become more sophisticated and convincing.

“As deepfake technology matures, [attacks using deepfakes] are expected to become more common and expand to newer scams,” said KPMG analyst Akhilesh Tuteja.

“They are becoming increasingly indistinguishable from reality. Two years ago it was easy to spot deepfake videos because the [movement] quality was clunky and… the counterfeit person never seemed to blink. But it’s getting harder and harder to discern now,” Tuteja said.

Tuteja suggests security leaders should prepare for fraudsters who use synthetic images and video to evade authentication systems, such as biometric logins.

How deepfakes impersonate individuals and bypass biometric authentication

To perform a deepfake phishing attack, hackers use AI and machine learning to process a variety of content, including images, videos, and audio clips. With this data they create a digital imitation of an individual.

“Bad actors can easily create autoencoders — a kind of sophisticated neural network — to watch videos, study images and listen to recordings of individuals to mimic that person’s physical attributes,” said David Mahdi, a CSO and CISO advisor at Sectigo.
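
A loose way to picture the autoencoder approach Mahdi describes: an encoder compresses face images into a compact code and a decoder learns to reconstruct them, which is the building block that face-swap pipelines typically extend (shared encoder, one decoder per identity, thousands of training frames). The sketch below is a toy, purely illustrative PyTorch version of that encode/decode loop, trained here on random placeholder tensors rather than real footage.

```python
# Minimal autoencoder sketch: learn to compress and reconstruct 64x64 face crops.
# Illustrative only; real deepfake pipelines are far larger and more elaborate.
import torch
import torch.nn as nn

class FaceAutoencoder(nn.Module):
    def __init__(self, latent_dim: int = 128):
        super().__init__()
        # Encoder: 64x64 RGB image -> compact latent vector
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32 -> 16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )
        # Decoder: latent vector -> reconstructed image
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64 * 16 * 16),
            nn.Unflatten(1, (64, 16, 16)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = FaceAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
faces = torch.rand(8, 3, 64, 64)  # placeholder for real face crops
for step in range(5):
    recon = model(faces)
    loss = nn.functional.mse_loss(recon, faces)  # reconstruction error
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```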

One of the best-known examples of this approach occurred earlier this year, when hackers generated a deepfake hologram of Patrick Hillmann, the chief communications officer at Binance, by drawing on content from past interviews and media appearances.

This approach not only allows threat actors to mimic an individual’s physical attributes to fool human users through social engineering, but also to defeat biometric authentication solutions.

For this reason, Gartner analyst Avivah Litan recommends that organizations “not rely on biometric certification for user authentication applications unless it uses effective deepfake detection that assures user liveness and legitimacy.”
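
Liveness checks of the kind Litan mentions are often “active”: the system issues an unpredictable challenge that replayed or pre-rendered deepfake footage is unlikely to satisfy within the time window. The sketch below illustrates that challenge-response pattern only; capture_video and gesture_detected are hypothetical stand-ins for a camera capture API and a vision model, not real library calls.

```python
# Hypothetical sketch of an active liveness check (challenge-response).
# capture_video and gesture_detected are placeholder callables, not real APIs.
import random
import time

CHALLENGES = ["turn head left", "turn head right", "blink twice", "smile"]

def liveness_check(capture_video, gesture_detected, timeout_s: float = 5.0) -> bool:
    challenge = random.choice(CHALLENGES)          # unpredictable prompt
    started = time.monotonic()
    clip = capture_video(duration_s=timeout_s, prompt=challenge)
    on_time = (time.monotonic() - started) <= timeout_s + 1.0
    # The response must match the random challenge AND arrive in time,
    # which rules out replayed or pre-rendered synthetic footage.
    return on_time and gesture_detected(clip, challenge)

# Demo with trivial stand-ins that always "succeed" instantly.
ok = liveness_check(lambda duration_s, prompt: b"frames",
                    lambda clip, challenge: True)
print("liveness passed:", ok)
```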

Litan also notes that detecting these types of attacks will likely become more difficult over time as the AI they use improves to create more convincing audio and visual representations.

“Deepfake detection is a lost cause, because the deepfakes created by the generative network are evaluated by a discriminative network,” Litan said. She explained that the generator’s goal is to create content that fools the discriminator, while the discriminator is continually improved to detect the artificial content.

The problem is that as the accuracy of the discriminator increases, cybercriminals can apply insights gained from it to the generator to produce content that is harder to detect.
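
That generator-versus-discriminator arms race is the standard GAN training loop: every improvement the discriminator makes at spotting fakes is immediately used, in the next step, to update the generator against it. Below is a toy PyTorch sketch of that loop, with random vectors standing in for real media; it illustrates the dynamic, not an actual deepfake model.

```python
# Toy GAN training loop: the discriminator (D) learns to flag fakes, and the
# generator (G) is then updated against that same D, so detection gains are
# recycled into harder-to-detect output. Random data; illustration only.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(32, data_dim)  # stand-in for features of genuine media
for step in range(100):
    # 1) Discriminator step: learn to separate real samples from generated ones.
    fake = G(torch.randn(32, latent_dim)).detach()
    d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Generator step: produce samples the *current* discriminator calls real.
    fake = G(torch.randn(32, latent_dim))
    g_loss = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```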

The role of security awareness training

One of the simplest ways organizations can tackle deepfake phishing is through security awareness training. While no training will prevent every employee from being taken in by a highly sophisticated phishing attempt, it can reduce the likelihood of security incidents and breaches.

“The best way to tackle deepfake phishing is to integrate this threat into security awareness training. Just as users are taught not to click on web links, they should receive similar training on deepfake phishing,” said ESG Global analyst John Oltsik.

Part of that training should include a process for reporting phishing attempts to the security team.

In terms of training content, the FBI suggests that users can learn to recognize deepfake spear phishing and social engineering attacks by paying attention to visual indicators such as distortion, warping, or inconsistencies in images and video.

Teaching users how to identify common red flags, such as consistent eye spacing and placement across multiple images, or synchronization issues between lip movement and audio, can help prevent them from falling prey to a skilled attacker.

Fighting adversarial AI with defensive AI

Organizations can also try to tackle deepfake phishing using AI. Generative adversarial networks (GANs), a type of deep learning model, can produce synthetic datasets and generate mock social engineering attacks.

“For example, a strong CISO can rely on AI tools to detect fakes. Organizations can also use GANs to generate potential types of cyberattacks that criminals haven’t yet deployed, and devise ways to counter them before they happen,” said Liz Grennan, expert associate partner at McKinsey.
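
One way to read Grennan’s suggestion in concrete terms: use an in-house generative model to mint synthetic “attack” samples, mix them with verified genuine media, and train a detector on both, so the classifier has already seen convincing fakes before attackers send any. The sketch below is a hedged illustration under those assumptions; random tensors stand in for real feature extraction and for the generative model.

```python
# Illustrative detector trained on genuine media plus synthetic "attack" samples.
# Feature extraction and the generative model are placeholders (random tensors).
import torch
import torch.nn as nn

feat_dim = 64
detector = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU(), nn.Linear(128, 1))
optimizer = torch.optim.Adam(detector.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

def synthetic_attack_batch(n: int) -> torch.Tensor:
    # Placeholder for samples drawn from an in-house generative model
    # (e.g. the GAN sketched earlier) imitating executive voices or faces.
    return torch.randn(n, feat_dim)

genuine = torch.randn(256, feat_dim)  # features of verified real media
for epoch in range(10):
    fakes = synthetic_attack_batch(256)
    x = torch.cat([genuine, fakes])
    y = torch.cat([torch.zeros(256, 1), torch.ones(256, 1)])  # 1 = synthetic
    loss = loss_fn(detector(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# At inference, detector(features).sigmoid() above a threshold flags likely fakes.
```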

However, organizations that go down this path should be willing to put in the time, as cybercriminals can also use these capabilities to develop new types of attacks.

“Of course, criminals can use GANs to create new attacks, so it’s up to companies to stay one step ahead,” said Grennan.

Above all, companies need to be prepared. Organizations that don’t take the threat of deepfake phishing seriously will leave themselves vulnerable to a threat vector that could explode in popularity as AI is democratized and becomes more accessible to malicious entities.
