Once crude and expensive, deepfakes are now a rapidly growing threat to cybersecurity.
A UK-based company lost $243,000 after a deepfake mimicked a CEO’s voice so accurately that the person on the other end of the line authorized a fraudulent wire transfer. A similar “deep voice” attack that precisely imitated a company executive’s pronounced accent cost another company $35 million.
Perhaps even more alarming, the CCO of crypto company Binance reported that a “sophisticated hacking team” used video from his past TV appearances to create a believable AI hologram that tricked people into joining meetings. “Apart from the 15 pounds I gained during COVID being noticeably absent, this deepfake was refined enough to fool several highly intelligent members of the crypto community,” he wrote.
Cheaper, sneakier and more dangerous
Don’t be fooled into taking deepfakes lightly. Accenture’s Cyber Threat Intelligence (ACTI) team notes that while some recent deepfakes can be laughably crude, the trend in the technology is toward greater sophistication at lower cost.
In fact, the ACTI team believes that high-quality deepfakes aimed at impersonating specific individuals are already more common in organizations than reported. In one recent example, a legitimate company’s deepfake technology was used to create fraudulent news anchors to spread Chinese disinformation, demonstrating that malicious use is here and already affecting organizations.
A natural evolution
The ACTI team believes that deepfake attacks are the logical continuation of social engineering. In fact, the two should be considered together, because the primary malicious potential of deepfakes lies in augmenting other social engineering tricks, making an already difficult threat landscape even harder for victims to navigate.
ACTI has been tracking significant evolutionary changes in deepfakes over the past two years. For example, between January 1 and December 31, 2021, underground chatter regarding the sale and purchase of deepfake goods and services focused largely on common fraud, cryptocurrency fraud (such as pump-and-dump schemes) and gaining access to crypto accounts.
A vibrant market for deepfake fraud
However, the trend from January 1 to November 25, 2022 shows a different, and arguably more dangerous, focus on using deepfakes to access corporate networks. In fact, underground forum discussions about this attack method have more than doubled (from 5% to 11%), while chatter about using deepfakes to circumvent security measures increased fivefold (from 3% to 15%).
This shows that deepfakes are moving beyond crude crypto schemes toward sophisticated means of accessing corporate networks, bypassing security measures and accelerating or augmenting techniques already used by a large number of threat actors.
The ACTI team believes that the changing nature and use of deepfakes is partly driven by improvements in technology such as AI. The hardware, software and data needed to create compelling deepfakes are becoming more widespread, easier to use and less expensive, with some professional services now charging less than $40 a month to license their platform.
Emerging deepfake trends
The rise of deepfakes is reinforced by three adjacent trends. First, the cybercriminal underground is highly professionalized, with specialists offering high-quality tools, methods, services and exploits. The ACTI team believes this likely means skilled cyberthreat actors will try to capitalize by offering a wider breadth and scope of underground deepfake services.
Second, due to the double-extortion techniques used by many ransomware groups, there is an endless supply of stolen, sensitive data available on underground forums. This allows deepfake criminals to make their work far more accurate, credible and difficult to detect. This sensitive business data is increasingly indexed, making it easier to find and use.
Third, cybercriminal groups on the dark web now also have bigger budgets. The ACTI team regularly sees cyber threat actors with R&D and outreach budgets ranging from $100,000 to $1 million and even $10 million. This allows them to experiment and invest in services and tools that can enhance their social engineering capabilities, including active cookie sessions, high-fidelity deepfakes, and specialized AI services such as vocal deepfakes.
Help is on the way
To counter deepfakes, organizations can follow the SIFT approach detailed in the FBI’s March 2021 alert. SIFT stands for Stop, Investigate the source, Find trusted coverage, and Trace the original content. This may include studying the issue to avoid hasty emotional reactions, resisting the urge to repost questionable material, and watching for the telltale signs of deepfakes.
It can also help to think about the motives and reliability of the people posting the information. If a phone call or email supposedly from a boss or friend seems odd, don’t answer. Call the person directly to verify. As always, check “from” email addresses for spoofing and find multiple, independent, and reliable sources of information. In addition, online tools can help you determine if images are being reused for sinister purposes or if multiple legitimate images are being used to create counterfeits.
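The advice above to check “from” email addresses for spoofing can be partially automated. The sketch below, using only Python’s standard library, flags a message whose visible From: domain disagrees with its Return-Path domain, one common sign of a spoofed sender. The header values and sample message are illustrative assumptions, not from the article, and real mail filtering would also consult SPF/DKIM results.

```python
# Minimal sketch (illustrative only): compare the visible From: domain
# against the Return-Path domain; a mismatch can indicate spoofing.
from email import message_from_string
from email.utils import parseaddr


def from_domain_mismatch(raw_message: str) -> bool:
    """Return True if the From: domain differs from the Return-Path domain."""
    msg = message_from_string(raw_message)
    _, from_addr = parseaddr(msg.get("From", ""))
    _, return_addr = parseaddr(msg.get("Return-Path", ""))
    from_domain = from_addr.rsplit("@", 1)[-1].lower()
    return_domain = return_addr.rsplit("@", 1)[-1].lower()
    # Only flag when both headers are present and their domains disagree.
    return bool(from_domain and return_domain and from_domain != return_domain)


# Hypothetical spoofed message: the display sender claims to be the CEO,
# but the bounce address points at an unrelated domain.
sample = (
    "From: CEO <ceo@example-corp.com>\n"
    "Return-Path: <bounce@attacker-mail.net>\n"
    "Subject: Urgent wire transfer\n"
    "\n"
    "Please process immediately."
)

print(from_domain_mismatch(sample))  # prints True: domains disagree
```

A check like this is only a heuristic: legitimate mailing-list or bulk-mail services often use differing bounce domains, so a mismatch should trigger closer inspection (or a direct phone call, as the article suggests), not automatic rejection.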
The ACTI team also suggests incorporating deepfake and phishing training, ideally for all employees, developing standard operating procedures for employees to follow if they suspect an internal or external message is a deepfake, and monitoring for potentially harmful deepfakes (through automated searches and alerts).
It can also help to plan crisis communications prior to victimization. This may include preparing responses for press releases, suppliers, authorities and customers and providing links to authentic information.
An escalating battle
Currently, we are witnessing a silent battle between automated deepfake detectors and emerging deepfake technology. The irony is that the technology used to automate deepfake detection will likely be used to improve the next generation of deepfakes. To stay ahead, organizations should resist the temptation to treat security as an afterthought. Hasty security measures, or a failure to understand how deepfake technology can be exploited, can lead to breaches and consequent financial loss, reputational damage and regulatory action.
In short, organizations must focus strongly on combating this new threat and training employees to be vigilant.
Thomas Willkan is a cyber threat intelligence analyst at Accenture.