<p>For decades, cybercriminals have impersonated targets’ trusted contacts to convince them to send funds, credentials or sensitive data. Thanks to deepfake and voice cloning technology, however, security awareness training — the usual countermeasure to social engineering attacks — is arguably no longer enough.</p>
<p>Traditional security awareness training relies on pattern recognition: Does this email look suspicious? Does that link seem off? But <a href="https://www.techtarget.com/searchsecurity/tip/Real-world-AI-voice-cloning-attack-A-red-teaming-case-study">highly convincing deepfake audio</a> and video attacks mean users can no longer rely on instinct or context cues to determine if a message is legitimate.</p>
<p>“Recognition-based training breaks down when an employee believes they’re talking to an executive with an urgent request,” said Diana Rothfuss, director of global strategy for risk, fraud and compliance solutions at data and AI software provider SAS. “To defend against this type of threat, organizations have to get their employees to go beyond ‘does this look right?’”</p>
<p>The vast majority of fraud professionals — 77% — say <a href="https://www.techtarget.com/searchsecurity/tip/Prepare-for-deepfake-phishing-attacks-in-the-enterprise">deepfake attacks are increasing</a>, according to the <a target="_blank" href="https://www.sas.com/en_us/news/press-releases/2026/march/acfe-anti-fraud-technology-study-deepfakes.html" rel="noopener">2026 Anti-Fraud Technology Benchmarking Report</a>, co-published by SAS and the Association of Certified Fraud Examiners (ACFE). Just 7% described their organizations as more than moderately prepared to detect or prevent deepfakes. As a result, some security experts are calling on organizations to implement and normalize proof-based systems, processes and policies to verify that people are who they say they are and short-circuit deepfake attacks.</p>
<section class="section main-article-chapter" data-menu-title="Prove it: Separating authority from authentication">
<h2 class="section-title"><i class="icon" data-icon="1"></i>Prove it: Separating authority from authentication</h2>
<p>The core principle of a proof-based approach is that no single interaction, whether voice, video or text, can authorize a sensitive action on its own — what SAS’ Rothfuss described as “separating authority from authentication.” That sounds straightforward but runs against how most employees are wired to respond to executive requests.</p>
<p>Consider, for example, a 2024 incident in which <a target="_blank" href="https://www.cfodive.com/news/scammers-siphon-25m-engineering-firm-arup-deepfake-cfo-ai/716501/" rel="noopener">threat actors used deepfake technology</a> to steal $25 million from global engineering firm Arup. A finance employee, believing he was on a video conference with senior executives, wired the money at the attackers’ request.</p>
<blockquote class="main-article-pullquote">
<div class="main-article-pullquote-inner">
<figure>
To defend against this type of threat, organizations have to get their employees to go beyond ‘does this look right?’
</figure>
<figcaption>
<strong>Diana Rothfuss</strong>Director of global strategy for risk, fraud and compliance solutions, SAS
</figcaption>
<i class="icon" data-icon="z"></i>
</div>
</blockquote>
<p>While such highly sophisticated deepfake video attacks are still relatively rare, audio cloning is a light lift for cybercriminals. Experts say such incidents present a clear mandate for finance and IT teams to formalize processes for verifying wire transfer requests, rather than handling them on an ad hoc basis.</p>
<p>“Proof-based verification policies should not be that hard; frankly, they should already exist,” said Ira Winkler, field CISO at cybersecurity company Aisle. “There should now be operational procedures in place, such as email verification of a financial transfer before transferring the money, even with ‘visual’ instruction.”</p>
<p>Equally important, Winkler added, staff must be trained on such policies and understand that there are no exceptions — even if they receive verbal instructions from a senior executive over the phone or on Zoom. “This is not just for deepfakes, but for fraud protections in general,” he said.</p>
<p>Specific authentication controls that do not depend on a human user’s recognition of a voice or face include the following:</p>
<h3><b>Out-of-band, two-factor verification</b></h3>
<p>Before fulfilling sensitive requests — e.g., fund transfers, credential resets and privileged access changes — users must obtain confirmation through two separate, pre-approved channels.</p>
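<p>As a minimal sketch of how such a policy gate might be encoded, the following Python snippet authorizes a sensitive action only when confirmations arrive on at least two distinct, pre-approved channels. The channel names and function are hypothetical illustrations, not part of any product described above.</p>
<pre>
# Hypothetical sketch of an out-of-band, two-factor verification policy:
# a sensitive request is executed only after confirmation on two distinct,
# pre-approved channels -- never on the strength of a single call or video
# meeting. Channel names are illustrative assumptions.

APPROVED_CHANNELS = {"callback_phone", "secure_portal", "in_person"}

def authorize_transfer(request_id: str, confirmations: dict) -> bool:
    """Return True only if two or more separate approved channels confirmed."""
    confirmed = {ch for ch, ok in confirmations.items()
                 if ok and ch in APPROVED_CHANNELS}
    return len(confirmed) >= 2

# A deepfaked video call alone does not satisfy the policy:
authorize_transfer("wt-1001", {"video_call": True})   # -> False
# Two independent pre-approved confirmations do:
authorize_transfer("wt-1001", {"callback_phone": True,
                               "secure_portal": True})  # -> True
</pre>
<p>The point of the design is that no single channel — however convincing — can cross the authorization threshold on its own.</p>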