Deepfake phishing is here, but many enterprises are unprepared

<p>Deepfake-related cybercrime is on the rise as threat actors exploit AI to deceive and defraud unsuspecting targets, including enterprise users. Deepfakes use deep learning, a category of AI that relies on neural networks, to generate synthetic image, video and audio content.</p>
<p>While <a href="https://www.techtarget.com/whatis/definition/deepfake">deepfakes</a> can be used for benign purposes, threat actors create them primarily to dupe targets into granting access to digital and financial assets. In 2025, 41% of security professionals reported that deepfake campaigns had recently targeted executives at their organizations, according to a Ponemon Institute <a target="_blank" href="https://blackcloak.io/ponemon-digital-executive-protection-report-2025-thank-you/" rel="noopener">survey</a>. Deloitte’s Center for Financial Services also recently <a target="_blank" href="https://www.deloitte.com/us/en/insights/industry/financial-services/deepfake-banking-fraud-risk-on-the-rise.html" rel="noopener">warned</a> that financial losses resulting from generative AI could reach $40 billion by 2027, up from $12.3 billion in 2023.</p>
<p>As deepfake technology becomes both more convincing and more widely accessible, CISOs must take proactive steps to protect their organizations and end users from fraud.</p>
<section class="section main-article-chapter" data-menu-title="3 ways CISOs can defend against deepfake phishing attacks">
<h2 class="section-title"><i class="icon" data-icon="1"></i>3 ways CISOs can defend against deepfake phishing attacks</h2>
<p>Even as attackers race to capitalize on deepfake technology, research suggests that enterprises’ defensive capabilities are lagging. Just 12% of organizations have safeguards in place to detect and deflect <a href="https://www.techtarget.com/searchsecurity/tip/Real-world-AI-voice-cloning-attack-A-red-teaming-case-study">deepfake voice phishing</a>, for example, and only 17% have deployed protections against AI-driven attacks, according to a <a target="_blank" href="https://www.verizon.com/business/resources/T26f/reports/2025-mobile-security-index.pdf" rel="noopener">2025 Verizon survey</a>.</p>
<p>It’s crucial that CISOs take the following steps to identify and repel deepfake attacks.</p>
<h3>1. Practice good organizational cyber hygiene</h3>

[…]