Cybersecurity in the age of AI means bigger, faster threats

<p>With attackers able to move at AI speed, defenders can’t rely on the techniques and instincts they’ve come to trust. Even the best of best practices won’t meet the threat, said speakers at the recent SecureWorld conference in Boston.</p>
<p>An organization that wants to be resilient in the AI age needs to detect and fend off malicious activity as it occurs.</p>
<p>“That means putting in place stronger identity controls,” said Jack Butler, a senior enterprise solutions engineer at Sumo Logic, a SecOps vendor. “That means putting in place a more robust logging program and correlation engines to detect across all of these in real time and reassess signals of trust. It needs to be reassessed dynamically.”</p>
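<p>Butler's point about reassessing trust dynamically can be sketched in a few lines. The following is a hypothetical Python illustration, not Sumo Logic's product logic; the signal names and weights are invented, and a real correlation engine would ingest far richer telemetry.</p>

```python
from dataclasses import dataclass, field

@dataclass
class Signal:
    """A single authentication or activity signal for an identity."""
    name: str
    weight: float  # positive = trust-raising, negative = trust-lowering

@dataclass
class TrustEvaluator:
    """Re-scores an identity every time a new signal arrives,
    rather than trusting a one-time login decision."""
    base_trust: float = 0.5
    signals: list = field(default_factory=list)

    def observe(self, signal: Signal) -> float:
        self.signals.append(signal)
        return self.score()

    def score(self) -> float:
        raw = self.base_trust + sum(sig.weight for sig in self.signals)
        return max(0.0, min(1.0, raw))  # clamp to [0, 1]

evaluator = TrustEvaluator()
evaluator.observe(Signal("mfa_passed", +0.3))
trust = evaluator.observe(Signal("impossible_travel", -0.6))
# trust has now dropped; a policy engine could force re-authentication
```

<p>The point of the sketch is that trust is recomputed on every signal, not granted once at login.</p>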
<section class="section main-article-chapter" data-menu-title="Identity protection needs to meet the threat">
<h2 class="section-title"><i class="icon" data-icon="1"></i>Identity protection needs to meet the threat</h2>
<p>As for what to do about the substantial challenge of managing identities associated with people, machines and AI agents, panelists at SecureWorld emphasized visibility.</p>
<p>“Know what is in your environment, and know what it is doing,” recommended Chandra Pandey, CEO of Seceon, a security vendor. “If you know what is in your environment with machines, humans and all that — in real time — and you know what you’re doing, you have done 80% of your work.”</p>
<p>Reckoning with all that discovery isn’t easy, especially with the nearly incalculable numbers of nonhuman identities (NHIs) in use in modern IT environments. <a href="https://www.techtarget.com/searchsecurity/definition/What-is-machine-identity-management">Machine identity management</a> and <a href="https://www.techtarget.com/searchsecurity/tip/CISOs-guide-to-nonhuman-identity-security">NHI security</a> pose a big and growing challenge for security teams.</p>
<p>“Make sure that you’re really asking yourself: What systems do you have — human and nonhuman identities — and what do they have access to,” Butler said. “Make sure that you are assuming zero trust. You’re going to get pwned, and, when you do, they’re going to take access.”</p>
<p>“Start with AI agents,” advised Kelsey Brazill, vice president of market strategy at P0 Security, an identity security vendor. “They’re new, so there’s less baggage there, and it’s easier to implement some best practices and standards. And then that sets you up to extend that to all of the NHIs in your system.”</p>
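<p>The inventory-first advice above — know every identity and what it can access — can be illustrated with a minimal sketch. This is a hypothetical Python example; the identity names, kinds and the access threshold are invented for illustration, and a real program would pull this data from IAM and SaaS audit APIs.</p>

```python
# Hypothetical inventory: map each identity (human, machine, AI agent)
# to what it can access, then flag over-privileged nonhuman identities.
inventory = [
    {"id": "alice",        "kind": "human",    "access": {"crm"}},
    {"id": "ci-deployer",  "kind": "machine",  "access": {"registry", "prod-db"}},
    {"id": "report-agent", "kind": "ai_agent", "access": {"crm", "prod-db", "billing"}},
]

def over_privileged(entries, limit=2):
    """Nonhuman identities holding more grants than `limit` deserve review."""
    return [e["id"] for e in entries
            if e["kind"] != "human" and len(e["access"]) > limit]

flagged = over_privileged(inventory)  # the AI agent exceeds the limit
```

<p>Even this toy version surfaces the pattern the panelists describe: the newest identity type, the AI agent, is the one accumulating the broadest access.</p>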
<p>SOC analysts have seen AI used against them for a while, but defenders haven’t shifted their thinking enough to fully confront AI’s weaponization, said Patricia Titus, field CISO at security vendor Abnormal AI.</p>
<p>“Stop constantly looking for indicators of compromise,” Titus recommended. “By the time somebody gets hit and your SOC analysts write a rule and plug it into your systems, it could already be too late for your organization. We have to start thinking a little bit differently and start looking at attributing behavior.”</p>
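<p>One minimal way to read Titus' shift from indicators of compromise to behavior: baseline an identity's normal activity and flag statistical outliers as they happen, rather than waiting for a known-bad signature. The numbers below are invented, and real behavioral analytics are far richer than a single z-score check; this Python sketch only shows the shape of the idea.</p>

```python
import statistics

# Hypothetical per-identity baseline of hourly login counts.
baseline = [2, 3, 2, 4, 3, 2, 3]

def is_anomalous(observed, history, z_threshold=3.0):
    """Flag an observation that deviates sharply from the identity's
    own history, instead of matching it against known IoCs."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history) or 1.0  # avoid division by zero
    return abs(observed - mean) / stdev > z_threshold

normal = is_anomalous(3, baseline)    # within the usual range
suspect = is_anomalous(40, baseline)  # a burst far outside the baseline
```
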
<p>With AI’s help, threat actors can be deliberate about who they target. This means attackers rely less on classic, spray-and-pray intrusion attempts, Titus said, and can instead use AI to quickly cull through vast amounts of data to craft specific attacks on a particular individual. Those <a href="https://www.techtarget.com/searchsecurity/tip/Generative-AI-is-making-phishing-attacks-more-dangerous">highly targeted tactics</a> tend to be more successful.</p>
<p>Fayyaz Rajpari, senior director of GSI at SaaS security vendor AppOmni, said he has seen many compromises in the past year that had nothing to do with humans and instead involved cloud services, SaaS, NHIs, tokens and AI agents. That type of malicious behavior is hard to defend against, he said. “You have to start figuring out how you can leverage AI against these AI-generated attacks and interconnected systems. It’s difficult, but that’s just the reality.”</p>
</section>
<section class="section main-article-chapter" data-menu-title="Can AI agents be secured?">
<h2 class="section-title"><i class="icon" data-icon="1"></i>Can AI agents be secured?</h2>
<p>AI agents are good at evading whatever guardrails cybersecurity teams put in place. “Their job is to finish a workload. If they have to go around to the backdoor and beg another agent to give them access, which we’ve already seen, they will get granted access,” Titus said.</p>
<p>To respond, teams need to design AI models that will mask data and take other protective measures, said Peter Steyaert, a senior manager of systems engineering at Fortinet. “You’re going to have to limit exposure.”</p>
</section>

[…]

This article has been indexed from Search Security Resources and Information from TechTarget
