As agentic AI becomes more common across industries, companies face a new cybersecurity challenge: how to verify and secure systems that operate independently, make decisions on their own, and appear or disappear without human involvement.
Consider a financial firm where an AI agent activates early in the morning to analyse trading data, detect unusual patterns, and prepare reports before the markets open. Within minutes, it connects to several databases, completes its task, and shuts down automatically. This type of autonomous activity is growing rapidly, but it raises serious concerns about identity and trust.
“Many organisations are deploying agentic AI without fully thinking about how to manage the certificates that confirm these systems’ identities,” says Chris Hickman, Chief Security Officer at Keyfactor.
“The scale and speed at which agentic AI functions are far beyond what most companies have ever managed.”
AI agents are unlike human users, who log in with passwords or devices tied to hardware. Agents are ephemeral and adaptable, able to spin up, perform complex jobs, and disappear again without any manual authentication step.
This fluid nature makes it difficult to manage digital certificates, which are essential for maintaining trusted communication between systems.
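To make that lifecycle concrete, the sketch below, assuming Python with the `cryptography` package, shows how a platform might mint a short-lived X.509 certificate whose validity window matches an agent's task window. The demo CA, helper names, and the agent identifier are hypothetical placeholders for illustration, not Keyfactor's or any vendor's actual API.

```python
# Minimal sketch: issuing a short-lived certificate for an ephemeral AI agent.
# Assumes the `cryptography` package (pip install cryptography). The CA here
# is a throwaway self-signed demo CA; in practice issuance would go through
# an organisation's managed PKI.
import datetime

from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec


def make_demo_ca():
    """Create a throwaway self-signed CA purely for demonstration."""
    ca_key = ec.generate_private_key(ec.SECP256R1())
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "demo-agent-ca")])
    now = datetime.datetime.now(datetime.timezone.utc)
    ca_cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)
        .public_key(ca_key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(hours=1))
        .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
        .sign(ca_key, hashes.SHA256())
    )
    return ca_key, ca_cert


def issue_agent_cert(ca_key, ca_cert, agent_id: str, ttl_minutes: int = 15):
    """Issue a certificate that expires shortly after the agent's task window."""
    agent_key = ec.generate_private_key(ec.SECP256R1())
    now = datetime.datetime.now(datetime.timezone.utc)
    cert = (
        x509.CertificateBuilder()
        .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, agent_id)]))
        .issuer_name(ca_cert.subject)
        .public_key(agent_key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        # Short lifetime: the credential dies with the agent, so stale or
        # orphaned certificates are far less of a risk than with long-lived
        # human credentials.
        .not_valid_after(now + datetime.timedelta(minutes=ttl_minutes))
        .sign(ca_key, hashes.SHA256())
    )
    return agent_key, cert


if __name__ == "__main__":
    ca_key, ca_cert = make_demo_ca()
    _, cert = issue_agent_cert(ca_key, ca_cert, "trading-report-agent-001")
    print(cert.subject.rfc4514_string())
```

The design point is that binding the certificate's lifetime to the agent's task window shifts the problem from revoking abandoned credentials to automating issuance at machine speed, which is exactly the scale challenge the quote above describes.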