There’s been a lot of chatter over the use of AI in various fields, and because it’s my professional focus, I’m most interested in how it’s used in cybersecurity. Now, that doesn’t mean that I’m not aware of how it’s used…or more appropriately, misused…in other fields, as well. For example, reports of its misuse in the legal field have been circulating for more than two years now, and just last year, we saw the term “AI slop” adopted in the software dev/cybersecurity field.
Something we also saw in 2025 was the release of the Anthropic report on how AI was used by threat actors in a cyber espionage campaign. The report is 14 pages long; after the title page, table of contents, and a two-page Executive Summary, the content of the report itself starts on pg 6.
The “TL;DR” of the report, if you need it, is that nation-state threat actors used Claude to target 30 organizations, and “…to execute 80-90% of tactical operations independently at physically impossible request rates.”
That’s right…they used AI to run up to an estimated 90% of their attack chain autonomously.
So, what does this mean? It means that “low and slow” was out the window, and that the attack chains were automated at “physically impossible request rates”.
That’s it. Everything was faster. Reading through the report, it becomes clear that the tools and techniques employed were akin to those commonly observed in human-operated attacks, but the OODA loop was much smaller, much tighter, and iterated through much faster than humanly possible. On the defender’s side, this means that artifacts were generated (and hopefully, alerts fired) much closer together than what would’ve been observed earlier in the year.
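To put that defender-side implication in concrete terms, here’s a minimal, hypothetical sketch of what flagging that kind of compressed timeline might look like. The alert records, field layout, and thresholds below are all assumptions for illustration; they aren’t taken from the report or from any particular SIEM or EDR product.

```python
from datetime import datetime, timedelta

# Hypothetical alert records: (timestamp, source host, description).
# In practice, these would be pulled from your SIEM or EDR telemetry.
alerts = [
    (datetime(2025, 11, 1, 14, 0, 0), "host-01", "recon: network scan"),
    (datetime(2025, 11, 1, 14, 0, 2), "host-01", "credential access attempt"),
    (datetime(2025, 11, 1, 14, 0, 3), "host-01", "lateral movement attempt"),
    (datetime(2025, 11, 1, 16, 30, 0), "host-02", "recon: network scan"),
]

def find_bursts(events, max_gap=timedelta(seconds=5), min_events=3):
    """Group events into bursts where consecutive events from the same
    source arrive within max_gap of each other, and flag bursts with at
    least min_events, on the theory that a human operator rarely sustains
    that pace across distinct attack-chain stages."""
    bursts = []
    current = []
    for event in sorted(events):
        # Close out the current burst if the gap is too large or the
        # source host changes.
        if current and (event[0] - current[-1][0] > max_gap
                        or event[1] != current[-1][1]):
            if len(current) >= min_events:
                bursts.append(current)
            current = []
        current.append(event)
    if len(current) >= min_events:
        bursts.append(current)
    return bursts

for burst in find_bursts(alerts):
    span = (burst[-1][0] - burst[0][0]).total_seconds()
    print(f"{len(burst)} alerts from {burst[0][1]} in {span:.0f}s "
          "-- faster than human-operated tradecraft would suggest")
```

The point isn’t the specific thresholds; it’s that the inter-arrival time of artifacts across attack-chain stages becomes a signal in its own right when the operator is a model rather than a human.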
In response to the report, Matthew shared his thoughts, which included the following:
[…]