A new wave of cyberattacks is using large language models as an offensive tool, according to recent reporting from Anthropic and Oligo Security. Both groups said hackers used jailbroken LLMs, some capable of writing code and reasoning autonomously, to carry out real-world attack campaigns. While the development is alarming, cybersecurity researchers had already anticipated such advances.
Earlier this year, a group at Cornell University published research predicting that cybercriminals would eventually use AI to automate hacking at scale. The evolution is consistent with a recurring theme in technology history: tools designed for productivity or innovation inevitably become dual-use. Any number of examples, from drones to commercial aircraft to Alfred Nobel's invention of dynamite, demonstrate how innovation often carries unintended consequences.
The biggest implication for cybersecurity is that LLMs now allow attackers to scale and personalize their operations simultaneously. In the past, cybercriminals were mostly forced to choose between highly targeted efforts that required manual work and broad, indiscriminate attacks with limited sophistication.
Generative AI removes this trade-off, allowing attackers to run tailored campaigns against many targets at once, all with minimal input. In Anthropic’s reported case, attackers initially provided instructions on ways to bypass its model
[…]