As artificial intelligence has become a staple in modern organizations, it has transformed how they analyze data, make automated decisions, and defend their digital perimeters, moving from experimental labs into the operational bloodstream. But as these systems are incorporated ever deeper into company infrastructure, the technology itself is becoming both a strategic asset and a desirable target for attackers.
Adversaries seeking leverage are now studying, imitating, and in some cases quietly manipulating the same models used to draft code, triage alerts, and streamline workflows. As Fast Company points out, this dual reality is redefining cyber risk, putting AI at the heart of both defense strategy and offensive innovation.
Insights from Google Cloud’s AI Threat Tracker indicate that this shift is accelerating rapidly.
According to the report, there has been a significant increase in model extraction, or "distillation," attempts, in which attackers systematically query proprietary artificial intelligence systems to approximate their underlying capabilities without ever breaching a network in the traditional sense.
Google Threat Intelligence observes that state-aligned and financially motivated actors affiliated with China, Iran, North Korea, and Russia are integrating artificial intelligence tools into nearly every stage of the intrusion lifecycle.
A growing number of these campaigns include automated reconnaissance, vulnerability mapping, and highly tailored social engineering, which can be carried out with minimal direct human intervention and are increasingly modular, scalable, and effective.
Consistent with these findings, a newly released assessment by the Google Threat Intelligence Group indicates that a more operational phase of the threat landscape has begun.
This analysis warns that adversaries are no longer considering artificial intelligence a peripheral experiment, but are instead embedding it directly into live attack workflows.
In particular, the targeting and misuse of Gemini models is highlighted, reflecting a broader trend in which commercially available generative systems are systematically evaluated, stressed, and sometimes incorporated into malicious toolchains.
Researchers documented instances in which active malware strains made direct calls to Gemini at runtime through its application programming interface. Rather than hard-coding every functional component into the malware binary, operators dynamically requested task-specific source code from the model as the intrusion progressed.
The HONESTCUE malware family, for example, issued structured prompts to obtain C# code snippets that were then executed within its attack chain. By externalizing portions of its logic, the malware reduced its static footprint and complicated detection strategies that rely on signature matching or behavioral heuristics.
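One consequence for defenders is that such a binary's static contents may hold little more than prompts and an API endpoint, rather than the payload logic itself. The toy triage sketch below illustrates the kind of weak string-level signal that remains; the hostname list and the sample blob are illustrative assumptions, not a vetted indicator feed or real HONESTCUE artifacts.

```python
# Toy static-triage sketch: flag files that embed generative-AI API hostnames.
# The hostname list below is an assumption for illustration only.
SUSPECT_HOSTS = [
    b"generativelanguage.googleapis.com",  # Gemini API endpoint
    b"api.openai.com",
]

def flag_ai_api_strings(blob: bytes) -> list:
    """Return any embedded AI-API hostnames found in a binary blob."""
    return [host for host in SUSPECT_HOSTS if host in blob]

# Hypothetical sample: a binary that externalizes its logic may carry only a
# prompt and an endpoint, so a string check like this is a first, weak signal.
sample = b"\x7fELF...POST https://generativelanguage.googleapis.com/v1beta/..."
hits = flag_ai_api_strings(sample)
```

A heuristic this crude is trivially evaded by string obfuscation, which is precisely the report's point: when functional code arrives at runtime, static signatures capture only the scaffolding.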
The report further describes sustained efforts to perform model extraction, or distillation, attacks, in which threat actors generated large volumes of carefully sequenced queries to map response patterns and approximate a model's internal decision boundaries.
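The mechanics of such an extraction can be sketched in miniature. The snippet below is a simplified simulation, not any actor's actual tooling: the `oracle` function stands in for a proprietary model's query API (here a hidden linear rule the attacker cannot see), and a local "student" model is fitted purely to the harvested input/label pairs.

```python
import random

# Stand-in for a proprietary model: the attacker observes only label outputs,
# never the internal weights (a hidden linear decision rule in this sketch).
HIDDEN_W = (2.0, -1.0)

def oracle(x1, x2):
    """Black-box query endpoint: returns only the predicted class."""
    return 1 if HIDDEN_W[0] * x1 + HIDDEN_W[1] * x2 > 0 else 0

# Step 1: systematically query the oracle to harvest input/label pairs.
random.seed(0)
queries = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(2000)]
dataset = [(point, oracle(*point)) for point in queries]

# Step 2: train a local "student" (a simple perceptron) on the harvested labels.
w1 = w2 = 0.0
for _ in range(20):
    for (x1, x2), y in dataset:
        pred = 1 if w1 * x1 + w2 * x2 > 0 else 0
        w1 += 0.1 * (y - pred) * x1
        w2 += 0.1 * (y - pred) * x2

# Step 3: the student now approximates the oracle's decision boundary,
# even though the hidden weights were never exposed.
agreement = sum(
    (1 if w1 * x1 + w2 * x2 > 0 else 0) == oracle(x1, x2)
    for x1, x2 in queries
) / len(queries)
```

Real extraction campaigns target models vastly more complex than a linear rule, but the structure is the same: enough well-chosen queries let an adversary clone much of a model's behavior without ever touching the network that hosts it.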
This article has been indexed from CySecurity News – Latest Information Security and Hacking Incidents