Cybersecurity investigators at Google have confirmed that state-sponsored hacking groups are actively relying on generative artificial intelligence to improve how they research targets, prepare cyber campaigns, and develop malicious tools. According to the company’s threat intelligence teams, North Korea–linked attackers were observed using the firm’s AI platform, Gemini, to collect and summarize publicly available information about organizations and employees they intended to target. This type of intelligence gathering allows attackers to better understand who works at sensitive companies, what technical roles exist, and how to approach victims in a convincing way.
Investigators explained that the attackers searched for details about leading cybersecurity and defense companies, along with information about specific job positions and salary ranges. These insights help threat actors craft more realistic fake identities and messages, often impersonating recruiters or professionals to gain the trust of their targets. Security experts warned that this activity closely resembles legitimate professional research, which makes it harder for defenders to distinguish normal online behavior from hostile preparation.
The hacking group involved, tracked as UNC2970, is linked to North Korea and overlaps with a network widely known as Lazarus Group. This group has previously run a long-term operation in which attackers pretended to offer job opportunities to professionals in aerospace, defense, and energy companies, only to deliver malware instead. Researchers say this group continues to focus heavily on defense-related targets and regularly impersonates corporate recruiters to begin contact with victims.
The misuse of AI is not limited to one actor. Multiple hacking groups connected to China and Iran were also found using AI tools to support different phases of their operations. Some groups used AI to gather targeted intelligence, including collecting email addresses and account details. Others relied on AI to analyze software weaknesses, prepare technical testing plans, interpret documentation from open-source tools, and debug exploit code. Certain actors used AI to build scanning tools and malicious web shells, while others created fake online identities to manipulate individuals into interacting with them. In several cases, attackers claimed to be security researchers or competition participants in order to bypass safety restrictions built into AI systems.
Researchers also identified malware that directly communicates with AI services to generate harmful code during an attack. One such tool, HONESTCUE, requests programming instructions from AI platforms and receives source code that is used to build additional malicious components on the victim’s system. Instead of storing files on disk, this malware compiles and runs code directly in memory using legitimate system tools, making detection and forensic analysis more difficult. Separately, investigators uncovered phishing kits designed to look like cryptocurrency exchanges. These fake platforms were built using automated website creation tools from Lovable AI and were used to trick victims into handing over login credentials. Parts of this activity were linked to a financially motivated group known as UNC5356.
Security teams also reported an increase in so-called ClickFix campaigns. In these schemes, attackers use public sharing features on AI platforms to publish convincing step-by-step guides that appear to fix common computer problems. In reality, these instructions lead users to install malware that steals personal and financial data. This trend was first flagged in late 2025 by Huntress.
Another growing threat involves model extraction attacks. In these cases, adversaries repeatedly query proprietary AI systems in order to observe how they respond and then train their own models to imitate the original system's behavior.
[…]