Cybersecurity researchers have identified a new darknet-based artificial intelligence tool that allows threat actors to automate cyberattacks, generate malicious code and produce illegal content, raising concerns about the growing criminal misuse of AI.
The tool, known as DIG AI, was uncovered by researchers at Resecurity, who first detected it on September 29, 2025. Investigators said its use expanded rapidly during the fourth quarter, particularly over the holiday season, as cybercriminals sought to exploit reduced vigilance and increased online activity.
DIG AI operates on the Tor network and does not require user registration, enabling anonymous access. Unlike mainstream AI platforms, it has no content restrictions or safety controls, researchers said.
The service offers multiple models, including an uncensored text generator, a text model believed to be based on a modified version of ChatGPT Turbo, and an image generation model built on Stable Diffusion.
Resecurity said the platform is promoted by a threat actor using the alias “Pitch” on underground marketplaces, alongside listings for drugs and stolen financial data. The tool is offered for free with optional paid tiers that provide faster processing, a structure researchers described as a crime-as-a-service model.
[…]
Content was cut in order to protect the source. Please visit the source for the rest of the article.
This article has been indexed from CySecurity News – Latest Information Security and Hacking Incidents