GreyNoise, a cybersecurity company, has uncovered two campaigns against large language model (LLM) infrastructure in which attackers exploited misconfigured proxies to gain illicit access to commercial AI services. Beginning in late December 2025, the attackers scanned over 73 LLM endpoints and generated more than 80,000 sessions in 11 days, relying on benign-looking queries to evade detection. The activity underscores the growing threat to AI systems as attackers map exposed deployments for potential exploitation.
The first campaign, which started in October 2025, targeted server-side request forgery (SSRF) flaws in Ollama honeypots and generated a cumulative 91,403 attack sessions. The attackers supplied malicious registry URLs through Ollama's model-pull functionality and manipulated Twilio SMS webhooks to trigger outbound connections to infrastructure they controlled. A pronounced spike over Christmas produced 1,688 sessions in 48 hours from 62 IP addresses across 27 countries, much of it using ProjectDiscovery's OAST (out-of-band application security testing) tooling, a pattern that points to grey-hat researchers rather than fully weaponized malware operations.
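To make the registry-URL abuse concrete from a defender's perspective, the sketch below flags Ollama /api/pull requests whose model reference names a registry other than Ollama's default public one, which is the kind of out-of-band behavior GreyNoise describes. It is a minimal illustration rather than a detection rule: the JSON log schema and its path, body, and src_ip fields are hypothetical stand-ins for whatever request logging sits in front of an Ollama instance, and the handling of registry-prefixed model names assumes the deployment accepts them at all.

```python
# Minimal sketch: flag Ollama /api/pull requests whose model reference points at an
# external registry host, the SSRF-style probing pattern described in the article.
# The log schema ("path", "body", "src_ip") is hypothetical; adapt it to whatever
# request logging sits in front of the Ollama instance.
import json

DEFAULT_REGISTRY = "registry.ollama.ai"  # Ollama's default public registry


def registry_host(model_ref: str) -> str:
    """Return the registry host implied by a model reference.

    A bare name such as "llama3" or "llama3:8b" implies the default registry;
    a reference prefixed with something that looks like a host
    (e.g. "oast.example.net/probe/x:latest") names an external registry.
    """
    if "/" not in model_ref:
        return DEFAULT_REGISTRY
    first = model_ref.split("/", 1)[0]
    if "." in first or ":" in first or first == "localhost":
        return first.split(":", 1)[0].lower()
    return DEFAULT_REGISTRY


def suspicious_pulls(log_lines):
    """Yield records for /api/pull calls that reference a non-default registry."""
    for line in log_lines:
        try:
            rec = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip malformed log lines
        if rec.get("path") != "/api/pull":
            continue
        body = rec.get("body") or {}
        # Newer Ollama builds use "model" in the pull body; older ones used "name".
        model_ref = body.get("model") or body.get("name") or ""
        host = registry_host(model_ref)
        if host != DEFAULT_REGISTRY:
            yield {"src_ip": rec.get("src_ip"), "model": model_ref, "registry": host}


if __name__ == "__main__":
    sample = [
        json.dumps({"src_ip": "203.0.113.5", "path": "/api/pull",
                    "body": {"model": "oast.example.net/probe/x:latest"}}),
        json.dumps({"src_ip": "198.51.100.7", "path": "/api/pull",
                    "body": {"model": "llama3"}}),
    ]
    for hit in suspicious_pulls(sample):
        print("possible SSRF probe:", hit)
```

Run against the two sample records, only the pull that references the external host is reported; ordinary pulls from the default registry pass through silently.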
The second campaign began on December 28, originating from the IP addresses 45.88.186.70 and 204.76.203.125, and systematically scanned endpoints that supported
[…]
Content was cut in order to protect the source. Please visit the source for the rest of the article.