We’re happy to announce a collaboration with Hugging Face, an open platform that fosters collaboration and transparency in AI, to make security insights more accessible to the community. VirusTotal’s analysis results are now integrated directly into the Hugging Face platform, helping users understand potential risks in model files, datasets, and related artifacts before they download them.
Security context where you need it
When you browse a file on Hugging Face, you’ll now see security information from multiple scanners, including VirusTotal results. In the example below, VirusTotal flags the file as unsafe and links directly to its public report for full details.
Addressing new challenges
As AI adoption grows, we see familiar threats taking new forms, from tampered model files and unsafe dependencies to data poisoning and hidden backdoors. These risks are part of the broader AI supply chain challenge, where compromised models, scripts, or datasets can silently affect downstream applications.
At VirusTotal, we’re also evolving to meet the challenges of this new landscape. We’re developing AI-driven analysis tools such as Code Insight, which uses LLMs to understand and explain code behavior, and we’re adding support for specialized tools for model/serialization formats, including picklescan, safepickle, and ModelScan, to help surface risky patterns and unsafe deserialization flows.
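To make the risk concrete, here is a minimal sketch of why pickle-based model formats need scanning: Python’s `pickle` protocol lets a serialized object name an arbitrary callable (via `__reduce__`) that runs the moment the file is loaded, before any model code is ever invoked. The second half sketches a crude static check in the spirit of tools like picklescan, walking the opcode stream for references to dangerous globals; this is an illustrative over-approximation, not the actual implementation of any of the tools named above, and the `SUSPICIOUS` name list is our own placeholder.

```python
import pickle
import pickletools

# --- The unsafe pattern: pickle executes a callable at load time ---
class EvilPayload:
    """__reduce__ tells pickle which callable to invoke on loads().
    Real malicious model files substitute os.system, eval, etc."""
    def __reduce__(self):
        # Harmless stand-in: loads() will execute eval("6 * 7").
        return (eval, ("6 * 7",))

blob = pickle.dumps(EvilPayload())
result = pickle.loads(blob)  # code runs during deserialization itself
print(result)                # 42 -- no method was ever called explicitly

# --- A crude static check: scan the opcode stream for references ---
# --- to risky globals without ever deserializing the payload.    ---
SUSPICIOUS = {"eval", "exec", "system", "posix", "os", "subprocess"}

def suspicious_names(data: bytes) -> set:
    hits = set()
    for op, arg, _pos in pickletools.genops(data):
        # GLOBAL carries "module name" as one string; STACK_GLOBAL
        # consumes two preceding string opcodes, so checking every
        # string argument is a simple over-approximation.
        if isinstance(arg, str):
            hits.update(tok for tok in arg.split() if tok in SUSPICIOUS)
    return hits

print(suspicious_names(blob))  # {'eval'}
```

Note that the scan uses `pickletools.genops`, which only parses opcodes and never executes the payload, which is exactly why static scanners can inspect untrusted model files safely.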
Our collaboration with Hugging Face strengthens this effort. By connecting VirusTotal’s analysis with Hugging Face’s AI Hub, we can expand our research into threats targeting AI models and share that visibility across the i
[…]