A Major Flaw in the AI Testing Framework MLflow Can Compromise Servers and Data

MLflow, an open-source framework that many organizations use to manage and record machine-learning experiments, has been patched for a critical vulnerability that could enable attackers to extract sensitive information from servers, such as SSH keys and AWS credentials. Because MLflow does not enforce authentication by default, and a growing number of MLflow deployments are exposed directly to the internet, the attack can be carried out remotely without authentication.
“Basically, every organization that uses this tool is at risk of losing their AI models, having an internal server compromised, and having their AWS account compromised,” Dan McInerney, a senior security engineer with cybersecurity startup Protect AI, told CSO. “It’s pretty brutal.”
McInerney discovered the flaw and privately reported it to the MLflow project. It was fixed in version 2.2.1 of the framework, released three weeks ago, but the release notes did not mention a security fix.
Path traversal used to include local and remote files
MLflow is a Python-based tool for automating machine-learning workflows. It includes a number of components that enable users to deploy models from various ML libraries, handle their lifecycle (including model versioning, stage transitions, and annotations), track experiments to record and compare parameters and results, and even package ML code in a reproducible format.
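The class of bug described in the subheading above, a path traversal that lets a request read files outside the intended directory, comes down to a server joining a user-supplied path onto a base directory without validating the result. The sketch below is illustrative only (it is not MLflow's actual code; the function name and directory are made up for the example) and shows both the vulnerable pattern and a minimal check that rejects traversal payloads:

```python
import os


def resolve_artifact_path(base_dir: str, user_path: str) -> str:
    """Resolve a user-supplied artifact path, rejecting attempts to
    escape base_dir via sequences like '../' or absolute paths.

    Hypothetical helper for illustration; not from MLflow itself.
    """
    # Vulnerable pattern: joining and serving this value directly.
    # A payload such as '../../root/.ssh/id_rsa' normalizes to a path
    # outside base_dir, exposing files like SSH keys or cloud credentials.
    candidate = os.path.abspath(
        os.path.normpath(os.path.join(base_dir, user_path))
    )
    base = os.path.abspath(base_dir)
    # The fix: only accept paths that resolve to somewhere under base_dir.
    if candidate != base and not candidate.startswith(base + os.sep):
        raise ValueError("path traversal attempt blocked")
    return candidate
```

A safe request like `resolve_artifact_path("/srv/mlruns", "exp1/model.pkl")` resolves normally, while `resolve_artifact_path("/srv/mlruns", "../../etc/passwd")` raises, because the normalized path falls outside the base directory.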

[…]

This article has been indexed from CySecurity News – Latest Information Security and Hacking Incidents
