Ethical Hackers Uncover 38TB Microsoft Data Breach via Azure Storage


The recent Microsoft data leak, which stemmed from AI researchers inadvertently sharing open-source training data on GitHub, has been addressed. Microsoft responded swiftly to a vulnerability that exposed 38TB of private data from its AI research division.
The breach was uncovered by ethical hackers at cloud security firm Wiz, who on June 22, 2023 identified a shareable link built on an Azure Shared Access Signature (SAS) token. They promptly reported the finding to the Microsoft Security Response Center, which invalidated the SAS token by June 24; on July 7, the token on the original GitHub page was replaced.
The exploit revolved around Shared Access Signature (SAS) tokens, an Azure feature for granting scoped access to storage resources. When mishandled, such tokens can grant far broader access, for far longer, than intended. Wiz first detected the vulnerability while scanning the internet for improperly configured storage containers, a well-known entry point for cloud-hosted data breaches.
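At its core, a SAS token is an HMAC-SHA256 signature over a newline-delimited "string to sign", computed with the storage account key and carried in the URL's query string. The sketch below is a simplified illustration of that signing scheme, not Azure's exact string-to-sign format (which includes additional fields such as the resource path and API version); the key value and helper names here are hypothetical.

```python
import base64
import hashlib
import hmac
from datetime import datetime, timedelta, timezone
from urllib.parse import quote


def sign_sas(account_key_b64: str, string_to_sign: str) -> str:
    # Azure signs SAS tokens by HMAC-SHA256ing the string-to-sign with the
    # (base64-encoded) storage account key, then base64-encoding the digest.
    key = base64.b64decode(account_key_b64)
    digest = hmac.new(key, string_to_sign.encode("utf-8"), hashlib.sha256).digest()
    return base64.b64encode(digest).decode("utf-8")


def build_token(account_key_b64: str, permissions: str = "r", valid_hours: int = 1) -> str:
    # Simplified: a real Azure string-to-sign has more fields, but the shape
    # is the same -- permissions, start time, expiry time, then the signature.
    now = datetime.now(timezone.utc)
    fmt = "%Y-%m-%dT%H:%M:%SZ"
    start = now.strftime(fmt)
    expiry = (now + timedelta(hours=valid_hours)).strftime(fmt)
    string_to_sign = "\n".join([permissions, start, expiry])
    sig = quote(sign_sas(account_key_b64, string_to_sign), safe="")
    # sp = permissions, st = start, se = expiry, sig = signature: the same
    # query parameters that appear in a real SAS URL.
    return f"sp={permissions}&st={start}&se={expiry}&sig={sig}"
```

The risk the article describes follows directly from this design: anyone who obtains the URL gets whatever access the token encodes, with no further authentication, until the expiry time passes.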
Their investigation led them to ‘robust-models-transfer’, a repository housing open-source code and AI models used for image recognition within Microsoft’s AI research division.
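A scan like Wiz's for misconfigured storage typically starts by probing whether a container permits anonymous enumeration. The sketch below is a hypothetical illustration (the account and container names are placeholders) built on Azure's documented List Blobs request (`?restype=container&comp=list`): an unauthenticated 200 response means anyone can list the container's contents.

```python
from urllib.error import HTTPError, URLError
from urllib.parse import urlencode
from urllib.request import urlopen


def listing_url(account: str, container: str) -> str:
    # The anonymous List Blobs operation on Azure Blob Storage.
    qs = urlencode({"restype": "container", "comp": "list"})
    return f"https://{account}.blob.core.windows.net/{container}?{qs}"


def allows_anonymous_listing(account: str, container: str, timeout: float = 5.0) -> bool:
    # True only if an unauthenticated request succeeds (HTTP 200), i.e. the
    # container is publicly enumerable. 403/404, DNS failures, and timeouts
    # all fall through to False (URLError/HTTPError are OSError subclasses).
    try:
        with urlopen(listing_url(account, container), timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False


# Usage (hypothetical names):
#   allows_anonymous_listing("someaccount", "robust-models-transfer")
```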
The root of the problem traced back to a Shared Access Signature token associated with an internal storage account. A Microsoft employee, while engaged in the developmen

[…]
Content was cut in order to protect the source. Please visit the source for the rest of the article.

This article has been indexed from CySecurity News – Latest Information Security and Hacking Incidents
