Pen Test Partners, a renowned cybersecurity and penetration testing firm, recently exposed a critical vulnerability in Microsoft’s Copilot AI for SharePoint. Known for simulating real-world hacking scenarios, the company’s red team specialists investigate how systems can be breached, just as skilled threat actors would attempt in real time. With attackers increasingly leveraging AI, ethical hackers are now adopting similar methods, and the outcomes are raising eyebrows.
In a recent test, the Pen Test Partners team explored how Microsoft’s Copilot AI integrated into SharePoint could be manipulated. They uncovered a significant issue: a seemingly secure, encrypted spreadsheet was exposed simply by instructing Copilot to retrieve it. Although SharePoint’s access controls blocked the file through conventional means, the AI assistant was able to bypass those protections.
“The agent then successfully printed the contents,” said Jack Barradell-Johns, a red team security consultant at Pen Test Partners, “including the passwords allowing us to access the encrypted spreadsheet.”
This alarming outcome underlines the dual nature of AI in information security: it can enhance defenses, but it can also inadvertently open doors to attackers if not properly governed.
[…]
This article has been indexed from CySecurity News – Latest Information Security and Hacking Incidents