Why Microsoft Says DeepSeek Is Too Dangerous to Use


Microsoft has publicly confirmed that its employees are not allowed to use the DeepSeek app. The announcement came from Brad Smith, the company’s Vice Chairman and President, during a recent U.S. Senate hearing. He said the decision was driven by serious concerns about user privacy and the risk of biased content being distributed through the app.

According to Smith, Microsoft does not allow DeepSeek on company devices and hasn’t included the app in its official store either. Although other organizations and even governments have taken similar steps, this is the first time Microsoft has spoken publicly about such a restriction.

The main concern is where the app stores user data. DeepSeek’s privacy terms state that all user information is saved on servers based in China. This matters because Chinese law requires companies to hand over data when the government requests it, meaning any data stored through DeepSeek could be accessed by Chinese authorities.

Another major issue is how the app answers questions. DeepSeek has been observed to avoid topics that the Chinese government considers sensitive, raising fears that its responses may reflect government-approved messaging rather than neutral, fact-based information.

Interestingly, even though Microsoft is blocking the app itself, it did allow DeepSeek’s AI model—called R1—to be used through its Azure cloud service earlier this year. But that version works differently. Developers can download it a

[…]

This article has been indexed from CySecurity News – Latest Information Security and Hacking Incidents
