Anthropic Cracks Down on Claude Code Spoofing, Tightens Access for Rivals and Third-Party Tools


Anthropic has rolled out a new set of technical controls aimed at stopping third-party applications from impersonating its official coding client, Claude Code, to gain cheaper access to Claude AI models and higher usage limits. The move has directly disrupted workflows for users of popular open-source coding agents such as OpenCode.
At the same time—but through a separate enforcement action—Anthropic has also curtailed the use of its models by competing AI labs, including xAI, which accessed Claude through the Cursor integrated development environment. Together, these steps signal a tightening of Anthropic’s ecosystem as demand for Claude Code surges.
The anti-spoofing update was publicly clarified on Friday by Thariq Shihipar, a Member of Technical Staff at Anthropic working on Claude Code. Writing on X (formerly Twitter), Shihipar said the company had “tightened our safeguards against spoofing the Claude Code harness.” He acknowledged that the rollout caused unintended side effects, explaining that some accounts were automatically banned after triggering abuse detection systems—an issue Anthropic says it is now reversing.
While those account bans were unintentional, the blocking of third-party integrations themselves appears to be deliberate.
Why Harnesses Were Targeted