Anthropic shipped Claude Opus 4.7 on April 16, rolling it out across all Claude products, through the API, and on Amazon Bedrock, Google Cloud Vertex AI, and Microsoft Foundry. Pricing is unchanged from Opus 4.6: $5 per million input tokens and $25 per million output tokens.
What’s different from 4.6
The most noticeable change is in coding. According to Anthropic, Opus 4.7 handles complex, long-running tasks with more autonomy, follows instructions more precisely, and verifies its own work before returning answers. Context matters here: in the weeks before release, several users reported perceived quality drops in Opus 4.6. A senior AMD executive even posted a widely shared claim on GitHub that the model was no longer reliable for complex engineering work. Anthropic denied making deliberate changes to the model.
Security and Mythos
This release carries more than the usual performance bump. Last month Anthropic announced “Project Glasswing” and made Claude Mythos Preview available in limited form, sharing it only with a small group of tech and cybersecurity companies because of the risks tied to its offensive capabilities.
Opus 4.7 is the first model where Anthropic is testing safeguards designed ahead of a potential public Mythos release. Its hacking capabilities were intentionally constrained during training, and the model now has automatic filters to detect and block high-risk cybersecurity requests. People working legitimately in security, like penetration testers, can sign up for the new Cyber Verification Program to access otherwise blocked features.
In other words, Anthropic is using the Opus 4.7 release as an open-air lab to test mechanisms that, if they work, could pave the way for Mythos.
Before you migrate
Worth noting: Opus 4.7 uses a new tokenization system, and the same input text can now consume 10% to 35% more tokens than before. If you run the model in production, measure the real impact on your traffic before switching permanently.
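To get a feel for the worst case before you measure real traffic, you can do the arithmetic up front. This is a back-of-the-envelope sketch, not an official calculator: the traffic figures are made up, the prices are the published Opus rates from above, and the inflation is applied to input tokens only, since that is what the tokenizer change is reported to affect.

```python
# Rough cost-impact estimate for the new tokenizer. Prices are the
# published Opus rates: $5/M input tokens, $25/M output tokens.
# The inflation factor (10-35% per the release notes) is applied to
# input tokens only; the example traffic volumes are hypothetical.

INPUT_PRICE = 5.00 / 1_000_000    # USD per input token
OUTPUT_PRICE = 25.00 / 1_000_000  # USD per output token

def monthly_cost(input_tokens: int, output_tokens: int, inflation: float = 0.0) -> float:
    """Monthly cost in USD, scaling input tokens by a tokenizer inflation factor."""
    return (input_tokens * (1 + inflation) * INPUT_PRICE
            + output_tokens * OUTPUT_PRICE)

baseline = monthly_cost(200_000_000, 20_000_000)          # current spend on 4.6
worst_case = monthly_cost(200_000_000, 20_000_000, 0.35)  # same traffic, +35% input tokens

print(f"baseline:   ${baseline:,.2f}")    # baseline:   $1,500.00
print(f"worst case: ${worst_case:,.2f}")  # worst case: $1,850.00
```

Even at the top of the reported range, the hit lands only on the input side of the bill, so output-heavy workloads will see a smaller relative increase.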
On the developer side, Anthropic is introducing “task budgets” in public beta on the API, a tool to guide token spending across long sessions. Claude Code gets the /ultrareview command for dedicated code review sessions, and auto mode expands to Max plan users. There is also a new processing tier, xhigh: more accurate than the current high but less intensive than max, aimed at cases that need a good balance between quality and latency.
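Anthropic hasn't published the shape of the task-budget parameter in the beta here, but the underlying idea of capping cumulative token spend across a long session is easy to approximate client-side while you wait. The sketch below is entirely hypothetical (the `TokenBudget` class and its names are mine, not the API's): it tallies per-turn usage and raises once a session exceeds its allowance.

```python
# Client-side sketch of the "task budget" idea: cap cumulative token
# spend across a long session. This is NOT the API's beta feature;
# TokenBudget and all names here are hypothetical.

class BudgetExceeded(Exception):
    """Raised when a session's cumulative token spend passes its cap."""

class TokenBudget:
    def __init__(self, max_tokens: int):
        self.max_tokens = max_tokens
        self.spent = 0

    def charge(self, input_tokens: int, output_tokens: int) -> None:
        """Record one turn's usage; raise once the budget is exhausted."""
        self.spent += input_tokens + output_tokens
        if self.spent > self.max_tokens:
            raise BudgetExceeded(f"spent {self.spent} of {self.max_tokens} tokens")

    @property
    def remaining(self) -> int:
        return max(0, self.max_tokens - self.spent)

budget = TokenBudget(100_000)
budget.charge(12_000, 4_000)  # one turn's usage, e.g. from the response's usage field
print(budget.remaining)       # 84000
```

When the official parameter lands, the per-turn bookkeeping can move server-side; until then, a wrapper like this at least makes runaway sessions fail loudly instead of silently burning tokens.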
Access the model via API with the string claude-opus-4-7.
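A minimal request sketch, assuming the official Python SDK: only the model string comes from the release notes, and the prompt is a placeholder. The payload is built but not sent, so it runs without credentials.

```python
# Minimal request payload for Opus 4.7. Only "claude-opus-4-7" comes
# from the release notes; the prompt is a placeholder. Send it with
# the official SDK, e.g. anthropic.Anthropic().messages.create(**payload)
# (requires ANTHROPIC_API_KEY in the environment).

payload = {
    "model": "claude-opus-4-7",
    "max_tokens": 1024,
    "messages": [
        {"role": "user", "content": "Summarize this diff for a code review."},
    ],
}
print(payload["model"])  # → claude-opus-4-7
```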