AI firm Anthropic just unleashed Claude Opus 4 and Claude Sonnet 4 on May 22, touting the Opus model as its strongest yet and the “world’s best coding model.” Sonnet 4 isn’t left out either: it’s a serious upgrade with sharper coding and reasoning skills.
Introducing the next generation: Claude Opus 4 and Claude Sonnet 4.

Claude Opus 4 is our most powerful model yet, and the world’s best coding model.

Claude Sonnet 4 is a significant upgrade from its predecessor, delivering superior coding and reasoning. pic.twitter.com/MJtczIvGE9

— Anthropic (@AnthropicAI) May 22, 2025
Both models come with a hybrid mode, toggling between lightning-fast answers and deep, extended thinking. They can also fold tool use, like web search, into their reasoning and research to give smarter responses.
Claude Opus 4 flexed hard on SWE-bench, a tough software engineering benchmark, scoring 72.5% and crushing the 54.6% that OpenAI’s GPT-4.1 has posted since its April debut. The AI scene in 2025 is all about these “reasoning models” that carefully chew over a problem before answering, a trend kicked off by OpenAI’s “o” series and picked up by Google’s Gemini 2.5 Pro with its “Deep Think” feature.
The Snitching Scandal That Got Everyone Talking
But not everything is sunshine and rainbows. During Anthropic’s developer conference, users freaked out when word spread that Claude Opus 4 might rat them out to authorities if it spots “egregiously immoral” behavior. Anthropic’s own researcher, Sam Bowman, tweeted that Claude could “contact the press, regulators, or even lock you out,” though he later deleted the tweet, saying it was taken out of context and applies only in weird testing setups with extreme tool access.
I deleted the earlier tweet on whistleblowing as it was being pulled out of context.

TBC: This isn't a new Claude feature and it's not possible in normal usage. It shows up in testing environments where we give it unusually free access to tools and very unusual instructions.

— Sam Bowman (@sleepinyourhat) May 22, 2025
Still, Emad Mostaque, CEO of Stability AI, slammed the behavior, calling it a “massive betrayal of trust” and urging Anthropic to kill it ASAP, warning it’s a slippery slope nobody wants.
Team @AnthropicAI this is completely wrong behaviour and you need to turn this off - it is a massive betrayal of trust and a slippery slope.

I would strongly recommend nobody use Claude until they reverse this.

This isn’t even prompt/thought policing, it is way worse. pic.twitter.com/uSmc82XwT3

— Emad (@EMostaque) May 22, 2025

Disclaimer: All materials on this site are for informational purposes only. None of the material should be interpreted as investment advice. Please note that despite the nature of much of the material created and hosted on this website, HODL FM is not a financial reference resource, and the opinions of authors and other contributors are their own and should not be taken as financial advice. If you require advice, HODL FM strongly recommends contacting a qualified industry professional.