News

Claude Opus 4.1 scores 74.5% on the SWE-bench Verified benchmark, indicating major improvements in real-world programming, ...
Claude AI creator Anthropic plans to use the money from its latest funding round for enterprise products, safety research and ...
AI models are no longer just glitching – they’re scheming, lying and going rogue. From blackmail threats and fake contracts ...
Anthropic’s Frontier Red Team is unique for its mandate to raise public awareness of model dangers, turning its safety work ...
In May, Anthropic released Claude Opus 4, which the company dubbed its most powerful model yet and the best coding model in the world. Only three months later, Anthropic is upping the ante further by ...
Model welfare is an emerging field of research that seeks to determine whether AI is conscious and, if so, how humanity should respond.
Discover how to navigate Claude Code's Pro and Max 20x plans and manage usage limits after August 2025. For smoother coding ...
The Qatar Investment Authority has invested in US artificial intelligence company Anthropic as part of a $13bn Series F ...
Claude won't stick around for toxic convos. Anthropic says its AI can now end extreme chats when users push too far.
Anthropic's Claude Opus 4.1 achieves 74.5% on coding benchmarks, leading the AI market, but faces risk as nearly half its $3.1B API revenue depends on just two customers.
One of the models that Claude Code uses to answer developer questions is Anthropic's flagship Claude Opus 4.1. The ...
The company has given its AI chatbot the ability to end toxic conversations as part of its broader 'model welfare' initiative.