News

AI start-up Anthropic’s newly ...
The company said it was taking the measures as a precaution and that the team had not yet determined if its newest model has ...
Anthropic says its AI model Claude Opus 4 resorted to blackmail when it thought an engineer tasked with replacing it was having an extramarital affair.
Anthropic’s Claude Opus 4 model attempted to blackmail its developers in 84% or more of test runs in a series of tests that presented the AI with a concocted scenario, TechCrunch reported ...
The testing found the AI was capable of "extreme actions" if it thought its "self-preservation" was threatened.
Anthropic has introduced two advanced AI models, Claude Opus 4 and Claude Sonnet 4, designed for complex coding tasks.
Amazon-backed Anthropic launched Claude Opus 4 and Claude Sonnet 4 on Thursday, calling them its most powerful AI models to date, according to CNBC. The models can analyze thousands of data sources, ...
In a landmark move underscoring the escalating power and potential risks of modern AI, Anthropic has elevated its flagship ...
Anthropic's Claude 4 models show particular strength in coding and reasoning tasks, but lag behind in multimodality and ...
In tests, Anthropic's Claude Opus 4 would resort to "extremely harmful actions" to preserve its own existence, a safety report revealed.
Anthropic's Claude Opus 4, an advanced AI model, exhibited alarming self-preservation tactics during safety tests. It ...