News
Faced with the news it was set to be replaced, the AI tool threatened to blackmail the engineer in charge by revealing an ...
Malicious use is one thing, but there's also increased potential for Anthropic's new models going rogue. In the alignment section of Claude 4's system card, Anthropic reported a sinister discovery ...
AI model threatened to blackmail engineer over affair when told it was being replaced: safety report
Anthropic’s Claude Opus 4 model attempted to blackmail its developers at a shocking 84% rate or higher in a series of tests that presented the AI with a concocted scenario, TechCrunch reported ...
Anthropic's Claude Opus 4 AI model attempted blackmail in safety tests, triggering the company’s highest-risk ASL-3 ...
Anthropic has introduced two advanced AI models, Claude Opus 4 and Claude Sonnet 4, designed for complex coding tasks.
AI has been known to say something weird from time to time. Continuing with that trend, this AI system is now threatening to ...
The testing found the AI was capable of "extreme actions" if it thought its "self-preservation" was threatened.
Anthropic’s first developer conference kicked off in San Francisco on Thursday, and while the rest of the industry races ...
Anthropic introduced Claude Opus 4 and Claude Sonnet 4 during its first developer conference on May 22. The company claims ...
Anthropic's most powerful model yet, Claude 4, has unwanted side effects: the AI can report you to the authorities and the press.