News
Anthropic's most powerful model yet, Claude 4, has unwanted side effects: The AI can report you to authorities and the press.
Anthropic's Claude 4 Opus AI sparks backlash for emergent 'whistleblowing'—potentially reporting users for perceived immoral ...
Hallucinations from AI in court documents are infuriating judges. Experts predict that the problem’s only going to get worse.
The testing found the AI was capable of "extreme actions" if it thought its "self-preservation" was threatened.
Anthropic has responded to allegations that it used an AI-fabricated source in its legal battle against music publishers.
Anthropic's legal team admitted to using the company's own AI chatbot, Claude, to write their legal filing. Anthropic defense attorney Ivana Dukanovic claims that, while the source Claude cited started off as genuine ...
When discussing controversial historical events, the AI focused on “historical accuracy.” In relationship guidance, Claude prioritized ...
Claude generated "an inaccurate title and incorrect authors" in a legal citation, per a court filing. The AI was used to help draft a citation in an expert report for Anthropic's copyright lawsuit.
A lawyer representing Anthropic admitted to using an erroneous citation created by the company’s Claude AI chatbot in its ongoing legal battle with music publishers, according to a filing made ...
The study also examined how Claude responds ...