News

Anthropic recently upgraded its AI model Claude Sonnet 4 to support up to 1 million tokens of context, thereby ...
Overview: Claude 4 achieved record scores on SWE-bench and Terminal-bench, proving its coding superiority. Claude Sonnet 4 now supports a one-million-token co ...
I tested ChatGPT-5 against Claude Sonnet 4 on real coding tasks — from debugging to animations. Here’s which AI chatbot came ...
Anthropic has expanded the capabilities of its Claude Sonnet 4 AI model to handle up to one million tokens of context, five ...
Claude Sonnet 4 can now support up to one million tokens of context, marking a fivefold increase from the prior 200,000, ...
Anthropic upgrades Claude Sonnet 4 to a 1M token context window and adds memory, enabling full codebase analysis, long ...
The model’s usage share on AI marketplace OpenRouter hit 20 per cent as of mid-August, behind only Anthropic’s coding model.
Anthropic AI has increased the context window for its Claude Sonnet 4 model to 1 million tokens, which is 5 times more than ...
The company today revealed that Claude Sonnet 4 now supports up to 1 million tokens of context in the Anthropic API — a five-fold increase over the previous limit.
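As the reports above note, the 1M-token context is exposed through the Anthropic API as a beta feature. A minimal sketch of how such a request might be assembled is below; the beta flag name (`context-1m-2025-08-07`) and the model id are assumptions drawn from Anthropic's public beta announcement, so check the current API docs before relying on them:

```python
# Hypothetical sketch: assembling a Messages API request that opts into the
# 1M-token context beta. No network call is made here; this only builds the
# headers and JSON body a client would send.

def build_request(prompt: str, model: str = "claude-sonnet-4-20250514") -> dict:
    """Assemble headers and body for an Anthropic Messages API call."""
    headers = {
        "x-api-key": "YOUR_API_KEY",                # placeholder credential
        "anthropic-version": "2023-06-01",          # stable API version header
        "anthropic-beta": "context-1m-2025-08-07",  # assumed 1M-context beta flag
    }
    body = {
        "model": model,
        "max_tokens": 1024,
        "messages": [{"role": "user", "content": prompt}],
    }
    return {"headers": headers, "body": body}
```

In practice the official `anthropic` SDK would handle the version header for you; the point here is only that the larger window is opt-in via a beta header rather than on by default.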
Anthropic has expanded Claude Sonnet 4’s context window to 1 million tokens, matching OpenAI’s GPT-4.1 and enhancing its ability to process large code bases and document sets in one request.
Anthropic’s latest move to expand the context window, now in public beta, might encourage Google Gemini users to give it ...
With this larger context window, Claude Sonnet 4 can process codebases with 75,000+ lines of code in a single request.
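The arithmetic behind the "75,000+ lines" figure can be sketched as follows; the tokens-per-line ratio is a rough rule of thumb assumed here, not an Anthropic number:

```python
# Back-of-envelope check of the "75,000+ lines of code in one request" claim.
# Assumption: source code averages on the order of 13 tokens per line; real
# ratios vary by language and style.

def fits_in_context(lines_of_code: int, tokens_per_line: float = 13.0,
                    context_window: int = 1_000_000) -> bool:
    """Return True if the estimated token count fits in the context window."""
    return lines_of_code * tokens_per_line <= context_window

# 75,000 lines at ~13 tokens/line is ~975,000 tokens, just inside the 1M window,
# but well beyond the previous 200,000-token limit.
```

Under these assumptions, the same codebase would overflow the old 200K window by almost a factor of five, which matches the fivefold increase the reports describe.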