OpenAI’s GPT-5.5 will be generally available in Microsoft Foundry. Explore now.
Model details
OpenAI has announced GPT-5.5, an agentic model designed to work through complex tasks autonomously by switching between multiple tools.
OpenAI posted a cryptic message, "NS41", on X that community members decoded as "5.5", signaling an imminent upgrade to ChatGPT as `GPT-5.5`. The teaser follows a busy cadence of model and feature updates from OpenAI and implies a staged, incremental release rather than a ground-up new architecture.
GPT-5.5, OpenAI's latest GPT model, is now rolling out on GitHub Copilot. In our early testing, GPT-5.5 delivers its strongest performance on complex,...
So when it comes to models the general public can access, GPT-5.5 has retaken the crown for OpenAI, achieving state-of-the-art results across 14 benchmarks.
A first look at GPT-5.5 from Harvey.
This exact model name is also listed by 8 other providers.
GPT-5.5 is designed as an agentic model, built to handle complex, real-world workflows that require autonomy and multi-step reasoning. Rather than requiring constant guidance, it is engineered to understand tasks early, navigate ambiguity, and independently manage processes like writing and debugging code, researching online, and creating documents or spreadsheets. By moving across different tools until a task is finished, it aims to reduce manual oversight: users can delegate messy, multi-part projects with confidence that the model will plan, execute, and verify its own work.
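The plan-execute-verify pattern described above can be sketched as a minimal tool-dispatch loop. Everything here is illustrative: `model_step` is a toy stand-in for a real model call, and the tool names (`web_search`, `run_code`) are hypothetical, not OpenAI's actual API.

```python
# Illustrative agentic loop: the "model" picks a tool at each step, the
# harness executes it and records the result, and the loop ends when the
# model signals it is done. A real agent would call a hosted model here.

def run_code(args):
    return f"ran: {args}"

def web_search(args):
    return f"results for: {args}"

TOOLS = {"web_search": web_search, "run_code": run_code}

def model_step(task, history):
    # Toy "policy": research first, then act, then finish.
    if not history:
        return ("web_search", task)
    if len(history) == 1:
        return ("run_code", "solution.py")
    return ("done", None)

def agent_loop(task, max_steps=10):
    history = []
    for _ in range(max_steps):
        tool, args = model_step(task, history)
        if tool == "done":
            return history
        history.append((tool, TOOLS[tool](args)))
    return history

print(agent_loop("summarize repo issues"))
```

The `max_steps` cap is the usual safeguard in loops like this: an autonomous model that never emits "done" otherwise runs forever.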
The model represents an incremental evolution in intelligence, focusing on efficiency and performance gains rather than a ground-up architectural overhaul. It achieves a higher level of reasoning and task completion compared to its predecessors while maintaining consistent per-token latency, making it both more capable and more efficient for demanding applications. Its development involved extensive red-teaming and feedback from nearly 200 early-access partners to refine its capabilities in areas like agentic coding and scientific research. By utilizing parallel test-time compute in its Pro configuration, the model is positioned to handle increasingly sophisticated knowledge work and long-term action-oriented tasks.
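"Parallel test-time compute" generally refers to sampling several candidate answers concurrently and keeping the best one under some scoring rule (best-of-n). A hedged sketch of that idea, with a toy sampler and scorer standing in for a real model and verifier; nothing here reflects OpenAI's actual Pro implementation:

```python
import random
from concurrent.futures import ThreadPoolExecutor

def sample_answer(prompt, seed):
    # Toy stand-in for one independent model sample; a real system would
    # issue n concurrent model calls and score each completion.
    rng = random.Random(seed)
    return {"text": f"answer-{seed}", "score": rng.random()}

def best_of_n(prompt, n=8):
    # Draw n samples in parallel and return the highest-scoring candidate.
    with ThreadPoolExecutor(max_workers=n) as pool:
        candidates = list(pool.map(lambda s: sample_answer(prompt, s), range(n)))
    return max(candidates, key=lambda c: c["score"])

print(best_of_n("prove the lemma")["text"])
```

The trade-off is direct: n parallel samples cost roughly n times the compute of one, but latency stays close to a single sample because the work overlaps.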