OpenAI: GPT-5.5
GPT-5.5 is OpenAI’s frontier model designed for complex professional workloads, building on GPT-5.4 with stronger reasoning, higher reliability, and improved token efficiency on hard tasks. It features a 1M+ token context window.
Try OpenAI: GPT-5.5 in Kilo Code
Experience this model with the most popular open source coding agent. Free to start, pay only for AI usage. Use it in popular IDEs like VS Code and JetBrains, from the command line, or with cloud agents.
Access 500+ models including OpenAI: GPT-5.5 and many more in Kilo Code
Benchmarking OpenAI: GPT-5.5
Coding Performance
Coding benchmarks and performance metrics for development tasks
Coding Benchmarks
| Benchmark | Score |
|---|---|
| AA Coding Index | 58.5% |
| SciCode | 55.9% |
| TerminalBench Hard | 59.8% |
| LCR | 73.3% |
| IFBench | 71.6% |
Speed & Efficiency
| Metric | Value |
|---|---|
| Output Speed | 65 tok/s |
Performance metrics from Artificial Analysis
OpenClaw Benchmarks
PinchBench measures how OpenAI: GPT-5.5 performs on real OpenClaw agent tasks: multi-step execution, tool use, recovery, latency, and cost.
- Average score: #26 of 50 official models
- Average time: averaged over 15 runs per OpenClaw task
- Average cost: per benchmark run
Category breakdown
Best verified PinchBench v2 run by OpenClaw task family.
Top task results
Highest-scoring benchmark tasks from the same submission.
Autonomous task execution
OpenAI: GPT-5.5 posts a mid-pack average success rate across OpenClaw-style benchmark runs, making it useful for recurring research, browser, and file-based automations.
Tool use and recovery
PinchBench tasks stress multi-step planning, tool calls, and judge-verified completion rather than single prompt coding snippets.
Agent workflow fit
Its slower, more deliberate average runtime and premium per-run cost set realistic expectations for long-running KiloClaw agents and production workflows.
Agentic benchmarks from the PinchBench Leaderboard
Real-World Usage
Real-world usage statistics from the Kilo Code community
Weekly Token Usage
Mode Rankings (Last Week)
Where this model ranks for each built-in mode
Code
Write, modify, and refactor code
Ask
Get answers and explanations
Debug
Diagnose and fix software issues
Orchestrator
Coordinate tasks across multiple modes
Real-world metrics from the Kilo Code Leaderboard
Pricing
Cost per 1 million tokens
Example Cost
Analyzing a 10,000 line codebase (≈40k input tokens, 10k output tokens) costs approximately $0.50
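As a sketch of how that estimate follows from the per-token rates listed under Technical Details ($5.00 per 1M input tokens, $30.00 per 1M output tokens); the helper function name is illustrative, not part of any Kilo Code API:

```python
# Estimate request cost from token counts and per-1M-token rates.
INPUT_PRICE_PER_M = 5.00    # USD per 1M input tokens (from the pricing table)
OUTPUT_PRICE_PER_M = 30.00  # USD per 1M output tokens (from the pricing table)

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Illustrative helper: cost in USD for one request."""
    return ((input_tokens / 1_000_000) * INPUT_PRICE_PER_M
            + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M)

# The example above: ~40k input tokens, 10k output tokens.
print(f"${estimate_cost(40_000, 10_000):.2f}")  # → $0.50
```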
Coding Capabilities
Features and parameters relevant to coding tasks
Coding Features
Pricing details from OpenRouter
Technical Details
Architecture and implementation specifications
- Model ID
- openai/gpt-5.5
- Artificial Analysis Slug
- gpt-5-5-high
- Created
- April 24, 2026
- Tokenizer
- GPT
- Input Modalities
- file, image, text
- Context Window
- 1,050,000 tokens
- Max Completion Tokens
- 128,000 tokens
- Input Price
- $5.00 per 1M tokens
- Output Price
- $30.00 per 1M tokens
- Cache Read Price
- $0.50 per 1M tokens
- Content Moderation
- Enabled
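To illustrate how the context window and completion limits above interact, here is a minimal sketch; the constants come from the table, while the helper function itself is hypothetical and not part of any official SDK:

```python
# Check whether a prompt plus a requested completion fits the model's limits.
CONTEXT_WINDOW = 1_050_000       # total tokens (prompt + completion), from the table
MAX_COMPLETION_TOKENS = 128_000  # cap on output tokens per request, from the table

def fits_limits(prompt_tokens: int, completion_tokens: int) -> bool:
    """Hypothetical helper: True if the request respects both limits."""
    return (completion_tokens <= MAX_COMPLETION_TOKENS
            and prompt_tokens + completion_tokens <= CONTEXT_WINDOW)

print(fits_limits(900_000, 100_000))    # True: 1,000,000 total, within both limits
print(fits_limits(1_000_000, 128_000))  # False: 1,128,000 exceeds the context window
```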
Ready to try OpenAI: GPT-5.5?
Install Kilo Code and start using OpenAI: GPT-5.5 for your coding projects today. Choose from 500+ AI models with complete freedom.
1. Install Kilo Code
   Get the extension from VS Code Marketplace, JetBrains Plugin Repository, or the CLI.
2. Open the model selector
   Click the model name in the Kilo Code chat panel to open the selector.
3. Choose your model
   Search or browse to find and select your preferred model.
4. Start coding
   Use Code, Ask, Debug, or Orchestrator mode; the model is ready immediately.