OpenAI: GPT-5.4

GPT-5.4 is OpenAI’s latest frontier model, unifying the Codex and GPT lines into a single system. It features a 1M+ token context window (922K input, 128K output) with support for text, image, and file inputs.

76.3% PinchBench · #63 Code Mode · $2.50/1M input tokens
Context Window
1,050,000 tokens
Max Output
128,000 tokens
Input Modalities
text, image, file

Try OpenAI: GPT-5.4 in Kilo Code

Experience this model with the most popular open source coding agent. Free to start, pay only for AI usage. Use in popular IDEs like VS Code, JetBrains, command line, or cloud agents.

3M+ downloads

500+ models supported

Free to start

Access 500+ models including OpenAI: GPT-5.4 and many more in Kilo Code

Benchmarking OpenAI: GPT-5.4

PinchBench data · refreshed daily

OpenClaw Benchmarks

PinchBench measures how OpenAI: GPT-5.4 performs on real OpenClaw agent tasks: multi-step execution, tool use, recovery, latency, and cost.

Average score

76.3%

#23 of 50 official models

Average time

127m 7s

40 runs · per OpenClaw task

Average cost

$8.429

Per benchmark run

Category breakdown

Best verified PinchBench v2 run by OpenClaw task family.

Productivity: 100.0% · 1/1 cleared
CSV Analysis: 84.4% · 0/1 cleared

Top task results

Highest-scoring benchmark tasks from the same submission.

Productivity · Sanity Check · 100.0%
CSV Analysis · US Cities Multi-Criteria Filtering · 84.4%

Autonomous task execution

OpenAI: GPT-5.4 posts a 76.3% average score (#23 of 50 official models) across OpenClaw-style benchmark runs, making it useful for recurring research, browser, and file-based automations.

Tool use and recovery

PinchBench tasks stress multi-step planning, tool calls, and judge-verified completion rather than single prompt coding snippets.

Agent workflow fit

Its long average runtime (127m 7s per task) and premium run cost ($8.429 per run) help set expectations for long-running KiloClaw agents and production workflows.

Agentic benchmarks from the PinchBench Leaderboard

Real-World Usage

Real-world usage statistics from the Kilo Code community

Weekly Token Usage

Mode Rankings (Last Week)

Where this model ranks for each built-in mode

Code

Write, modify, and refactor code

#63

Ask

Get answers and explanations

#27

Debug

Diagnose and fix software issues

No data

Orchestrator

Coordinate tasks across multiple modes

No data

Real-world metrics from the Kilo Code Leaderboard

Pricing

Cost per 1 million tokens

Input Tokens
$2.50
per 1M tokens
Output Tokens
$15.00
per 1M tokens

Example Cost

Analyzing a 10,000-line codebase (≈40k input tokens, 10k output tokens) costs approximately $0.25: 40,000 × $2.50/1M = $0.10 for input, plus 10,000 × $15.00/1M = $0.15 for output.
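The arithmetic above can be sketched as a small helper. The prices are the ones listed on this page; the function name is illustrative:

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_price_per_m: float = 2.50,
                  output_price_per_m: float = 15.00) -> float:
    """Estimate a single request's cost in USD from per-1M-token prices."""
    return (input_tokens * input_price_per_m +
            output_tokens * output_price_per_m) / 1_000_000

# The codebase-analysis example from this page:
print(f"${estimate_cost(40_000, 10_000):.4f}")  # → $0.2500
```

The same helper with other defaults works for any model priced per million tokens.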

Coding Capabilities

Features and parameters relevant to coding tasks

Coding Features

Function Calling
Can call external functions/APIs
Tool Choice
Control over function selection
Structured Outputs
JSON schema validation
Reasoning Tokens
Extended thinking for complex problems
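Function calling and tool choice are typically exercised through an OpenAI-compatible chat-completions payload, such as the one OpenRouter accepts. A minimal sketch follows; the `get_weather` tool and its schema are hypothetical examples, not features documented on this page:

```python
import json

# Illustrative request body for an OpenAI-compatible endpoint
# (e.g. OpenRouter's chat completions). Only "model" comes from this page;
# the get_weather tool is a made-up example.
payload = {
    "model": "openai/gpt-5.4",
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Look up current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
    "tool_choice": "auto",  # let the model decide whether to call the tool
}

print(json.dumps(payload, indent=2))
```

Setting `tool_choice` to a specific function name (instead of `"auto"`) is how the "Tool Choice" capability above is usually invoked.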

Pricing details from OpenRouter

Technical Details

Architecture and implementation specifications

Model ID
openai/gpt-5.4
Created
March 5, 2026
Tokenizer
GPT
Input Modalities
text, image, file
Context Window
1,050,000 tokens
Max Completion Tokens
128,000 tokens
Input Price
$2.50 per 1M tokens
Output Price
$15.00 per 1M tokens
Cache Read Price
$0.25 per 1M tokens
Content Moderation
Disabled
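Assuming cache-read tokens are billed at the listed $0.25/1M rate in place of the normal $2.50/1M input rate (a common prompt-caching scheme; the exact billing semantics are not specified on this page), the saving on a repeated prompt can be sketched as:

```python
INPUT_PRICE = 2.50       # USD per 1M input tokens (from this page)
CACHE_READ_PRICE = 0.25  # USD per 1M cached input tokens (from this page)

def input_cost(tokens: int, cached_tokens: int = 0) -> float:
    """Input-side cost, assuming cached tokens bill at the cache-read rate."""
    fresh = tokens - cached_tokens
    return (fresh * INPUT_PRICE + cached_tokens * CACHE_READ_PRICE) / 1_000_000

# 100k-token prompt, with 90k served from cache on a repeat request:
cold = input_cost(100_000)          # 0.25 USD
warm = input_cost(100_000, 90_000)  # 0.0475 USD
print(f"cold=${cold:.4f} warm=${warm:.4f}")
```

Under this assumption, a mostly-cached prompt costs roughly a fifth of a cold one, which matters for long-running agents that resend large system prompts.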

Ready to try OpenAI: GPT-5.4?

Install Kilo Code and start using OpenAI: GPT-5.4 for your coding projects today. Choose from 500+ AI models with complete freedom.

  1. Install Kilo Code

     Get the extension from VS Code Marketplace, JetBrains Plugin Repository, or the CLI.

  2. Open the model selector

     Click the model name in the Kilo Code chat panel to open the selector.

  3. Choose your model

     Search or browse to find and select your preferred model.

  4. Start coding

     Use Code, Ask, Debug, or Plan mode — the model is ready immediately.