OpenAI
GPT-5.5
OpenAI's next flagship general model for complex reasoning, coding, and professional workflows, with a 1M context window and upcoming support in the Responses and Chat Completions APIs
Pricing & Specs
💰 Pricing
Input: $5 / 1M tokens
Output: $30 / 1M tokens
⚙️ Specs
Context Length: 1M tokens
Model Type: Large Language Model (LLM)
API Availability: Coming soon in the Responses API and Chat Completions API
Doc Status: Compiled from OpenAI's latest statement to API developers: gpt-5.5 will be available at $5 per 1M input tokens and $30 per 1M output tokens, with a 1M-token context window.
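Given the published rates, the cost of a single request is straightforward to estimate. A minimal sketch (the rates are taken from this page; the `estimate_cost` helper is illustrative):

```python
# Published rates for gpt-5.5 (USD per 1M tokens)
INPUT_RATE = 5.00
OUTPUT_RATE = 30.00

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one request at the published per-token rates."""
    return (input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE) / 1_000_000

# Example: a 10k-token prompt with a 2k-token reply
# 10_000 * 5 / 1e6 = 0.05; 2_000 * 30 / 1e6 = 0.06 → $0.11 total
print(f"${estimate_cost(10_000, 2_000):.2f}")
```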
API Examples
Python (OpenAI SDK)
from openai import OpenAI

client = OpenAI(
    api_key="your-api-key",
    base_url="https://api.xairouter.com/v1"
)

response = client.chat.completions.create(
    model="gpt-5.5",
    messages=[
        {"role": "user", "content": "Summarize this requirement and break it into development tasks"}
    ]
)

print(response.choices[0].message.content)

cURL (OpenAI API)
curl https://api.xairouter.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "gpt-5.5",
    "messages": [
      {"role": "user", "content": "Summarize this requirement and break it into development tasks"}
    ]
  }'

Developer Assist
# Configure ~/.codex/config.toml
cat > ~/.codex/config.toml << 'EOF'
# Model selection and reasoning behavior
model_provider = "xai"
model = "gpt-5.5"
model_reasoning_effort = "xhigh"
plan_mode_reasoning_effort = "xhigh"
model_reasoning_summary = "none"
model_verbosity = "medium"

# Context and output limits
model_context_window = 1000000
model_auto_compact_token_limit = 900000
tool_output_token_limit = 6000

# Caution: these two settings disable approval prompts and sandboxing
approval_policy = "never"
sandbox_mode = "danger-full-access"
suppress_unstable_features_warning = true

[model_providers.xai]
name = "OpenAI"
base_url = "https://api.xairouter.com"
wire_api = "responses"
requires_openai_auth = false
env_key = "XAI_API_KEY"

[features]
multi_agent = true

[agents]
max_threads = 4
max_depth = 1
job_max_runtime_seconds = 1800
EOF
# Set environment variable (add to ~/.bashrc or ~/.zshrc)
export XAI_API_KEY="sk-Xvs..."
# Launch Codex
codex
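Since the provider config reads the key from the `XAI_API_KEY` environment variable, it can help to verify the key is actually exported before launching. A minimal sketch (the `require_key` helper is illustrative, not part of Codex):

```python
import os

def require_key(env: str = "XAI_API_KEY") -> str:
    """Return the API key from the environment, or fail with a clear message."""
    key = os.environ.get(env)
    if not key:
        raise RuntimeError(f"{env} is not set; export it before launching Codex")
    return key
```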