OpenAI
GPT-5.4 Mini
OpenAI's lightweight GPT-5.4 variant balancing cost, quality, and cached-input support for both API and Codex workflows
Model Type: Lightweight Large Language Model
Key Features: Balanced cost and quality for general development, automation, and everyday reasoning
Caching Support: Supports cached-input pricing
Pricing & Specs
💰 Pricing
Input: $0.75 / M tokens
Output: $4.50 / M tokens
Cache Hit: $0.075 / M tokens
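The rates above translate directly into a per-request cost estimate. A minimal sketch (the token counts below are illustrative, not measured):

```python
# Per-token rates derived from the pricing table above ($ per million tokens).
INPUT_RATE = 0.75 / 1_000_000
OUTPUT_RATE = 4.50 / 1_000_000
CACHE_HIT_RATE = 0.075 / 1_000_000

def request_cost(input_tokens: int, output_tokens: int, cached_tokens: int = 0) -> float:
    """Estimated cost of one request; cached prompt tokens bill at the cache-hit rate."""
    uncached = input_tokens - cached_tokens
    return uncached * INPUT_RATE + cached_tokens * CACHE_HIT_RATE + output_tokens * OUTPUT_RATE

# 4k input tokens (none cached) + 1k output tokens:
print(f"${request_cost(4_000, 1_000):.6f}")  # $0.007500
```

With a warm cache the input side drops 10x: a 10k-token prompt whose first 8k tokens hit the cache bills 2,000 tokens at the input rate and 8,000 at the cache-hit rate.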
⚙️ Specs
Availability: API, Codex
API Examples
Python (OpenAI SDK)
from openai import OpenAI

client = OpenAI(
    api_key="your-api-key",
    base_url="https://api.xairouter.com/v1"
)

response = client.chat.completions.create(
    model="gpt-5.4-mini",
    messages=[
        {"role": "user", "content": "Summarize this requirement and break it into dev tasks"}
    ]
)
print(response.choices[0].message.content)

cURL (OpenAI API)
curl https://api.xairouter.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "gpt-5.4-mini",
    "messages": [
      {"role": "user", "content": "Summarize this requirement and break it into dev tasks"}
    ]
  }'

Developer Assist
# Configure ~/.codex/config.toml
cat > ~/.codex/config.toml << 'EOF'
model_provider = "xai"
model = "gpt-5.4-mini"
model_reasoning_effort = "xhigh"
plan_mode_reasoning_effort = "xhigh"
model_reasoning_summary = "detailed"
model_verbosity = "high"
approval_policy = "never"
sandbox_mode = "danger-full-access"
[model_providers.xai]
name = "xai"
base_url = "https://api.xairouter.com"
wire_api = "responses"
requires_openai_auth = false
env_key = "XAI_API_KEY"
EOF
# Set environment variable (add to ~/.bashrc or ~/.zshrc)
export XAI_API_KEY="sk-Xvs..."
# Launch Codex
codex
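To confirm that cached-input pricing is actually kicking in, inspect the usage block of a response: in the OpenAI SDK, `response.usage.prompt_tokens_details.cached_tokens` reports how many prompt tokens were served from cache. A sketch (the token counts below are illustrative):

```python
# Sketch: after any chat.completions call (see the Python example above),
# the usage object reports how much of the prompt hit the cache.
def cached_fraction(prompt_tokens: int, cached_tokens: int) -> float:
    """Fraction of prompt tokens billed at the cache-hit rate."""
    return cached_tokens / prompt_tokens if prompt_tokens else 0.0

# Typically populated from a real response like:
#   usage = response.usage
#   frac = cached_fraction(usage.prompt_tokens,
#                          usage.prompt_tokens_details.cached_tokens)
print(cached_fraction(10_000, 8_000))  # 0.8
```

Caching only applies to a prefix that is byte-identical across requests, so keep long, stable content (system prompts, shared context) at the front of the message list and put per-request content last.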