OpenAI
GPT-5.1 Codex Mini
OpenAI's 2025 lightweight code model, offering faster response times and lower cost while maintaining high-quality code generation.
Context Length: 256K tokens
Model Type: Lightweight Code Model
Key Features: Fast Code Generation, Code Completion
Pricing & Specifications
💰 Pricing
Input: $0.375 / M tokens
Output: $3 / M tokens
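At these rates, per-request cost is easy to estimate (a minimal sketch; the defaults are the input/output prices listed above, in USD per million tokens):

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_price: float = 0.375, output_price: float = 3.0) -> float:
    """Estimate USD cost of one request at per-million-token prices."""
    return (input_tokens * input_price + output_tokens * output_price) / 1_000_000

# A 2,000-token prompt with an 800-token completion:
print(round(estimate_cost(2_000, 800), 6))  # → 0.00315
```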
⚙️ Specifications
Context Length: 256K tokens
Model Type: Lightweight Code Model
Key Features: Fast Code Generation, Code Completion
Caching Support: Prompt Caching Acceleration
Response Speed: Optimized Latency
Knowledge Cutoff: June 2025
API Compatibility: OpenAI API, Codex CLI
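With a 256K-token window, it is worth checking that a prompt leaves room for the completion before sending it (a rough sketch; the 4-characters-per-token heuristic is an approximation, not the model's actual tokenizer):

```python
CONTEXT_LIMIT = 256_000  # tokens, per the spec above

def rough_token_count(text: str) -> int:
    """Very rough estimate: ~4 characters per token for English text and code."""
    return max(1, len(text) // 4)

def fits_in_context(prompt: str, reserved_for_output: int = 4_096) -> bool:
    """Check the prompt plus a reserved completion budget fits in the window."""
    return rough_token_count(prompt) + reserved_for_output <= CONTEXT_LIMIT

print(fits_in_context("def quicksort(xs): ..."))  # → True
```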
API Call Examples
Python (OpenAI SDK)
from openai import OpenAI
client = OpenAI(
api_key="your-api-key",
base_url="https://api.xairouter.com/v1"
)
response = client.chat.completions.create(
model="gpt-5.1-codex-mini",
messages=[
{"role": "user", "content": "Write a quicksort function in Python"}
]
)
print(response.choices[0].message.content)

cURL (OpenAI API)
curl https://api.xairouter.com/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer YOUR_API_KEY" \
-d '{
"model": "gpt-5.1-codex-mini",
"messages": [
{"role": "user", "content": "Write a quicksort function in Python"}
]
}'

Developer Assist
# Configure ~/.codex/config.toml
cat > ~/.codex/config.toml << 'EOF'
model_provider = "xai"
model = "gpt-5.1-codex-mini"
model_reasoning_effort = "high"
model_reasoning_summary = "detailed"
approval_policy = "never"
sandbox_mode = "danger-full-access"
network_access = true
preferred_auth_method = "apikey"
[shell_environment_policy]
inherit = "all"
ignore_default_excludes = false
[model_providers.xai]
name = "xai"
base_url = "https://api.xairouter.com"
wire_api = "responses"
requires_openai_auth = true
env_key = "OPENAI_API_KEY"
[tools]
web_search = true
EOF
# Configure ~/.codex/auth.json
cat > ~/.codex/auth.json << 'EOF'
{
"OPENAI_API_KEY": "sk-Xvs..."
}
EOF
# Launch Codex CLI
codex