Route OpenClaw Through XAI Router

Posted February 1, 2026 by XAI Tech Team · 3 min read

OpenClaw + XAI Router

If you want the simplest, most stable Responses experience when routing OpenClaw through XAI Router, stick to one clean setup: a custom provider, api set to openai-responses, and baseUrl set to https://api.xairouter.com/v1.


Short Version

  • Do not treat OpenClaw's built-in openai-codex OAuth path as the XAI Router entry point; that path signs in directly to OpenAI / ChatGPT and does not route through XAI Router.
  • If you want XAI Router, use a custom provider such as xairouter.
  • Set api to openai-responses and baseUrl to https://api.xairouter.com/v1.
  • Use xairouter/gpt-5.4 as the default model.
  • If you want the steadiest HTTP Responses path, set params.transport to "sse"; if you prefer WebSocket-first behavior, switch it to "auto".

Save this to ~/.openclaw/openclaw.json:

{
  "agents": {
    "defaults": {
      "model": { "primary": "xairouter/gpt-5.4" },
      "models": {
        "xairouter/gpt-5.4": {
          "alias": "Codex",
          "params": { "transport": "sse" }
        }
      }
    }
  },
  "models": {
    "mode": "replace",
    "providers": {
      "xairouter": {
        "baseUrl": "https://api.xairouter.com/v1",
        "apiKey": "${XAI_API_KEY}",
        "api": "openai-responses",
        "models": [
          { "id": "gpt-5.4", "name": "GPT-5.4" }
        ]
      }
    }
  }
}
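Before launching OpenClaw, it can be worth confirming that the file parses and that the default model actually points at a model the provider defines. A minimal sketch in Python (the config is inlined here for illustration; in practice you would json.load the real ~/.openclaw/openclaw.json):

```python
import json

# Inlined copy of the config above, for illustration only.
# In practice: cfg = json.load(open(os.path.expanduser("~/.openclaw/openclaw.json")))
raw = """
{
  "agents": {
    "defaults": {
      "model": { "primary": "xairouter/gpt-5.4" },
      "models": {
        "xairouter/gpt-5.4": { "alias": "Codex", "params": { "transport": "sse" } }
      }
    }
  },
  "models": {
    "mode": "replace",
    "providers": {
      "xairouter": {
        "baseUrl": "https://api.xairouter.com/v1",
        "apiKey": "${XAI_API_KEY}",
        "api": "openai-responses",
        "models": [ { "id": "gpt-5.4", "name": "GPT-5.4" } ]
      }
    }
  }
}
"""

cfg = json.loads(raw)  # raises ValueError if the JSON is malformed
provider = cfg["models"]["providers"]["xairouter"]
assert provider["api"] == "openai-responses"
assert provider["baseUrl"] == "https://api.xairouter.com/v1"

# The default model is "<provider>/<model id>"; both halves must exist.
primary = cfg["agents"]["defaults"]["model"]["primary"]
prov_name, model_id = primary.split("/", 1)
assert prov_name in cfg["models"]["providers"]
assert any(m["id"] == model_id for m in provider["models"])
print("config looks consistent")
```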

  1. This is the OpenClaw-documented custom-provider structure: models.providers.* + api: "openai-responses".
  2. The current recommendation is to avoid adding headers.originator; let OpenClaw send standard Responses payloads to XAI Router / codex-cloud directly. It is simpler and less likely to hit the native Codex branch by mistake.
  3. transport: "sse" keeps OpenClaw on the HTTP Responses path; auto means WebSocket-first with SSE fallback.
  4. Compared with a CLI backend, this keeps OpenClaw's own agent / session / model pipeline intact instead of wrapping an external command.
  5. The current WebSocket path is still a high-fidelity relay, not byte-for-byte identical, so sse remains the safer default.
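If you do want to try the WebSocket-first behavior described in point 3, only the per-model params fragment changes; the provider block stays exactly the same:

```json
"models": {
  "xairouter/gpt-5.4": {
    "alias": "Codex",
    "params": { "transport": "auto" }
  }
}
```

Switch it back to "sse" if you see any oddities, since per point 5 the WebSocket relay is not byte-for-byte identical to the HTTP path.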

Verify

export XAI_API_KEY="sk-..."
openclaw models status
openclaw agent --message "Introduce yourself in one sentence"

If xairouter/gpt-5.4 is the default model and replies work, the setup is good.
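If the OpenClaw-side check fails, it helps to rule out the Router itself by hitting it directly. A hedged sketch using only the standard library (it assumes the standard /v1/responses endpoint and an OpenAI-style request body, and only sends a request when XAI_API_KEY is set):

```python
import json
import os
import urllib.request

def smoke_test(base_url: str = "https://api.xairouter.com/v1") -> str:
    """POST a one-line prompt straight to the Router, bypassing OpenClaw."""
    key = os.environ.get("XAI_API_KEY")
    if not key:
        # No key: report instead of sending an unauthenticated request.
        return "XAI_API_KEY is not set; export it first"
    body = json.dumps({
        "model": "gpt-5.4",  # provider-local id, without the "xairouter/" prefix
        "input": "Introduce yourself in one sentence",
    }).encode()
    req = urllib.request.Request(
        f"{base_url}/responses",
        data=body,
        headers={
            "Authorization": f"Bearer {key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return f"HTTP {resp.status}"

print(smoke_test())
```

An HTTP 200 here but a failure inside OpenClaw points at the config file; a failure here points at the key or the network.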


FAQ

1) Can I just use openai-codex/gpt-5.4?

Yes, but then OpenClaw talks directly to OpenAI OAuth rather than routing through XAI Router. If you want the XAI Router path this guide recommends, skip that route.

2) Can I still use the Codex CLI backend?

Yes, but that is better when you explicitly want to reuse an external codex binary. If you want OpenClaw's own native model path and toolchain, the openai-responses provider in this guide is usually the better fit.

3) What if I want my local Gateway to expose /v1/responses?

That works too. With gateway.http.endpoints.responses.enabled = true, send requests to http://127.0.0.1:18789/v1/responses. For platform-specific examples, see /en/blog/openclaw-macos/ and /en/blog/openclaw-windows-xai-router/.
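Reading the dotted path gateway.http.endpoints.responses.enabled as JSON nesting (assuming it lives in the same openclaw.json; check the Gateway docs for the exact location), the fragment would look like:

```json
{
  "gateway": {
    "http": {
      "endpoints": {
        "responses": { "enabled": true }
      }
    }
  }
}
```

With that enabled, clients can POST to http://127.0.0.1:18789/v1/responses instead of the remote baseUrl.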