The plugin's `catalog.run` hook already exchanged a GitHub OAuth token
for a short-lived Copilot API token and resolved the per-account baseUrl,
but it returned `models: []` and the bundled openclaw runtime relied
entirely on the static manifest catalog. That meant:
- Static `contextWindow` values were a conservative 128k for every
model, far below reality (gpt-5.4/5.5 are 400k, claude-opus-4.6/4.7
internal variants are 1M, claude-sonnet-4 is 200k, etc.).
- Newly published Copilot models (gpt-5.5, gpt-5.1*, gemini-3-pro-preview,
the claude-opus-*-1m internal variants, etc.) didn't appear at all
until the manifest was patched.
- Per-account entitlement was invisible — every user saw the same
hardcoded 22-model list regardless of plan.
Wire it up:
- Add `fetchCopilotModelCatalog` in `extensions/github-copilot/models.ts`.
Calls `${baseUrl}/models` with the resolved Copilot API token and the
same Editor-Version / Copilot-Integration-Id headers used elsewhere in
the plugin. Maps each entry to a `ModelDefinitionConfig`:
- `contextWindow` ← `capabilities.limits.max_context_window_tokens`
- `maxTokens` ← `capabilities.limits.max_output_tokens`
- `input` ← `["text", "image"]` if `supports.vision`, else `["text"]`
- `reasoning` ← `Array.isArray(supports.reasoning_effort) && supports.reasoning_effort.length > 0`
- `api` ← `anthropic-messages` for Anthropic vendor or claude*
ids; otherwise `openai-responses`
Filters out non-chat objects (embeddings) and internal routers
(`accounts/...` ids). Dedupes by id. 10s default timeout.
- Update the `catalog.run` hook in `extensions/github-copilot/index.ts`
to call the new function after token-exchange and return the live
results. On any HTTP or parse failure it falls back to `models: []`,
which keeps the static manifest catalog visible as the fallback, so
there is no behavior regression for users with `discovery.enabled: false`
or in offline scenarios.
- Bump `modelCatalog.discovery."github-copilot"` from `"static"` to
`"refreshable"` in `openclaw.plugin.json` so the catalog hook is
actually invoked at runtime. Without this the discovery infrastructure
treats the provider as static-only and never calls `catalog.run`.
- Add `gpt-5.5` to the static manifest catalog and `DEFAULT_MODEL_IDS`
with the correct values from the API (`contextWindow: 400000`,
`maxTokens: 128000`, `reasoning: true`, multimodal). This means users
on `discovery.enabled: false` still get gpt-5.5 visible without
needing to override `models.providers.github-copilot.models` in their
config.
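The mapping and filtering rules above can be sketched roughly as below. The entry shape (`object`, `vendor`, and the nesting of `limits`/`supports` under `capabilities`) and the `ModelDefinitionConfig` fields are illustrative approximations, not the real SDK or API types:

```typescript
// Hypothetical shapes; the real types live in the openclaw SDK and the
// Copilot /models response. Field nesting is an assumption.
interface CopilotModelEntry {
  id: string;
  vendor?: string;
  object?: string; // "model" for chat models; embeddings use another value
  capabilities?: {
    limits?: { max_context_window_tokens?: number; max_output_tokens?: number };
    supports?: { vision?: boolean; reasoning_effort?: string[] };
  };
}

interface ModelDefinitionConfig {
  id: string;
  contextWindow: number;
  maxTokens: number;
  input: string[];
  reasoning: boolean;
  api: "anthropic-messages" | "openai-responses";
}

function mapCatalogEntries(entries: CopilotModelEntry[]): ModelDefinitionConfig[] {
  const seen = new Set<string>();
  const out: ModelDefinitionConfig[] = [];
  for (const entry of entries) {
    if (entry.object !== "model") continue;         // drop embeddings etc.
    if (entry.id.startsWith("accounts/")) continue; // drop internal routers
    if (seen.has(entry.id)) continue;               // dedupe: first wins
    seen.add(entry.id);
    const limits = entry.capabilities?.limits;
    const supports = entry.capabilities?.supports;
    const effort = supports?.reasoning_effort;
    const anthropic = entry.vendor === "Anthropic" || entry.id.startsWith("claude");
    out.push({
      id: entry.id,
      // Fallback values here are illustrative, not the plugin's defaults.
      contextWindow: limits?.max_context_window_tokens ?? 128_000,
      maxTokens: limits?.max_output_tokens ?? 4_096,
      input: supports?.vision ? ["text", "image"] : ["text"],
      reasoning: Array.isArray(effort) && effort.length > 0,
      api: anthropic ? "anthropic-messages" : "openai-responses",
    });
  }
  return out;
}
```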
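The hook's fall-back-to-static behavior can be sketched as follows, with the fetcher injected as a parameter so the sketch stays self-contained (the real hook calls `fetchCopilotModelCatalog` directly and receives its context from the plugin runtime):

```typescript
type Fetcher = (baseUrl: string, token: string) => Promise<unknown[]>;

// Sketch of the catalog.run fallback path: any fetch/parse failure yields
// an empty model list, which leaves the static manifest catalog visible.
async function runCatalogHook(
  fetcher: Fetcher,
  baseUrl: string,
  token: string,
): Promise<{ models: unknown[] }> {
  try {
    return { models: await fetcher(baseUrl, token) };
  } catch {
    return { models: [] };
  }
}
```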
Tests added (5, all passing alongside the existing 24):
- `fetchCopilotModelCatalog` maps a representative `/models` response
  (chat models including an internal 1M-context Anthropic variant, a
  router, and an embedding) to the right `ModelDefinitionConfig` shape
  with real context windows.
- baseUrl trailing slash is normalized.
- Duplicate ids in the API response are deduped (first wins).
- Non-2xx HTTP raises so the caller can fall back to the static catalog.
- An empty token or baseUrl rejects synchronously without calling fetch.
Targeted run: `pnpm test extensions/github-copilot/models.test.ts` →
29/29 pass. `pnpm exec oxfmt --check extensions/github-copilot/` clean.
`pnpm tsgo:core` clean.
Real-world proof:
Built locally and dropped the resulting tarball into a downstream
container with `gh auth login --hostname github.com` (Copilot
subscription on the linked account). Before this change,
`openclaw models list --provider github-copilot` returned the 22-entry
static catalog with every entry showing 128k context. After this change,
the same command (with `--refresh`) returns 30 entries with API-accurate
context windows including the new gpt-5.1 family, the claude-opus-*-1m
variants, and the corrected `gemini-3*-preview` ids.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix(github-copilot): preserve all reasoning IDs and add gpt-5.3-codex support
The existing guard (8fd15ed0e5) only skipped rewriting reasoning item IDs
when encrypted_content was a non-null string. When gpt-5.3-codex is used
via GitHub Copilot, the model falls through to the forward-compat catch-all
with reasoning:false, so encrypted_content is never requested and arrives
as null. That bypasses the guard and causes a rewrite. Copilot validates
reasoning item IDs server-side regardless of whether the client includes
encrypted_content, so the rewritten id triggers the 400 error.
Two changes:
1. connection-bound-ids.ts: skip ALL reasoning items unconditionally.
Reasoning items always reference server-side state bound to their
original ID; rewriting any of them breaks Copilot's lookup.
2. models.ts + index.ts: extend the forward-compat cloning logic to
cover gpt-5.3-codex (adds it to the template-target set and to
CODEX_TEMPLATE_MODEL_IDS so it can also serve as a template source
for gpt-5.4). Adds gpt-5.3-codex to COPILOT_XHIGH_MODEL_IDS for
the thinking profile.
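Change 1 amounts to an unconditional type check. A minimal sketch, where the item shape and function name are placeholders for the real code in connection-bound-ids.ts:

```typescript
// Hypothetical item shape; the real type lives in connection-bound-ids.ts.
interface ConversationItem {
  type: string; // "reasoning", "message", ...
  id: string;
  encrypted_content?: string | null;
}

// Before this fix, a reasoning item kept its id only when encrypted_content
// was a non-null string. Now every reasoning item keeps its original id,
// since Copilot validates reasoning ids server-side either way.
function rewriteItemId(
  item: ConversationItem,
  freshId: () => string,
): ConversationItem {
  if (item.type === "reasoning") return item; // skip ALL reasoning items
  return { ...item, id: freshId() };
}
```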
Thanks @InvalidPandaa.
* docs(github-copilot): clarify gpt-5.3-codex is a no-op template for itself
https://claude.ai/code/session_01EAFmq4WyKkiUkVAqRXp4Bm
* fix(github-copilot): remove dead reasoning prefix branch in deriveReplacementId
https://claude.ai/code/session_01EAFmq4WyKkiUkVAqRXp4Bm
* fix(github-copilot): align reasoning id replay tests
* test(plugin-sdk): use cjs sidecar for require fast path
---------
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Peter Steinberger <steipete@gmail.com>