This post covers what shipped in v2026.4.14.
What shipped
- OpenAI Codex/models: add forward-compat support for `gpt-5.4-pro`, including Codex pricing/limits and list/status visibility before the upstream catalog catches up. (#66453) Thanks @jepson-liu.
- Telegram/forum topics: surface human topic names in agent context, prompt metadata, and plugin hook metadata by learning names from Telegram forum service messages. (#65973) Thanks @ptahdunbar.
- Agents/Ollama: forward the configured embedded-run timeout into the global undici stream timeout tuning, so slow local Ollama runs respect the operator-set run timeout instead of inheriting the default stream cutoff. (#63175) Thanks @mindcraftreader and @vincentkoc.
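A minimal sketch of the timeout selection described above, with illustrative names (`resolveStreamTimeoutMs` and `DEFAULT_STREAM_TIMEOUT_MS` are not OpenClaw's real identifiers): the operator-set run timeout, when present, wins over the library's default stream cutoff.

```typescript
// Assumed default stream cutoff; the real value lives in OpenClaw's
// undici tuning, not here.
const DEFAULT_STREAM_TIMEOUT_MS = 600_000;

// Pick the operator-configured embedded-run timeout when one is set,
// falling back to the default stream cutoff otherwise.
function resolveStreamTimeoutMs(runTimeoutMs?: number): number {
  return runTimeoutMs && runTimeoutMs > 0
    ? runTimeoutMs
    : DEFAULT_STREAM_TIMEOUT_MS;
}
```

With undici, the resolved value would then feed the global dispatcher's `bodyTimeout`/`headersTimeout` options so long-running local streams are not cut off early.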
- Models/Codex: include `apiKey` in the codex provider catalog output so the Pi ModelRegistry validator no longer rejects the entry and silently drops all custom models from every provider in `models.json`. (#66180) Thanks @hoyyeva.
- Tools/image+pdf: normalize configured provider/model refs before the media-tool registry lookup so image and PDF tool runs stop rejecting valid Ollama vision models as unknown just because the tool path skipped the usual model-ref normalization step. (#59943) Thanks @yqli2420 and @vincentkoc.
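The model-ref normalization step mentioned above can be sketched as a small pure function (a hypothetical helper, not OpenClaw's actual implementation): split on the first `/`, trim whitespace, and canonicalize the provider casing before any registry lookup.

```typescript
// Hypothetical shape for a normalized model reference.
interface ModelRef {
  provider: string;
  model: string;
}

// Split "provider/model" on the first slash, trim stray whitespace, and
// lower-case the provider so registry lookups use a canonical key.
// Refs without a provider segment are passed through with an empty provider.
function normalizeModelRef(raw: string): ModelRef {
  const trimmed = raw.trim();
  const slash = trimmed.indexOf("/");
  if (slash === -1) {
    return { provider: "", model: trimmed };
  }
  return {
    provider: trimmed.slice(0, slash).toLowerCase(),
    model: trimmed.slice(slash + 1),
  };
}
```

Running every tool path through one helper like this is what keeps an `Ollama/llava`-style ref from being treated as a different model than `ollama/llava`.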
- Slack/interactions: apply the configured global `allowFrom` owner allowlist to channel block-action and modal interactive events, require an expected sender id for cross-verification, and reject ambiguous channel types so interactive triggers can no longer bypass the documented allowlist intent in channels without a `users` list. Open-by-default behavior is preserved when no allowlists are configured. (#66028) Thanks @eleqtrizit.
- Media-understanding/attachments: fail closed when a local attachment path cannot be canonically resolved via `realpath`, so a `realpath` error can no longer downgrade the canonical-roots allowlist check to a non-canonical comparison; attachments that also have a URL still fall back to the network fetch path. (#66022) Thanks @eleqtrizit.
- Agents/gateway-tool: reject `config.patch` and `config.apply` calls from the model-facing gateway tool when they would newly enable any flag enumerated by `openclaw security audit` (for example `dangerouslyDisableDeviceAuth`, `allowInsecureAuth`, `dangerouslyAllowHostHeaderOriginFallback`, `hooks.gmail.allowUnsafeExternalContent`, `tools.exec.applyPatch.workspaceOnly: false`); already-enabled flags pass through unchanged, so non-dangerous edits in the same patch still apply, and direct authenticated operator RPC behavior is unchanged. (#62006) Thanks @eleqtrizit.
- Google image generation: strip a trailing `/openai` suffix from configured Google base URLs only when calling the native Gemini image API so Gemini image requests stop 404ing without breaking explicit OpenAI-compatible Google endpoints. (#66445) Thanks @dapzthelegend.
- Telegram/forum topics: persist learned topic names to the Telegram session sidecar store so agent context can keep using human topic names after a restart instead of relearning from future service metadata. (#66107) Thanks @obviyus.
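The gateway-tool guard above hinges on a before/after comparison: a flag that is already on passes through, while a flag flipped from off to on blocks the call. A minimal sketch, with an abbreviated flag list (the real enumeration comes from `openclaw security audit`, and these helper names are illustrative):

```typescript
// Abbreviated stand-in for the audited flag list; the real list is
// owned by `openclaw security audit`.
const AUDITED_FLAGS: readonly string[] = [
  "dangerouslyDisableDeviceAuth",
  "allowInsecureAuth",
  "dangerouslyAllowHostHeaderOriginFallback",
];

// Return the audited flags a proposed patch would flip from off to on.
// Flags already enabled in the current config are ignored, so the rest
// of the patch can still apply.
function newlyEnabledFlags(
  current: Record<string, unknown>,
  patched: Record<string, unknown>,
): string[] {
  return AUDITED_FLAGS.filter(
    (flag) => !current[flag] && Boolean(patched[flag]),
  );
}
```

A `config.patch` call would then be rejected whenever `newlyEnabledFlags(...)` is non-empty, which is how an already-enabled `allowInsecureAuth` survives while a newly introduced `dangerouslyDisableDeviceAuth` trips the guard.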
- Doctor/systemd: keep `openclaw doctor --repair` and service reinstall from re-embedding dotenv-backed secrets in user systemd units, while preserving newer inline overrides over stale state-dir `.env` values. (#66249) Thanks @tmimmanuel.
- Ollama/OpenAI-compat: send `stream_options.include_usage` for Ollama streaming completions so local Ollama runs report real usage instead of falling back to bogus prompt-token counts that trigger premature compaction. (#64568) Thanks @xchunzhao and @vincentkoc.
- Doctor/plugins: cache external `preferOver` catalog lookups within each plugin auto-enable pass so large `agents.list` configs no longer peg the CPU and repeatedly reread plugin catalogs during doctor/plugins resolution. (#66246) Thanks @yfge.
- GitHub Copilot/thinking: allow `github-copilot/gpt-5.4` to use `xhigh` reasoning so Copilot GPT-5.4 matches the rest of the GPT-5.4 family. (#50168) Thanks @jakepresent and @vincentkoc.
- Memory/embeddings: preserve non-OpenAI provider prefixes when normalizing OpenAI-compatible embedding model refs so proxy-backed memory providers stop failing with `Unknown memory embedding provider`. (#66452) Thanks @jlapenna.
- Agents/local models: clarify low-context preflight hints for self-hosted models, point config-backed caps at the relevant OpenClaw setting, and stop suggesting larger models when `agents.defaults.contextTokens` is the real limit. (#66236) Thanks @ImLukeF.
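For context on the `stream_options` fix above: an OpenAI-compatible streaming request opts into usage reporting like this (a generic illustration of the wire format, not OpenClaw code; the model tag is an arbitrary example).

```typescript
// Generic OpenAI-compatible chat-completions request body. With
// include_usage set, the final stream chunk carries real token usage
// instead of leaving the client to estimate prompt tokens.
const body = {
  model: "llama3.1", // any local Ollama model tag
  messages: [{ role: "user", content: "Hello" }],
  stream: true,
  stream_options: { include_usage: true },
};
```

Without that field, a streaming response reports no usage at all, which is what forced the client-side fallback counts that triggered premature compaction.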