OpenClaw fixes plugin ecosystem split: Codex and Pi hooks unified, load time cut by up to 90%

According to Beating Monitoring, the open-source AI Agent platform OpenClaw released version 2026.4.22.
The biggest change is unifying the plugin lifecycle across the Codex and Pi harnesses.
Previously, plugins behaved inconsistently across the two harness paths: the same plugin could miss hook calls depending on which harness it ran under.
This version synchronizes key hooks such as before_prompt_build, before_compaction / after_compaction, after_tool_call, before_message_write, and llm_input / llm_output / agent_end, so plugin developers no longer need to adapt to each path separately.
The synchronization work also adds a plugin extension interface on the Codex side that supports asynchronous tool_result middleware.
This is another systematic update in OpenClaw’s Codex integration, following this week’s fix by the OpenAI Codex team for the silent fallback issue in Codex harness authentication.

Another architectural addition is the TUI local embedded mode:
users can run agent conversations directly in the terminal without starting the Gateway, while the plugin approval mechanism remains in force.
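Keeping approval in the Gateway-less path is the interesting part. A rough sketch of that gating logic, with hypothetical names (OpenClaw's real interface may differ):

```typescript
// Hypothetical sketch: function and type names are illustrative.
type Approval = "allow" | "deny";
type Approver = (tool: string, args: string) => Promise<Approval>;

// Embedded TUI mode skips the Gateway process, but every tool call still
// passes through the same approval step before it executes.
export async function runToolGated(
  tool: string,
  args: string,
  approve: Approver,
  exec: (tool: string, args: string) => Promise<string>,
): Promise<string> {
  if ((await approve(tool, args)) === "deny") {
    throw new Error(`tool ${tool} denied by approval plugin`);
  }
  return exec(tool, args);
}
```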

The default thinking level of reasoning models has been quietly raised from off/low to medium;
users who have not set the level manually will find that, after the upgrade, the model outputs its reasoning process by default.
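Users who prefer the old behavior can pin the level explicitly. The key names below are hypothetical, shown only to illustrate the shape of such an override; check OpenClaw's configuration docs for the real ones:

```toml
# Hypothetical config keys, for illustration only.
[model]
thinking_level = "low"   # the 2026.4.22 default is now "medium"
```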

In terms of performance, plugin loading now uses native Jiti, reducing startup time by 82% to 90%;
the runtime of doctor --non-interactive has decreased by approximately 74%.
Kimi K2.6 multi-turn agent calls no longer break due to incorrect sanitization of tool_call IDs.
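The failure mode is worth spelling out: if tool_call IDs are sanitized inconsistently between the call message and the result message, the provider can no longer pair them, and the multi-turn loop aborts. A minimal sketch of a deterministic sanitizer (the allowed character set is an assumption for illustration, not Kimi's documented rule):

```typescript
// Assumption for illustration: IDs restricted to [A-Za-z0-9_-].
// What matters is determinism: the call side and the result side must map
// the same original ID to the same cleaned ID, or turns stop pairing up.
export function sanitizeToolCallId(id: string): string {
  return id.replace(/[^A-Za-z0-9_-]/g, "_");
}
```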
On Linux, subprocesses now raise their own oom_score_adj, so under memory pressure the kernel terminates temporary workers before the Gateway main process.
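This relies on a standard /proc interface: writing a positive value to /proc/self/oom_score_adj makes a process a preferred OOM-kill target. A sketch of how a worker might raise its own score at startup (the function names and chosen value are illustrative, not OpenClaw's code):

```typescript
import { writeFileSync } from "node:fs";

// The kernel accepts oom_score_adj values in [-1000, 1000]; higher means
// "kill me first" under memory pressure.
export function clampOomScoreAdj(value: number): number {
  return Math.max(-1000, Math.min(1000, Math.trunc(value)));
}

// Raise this process's own score (Linux only; a no-op elsewhere), so the
// kernel sacrifices the worker before a long-lived parent process.
export function raiseOwnOomScoreAdj(value: number): void {
  if (process.platform !== "linux") return;
  writeFileSync("/proc/self/oom_score_adj", String(clampOomScoreAdj(value)));
}
```

Raising the value needs no privileges; lowering it below the inherited value is what requires CAP_SYS_RESOURCE, which is why this direction of adjustment works for unprivileged workers.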
The configuration system now includes last-known-good recovery, so an accidental overwrite of the config no longer sends the Gateway into a crash loop.
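The pattern behind last-known-good recovery is simple. A sketch with hypothetical helpers (the real recovery logic is OpenClaw's own):

```typescript
// Hypothetical sketch of a last-known-good loader: validate the live
// config; if validation throws, fall back to the last copy that passed,
// instead of letting the Gateway restart into the same bad file forever.
export function loadConfig<T>(
  readLive: () => string,
  readLastGood: () => string,
  validate: (raw: string) => T,
): T {
  try {
    return validate(readLive());
  } catch {
    return validate(readLastGood());
  }
}
```

The matching write path would copy the live file over the last-known-good copy only after it validates.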

In the new provider section, xAI now supports image generation (grok-imagine-image / grok-imagine-image-pro), TTS, and STT;
Tencent Cloud is integrated as an official provider plugin, including Hy3 preview models and pricing.
When enabling web search for OpenAI models, it now directly uses OpenAI’s native web_search tool, no longer routing through OpenClaw’s hosted search channel.
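At the request level, "native" means the built-in tool is declared directly in the request body rather than proxied through a hosted search channel. The field names below follow OpenAI's Responses API as of this writing; treat them as an assumption to verify against the current docs:

```typescript
// Assumed Responses API shape: the built-in tool is declared by type,
// with no plugin-hosted search proxy in between.
export function buildWebSearchRequest(model: string, input: string) {
  return {
    model,
    input,
    tools: [{ type: "web_search" as const }],
  };
}
```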
