OpenClaw Fixes Plugin Ecosystem Split: Codex and Pi Hooks Unified, Load Time Reduced by 90%
According to monitoring by Beating, the open-source AI agent platform OpenClaw has released version 2026.4.22. The headline change is lifecycle alignment between the Codex harness and the Pi harness. Previously, plugins behaved inconsistently across the two paths: the same plugin could miss hook calls depending on which harness ran it. This release brings the critical hooks (before_prompt_build, before_compaction / after_compaction, after_tool_call, before_message_write, and llm_input / llm_output / agent_end) to parity, so plugin developers no longer need to adapt to each path separately. The alignment also introduces a plugin extension interface on the Codex side, with support for asynchronous tool_result middleware. It follows the OpenAI Codex team's recent fix for the silent-fallback issue in Codex harness certification, marking another round of systematic repairs to OpenClaw's Codex integration.

Another architectural addition is a local TUI embedding mode: users can run agent conversations directly in the terminal without starting the Gateway, while keeping the plugin approval mechanism. The default thinking level of the reasoning model has been raised from the previous silent off/low state to medium, so after upgrading, users who never manually set a reasoning level will see the model output its thinking process by default.

On the performance side, plugin loading now uses native Jiti, cutting startup time by 82% to 90%, and doctor --non-interactive runs about 74% faster. Kimi K2.6 multi-round agent calls are no longer interrupted by incorrect sanitization of tool_call IDs. On Linux, subprocesses automatically raise their oom_score_adj, so that under memory pressure the kernel terminates temporary workers before the Gateway main process.
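The hook alignment described above can be pictured as a single dispatch table shared by both harnesses. The hook names below come from the release notes, but the registration API is a hypothetical sketch for illustration, not OpenClaw's actual plugin interface:

```python
# Hypothetical sketch: one hook registry that both the Pi and Codex
# harnesses route through, so a plugin registered once sees its hooks
# fire on either path. Hook names are from the release notes; the
# HookRegistry API itself is invented for illustration.

HOOKS = (
    "before_prompt_build", "before_compaction", "after_compaction",
    "after_tool_call", "before_message_write",
    "llm_input", "llm_output", "agent_end",
)

class HookRegistry:
    def __init__(self):
        self._handlers = {name: [] for name in HOOKS}

    def on(self, name, handler):
        # Reject unknown hook names early so a typo in a plugin
        # fails at registration time, not silently at runtime.
        if name not in self._handlers:
            raise ValueError(f"unknown hook: {name}")
        self._handlers[name].append(handler)

    def fire(self, name, payload):
        # Handlers may transform the payload (middleware style) or
        # return None to leave it unchanged.
        for handler in self._handlers[name]:
            payload = handler(payload) or payload
        return payload
```

The point of the design is that parity becomes structural: once every harness fires events through the same registry, a hook cannot be "missing" on one path without being missing on both.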
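The oom_score_adj mechanism mentioned above is a standard Linux interface: writing a value in [-1000, 1000] to /proc/&lt;pid&gt;/oom_score_adj biases the kernel's OOM killer, with higher values meaning "kill this process first". A minimal sketch of the pattern (the function names are illustrative, not OpenClaw's code):

```python
import os

def oom_score_adj_path(pid: int) -> str:
    # Each Linux process exposes its OOM-killer bias at this path.
    return f"/proc/{pid}/oom_score_adj"

def clamp_oom_score_adj(value: int) -> int:
    # The kernel accepts values in [-1000, 1000]; higher means the
    # process is a more attractive victim under memory pressure.
    return max(-1000, min(1000, value))

def deprioritize_worker(pid: int, bias: int = 500) -> None:
    # Raise a temporary worker's OOM score so the kernel reaps it
    # before the long-lived Gateway process. Linux-only; raising
    # one's own score needs no special privileges. Skips silently
    # on other platforms.
    path = oom_score_adj_path(pid)
    if os.path.exists(path):
        with open(path, "w") as f:
            f.write(str(clamp_oom_score_adj(bias)))
```

Raising a score is always permitted for one's own processes; lowering it below the inherited value requires CAP_SYS_RESOURCE, which is why the safe direction for a supervisor is to mark workers as expendable rather than mark itself as protected.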
The configuration system gains a last-known-good recovery feature, preventing an accidentally overwritten config from sending the Gateway into a crash loop. New provider integrations include xAI image generation (grok-imagine-image / grok-imagine-image-pro), TTS, and STT; Tencent Cloud ships as a built-in official provider plugin, including the Hy3 preview model and its pricing. When web search is enabled on OpenAI models, they now call OpenAI's native web_search tool directly instead of going through OpenClaw's managed search channel.
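A last-known-good recovery scheme like the one described is commonly built from two pieces: snapshot the current file before every overwrite, and fall back to the snapshot when the live file fails to parse. A minimal sketch of that pattern (file names and functions are illustrative, not OpenClaw's implementation):

```python
import json
import os
import shutil

CONFIG = "config.json"
BACKUP = "config.json.last-good"

def save_config(cfg: dict) -> None:
    # Snapshot the current (presumed good) file before touching it,
    # then write the new config atomically via rename so a crash
    # mid-write can never leave a half-written live file.
    if os.path.exists(CONFIG):
        shutil.copy2(CONFIG, BACKUP)
    tmp = CONFIG + ".tmp"
    with open(tmp, "w") as f:
        json.dump(cfg, f)
    os.replace(tmp, CONFIG)

def load_config() -> dict:
    # Try the live file first; if it is missing or corrupt, fall
    # back to the last-known-good snapshot instead of crashing at
    # startup (the crash-loop scenario the feature prevents).
    for path in (CONFIG, BACKUP):
        try:
            with open(path) as f:
                return json.load(f)
        except (OSError, json.JSONDecodeError):
            continue
    return {}
```

The atomic rename matters as much as the backup: without it, the overwrite itself can corrupt the only good copy before the snapshot exists.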