OpenAI Codex installs surge to 90M in a single week, fueled by GPT-5.5 rollout
OpenAI’s Codex racked up 90 million installs in a single week. That’s not a cumulative lifetime number. That’s seven days of downloads.
The surge, powered by the company’s recently launched GPT-5.5 model, pushed Codex past both its own prior weekly benchmarks and rival Anthropic’s Claude Code tool.
What’s driving the numbers
The timing isn’t coincidental. GPT-5.5’s rollout brought meaningful upgrades to the Codex platform, including a 400K token context window for the tool itself and API support for contexts stretching up to 1 million tokens.
Token efficiency also got a serious upgrade. GPT-5.5 reportedly uses approximately 40% fewer tokens per task compared to its predecessor.
API pricing for the new model sits at $5 per million input tokens and $30 per million output tokens.
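To make the economics concrete, the quoted rates can be turned into a back-of-the-envelope cost per task. A minimal sketch, using the article's prices ($5/M input, $30/M output) and the reported ~40% token reduction; the per-task token counts are hypothetical illustrations, not measured figures:

```python
# Back-of-the-envelope cost per task at the quoted GPT-5.5 API rates.
INPUT_PRICE = 5.00 / 1_000_000    # dollars per input token ($5 / 1M)
OUTPUT_PRICE = 30.00 / 1_000_000  # dollars per output token ($30 / 1M)

def task_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one task given its input/output token usage."""
    return input_tokens * INPUT_PRICE + output_tokens * OUTPUT_PRICE

# Hypothetical baseline task: 50K input tokens, 10K output tokens.
old_cost = task_cost(50_000, 10_000)

# The same task with ~40% fewer tokens, as reported for GPT-5.5.
new_cost = task_cost(int(50_000 * 0.6), int(10_000 * 0.6))

print(f"baseline task cost:        ${old_cost:.2f}")   # $0.55
print(f"with 40% fewer tokens:     ${new_cost:.2f}")   # $0.33
```

Because token count multiplies directly into price, a 40% reduction in tokens per task translates one-for-one into a 40% reduction in per-task spend, before any rate changes.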
Performance benchmarks tell a similar story. On NVIDIA systems, Codex powered by GPT-5.5 delivers a reported 35x lower cost per million tokens. Over 10,000 NVIDIA employees are reportedly seeing significant improvements in both token generation speed and overall coding efficiency.
The competitive landscape just tilted
The 90 million install figure matters less as an absolute number and more as a relative one. Codex didn’t just grow. It overtook Claude Code, which had been building steady momentum as Anthropic’s answer to AI-assisted programming.
The jump to a 1 million token context window at the API level is particularly significant. Most real-world software projects contain far more code than earlier models could process in a single session. Developers were forced to chunk their code into smaller pieces and hope the AI could stitch together a coherent understanding. That constraint is loosening considerably.
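The chunking workaround described above can be sketched in a few lines. This is an illustrative toy, not any tool's actual implementation: the 4-characters-per-token heuristic is a rough rule of thumb, and the file sizes are made up to show how a project that overflows a 400K-token window can fit in a 1M-token one:

```python
# Sketch of the "chunking" workaround: greedily packing source files
# into groups that each fit under a model's context-window token budget.

def rough_token_count(text: str) -> int:
    """Crude token estimate (~4 characters per token for English/code)."""
    return max(1, len(text) // 4)

def chunk_for_context(files: list[str], context_tokens: int) -> list[list[str]]:
    """Greedily pack files into chunks whose total tokens fit the budget."""
    chunks: list[list[str]] = []
    current: list[str] = []
    used = 0
    for src in files:
        n = rough_token_count(src)
        if current and used + n > context_tokens:
            chunks.append(current)          # budget exceeded: start a new chunk
            current, used = [], 0
        current.append(src)
        used += n
    if current:
        chunks.append(current)
    return chunks

# Hypothetical project: three files of ~200K, ~150K, and ~225K tokens.
project = ["x" * 800_000, "y" * 600_000, "z" * 900_000]

print(len(chunk_for_context(project, 400_000)))    # 2 sessions at a 400K window
print(len(chunk_for_context(project, 1_000_000)))  # 1 session at a 1M window
```

Each extra chunk is a separate session in which the model loses sight of the rest of the codebase, which is why raising the window from 400K to 1M tokens matters more than the raw multiplier suggests.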
What this means for the crypto and tech ecosystem
The 40% reduction in token usage per task also has direct economic implications for any team running AI-assisted development at scale. Crypto projects, particularly those funded by token treasuries with volatile valuations, are perpetually cost-conscious.
There’s a flip side worth watching. As AI coding tools become ubiquitous, the volume of code being produced across all software, including smart contracts, will increase dramatically. The same tools that help developers build faster can also help them introduce bugs faster, particularly in security-critical environments like DeFi where a single overlooked edge case can drain a protocol.