Former ByteDance Seed Engineer: ByteDance's iteration cycle takes half a year, while rumors suggest Google only needs three months
According to BlockBeats monitoring, former ByteDance Seed team engineer and current Peking University assistant professor Zhang Chi revealed on the podcast “Into Asia” that ByteDance needs about half a year to complete a full large-model training cycle (pre-training plus post-training), while Google is rumored to need only three months. He believes iteration speed is one of the core reasons Chinese companies are struggling to catch up. Zhang Chi spent about a year at ByteDance; his math team was more research-oriented, and he described its positioning as “more for publicity,” distinct from the pre-training and post-training teams responsible for model delivery.
Zhang Chi described Seed’s internal culture as benchmaxxing: team leads were evaluated on the benchmarks they owned, and everyone raced for scores, “but this can’t translate into a good experience in actual use.” On paper, he said, models from major Chinese companies can match the cutting-edge models in the U.S., but in practice they “are not good enough.” Seed’s goal is to be world-class, “but unfortunately, I don’t think we have caught up”; even the goal of being number one domestically “has not been achieved.” By the end of 2024, Seed believed it had caught up with GPT-4o; then DeepSeek was released, the team realized the gap still existed, and by the time he joined, the entire team was urgently pivoting toward reinforcement learning.