Hugging Face Open Sources ml-intern, an ML Research Agent for Automated Paper Reading, Data Selection, and Training
According to monitoring by Dongcha Beating, Hugging Face has open-sourced ml-intern, an ML research agent that can autonomously complete the full loop of reading papers, curating datasets, launching GPU training, evaluating results, and iterating on improvements. The project is built on Hugging Face's own smolagents framework, offers both CLI and web-based access, and the code is available on GitHub.

The ml-intern toolchain is built around the Hugging Face ecosystem. It retrieves papers from arXiv and HF Papers and reads deeply along citation chains; it browses datasets on the HF Hub, checks their quality, reformats them, and feeds them into training; and when no local GPU is available, it can call HF Jobs to launch cloud training runs, automatically reading evaluation outputs, diagnosing failures, and rerunning once training completes. By default, Claude Sonnet 4.5 drives the decision loop, with a cap of 300 iterations per run and automatic compression of any context exceeding 170k tokens.

Hugging Face's release post includes three case studies. In a scientific-reasoning task, the agent identified the OpenScience and NemoTron-CrossThink datasets from the citation chain of a benchmark paper, difficulty-filtered seven variants of ARC, SciQ, and MMLU, and ran 12 rounds of SFT on Qwen3-1.7B, lifting the GPQA score from 10% to 32% in under 10 hours. In a medical scenario, the agent judged the quality of existing datasets to be insufficient, wrote its own script to generate 1,100 synthetic data points, a 50x expansion of the training set, and surpassed Codex by over 60% on HealthBench. In a competitive-math scenario, the agent wrote a GRPO training script on its own, launched training on an A100 via HF Spaces, and, after observing reward collapse, ran ablation experiments to investigate the cause.
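The release doesn't reproduce ml-intern's internals here, but a minimal smolagents scaffold of the same shape, a code agent with a paper-search tool and a step cap, might look like the sketch below. The 300-step cap and the Claude Sonnet 4.5 driver mirror the description above; the model id string passed to LiteLLMModel is illustrative, and ml-intern's real tool set (HF Papers, Hub dataset browsing, HF Jobs) is far richer than a single arXiv query tool.

```python
# A minimal sketch of an ml-intern-style scaffold on smolagents.
# Assumptions: smolagents' CodeAgent/LiteLLMModel/tool APIs and the public
# arXiv Atom API; the model id is illustrative, not ml-intern's exact config.
import requests
from smolagents import CodeAgent, LiteLLMModel, tool

@tool
def search_arxiv(query: str, max_results: int = 5) -> str:
    """Search the public arXiv API and return the raw Atom XML feed.

    Args:
        query: Free-text search string.
        max_results: Maximum number of papers to return.
    """
    resp = requests.get(
        "http://export.arxiv.org/api/query",
        params={"search_query": f"all:{query}", "max_results": max_results},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.text

# Claude Sonnet 4.5 drives ml-intern's decision loop by default; max_steps
# mirrors the 300-iteration cap described in the release.
model = LiteLLMModel(model_id="anthropic/claude-sonnet-4-5")  # illustrative id
agent = CodeAgent(tools=[search_arxiv], model=model, max_steps=300)

agent.run(
    "Find recent papers on difficulty-based data filtering for SFT "
    "and summarize the selection criteria they use."
)
```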
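For the scientific-reasoning case, the post describes difficulty-filtering ARC, SciQ, and MMLU variants before running SFT on Qwen3-1.7B. A hedged sketch with TRL's SFTTrainer is below: the length-based difficulty proxy is a placeholder introduced for illustration, not the agent's actual filter, and the release doesn't specify its SFT hyperparameters.

```python
# Sketch: difficulty-filtered SFT on Qwen3-1.7B with TRL.
# The length-based "difficulty" proxy is a placeholder, not ml-intern's criterion.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

ds = load_dataset("allenai/ai2_arc", "ARC-Challenge", split="train")

def to_text(example):
    # Flatten an ARC item into a plain-text prompt/answer pair for SFT.
    choices = " ".join(
        f"({label}) {text}"
        for label, text in zip(example["choices"]["label"], example["choices"]["text"])
    )
    return {"text": f"Question: {example['question']}\n{choices}\nAnswer: {example['answerKey']}"}

# Placeholder difficulty filter: keep longer, multi-clause questions.
ds = ds.filter(lambda ex: len(ex["question"]) > 120).map(to_text)

trainer = SFTTrainer(
    model="Qwen/Qwen3-1.7B",
    train_dataset=ds,
    args=SFTConfig(output_dir="qwen3-1.7b-sft", max_steps=500),
)
trainer.train()
```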
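In the competitive-math case, the agent reportedly wrote its own GRPO training script. TRL ships a GRPOTrainer whose shape is roughly the sketch below; the toy length reward and the tldr dataset stand in for whatever math-correctness reward and problem set the agent actually used, and a reward collapse like the one it observed would show up in the logged per-step reward curve.

```python
# Sketch: a GRPO run with TRL's GRPOTrainer.
# The length-based reward and the tldr dataset are stand-ins for the agent's
# actual math-correctness reward and competition problem set.
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

def toy_reward(completions, **kwargs):
    """Placeholder reward: prefer completions near a target length."""
    return [-abs(50 - len(c)) / 50.0 for c in completions]

dataset = load_dataset("trl-lib/tldr", split="train")

trainer = GRPOTrainer(
    model="Qwen/Qwen3-1.7B",
    reward_funcs=toy_reward,
    train_dataset=dataset,
    # logging_steps surfaces the per-step reward, which is where a collapse
    # (reward flatlining or diving) becomes visible during training.
    args=GRPOConfig(output_dir="qwen3-1.7b-grpo", logging_steps=10),
)
trainer.train()
```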