More than three years ago, when I was still using Sovits, training a voice model required source separation first: stripping out background noise to isolate the dry vocal track.
Then the dataset had to be filtered, discarding segments with heavy residual noise, before training could begin.
Typically, around 8,000 steps of training yields the best voice fidelity; if the run passes 8,000 steps and the score is still below 25, the dataset and the training run are basically a write-off.
If you insist on pushing on anyway, past 14,000 steps the model "diverges," and the output voice ends up either heavily robotic or not recognizably human at all.
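Those step-count rules amount to an early-stopping heuristic: train to the sweet spot, check quality there, and abandon the run if it hasn't cleared the bar rather than training into the divergence zone. A minimal sketch, where the thresholds (8,000 / 25 / 14,000) come from the post but `train_step` and `score` are hypothetical stand-ins for a real training loop and fidelity metric:

```python
# Early-stopping sketch for the post's heuristic. Only the thresholds
# come from the post; all function names here are illustrative.

SWEET_SPOT = 8_000    # best fidelity is usually reached around here
MIN_SCORE = 25        # below this at the sweet spot, the run is a write-off
DIVERGENCE = 14_000   # past this, output degrades into "divergence"

def train(train_step, score):
    """Run training with the post's stopping rule.

    train_step(step) -> None   one optimization step (hypothetical)
    score()          -> float  current fidelity score (hypothetical)
    """
    for step in range(1, SWEET_SPOT + 1):
        train_step(step)
    # ~8,000 steps usually gives the best fidelity; check quality here
    if score() < MIN_SCORE:
        return "abandon"  # dataset and run are basically useless
    # Stop here on purpose: continuing toward 14,000+ steps risks divergence
    return "keep"
```

The point of the rule is that more steps past the sweet spot never rescue a bad dataset; they only overfit it harder.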
Does this resemble the development process of quantitative trading?
Extracting the dry vocal is like handing the machine a clean dataset for its self-learning and prediction models.
Cutting out the heavily noisy segments is like filtering out invalid market data (such as one-minute flash spikes and crashes).
Stopping at around 8,000 steps avoids severe overfitting; pushing past 14,000 steps leads to "divergence" (extreme overfitting), and live results end up no better than a coin flip.
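The data-cleaning half of the analogy can be sketched the same way: before fitting anything, drop one-minute bars whose return is an implausible spike or crash, just as noisy audio segments are cut before training. A hedged sketch; the 5% threshold and the data shape are illustrative assumptions, not from the post:

```python
# Filter "invalid" one-minute bars: drop any bar whose return versus the
# previous kept bar exceeds a threshold (a flash spike/crash).
# The 5% threshold and the plain-list data shape are assumptions.

def clean_bars(closes, max_abs_return=0.05):
    """Return the subsequence of closing prices with outlier bars removed.

    closes: list of one-minute closing prices in time order.
    A bar is dropped when its absolute return against the previous
    *kept* bar exceeds max_abs_return.
    """
    if not closes:
        return []
    kept = [closes[0]]
    for price in closes[1:]:
        if abs(price / kept[-1] - 1.0) <= max_abs_return:
            kept.append(price)
    return kept
```

For example, `clean_bars([100, 101, 150, 102])` drops the 150 flash spike and keeps `[100, 101, 102]`. Comparing against the previous kept bar (rather than the previous raw bar) prevents a single bad print from also invalidating the normal bar that follows it.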
The fields are different, but the underlying logic is the same.
In the future, the ones who beat us may well not be industry insiders at all, but people crossing over from other fields...