ULMFiT: The 2018 paper that made today's LLM fine-tuning methods possible
How does ULMFiT relate to the way today's LLMs are built?
What actually happened
fast.ai co-founder Jeremy Howard has described the relationship between ULMFiT (Universal Language Model Fine-tuning) and modern large language models. He put it plainly: ULMFiT borrowed a pretraining recipe from computer vision — first self-supervised language-model pretraining on general text, then a two-step fine-tuning process to adapt the model to specific NLP tasks. Today's mainstream LLMs are, at their core, still doing the same thing.
The value of the 2018 paper was showing that strong NLP transfer learning is achievable with very little labeled data, while also setting new state-of-the-art results on text classification benchmarks at the time.
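The fine-tuning stage the article refers to rests on a few concrete techniques from the ULMFiT paper: a slanted triangular learning-rate schedule (short linear warm-up, long linear decay) and discriminative learning rates that shrink by a constant factor for lower layers. The sketch below implements those two formulas in plain Python; the default values (`cut_frac=0.1`, `ratio=32`, decay factor `2.6`) are the paper's reported defaults, and the function names are illustrative rather than taken from any particular library.

```python
import math

def stlr(t: int, T: int, lr_max: float = 0.01,
         cut_frac: float = 0.1, ratio: float = 32.0) -> float:
    """Slanted triangular learning rate at step t of T total steps.

    Rises linearly for the first cut_frac of training, then decays
    linearly; lr_max/ratio is the minimum learning rate.
    """
    cut = math.floor(T * cut_frac)
    if t < cut:
        p = t / cut                                   # warm-up fraction
    else:
        p = 1 - (t - cut) / (cut * (1 / cut_frac - 1))  # decay fraction
    return lr_max * (1 + p * (ratio - 1)) / ratio

def discriminative_lrs(base_lr: float, n_layers: int,
                       decay: float = 2.6) -> list[float]:
    """Per-layer learning rates: each lower layer gets lr / 2.6.

    Index 0 is the lowest (most general) layer; the last entry is the
    top, task-specific layer, which keeps the full base_lr.
    """
    return [base_lr / decay ** (n_layers - 1 - l) for l in range(n_layers)]

# The schedule peaks at the end of warm-up, then decays toward lr_max/ratio.
T = 100
assert stlr(0, T) < stlr(10, T)   # warm-up: rate is rising
assert stlr(10, T) > stlr(99, T)  # decay: rate is falling
```

Gradual unfreezing, the other half of ULMFiT's "two-step" adaptation, would wrap this: train only the top layer first, then unfreeze one layer per epoch, applying the per-layer rates above.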
Why this piece of history is worth knowing
Comparison with methods from the same period
The table below summarizes how the three differ in representation, training, and adaptation strategies:
Core takeaway
How to assess its impact
Points to remember
Importance level: Medium
Category: Technical insights, AI research, industry trends
Summary: In today's LLM narrative you are not exactly early, but understanding ULMFiT's fine-tuning details is still useful for building and optimizing systems. The real beneficiaries are the builders in engineering and research and the teams that invest for the long term; it matters much less to short-term traders.