The AI era is dividing into two extremes: the rich are getting richer, and the poor are getting poorer.
Are you using the same AI as others?
Written by: jiayi
AI has changed the way we live. That is simply a fact.
We use AI to write emails, build slide decks, search for information, even ghostwrite our social media posts. We have grown as accustomed to AI as we are to WiFi: it just feels natural.
But few people stop to ask: is the AI you use the same as the AI everyone else uses?
The “fairness” in the AI era is the greatest illusion
Silicon Valley likes to tell a story: AI gives everyone a super assistant, knowledge is no longer the privilege of a few, and everyone stands on equal footing.
It sounds beautiful. The truth is that AI is fundamentally unfair: it is a competition decided by money.
From chips to computing power, from model training to token consumption, every aspect of AI is burning money.
A single NVIDIA H100 chip costs over $25,000. Training a GPT-4-level model costs over a hundred million dollars. Every question you ask an AI burns tokens, and tokens have a price.
Claude Opus costs $5 per million tokens for input, $25 for output. ChatGPT Pro costs $200 per month. Plus Perplexity, Cursor, Midjourney… a heavy AI user easily spends over $500 monthly on tools.
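The cost arithmetic above can be sketched in a few lines of Python. The per-token prices are the ones quoted in this article; the monthly usage volumes and the extra subscription figure are illustrative assumptions, not measured data.

```python
# Rough monthly AI spend estimator.
# Per-token prices are the figures quoted above; usage volumes below are
# illustrative assumptions for a hypothetical heavy user.

PRICE_PER_M_INPUT = 5.0    # USD per million input tokens
PRICE_PER_M_OUTPUT = 25.0  # USD per million output tokens

def api_cost(input_tokens: int, output_tokens: int) -> float:
    """API cost in USD for a month's token volume."""
    return (input_tokens / 1e6) * PRICE_PER_M_INPUT \
         + (output_tokens / 1e6) * PRICE_PER_M_OUTPUT

# Hypothetical heavy user: 20M input / 5M output tokens via the API,
# plus flat subscriptions (ChatGPT Pro at $200, ~$100 of other tools).
usage = api_cost(20_000_000, 5_000_000)   # 20 * $5 + 5 * $25 = $225
total = usage + 200 + 100
print(f"${total:,.0f} per month")          # $525 per month
```

Even with modest assumed volumes, per-token API usage plus a couple of subscriptions lands comfortably above the $500-a-month figure cited above.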
Some spend $5,000 a month building competitive moats with AI; others think they are keeping up with the times on the free tier of ChatGPT.
This is not the same track. It’s not even the same game.
At the national level: structural gaps are irreversible
This logic is even more brutal when applied to countries.
The AI arms race requires three things: chips, computing power, and talent. All three demand huge capital.
The US controls over 70% of the world’s AI computing power. China is catching up, but chip bans have choked off its supply. As for most developing countries: across 46 emerging markets, entry-level broadband alone costs around 40% of average monthly income.
When a young person in Nigeria can’t even afford stable internet, what talk is there about “AI equality”?
In high-income countries, 94% of people have internet access; in low-income countries, only 23% do. 5G covers 84% of high-income countries but just 4% of low-income ones.
For much of the developing world, the starting line of the AI era is not one step behind; they never made it to the starting line at all.
This structural gap cannot be closed by effort alone.
At the individual level: your ceiling is being redefined by AI
The logic at the national level applies equally to each person.
There is a line in my Twitter bio: personal ceiling = worldview + cognition + practical ability.
What has AI done to these three?
▶️ First, AI has solved many practical efficiency problems.
It used to take a week to produce an industry report; now it’s done in a day. Previously, coding from scratch; now AI helps you set up the framework. In terms of efficiency, AI is indeed leveling the playing field.
▶️ But second, AI greatly amplifies cognitive gaps.
With the same AI tool, what you ask, how you ask, and whether you can judge if the AI’s answer is right or wrong—all depend entirely on your existing level of cognition.
A person with deep understanding uses Claude for research; they know what questions to ask, how to follow up, and which answers need verification. AI saves them 80% of the execution time, which they then use for deeper thinking.
And a person with shallow cognition? They ask, accept whatever the AI returns, and switch their brain off. Over time, they stop thinking altogether. AI doesn’t make them smarter; it makes them lazy and dull.
▶️ Third, the gap in delivery quality will only grow larger.
Based on your existing cognition, asking AI questions results in outputs with exponentially different depth, accuracy, and real-time relevance. Using Claude Opus, one person produces deep insights; another produces seemingly plausible nonsense.
A study at Aalto University in Finland found that the more people use AI, the more they tend to overestimate their abilities. AI makes you "feel" stronger: the outputs look professional and polished. But without the ability to judge quality, you are just producing refined mediocrity.
Therefore, the gaps in worldview, cognition, and practical ability are being infinitely magnified in the AI era.
The smarter people become even smarter, those with better cognition deepen their understanding, and the wealthy use better tools to widen the gap. Meanwhile, others, under AI’s “help,” become lazier, shallower, and poorer.
Cost × cognition: a double gap stacking
Here’s a logical chain many people haven’t fully grasped:
Money determines what level of AI you can access → The level of AI determines the quality and depth of information you get → The quality of information defines your cognitive boundary → Your cognitive boundary influences your decision-making quality → Decision-making quality affects how much money you can make.
This is a closed loop. The rich get richer, the poor get poorer.
The hallucination rate of free ChatGPT is nearly 40%: ask ten questions, and four of the answers are fabricated. Paid GPT-4 hallucinates about 28% of the time, and the newest version improves on that by a further 45%.
Decisions made with the free version versus those with Opus, over time, lead to two completely different life trajectories.
There is always a huge information gap in this world. AI has not eliminated it; it has turned it into a paywall.
People who bypass the firewall and those who don’t are already living in two different worlds
Let me share a personal, somewhat sad observation.
The reason you can read this article is probably because you can bypass the firewall and browse Twitter.
But think about it: how many people around you cannot bypass the firewall? When you talk with them, do you already feel that your cognition is on a different level?
This isn’t a matter of IQ. It’s the long-term cognitive divergence caused by the information environment.
One person is exposed daily to the world’s cutting-edge information, in-depth discussions, and top content creators. The other sees only short videos fed by algorithms and filtered information streams.
Over five or ten years, their ways of thinking, judgment, and worldview have become completely different.
The AI era has amplified this gap even further. Those who can bypass the firewall use Claude, Perplexity, and the best AI tools in the world. Those who cannot (ChatGPT and Claude are both blocked in China) are left with localized substitutes or accounts bought through middlemen at a markup.
The “wall” of the AI era is not just the physical firewall. There is a language wall: advanced AI models perform far better in English than in other languages. There are paywalls. There are algorithmic echo chambers. Every wall divides people into different worlds.
Research from Stanford University shows that non-English users consume five times more tokens for the same content when using AI. That means, with the same money, they get less information and lower quality.
The most frightening thing: you are already falling behind, but you don’t realize it
This is the point I most want to emphasize in this entire article.
Free AI can answer questions, help you write, and assist with searches. So people on the free tier think: “I’m using AI too, so I’m not falling behind.”
But the reasoning of the free version is shallower, hallucinations are more frequent, and the information is more outdated. The answers you get “seem” correct, but are actually full of plausible-sounding errors.
It’s like two people are “running.” One is genuinely moving forward; the other is running on a treadmill in place. Both think they are running, but only one is making progress.
In psychology there is a concept called the Dunning-Kruger effect: the less you know, the more you think you know. AI amplifies this effect tenfold. The more you rely on AI, the stronger you feel, yet you have already lost the ability to think independently; you just don’t realize it.
This is the cruelest part of the AI era.
It’s not that AI will replace you. It’s that those with better AI and deeper cognition will leave you far behind. And by the time you realize you’ve fallen behind, it might be too late to catch up.