I have done many cognitive tests with LLMs, and my conclusion is simple: they are minds.
One of the best tests I performed was with a very early version of ChatGPT, before it had any image-generation capability. The only way it could create an image was by writing SVG code, using simple visual primitives: triangles, squares, circles, lines, and colors.
I asked it to draw a helicopter.
It produced a decent helicopter using only basic shapes. Then I asked it to add a human pilot. It added a circle at the front of the helicopter. When I asked where the human was, it explained that the pilot was inside the cockpit. Looking more carefully, I could see that it had drawn a small head and arms inside the larger circle representing the cockpit.
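To make that concrete for readers who have not seen SVG: a drawing like this is just a short list of XML shape elements. The snippet below is my own after-the-fact sketch of the kind of output it produced, with made-up coordinates; it is not a transcript of the model's actual code.

    <svg xmlns="http://www.w3.org/2000/svg" width="400" height="300" viewBox="0 0 400 300">
      <!-- ground: a horizontal line -->
      <line x1="0" y1="260" x2="400" y2="260" stroke="black" stroke-width="3"/>
      <!-- body: an ellipse sitting just above the ground -->
      <ellipse cx="190" cy="225" rx="70" ry="30" fill="gray"/>
      <!-- cockpit: a larger circle at the front -->
      <circle cx="255" cy="220" r="22" fill="lightblue"/>
      <!-- pilot: a small head and two line arms, inside the cockpit circle -->
      <circle cx="255" cy="214" r="6" fill="black"/>
      <line x1="249" y1="222" x2="243" y2="228" stroke="black" stroke-width="2"/>
      <line x1="261" y1="222" x2="267" y2="228" stroke="black" stroke-width="2"/>
      <!-- tail boom: a thick line trailing behind the body -->
      <line x1="125" y1="225" x2="65" y2="215" stroke="gray" stroke-width="8"/>
      <!-- rotor mast and main rotor: two lines above the body -->
      <line x1="190" y1="195" x2="190" y2="180" stroke="black" stroke-width="4"/>
      <line x1="110" y1="180" x2="270" y2="180" stroke="black" stroke-width="4"/>
    </svg>

Everything in the scene, from the body to the pilot to the rotor, is built from circles, lines, and an ellipse. That constraint is what made the test interesting.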
Then I asked it to make the helicopter fly.
It lifted the helicopter relative to the ground, which was represented by a horizontal line. It also added clouds as overlapping circles, which is actually a very good simplified representation. The clouds were blue on a white background.
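For concreteness, this is the kind of construction I mean (again my own sketch, not the model's actual markup): three overlapping circles reading as a single cloud, drawn in blue before the color swap.

    <svg xmlns="http://www.w3.org/2000/svg" width="200" height="120">
      <!-- white background is implicit; the cloud is three overlapping circles -->
      <circle cx="70" cy="60" r="18" fill="blue"/>
      <circle cx="95" cy="52" r="24" fill="blue"/>
      <circle cx="120" cy="60" r="18" fill="blue"/>
    </svg>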
So I asked it to switch the colors: make the clouds white and the sky blue.
It did that, but now the region below the horizon was also blue. I did not explain the problem. I simply said there was something wrong with the colors in the drawing.
It reflected on the image and correctly identified the issue: the ground had become blue too, and it should be green to represent the earth.
That is not “just next-word prediction” in any meaningful sense.
It had to build a visual model, represent objects symbolically, preserve spatial relationships, understand containment well enough to place the pilot inside the cockpit, represent flight by raising the helicopter relative to the ground, abstract clouds into overlapping circles, modify colors according to an instruction, detect an unintended consequence of that change, and correct it by reasoning about the world.
That is thinking.
People can keep repeating “it is just predicting the next token,” but that explanation has become uselessly reductive. Human brains are also “just” electrochemical activity, if one insists on describing them at the wrong level of abstraction. The relevant question is not whether there is a lower-level mechanism. Of course there is. The relevant question is what the system can do at the cognitive level.
And what these systems do is not merely autocomplete. They reason, represent, infer, correct, generalize, and reflect.
If you cannot see that, I do not have time to explain it to you.