In the AI Coding era, good programming habits still matter
Recently, while working on an Agent benchmark, I found that you can't evaluate how complex a programming task is for an AI simply by looking at it from a developer's perspective.
For example, a refactoring task: splitting a large file of several thousand lines into more than ten small modules based on functionality.
This task isn't really difficult for a developer; the main work involves moving code, organizing imports, and verifying compilation, which even beginners can handle.
So I picked this seemingly simple task for the benchmark, but the results were unexpected.
Claude Code judged the task to be quite large: it split off only part of it and submitted a PR with a "Future work" section, planning to finish the rest step by step.
My own Agent took the brute-force route, pushing much further toward a complete split, but the cost was obvious: token consumption was dozens of times Claude's, and a great deal of time went into repeatedly reading files, fixing compilation errors, reading again, and fixing again.
This made me realize that tasks which seem simple to humans are not necessarily simple for Agents.
For humans, this kind of refactoring often just means "move this piece over there." For an Agent, it means first reading the large file in batches, remembering which functions and tests are related, then generating a pile of cross-file modifications, and finally patching things up one compilation error at a time.
It looks like mechanical work, but it actually becomes a task with high token and state-management costs.
I recently saw someone argue that in the AI Coding era, principles like module splitting no longer matter, since humans don't read the code anyway.
After this experiment, I disagree. Clear module boundaries, appropriate file granularity, and simple dependencies don't just make code easier for humans to read; they also reduce task complexity for Agents.
From another angle, the file-reading and file-editing tools Agents currently have are not well suited to this kind of refactoring either.
Coding Agents mainly perform text replacements when editing files. For example, Claude Code often uses the old_string / new_string pattern: first providing a piece of old text, then replacing it with new text.
Codex commonly uses apply_patch: generating a patch similar to git diff that expresses replacing old content with new.
Both are suitable for small-scale modifications, but if you need to delete a large chunk of old code or move a batch of functions to other files, the model usually still needs to read the original content into context first, then generate a large replacement or diff.
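To make the cost concrete, here is a minimal sketch of the exact-string edit pattern, in the spirit of Claude Code's old_string / new_string interface. The function name and error handling are my own illustration, not the tool's actual implementation:

```python
from pathlib import Path

def apply_string_edit(path: str, old_string: str, new_string: str) -> None:
    """Replace exactly one occurrence of old_string with new_string."""
    source = Path(path).read_text()
    # The old text must be reproduced verbatim, which is why the model
    # has to load the original content into its context first.
    count = source.count(old_string)
    if count != 1:
        raise ValueError(f"old_string must match exactly once, found {count}")
    Path(path).write_text(source.replace(old_string, new_string, 1))
```

For a ten-line tweak this is cheap. To delete a 500-line block, though, the model first has to emit those 500 lines verbatim as old_string.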
So I later gave the Agent a prompt telling it to use scripts and tools like sed and perl to roughly split large files: write the pieces out to new files, delete the old content directly, and then fix things up one by one.
Its completion rate improved a lot.
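A rough Python equivalent of what that prompt encourages, with a hypothetical extract_lines helper (the line numbers in the sed comment are made up for illustration):

```python
from pathlib import Path

def extract_lines(src: str, start: int, end: int, dest: str) -> None:
    """Move lines start..end (1-indexed, inclusive) from src into dest."""
    lines = Path(src).read_text().splitlines(keepends=True)
    Path(dest).write_text("".join(lines[start - 1:end]))
    Path(src).write_text("".join(lines[:start - 1] + lines[end:]))

# Roughly the same move with sed (GNU sed shown):
#   sed -n '120,480p' big_file.rs > new_module.rs
#   sed -i '120,480d' big_file.rs
```

The point is that none of the moved code passes through the model's context: the Agent only needs to know the line range, not reproduce the content.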
The Agent doesn't do this by default, because system prompts strongly require it to modify files with its built-in tools rather than command-line ones.
Thinking one step further, Coding Agents might also need more advanced editing tools.
Rather than just exposing a "replace text" interface, they could build up code structure through parsers, LSP, or compilers, so the Agent can refactor the way an IDE does: moving functions, deleting impl blocks, organizing imports.
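As a toy sketch of what structure-aware editing could look like (my own illustration, not an existing Agent tool), Python's ast module is already enough to map top-level functions to line ranges:

```python
import ast
from pathlib import Path

def function_spans(path: str) -> dict[str, tuple[int, int]]:
    """Map each top-level function name to its (start, end) line range."""
    tree = ast.parse(Path(path).read_text())
    return {
        node.name: (node.lineno, node.end_lineno)
        for node in tree.body
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef))
    }
```

With spans like these, "move function f to module m" becomes a line-range operation rather than text regeneration, and an LSP or compiler could additionally fix up imports and references.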
I wonder if anyone has already explored this.
Overall, even in the AI Coding era, good programming habits still hold value.
It's much cheaper to bake good habits in early through harness engineering, making them the Agent's default way of working, than to pay for expensive refactoring later.