The father of Claude Code reveals: How to turn Claude into your "virtual development team"?


Original author: Boris Cherny, creator of Claude Code

Compiled and edited by: Xiaohu AI

You may have heard of Claude Code and even used it to write code and modify documentation. But have you ever wondered: how would AI change the way you work if it’s not a “temporary tool” but a full-fledged member of your development process, or even an automated collaboration system?

Boris Cherny, the father of Claude Code, posted a very detailed thread about how he uses the tool efficiently and how he and his team integrate Claude into their entire engineering process in practice.

This article offers a systematic summary and an accessible interpretation of his experience.

How did Boris make AI an automation partner in his workflow?

Key takeaways:

His workflow includes:

How to use Claude:

Run many Claudes at once: open 5-10 sessions across the terminal and the web to handle tasks in parallel, and use the Claude mobile app as well.

Don’t blindly change the default settings: Claude works right out of the box, so there’s no need for complicated configurations.

Use the strongest model (Opus 4.5): a little slower, but smarter and less troublesome overall.

Plan before writing code (Plan mode): Let Claude help you think clearly before writing, with a high success rate.

After generating code, use tools to check formatting to avoid errors.

How to make Claude smarter over time:

The team maintains a "knowledge base": whenever Claude gets something wrong, the lesson is recorded so it doesn't make the same mistake next time.

Train Claude automatically through PRs: let Claude read PRs and learn new usage patterns or conventions.

Commonly used commands become slash commands, and Claude can automatically call them, saving repetitive labor.

Use “sub-agents” to handle some fixed tasks, such as code simplification, function verification, etc.

How to manage permissions:

Instead of skipping permission checks entirely, pre-approve safe commands so they run automatically.

Synchronize Claude workflows across multiple devices (web, terminal, mobile).

The most important point:

Be sure to give Claude a “validation mechanism” so that it can confirm that what it writes is correct.

For example, Claude automatically runs tests, opens the browser to test web pages, and checks if the function works.

Claude Code is a “partner”, not a “tool”

Boris begins by conveying a core idea: Claude Code is not a static tool, but an intelligent companion that can work with you, continuously learn, and grow alongside you.

It doesn’t require much complicated configuration and is strong right out of the box. But if you’re willing to invest time in building better ways to use it, the efficiency gains it can bring are exponential.

Model selection: choose the smartest, not the fastest

Boris uses Claude's flagship model, Opus 4.5 with thinking enabled, for all development tasks.

This model is larger and slower than Sonnet, but:

It is more comprehensive

Better with tools

It needs less repeated guidance and less back-and-forth communication

Overall, it saves more time than using fast models

Takeaway: Real productivity lies not in execution speed, but in "fewer errors, less rework, and fewer repeated explanations".

  1. Plan mode: When writing code with AI, don't rush to let it "write"

When we open Claude, many people instinctively type "write an API for me" or "refactor this code"… Claude usually does write something, but it often goes astray, misses logic, or even misunderstands the requirements.

Boris's first step is never to ask Claude to write code. He uses Plan mode: he works with Claude to develop the implementation approach first, and only then moves on to execution.

How does he do it?

When starting a PR, Boris doesn’t let Claude write the code directly, but uses the Plan mode:

  1. Describe the goal

  2. Make a plan with Claude

  3. Confirm each step

  4. Only then let Claude start writing

Whenever he needs to implement a new feature, such as “add throttling to an API”, he will confirm with Claude step by step:

Should it be implemented as middleware, or embedded in the business logic?

Does the throttling configuration need to be dynamically adjustable?

Are logs needed? What should be returned on failure?

This “plan negotiation” process is similar to two people drawing “construction drawings” together.

Once Claude understands the goal, Boris turns on "auto-accept edits" mode, which lets Claude modify code and submit PRs, sometimes without any manual confirmation at all.

"The quality of Claude's code depends on whether you have reached agreement before the code is written." —— Boris

Takeaway: Instead of repeatedly patching Claude's mistakes, draw a clear roadmap together from the very beginning.

Summary

Plan mode is not a waste of time; it is pre-negotiation that makes execution stable. No matter how capable the AI is, it still needs you to state things clearly.

  2. Multi-Claude parallelism: not one AI, but a virtual development squad

Boris didn’t use just one Claude. His daily routine is like this:

Open 5 local Claude sessions in the terminal, each assigned to a different task (such as refactoring, writing tests, or fixing bugs).

Open another 5-10 Claudes in the browser, running in parallel with the local sessions

Use the Claude iOS app on your phone to launch a task at any time

Each Claude instance is like a dedicated assistant: some write code, some fill in documentation, and some sit in the background for long stretches running test tasks.

He even set up system notifications so that he is alerted the moment a Claude is waiting for input.

Why do this?

Each Claude session has its own local context and isn't suited to "one window does everything". Boris splits the work into multiple Claude roles running in parallel, reducing waiting time and preventing the sessions from interfering with each other's memory.

He also keeps himself informed through system notifications, e.g. "Claude 4 is waiting for your reply" or "Claude 1 has finished testing", managing these AIs as if he were managing a multi-threaded system.
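
As a rough sketch of how such an alert can be wired up, assuming macOS and Claude Code's Notification hook event (the osascript command is just one option), a hook in .claude/settings.json can fire a desktop notification whenever a session is waiting on you:

{
  "hooks": {
    "Notification": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "osascript -e 'display notification \"A Claude session is waiting for your input\" with title \"Claude Code\"'"
          }
        ]
      }
    ]
  }
}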

An analogy

Imagine sitting next to five smart interns, each assigned a task. You don't have to see every job through yourself; you just switch between them at the critical moments and keep the work moving smoothly.

Takeaway: Treating Claude as multiple "virtual assistants", each handling a different task, significantly reduces waiting time and context-switching costs.

  3. Slash Commands: Turn what you do every day into shortcuts for Claude

There are some workflows that we do dozens of times a day:

Modify the code → commit → push → create a PR

Check the build status → notify the team → update issues

Sync changes across multiple sessions, on the web and locally

Boris doesn't want to prompt Claude every time: "Please commit, then push, then create a PR…"

He encapsulates these operations into Slash commands, such as:

/commit-push-pr

Behind these commands is the underlying Bash script logic; the command files are stored in the .claude/commands/ folder, checked into Git, and shared with team members.
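
As a rough sketch of what such a command can look like: in Claude Code, a custom slash command is a Markdown prompt file in .claude/commands/ whose file name becomes the command name. The description, allowed-tools list, and wording below are illustrative, not Boris's actual file:

---
description: Commit staged changes, push the branch, and open a pull request
allowed-tools: Bash(git add:*), Bash(git commit:*), Bash(git push:*), Bash(gh pr create:*)
---
Commit the current changes with a concise message that summarizes the diff,
push the branch to origin, and open a pull request with gh pr create.
Additional context from the user: $ARGUMENTS

Saved as .claude/commands/commit-push-pr.md and checked into Git, this one file gives every team member the same /commit-push-pr shortcut.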

How does Claude use these commands?

When Claude encounters this command, it doesn’t just “execute the command”, it knows the workflow it represents and can automatically execute intermediate steps, pre-fill parameters, and avoid repeated communication.

The key point

A slash command is like an "auto button" you install for Claude: you teach it a task flow once, and from then on it can execute it with one click.

“Not only can I save time with commands, but Claude can too.” —— Boris

Takeaway: Don't retype the same prompt every time. Abstract high-frequency tasks into commands, and you and Claude can automate them together.

  4. Team knowledge base: Claude learns not from one-off prompts, but from knowledge maintained by the team

Boris's team maintains a .claude knowledge base that is checked into Git.

It’s like an “internal Wikipedia” for Claude, recording:

The correct way to write things

The team's agreed best practices

Mistakes that have been corrected

Claude automatically references this knowledge base to understand context and determine code style.

What to do when Claude does something wrong?

Whenever Claude misunderstands something or gets the logic wrong, he adds the lesson to it.

Each team maintains its own version.

Everyone collaborates on editing, and Claude makes judgments based on this knowledge base in real time.

For example:

If Claude keeps writing the wrong pagination logic, the team only needs to write the correct pagination standard into the knowledge base, and every user will automatically benefit in the future.

Boris’s approach: don’t scold it, don’t turn it off, but “train once”:

Note that "we don't write this code this way" and add it to the knowledge base

Claude won’t make this mistake again next time.
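
As a minimal sketch of what such a "lesson" can look like, using the pagination example above (the rules themselves are made up for illustration; teams typically keep this kind of guidance in a shared CLAUDE.md or in files under .claude/ that are checked into Git):

## Pagination
- List endpoints use cursor-based pagination; never use OFFSET/LIMIT.
- The last page returns nextCursor: null.
- Default page size is 50, maximum 200.

Because the file lives in the repository, every engineer's Claude session picks up the corrected rule automatically.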

What’s more, this mechanism is not maintained by Boris alone, but is contributed and modified by the entire team every week.

Takeaway: Don't use AI as isolated individuals; build a system of "collective memory".

  5. Automatic learning mechanism: PRs themselves are Claude's "training data"

Boris often tags Claude in PRs when doing code reviews, for example:

@claude add this usage to the knowledge base

In conjunction with GitHub Actions, Claude automatically learns the intent behind the change and updates its knowledge base.

It amounts to continuously training Claude: each review not only checks the code but also improves the AI's capabilities.

This is no longer after-the-fact maintenance; the AI's learning mechanism is built into daily collaboration.

The team uses PRs to improve code quality, and Claude improves its knowledge at the same time.

Takeaway: A PR is not just a code review step, but an opportunity for the AI tooling to evolve.

  6. Subagents: Let Claude perform complex tasks modularly

In addition to the main task process, Boris also defines a number of subagents to handle common secondary tasks.

Subagents are modules that run automatically, such as:

code-simplifier: Automatically streamlines the structure after Claude finishes writing the code

verify-app: Runs the full test suite to verify that the new code works

log-analyzer: Analyzes error logs to quickly locate problems

These sub-agents hook into Claude's workflow like plugins, running and collaborating automatically without repeated prompting.
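
As a hedged sketch, a subagent such as code-simplifier can be defined as a Markdown file with YAML frontmatter under .claude/agents/; the description, tool list, and instructions below are illustrative rather than the team's actual definition:

---
name: code-simplifier
description: After code changes are made, simplify structure and remove duplication without changing behavior.
tools: Read, Grep, Edit
---
You are a code-simplification specialist. Review the files that were just
changed, flatten overly complex structures, remove dead code and duplication,
and keep behavior identical. Never change public APIs or test expectations.

Claude can then hand the clean-up step to this agent once the main implementation work is done.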

Takeaway: Sub-agents are Claude's own "team members"; Claude is promoted from assistant to "project commander".

Claude is no longer a lone worker, but a small manager who can lead a team.

  7. Supplement: PostToolUse hooks, the last gatekeeper of code formatting

It's not easy to get everyone on a team to write code in a uniform style. Claude's generation ability is strong, but details like inconsistent indentation or stray blank lines still slip through.

What Boris does is set up a PostToolUse hook.

Simply put, this is a post-processing hook that Claude Code runs automatically after a tool action (such as editing a file) completes.

Its role includes:

Automatically fix code formatting

Add missing comments

Fix lint errors so the change doesn't get stuck in CI

This step is simple but critical. Like running Grammarly over an article after writing it, it keeps the submitted work consistently tidy.
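
A minimal sketch of such a hook in .claude/settings.json, assuming Claude Code's standard hooks format; the matcher and the formatter command (Prettier over the whole project) are placeholders to adapt to your own stack:

{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "npx prettier --write . >/dev/null 2>&1 || true"
          }
        ]
      }
    ]
  }
}

The trailing || true keeps an occasional formatter hiccup from surfacing as an error; the goal is quiet, automatic tidying after every edit.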

For AI tools, the key to success is often not in generative power, but in finishing ability.

  8. Permission management: Pre-authorize instead of skipping

Boris makes it clear that he doesn't use --dangerously-skip-permissions, the Claude Code flag that skips all permission prompts when executing commands.

It sounds convenient, but it is also dangerous: files can be deleted by accident, the wrong scripts can be run, and so on.

His alternatives are:

  1. Use the /permissions command to explicitly declare which commands are trustworthy

  2. Write these permission configurations to .claude/settings.json

  3. Share these security settings with your entire team

It's like pre-approving a whitelist of operations for Claude. In .claude/settings.json this takes roughly the form of permission rules such as:

{
  "permissions": {
    "allow": [
      "Bash(git commit:*)",
      "Bash(npm run build)",
      "Bash(pytest:*)"
    ]
  }
}

Claude can then run these actions without being interrupted for confirmation every time.

This permission mechanism makes Claude behave more like part of a team operating system than a stand-alone tool: common, safe bash commands are pre-authorized with the /permissions command, saved in .claude/settings.json, and shared across the team.

Takeaway: AI automation doesn't mean giving up control. Building the security policy into the automation itself is what real engineering looks like.

  9. Multi-tool linkage: Claude = multi-functional robot

Boris doesn't just let Claude write code locally. He configured Claude to reach several core platforms through MCP (the Model Context Protocol):

Automatic Slack notifications (like build results)

Query BigQuery data (such as user behavior metrics)

Pulling Sentry logs (e.g. production error tracking)

How to achieve it?

The configuration of MCP is saved in .mcp.json

Claude reads configurations at runtime, autonomously performing cross-platform tasks

The entire team shares a set of configurations

All of this runs through Claude's MCP integrations, with the configuration saved in .mcp.json.
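
A hedged sketch of what a shared .mcp.json might look like. The top-level mcpServers key is Claude Code's standard MCP configuration format; the server names, npm packages, and environment variables below are hypothetical placeholders, not the integrations Boris actually uses:

{
  "mcpServers": {
    "slack": {
      "command": "npx",
      "args": ["-y", "example-slack-mcp-server"],
      "env": { "SLACK_BOT_TOKEN": "${SLACK_BOT_TOKEN}" }
    },
    "sentry": {
      "command": "npx",
      "args": ["-y", "example-sentry-mcp-server"],
      "env": { "SENTRY_AUTH_TOKEN": "${SENTRY_AUTH_TOKEN}" }
    }
  }
}

Checked into the repository, this one file gives everyone on the team the same Slack and Sentry access from inside Claude.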

Claude is like a robotic assistant that helps you:

"Finish writing the code → submit a PR → check performance → notify QA → report the logs".

This is no longer an AI tool in the traditional sense, but a nerve center for engineering systems.

Takeaway: Don't let AI work only "inside the editor"; it can be a scheduler across your entire system ecosystem.

  10. Asynchronous processing of long tasks: background agent + plugin + hook

In real projects, Claude sometimes has to deal with long tasks, such as:

Build + Test + Deploy

Generate reports + send emails

Running data migration scripts

Boris's approach here is thoroughly engineered:

Three ways to handle long tasks:

  1. After Claude completes, use the background agent to verify the results

  2. Use Stop Hook to automatically trigger follow-up actions at the end of the task

  3. Use the ralph-wiggum plugin (proposed by @GeoffreyHuntley) to manage long process states

In these scenarios, Boris uses:

--permission-mode=dontAsk

Or put tasks in a sandbox to avoid interrupting the process due to permission prompts.

Claude is not something you have to watch constantly, but a collaborator you can trust to work unattended.

Takeaway: AI tools are not only for short, quick operations; they can handle long, complex processes too, provided you build a mechanism for delegating work to them.

  11. Automatic verification mechanism: Whether Claude's output is worth anything depends on whether it can verify itself

One of the most important things about Boris’s experience is:

Any result output by Claude must have a “validation mechanism” to check its correctness.

He will add a validation script or hook to Claude:

After writing the code, Claude automatically runs test cases to verify that the code is correct

Simulate user interactions in the browser to validate the front-end experience

Automatically compare logs and metrics before and after operation

If validation fails, Claude automatically revises and re-runs until it passes.
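
One way to wire this up, as a sketch rather than Boris's exact setup: a Stop hook in .claude/settings.json that runs the test suite whenever Claude finishes a turn. Assuming Claude Code's hook convention that a blocking exit code (2) feeds the message on stderr back to the model, a failing suite sends Claude back to fix it; npm test is a placeholder for your own test runner:

{
  "hooks": {
    "Stop": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "npm test >/dev/null 2>&1 || { echo 'Test suite is failing; fix the failures before finishing.' >&2; exit 2; }"
          }
        ]
      }
    ]
  }
}

This turns "run the tests and fix what breaks" from something you have to remember into something that happens automatically at the end of every task.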

It's as if Claude comes with its own closed-loop feedback system.

This not only improves quality, but also reduces the cognitive burden on people.

Takeaway: What really determines the quality of AI output is not the model's parameter count, but whether you have designed a result-checking mechanism for it.

Summary: Rather than replacing humans, let AI collaborate the way humans do

Boris's approach doesn't rely on hidden features or black magic; it applies engineering discipline to Claude, upgrading it from a "chat tool" into an efficient component of a working system.

His Claude usage has several core features:

Multi-session parallelism: clearer division of tasks and higher efficiency

Plan First: Plan mode improves Claude’s goal alignment

Knowledge system support: The team jointly maintains the AI knowledge base and continuously iterates

Task automation: Slash commands + sub-agents, allowing Claude to work like a process engine

Closed-loop feedback mechanism: Each output of Claude has verification logic, ensuring stable and reliable output

In fact, Boris’s approach shows a new way of using AI:

Upgrade Claude from a “conversational assistant” to an “automated programming system”

Transform knowledge accumulation from the human brain into a knowledge base for AI

Transform processes from repetitive manual operations to automated workflows that are scripted, modular, and collaborative

None of this is dark magic; it is a demonstration of engineering skill, and you can borrow from it to use Claude or other AI tools more efficiently and intelligently.

If you often feel that "it knows a bit of everything but is unreliable" or "I always have to fix the code it writes", the problem may not be Claude; it may be that you haven't given it a mature collaboration mechanism.

Claude can be just a passable intern, or a stable and reliable engineering partner; it all depends on how you use it.
