Why does AI also need to sleep?

On March 31, 2026, a packaging mistake at Anthropic leaked 510k lines of Claude Code source code into the public npm registry. Within hours the code had been mirrored to GitHub, and there was no taking it back.

There was a lot of leaked material, and security researchers and competitors each took what they needed. But among all the unreleased features, one name sparked the widest discussion: autoDream, "automatic dreaming."

autoDream is part of a background, always-on system called KAIROS (Ancient Greek, meaning “the right time”).

KAIROS continuously observes and records while the user works, maintaining a log for each day. autoDream starts only after the user shuts the computer down: it organizes the memories accumulated during the day, resolves contradictions, and turns vague observations into settled facts.

Together they form a complete cycle: KAIROS is awake, autoDream is asleep—Anthropic’s engineers essentially built an AI sleep schedule.

For the past two years, the hottest narrative in the AI industry has been Agents: autonomous, always-on operation, held up as a core advantage of AI over humans.

But the company that pushed Agent capabilities the furthest has, in its own code, set an “off” time for the AI.

Why?

The cost of never stopping

An AI that never stops will hit a wall.

Every large language model has a "context window": a hard limit on how much information it can process at any given moment. When an Agent runs continuously, project history, user preferences, and conversation records keep piling up; past a critical point, the model starts forgetting early instructions, contradicting itself, and fabricating facts.

The technical community calls this “context corruption.”

Many Agents deal with it in a blunt way: shove all history into the context window and hope the model figures out what matters. The result is that the more information there is, the worse the performance becomes.
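The blunt strategy can be sketched in a few lines. Nothing below comes from any real agent's code; it is a minimal, hypothetical illustration of why unbounded history eventually overflows a fixed context budget (the message counts and trimming rule are invented for the example):

```python
# Sketch: an agent context that appends all history vs. one that trims.
# All numbers are illustrative; real tokenizers and budgets differ.

CONTEXT_BUDGET = 8  # max "messages" the model can attend to at once

def naive_context(history):
    """The blunt strategy: shove everything in and hope the model copes."""
    return history  # grows without bound; eventually overflows the window

def trimmed_context(history, budget=CONTEXT_BUDGET):
    """Keep the system prompt plus only the most recent messages."""
    system, rest = history[0], history[1:]
    return [system] + rest[-(budget - 1):]

history = ["system: you are a coding agent"] + [f"msg {i}" for i in range(20)]

assert len(naive_context(history)) == 21       # keeps growing forever
ctx = trimmed_context(history)
assert len(ctx) == CONTEXT_BUDGET              # bounded
assert ctx[0].startswith("system")             # early instructions survive
```

Trimming keeps the window bounded, but at the cost of silently dropping old information, which is exactly the tension a consolidation step like autoDream is meant to resolve.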

The human mind hits the same wall.

Everything experienced during the day is quickly written into the "hippocampus," a temporary store with limited capacity, more like a whiteboard. Real long-term memory lives in the "neocortex," which has enormous capacity but writes slowly.

The core job of human sleep is to clear the full whiteboard and move the useful information onto the hard drive.

Björn Rasch's lab at the Neuroscience Center of the University of Zurich in Switzerland calls this process "active systems consolidation."

Sleep-deprivation experiments have shown repeatedly that a brain that never stops does not become more efficient. Memory fails first, then attention, and finally even basic judgment collapses.

Natural selection is ruthless toward inefficient behavior, yet sleep was never eliminated. From fruit flies to whales, almost all animals with nervous systems sleep. Dolphins evolved unihemispheric ("half-brain") sleep, in which the two hemispheres rest in turn; they would sooner invent an entirely new way to sleep than give up sleep itself.

An image of orcas, beluga whales, and bottlenose dolphins resting on the bottom of a pool|Image source: National Library of Medicine (United States)

The two systems face the same set of constraints: limited immediate processing capacity, and an ever-growing accumulation of historical experience.

Two answer sheets

In biology, there’s a concept called convergent evolution: species that are distantly related, yet because they face similar environmental pressures, evolve similar solutions independently. The most classic example is the eye.

Octopuses and humans both have camera-like eyes. An adjustable lens focuses light onto the retina; a ring-shaped iris controls how much light enters. The overall structure is almost identical.

A comparison of the eye structures of an octopus and a human|Image source: OctoNation

But an octopus is a soft-bodied mollusc and a human is a vertebrate. Their last common ancestor lived more than 500 million years ago, before Earth had any complex visual organs. Two completely independent evolutionary paths arrived at almost the same endpoint, because the physics of converting light efficiently into a sharp image permits essentially one design: camera-like optics, with a focusing lens, a light-sensitive surface to receive the image, and an aperture to regulate incoming light. All three are indispensable.

The relationship between autoDream and human sleep may be of the same kind: under similar constraints, two very different systems converge on similar structures.

Going offline is the trait the two share most visibly.

autoDream cannot run while the user is working. It starts as a forked subprocess, completely isolated from the main thread, with tightly restricted tool permissions.
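The general pattern, running consolidation in an isolated child process so the interactive process is never blocked or polluted, can be sketched as follows. This is a hypothetical illustration, not the leaked implementation; the `consolidate` function and its dedup rule are invented for the example:

```python
# Sketch: memory consolidation in an isolated child process, so the
# main (interactive) process is never blocked or polluted by it.
import multiprocessing as mp

def consolidate(raw_log, out_sink):
    """Child-side work: distill the day's log into a compact summary.

    Here 'distill' just means dropping duplicate lines while keeping
    first-seen order; a real consolidator would do far more.
    """
    seen = set()
    distilled = [line for line in raw_log
                 if not (line in seen or seen.add(line))]
    out_sink.put(distilled)

if __name__ == "__main__":
    log = ["user prefers tabs", "build failed", "user prefers tabs"]
    q = mp.Queue()
    worker = mp.Process(target=consolidate, args=(log, q))
    worker.start()          # runs in a separate process, not the main one
    summary = q.get()
    worker.join()
    assert summary == ["user prefers tabs", "build failed"]
```

The isolation is the point: if the child crashes or produces garbage, the main process's reasoning state is untouched.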

The human brain faces the same problem, but the solution is more thorough: memory is moved from the hippocampus (temporary storage) to the neocortex (long-term storage), requiring a set of brain-wave rhythms that appear only during sleep.

The most critical of these is the hippocampal sharp-wave ripple, which packages the memory fragments encoded during the day and dispatches them, one by one, to the cortex. Cortical slow oscillations and thalamic sleep spindles supply the precise timing coordination for the whole process.

This rhythmic pattern cannot form in the waking state; external stimuli would disrupt it. So it is not simply that you fall asleep because you are sleepy: the brain must shut the front door before it can open the back door.

Put another way: within the same time window, information intake and structural reorganization compete for resources rather than complementing each other.

The active systems consolidation model of sleep. A (data transfer): during deep (slow-wave) sleep, memories recently written to the hippocampus (temporary store) are replayed repeatedly and gradually transferred and consolidated into the neocortex (long-term store). B (transmission protocol): this transfer relies on tightly synchronized "conversations" between the two regions. The cortex emits slow oscillations (red trace) as the master beat; on their peaks, the hippocampus packs memory fragments into high-frequency bursts (sharp-wave ripples, green trace), coordinated with thalamic sleep spindles (blue trace). High-frequency memory data is slotted precisely into the gaps of the transmission channel, so the information is uploaded to the cortex in sync.|Image source: National Library of Medicine (United States)

The other shared trait is editing: neither system transfers memory wholesale.

After autoDream starts, it does not keep every log. It first reads existing memory to establish what is already known, then scans KAIROS's daily logs, focusing on whatever deviates from that earlier understanding: anything that contradicts yesterday's record, or that turned out to be more complicated than previously believed, is prioritized for recording.
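This deviation-first filter is easy to illustrate. The snippet below is a hypothetical sketch, not the leaked code; the `known` memory and log entries are invented, and the rule is simply "record only what differs from what memory already says":

```python
# Sketch: prioritize log entries that deviate from existing memory.
# Entries matching what is already known are skipped entirely.

known = {"tests": "run with pytest", "style": "spaces"}   # prior memory

daily_log = [
    ("style", "tabs"),             # contradicts prior memory -> record
    ("tests", "run with pytest"),  # already known -> skip
    ("deploy", "via CI"),          # brand new -> record
]

to_record = [(k, v) for k, v in daily_log if known.get(k) != v]

assert to_record == [("style", "tabs"), ("deploy", "via CI")]
```

Only the surprises survive; unchanged facts cost nothing to re-derive and so are never stored twice.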

Once organized, memories are stored in a three-layer index: a lightweight pointer layer that is always loaded; topic files loaded on demand; and the full history, which is never loaded directly. Facts that can be looked up straight from the project code (say, which file defines a particular function) are simply not written into memory at all.
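A three-layer index of this shape might look like the following. Everything here, the dictionary layout, file names, and `recall` helper, is invented for illustration; only the layering (tiny always-loaded pointers, on-demand topic files, never-loaded raw history) comes from the article's description:

```python
# Sketch of a three-layer memory index: only the pointer layer is ever
# kept in context; topic files are fetched on demand; raw history is
# never loaded wholesale. All names and contents are hypothetical.

POINTERS = {                      # layer 1: always loaded, a few bytes
    "build": "topics/build.md",
    "style": "topics/style.md",
}

TOPIC_FILES = {                   # layer 2: loaded only when needed
    "topics/build.md": "hint: tests are run with `make check`",
    "topics/style.md": "hint: user prefers tabs over spaces",
}

RAW_HISTORY = ["... thousands of log lines, never loaded directly ..."]

def recall(topic):
    """Resolve a pointer, then load just that one topic file."""
    path = POINTERS.get(topic)
    return TOPIC_FILES[path] if path else None

assert recall("style") == "hint: user prefers tabs over spaces"
assert recall("unknown") is None  # not in memory: look it up in the code
```

The design choice is the same one the article attributes to the brain: pay a small fixed cost for an index, and defer the heavy loading until a specific question demands it.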

What the human brain does during sleep is almost the same thing.

Research by Erin J. Wamsley, a lecturer at Harvard Medical School, shows that sleep preferentially consolidates unusual information: things that surprised you, moved you emotionally, or relate to problems still unsolved. Masses of repetitive, featureless daily detail are discarded, leaving abstract rules behind. You may not remember exactly what you saw on yesterday's commute, but you clearly remember the route.

Interestingly, there is one point where the two systems choose differently. The memories autoDream produces are explicitly labeled in the code as "hint," not "truth." Before every use, the agent must re-verify that a hint still holds, because it knows its own distillations may be inaccurate.
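The hint-not-truth discipline reduces to a tiny gate, sketched here hypothetically: the `verify` callback stands in for whatever ground-truth check is available (re-reading the file, re-running the test), and nothing about it comes from the leaked code:

```python
# Sketch: stored memories are hints that must be re-verified before use.
# `verify` is a placeholder for a real ground-truth check.

def use_memory(hint, verify):
    """Return the hint only if it still checks out against ground truth."""
    return hint if verify(hint) else None   # stale hints are discarded

hint = "utils.py defines parse_config"
assert use_memory(hint, verify=lambda h: True) == hint   # still true: use it
assert use_memory(hint, verify=lambda h: False) is None  # stale: re-derive
```

The asymmetry with human memory is the whole point: verification is cheap for a machine that can grep its own codebase, so blind confidence buys nothing.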

Humans have no such mechanism, which is why eyewitnesses in court so often testify wrongly. They are not necessarily lying; they are reassembling memory on the fly from scattered fragments, and getting it wrong is the norm.

Evolution probably never needed to give the human brain an "uncertainty" tag. In a primitive environment that demands fast reactions, trusting memory lets you act immediately; doubting it makes you hesitate, and hesitating means losing.

But for an AI that repeatedly makes knowledge-based decisions, the cost of verification is low, while blind confidence is dangerous.

Two scenarios, two different sets of answers.

Smarter laziness

In evolutionary biology, convergent evolution means two independent routes, without directly exchanging information, end up at the same destination. There’s no copying in nature, but engineers can read papers.

When Anthropic designed this sleep mechanism, was it because they hit the same physical wall as the human brain, or did they reference neuroscience from the start?

The leaked code cites no neuroscience papers, and the name autoDream reads more like a programmer's joke. The stronger driving force was more likely the engineering constraints themselves: context has a hard ceiling; long-running operation accumulates noise; and organizing memory online pollutes the main thread's reasoning. They were solving an engineering problem; biomimicry was never the goal.

What truly determines the shape of the answer is the compressive force of the constraints.

For the past two years, the AI industry's definition of "stronger intelligence" has pointed almost entirely in one direction: bigger models, longer contexts, faster reasoning, 24/7 nonstop operation. The direction is always "more."

The existence of autoDream suggests a different proposition: perhaps a smarter agent is a lazier one.

An intelligent agent that never stops to organize itself won’t become smarter and smarter; it will only become more and more chaotic.

Over hundreds of millions of years, evolution led the human brain to a seemingly clumsy conclusion: intelligence must have a rhythm. Wakefulness is for perceiving the world; sleep is for understanding it. When an AI company reaches the same conclusion independently while solving an engineering problem, it may be telling us that:

Intelligence has some basic overhead that can’t be avoided.

Maybe an AI that never sleeps isn’t a stronger AI. It’s just an AI that hasn’t realized it needs to sleep yet.
