Tencent open-sources Agent memory system, OpenClaw saves up to 61% of tokens


AIMPACT News, May 14 (UTC+8) — according to monitoring by Dongcha Beating, the Tencent Cloud Database team spent six months tackling the problem of long-conversation forgetfulness and has now officially open-sourced TencentDB Agent Memory, a local-first memory engine designed for AI Agents. It defaults to SQLite + sqlite-vec as the local backend, can be installed as an OpenClaw plugin, and also supports Hermes Gateway integration.

Its core idea is not to insert historical conversations directly into a vector database, but to split memory into two structures. Long-term memory is layered: L0 raw conversations, L1 atomic facts, L2 scene chunks, and L3 user profiles. Short-term task memory externalizes lengthy tool logs into refs files, writes step summaries to a jsonl file, and uses a Mermaid canvas to preserve the task structure and node indices. In complex workflows of more than 30 steps, the Agent typically reads only the lightweight Mermaid diagram and, when it needs to verify a detail, follows a node_id back to the original logs.

Official benchmarks show that after integration with OpenClaw, token consumption on WideSearch tasks dropped from 221.31M to 85.64M (a 61.38% reduction), while the pass rate rose by 51.52%. On the long-term memory benchmark PersonaMem, accuracy improved from 48% to 76%. The value of this design lies in preserving the complete path from high-level profiles and task canvases down to the original text, rather than swallowing all historical detail in a one-time summary. (Source: BlockBeats)
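The short-term task memory described above — bulky tool logs externalized into refs files, one-line step summaries appended to a jsonl file, and a Mermaid diagram indexing steps by node_id — can be illustrated with a minimal Python sketch. All names here (`TaskMemory`, `record_step`, the file layout, the node_id scheme) are hypothetical for illustration; they are not the actual TencentDB Agent Memory API, and the L0–L3 long-term layers are not shown.

```python
import json
import os

class TaskMemory:
    """Illustrative sketch of externalized short-term task memory:
    raw tool output goes to refs/ files, lightweight step summaries
    go to steps.jsonl, and a Mermaid diagram indexes the steps so an
    agent can re-read only what it needs via node_id."""

    def __init__(self, root):
        self.refs_dir = os.path.join(root, "refs")
        os.makedirs(self.refs_dir, exist_ok=True)
        self.steps_path = os.path.join(root, "steps.jsonl")
        self.nodes = []  # (node_id, summary) in execution order

    def record_step(self, node_id, summary, raw_log):
        # Externalize the lengthy tool output instead of keeping it
        # in the agent's context window.
        ref_path = os.path.join(self.refs_dir, f"{node_id}.log")
        with open(ref_path, "w") as f:
            f.write(raw_log)
        # Append a one-line summary with a pointer back to the log.
        with open(self.steps_path, "a") as f:
            f.write(json.dumps({"node_id": node_id,
                                "summary": summary,
                                "ref": ref_path}) + "\n")
        self.nodes.append((node_id, summary))

    def mermaid(self):
        # The lightweight view the agent normally reads: node labels
        # plus edges preserving the task structure.
        lines = ["flowchart TD"]
        for node_id, summary in self.nodes:
            lines.append(f'    {node_id}["{summary}"]')
        for (a, _), (b, _) in zip(self.nodes, self.nodes[1:]):
            lines.append(f"    {a} --> {b}")
        return "\n".join(lines)

    def raw_log(self, node_id):
        # When verifying a detail, follow node_id back to the
        # original log rather than carrying it in context.
        with open(os.path.join(self.refs_dir, f"{node_id}.log")) as f:
            return f.read()
```

In a 30-plus-step workflow this pattern keeps the per-step context cost roughly constant: the agent scans the Mermaid diagram, and only the rare verification step pays the cost of loading a full raw log.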
