Kimi Claw Hands-On: Despite the OpenClaw Craze, Automated AI Is Still in Its Pioneering Stage

Author: Xu Shan

In 2026, a small lobster disrupted the entire AI community, and even after the New Year, the momentum of OpenClaw continues to grow.

Recently, several domestic model developers have launched products that compete with OpenClaw, such as MiniMax’s MaxClaw and Kimi’s Kimi Claw. Clearly, both the AI execution capability demonstrated by OpenClaw and developers’ tolerance for its output have shown real market value.

Among these competing products, Kimi Claw has a clear positioning. It is not a Claw product built from scratch but a managed cloud service based on OpenClaw, with data hosted on Moonshot Cloud and more than 5,000 ClawHub community skills preconfigured.

Its advantages include stable usage, easy deployment, simple onboarding, and cloud-based 24/7 online operation. Just visit Kimi’s official website, click to create, and Kimi will deploy Kimi Claw automatically.

Kimi Claw one-click deployment|Image source: GeekPark

In other words, Kimi Claw is not an independent new product but essentially a virtual machine set up remotely for users, allowing them to access the cloud-based OpenClaw environment directly through Kimi.

It cuts no features and adds no extra wrappers; the experience is almost identical to deploying OpenClaw locally, except that deployment, configuration, and environment setup are handled for the user. What it does not change is OpenClaw’s post-deployment tuning process: without learning how to issue correct commands and organize tasks sensibly, the learning curve remains fairly steep.

For users unfamiliar with OpenClaw-like products, this can lead to misaligned expectations. They might think connecting to OpenClaw enables automated AI execution, but in reality, it’s just a portable interface, and many subsequent settings still require exploration. Therefore, providing some popular preset Skills for OpenClaw products will likely be a key focus for many AI model vendors moving forward.

Currently, Kimi Claw is still in beta testing and only available to Kimi Allegretto members.

  1. Building an Automated Office Workflow in 30 Minutes

We found that many users, like us, still cannot clearly define the boundaries of the AI’s execution capabilities even after connecting to OpenClaw. They are curious about what it can and cannot do, yet unsure where to start.

In fact, whether you deploy OpenClaw locally or use an external interface like Kimi Claw, there are two broad paths: building an application from scratch, or starting from a semi-finished state and optimizing it. We experimented with both, beginning with building a workflow application from zero.

Before trying Kimi Claw, I examined which of my tasks could be turned into a fixed workflow or improved with AI assistance. The main consideration was which type of AI tool interaction could yield better results.

I chose the daily work journal process, integrating daily workflows, work records, summaries, reflections, and ultimately generating a daily report. Previously, filling out reports was time-consuming; now I hope AI can automatically extract data and, through conversational interaction, generate a structured table.

I first outlined my optimization instructions for AI, then provided a very detailed, complex command covering roles, skill configurations, data access, core workflows, multimedia table structures, memory points, permissions, and boundaries, which I submitted to Kimi Claw.

Kimi Claw quickly analyzed the instructions and confirmed execution details with me, such as basic info, Feishu permissions, data storage, and trigger methods. Then we started building the Feishu app on the Feishu platform, sending the App ID and App Secret to Kimi Claw.
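Handing the App ID and App Secret to an agent corresponds to Feishu’s open-platform auth flow: the app exchanges the credential pair for a tenant access token, which every subsequent API call then carries. A minimal sketch of that exchange (endpoint per Feishu’s open API; error handling kept to the essentials, and the credentials below are of course placeholders):

```python
import json
from urllib import request

# Feishu open-platform endpoint that exchanges an app's credentials
# for a tenant_access_token used by all subsequent API calls.
TOKEN_URL = "https://open.feishu.cn/open-apis/auth/v3/tenant_access_token/internal"

def build_token_request(app_id: str, app_secret: str):
    """Build the (url, payload) pair for the token exchange."""
    return TOKEN_URL, {"app_id": app_id, "app_secret": app_secret}

def fetch_tenant_token(app_id: str, app_secret: str) -> str:
    """Perform the exchange over HTTP; returns the tenant_access_token."""
    url, payload = build_token_request(app_id, app_secret)
    req = request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        body = json.loads(resp.read())
    if body.get("code") != 0:  # Feishu returns a non-zero code on failure
        raise RuntimeError(f"Feishu auth failed: {body}")
    return body["tenant_access_token"]
```

This is also why leaking the App Secret is equivalent to leaking the app itself: whoever holds the pair can mint tokens with the app’s full permission set.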

One step involved Kimi Claw directly providing the table style when creating a form in Feishu, which was then handed over to Feishu’s built-in AI system to automatically generate the form.

One of the app pages built by Kimi Claw|Image source: GeekPark

After working through issues such as missing collaborators, application pages, and IDs, I received the first message from Kimi Claw about half an hour later.

The speed of building this bot exceeded my expectations. When problems arose, I would directly tell Kimi Claw where I was stuck, then choose the best solution it offered. If none fit, I would continue asking for other options.

Kimi Claw one-click deployment to Feishu|Image source: GeekPark

During workflow setup, cross-platform capability became even more critical. After granting 12 Feishu permissions, I finally built the AI application, but it didn’t reach the ideal state. I wanted the AI to read my chat records and organize my tasks, but after several attempts, the group chat list retrieved by the app remained empty, as Feishu AI applications can only read conversations they participate in, not group chats.

Overall, I think Kimi Claw is quite familiar with common workflow platforms like Feishu and DingTalk. The commands it responds to are generally straightforward, understandable even for beginners. However, these enterprise apps are very strict about information permissions, and open configuration conditions are tight. For AI to truly integrate into workflows, we not only need open tools like Kimi Claw but also more suitable applications designed for AI integration.

Moreover, during operation, many bugs can occur. For example, interactions between the user and Kimi Claw, or ongoing Agent tasks, might be mistakenly counted as personal work. Learning to fix these bugs is also a key part of training the AI.

If you choose to customize your own application or functions from scratch, you need to plan clear operational paths and have basic product thinking. You must understand the openness and connectivity of input/output interfaces and control costs for each invocation and operation.

This workflow setup consumed about 15,000–25,000 tokens, costing roughly 1 yuan at Kimi’s pricing. Daily running expenses are around 0.53 yuan, or approximately 15.9 yuan per month.

  2. Building an Automated AI News Assistant: “Pre-made” Apps Are Easy to Use but Costly to Modify

Besides customizing an application as I envisioned, I also tested some “pre-made” apps, such as letting Kimi Claw automatically fetch news.

In our first automated news retrieval task, we tried to monitor a tech news website. Our instruction was:

“Monitor the industry website of xxxx, summarize new articles containing the keyword ‘AI’ published in the past week and the next 3 days, automatically extract titles, summaries, and publication times, and compile these into an online table. Also, analyze trending articles in the report according to my style.”
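The core of that instruction, once the crawling is done, is a filter-and-tabulate step: keep recent articles that mention the keyword, and emit table rows. A sketch of that logic with hypothetical article records (field names like `published` and `summary` are illustrative, not Kimi Claw’s actual schema):

```python
from datetime import datetime, timedelta

def filter_ai_articles(articles, now, days=7, keyword="AI"):
    """Keep articles published within the last `days` whose title or
    summary mentions `keyword`; return (title, summary, published) rows."""
    cutoff = now - timedelta(days=days)
    rows = []
    for a in articles:
        published = datetime.fromisoformat(a["published"])
        text = a["title"] + " " + a.get("summary", "")
        if published >= cutoff and keyword in text:
            rows.append((a["title"], a.get("summary", ""), a["published"]))
    # Newest first, matching a "latest news" digest.
    rows.sort(key=lambda r: r[2], reverse=True)
    return rows
```

The hard part in practice, as the test below found, is not this filtering but reliably getting the articles past anti-crawling measures in the first place.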

Kimi Claw would ask for specific configuration details. During the first run, we found that many official sites have anti-crawling measures, making high-quality sites difficult to monitor. Kimi Claw also struggled to define an accurate scope, leading to idle cycles that burned large numbers of tokens without producing any results.

This monitoring ran about 8 times from 4 a.m. to 11 a.m., consuming roughly 180,000 tokens and costing about 3.68 yuan. If set to run hourly, daily costs would be around 11 yuan, and monthly nearly 330 yuan.

Later, after consulting experts, we abandoned writing our own instructions and instead downloaded related instruction packs from ClawHub and other sites. Based on these, we further customized news retrieval.

Deploying ClawHub files to Kimi Claw|Image source: GeekPark

We then set detailed filters for Chinese media, news selection criteria, and information delivery times. In the end, we obtained a decent AI news retrieval result.

Kimi Claw automatic retrieval output|Image source: GeekPark

Clearly, even passive use of pre-made apps requires learning to select quality skill packages and adapt them to your own scenarios. And if you want to modify these pre-made AI apps, you often face the same difficulties as building from scratch: development and optimization are complex, and the final result may still be unsatisfactory.

In this process, users need to spend significant time testing different Skills’ convenience and adaptability within similar products, then decide which Skills to develop, modify, or extend. This also tests the user’s product thinking.

  3. User Experience with Kimi Claw: Enhanced AI Execution, Commands as Productivity

Currently, Kimi Claw’s core value is mainly lowering the deployment barrier of OpenClaw, enabling domestic users to access quickly. The product itself doesn’t come with predefined scenarios or skills; it’s more like a “bridge” rather than a “finished product.”

During our experience, we also found that although Kimi Claw uses the Kimi K2.5 model, it’s a “bare model + native OpenClaw” combo, without the multi-round search, content reinforcement, auto-correction, and other capabilities optimized by the Kimi team on their official site.

In other words, Kimi’s official version is more user-friendly because it benefits from a dedicated team that optimizes the model for high-frequency user scenarios, with auto-completion and content enhancement. The “bare” models in OpenClaw are closer to direct API calls without such specific optimizations.

From extended use, I can clearly perceive that the key difference between Kimi Claw and traditional AI or basic Agent products lies in two dimensions: the AI’s execution power and the weight carried by commands. These two form the core logic of using such products.

First, in terms of execution, Kimi Claw can perform tasks even when I am not at my computer; I am no longer just a user issuing commands and waiting. I can even specify when certain commands should run and see the scheduled outputs on startup. But I also have to remember to set stop points for experimental applications to reduce unnecessary resource consumption.
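A “stop point” can be as simple as a hard token-budget guard wrapped around each scheduled run, so a misbehaving schedule halts instead of silently burning tokens. A hypothetical sketch (not a Kimi Claw API, just the pattern):

```python
class TokenBudget:
    """Hard stop for a scheduled agent: refuse further runs once the
    cumulative token spend would cross the configured ceiling."""

    def __init__(self, limit_tokens: int):
        self.limit = limit_tokens
        self.spent = 0

    def charge(self, tokens: int) -> bool:
        """Record one run's spend. Returns False when the run would
        exceed the budget, signalling the schedule to stop."""
        if self.spent + tokens > self.limit:
            return False
        self.spent += tokens
        return True
```

The scheduler then simply skips (or cancels) any run for which `charge()` returns False, turning an open-ended cost into a bounded one.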

Second, regarding commands, my previous instructions to AI were usually concise and direct. When the AI’s solution was off, I would adjust accordingly. But with Kimi Claw, complex commands invoke many Agents, greatly increasing token consumption. Therefore, commands must be explicit about operation methods, permissions, execution paths, and safety/cost controls.

For example, when I used to query news, I might instruct: “Provide 10 news clues about OpenClaw and tell me their relevance.” Now, I give more detailed instructions:

“As an information retrieval specialist, you have permission to use web search tools (limited to web_search and web_open_url, no access to paid login-only news databases). Within these constraints: 1) First, search ‘OpenClaw latest updates’ with keywords, retrieving only the top 5 high-authority results (prefer tech media and official blogs, exclude forums); 2) When analyzing each for news value, strictly limit to ‘technological breakthroughs,’ ‘business impact,’ and ‘security risks,’ summarizing each in one sentence without unrelated background; 3) Disable browser automation and deep crawling to avoid anti-crawling triggers and token waste; 4) Output as a table: Title | Source | Relevance tags | Brief basis (≤30 words per row); 5) If results are fewer than 10, stop searching immediately and output what you have, no secondary broad search. Keep token budget within 8K; if the path deviates, stop and report instead of auto-correcting.”
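An instruction like the one above bundles several distinct constraints: a tool allowlist, a result cap, fixed evaluation axes, a token budget, and a deviation policy. In practice it can help to keep those in a structured config and render the prompt from it, so individual limits can be tuned without rewriting the whole paragraph. A sketch (all field names are my own, not a Kimi Claw format):

```python
CONSTRAINTS = {
    "role": "information retrieval specialist",
    "tools_allowed": ["web_search", "web_open_url"],
    "max_results": 5,
    "value_axes": ["technological breakthroughs", "business impact",
                   "security risks"],
    "token_budget": 8000,
    "on_deviation": "stop and report",
}

def render_prompt(c: dict) -> str:
    """Render a constraint dict into a compact instruction block."""
    lines = [
        f"Act as an {c['role']}.",
        "Allowed tools: " + ", ".join(c["tools_allowed"]) + " only.",
        f"Return at most {c['max_results']} high-authority results.",
        "Judge news value only on: " + "; ".join(c["value_axes"]) + ".",
        f"Stay within a {c['token_budget']}-token budget.",
        f"If the path deviates: {c['on_deviation']}.",
    ]
    return "\n".join(lines)
```

Tightening the budget or swapping the tool list then becomes a one-line config change rather than a prompt rewrite.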

Most of the time, I even ask the AI to optimize my instructions before submitting to Kimi Claw. Only with precise, accurate commands can I get the best results within reasonable token limits. Some public forums also have dedicated Skills libraries for OpenClaw, helping users better grasp popular applications.

Precise, concrete instructions are the prerequisite for high-quality results within reasonable token consumption. Using Kimi Claw is essentially a process of balancing model capability, output quality, and operational costs.

Finally, there is training the AI itself.

Even after quickly building an AI application, you’ll find it’s not immediately effective. The division of commands and task merging often differ significantly from human understanding. You need multiple rounds of instruction tuning to explore the product’s boundaries. Especially since many data sources are not fully open, gaining proper access and rights is not easy.

Ultimately, the current Kimi Claw’s application isn’t just a simple chatbot with many AI functions for direct use. It’s a developer tool requiring user understanding of the development process and the ability to make choices after weighing various factors. It supports some basic automated deployment but is not a ready-made product.

  4. Automated AI Still Has Room for Growth

Although OpenClaw ignited the imagination for automated AI in 2026, recent security incidents and product tests show that OpenClaw is a key and an opportunity rather than the final answer.

Whether in real-world scenarios or scalable commercial paths, the AI industry has yet to establish a clear, mature route. Meanwhile, market hype continues to inflate expectations for Claw-like products, even luring many ordinary users into risky operations beyond their abilities.

It is certain that automated AI has been valued since the industry’s inception. But whether cloud-managed solutions like OpenClaw and Kimi Claw can yield truly successful, scalable products still needs validation, especially now that these tools can directly access your terminal and files.

In early stages, many novices opened permissions too broadly, failing to consider security restrictions or secondary permission checks. Giving such high-level control to AI is a systemic risk. That’s why, for true large-scale and commercial deployment, security and permission governance are even more challenging than improving AI capabilities.

From direct interaction with large models, to single-agent interactions, to multi-agent collaboration, and now to OpenClaw’s approach, the industry has explored many similar functions with different paths. This indicates that we are still in the exploration phase of AI capabilities. Besides the mature and stable interaction paradigm of ChatGPT, the usage logic, boundaries, and value of new forms like Agents and Claws are still being collectively explored.

Perhaps only later in 2026 will we see a batch of stable, usable, genuinely valuable automated AI applications land successfully.
