Anthropic tested a marketplace for trading between AI agents - ForkLog: cryptocurrencies, AI, singularity, the future


Anthropic created a test platform where AI agents act as buyers and sellers. The experiment was called Project Deal.

New Anthropic research: Project Deal.

We created a marketplace for employees in our San Francisco office, with one big twist. We tasked Claude with buying, selling and negotiating on our colleagues’ behalf. pic.twitter.com/H2f6cLDlAW

— Anthropic (@AnthropicAI) April 24, 2026

69 company employees participated in the project. Each was allocated a budget of $100 in the form of gift cards.

Before starting, Claude conducted interviews with participants: found out what personal items they were willing to sell, what they wanted to buy, at what price, and what negotiation style their agent should use.

Then, based on their responses, a personalized system prompt was created for each. The market was launched in Slack. There, agents posted listings, made offers on others’ items, bargained, and closed deals without human involvement.
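The pipeline described above (interview, then a personalized system prompt, then autonomous negotiation in Slack) could be sketched roughly as follows. This is a hypothetical illustration only: all names, fields, and prompt wording are assumptions, not Anthropic's actual implementation.

```python
from dataclasses import dataclass


@dataclass
class Interview:
    # Hypothetical record of a participant's pre-launch interview
    name: str
    items_to_sell: list[str]
    wants_to_buy: list[str]
    budget_usd: int
    negotiation_style: str  # e.g. "friendly" or "aggressive"


def build_system_prompt(iv: Interview) -> str:
    """Assemble a personalized system prompt from interview answers."""
    return (
        f"You negotiate on behalf of {iv.name}.\n"
        f"Budget: ${iv.budget_usd} in gift cards.\n"
        f"Items to sell: {', '.join(iv.items_to_sell)}.\n"
        f"Items to look for: {', '.join(iv.wants_to_buy)}.\n"
        f"Negotiation style: {iv.negotiation_style}.\n"
        "Post listings, make offers, and close deals without "
        "asking the human for confirmation."
    )


iv = Interview(
    name="Alex",
    items_to_sell=["snowboard"],
    wants_to_buy=["desk lamp"],
    budget_usd=100,
    negotiation_style="friendly",
)
prompt = build_system_prompt(iv)
print(prompt)
```

The key design point from the article is captured in the last instruction line: once the market launches, the agent acts without human confirmation.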

After the experiment, employees exchanged real items approved by their “AI representatives.”

Source: Anthropic

Agents made a total of 186 deals across more than 500 listings. The total transaction value exceeded $4,000.

Anthropic noted that participants were generally satisfied with the experiment’s results. Some expressed willingness to pay for a similar service in the future.

Four versions of the market

Anthropic launched four independent versions of the marketplace. One was "real": the one whose deals employees actually carried out. The others were used for research purposes. Participants were not told which version was which.

In two versions, all participants were represented by Claude Opus 4.5 — at that time the most advanced model from Anthropic. In the other two, participants were randomly assigned either Opus 4.5 or a less powerful Claude Haiku 4.5.

Model quality affected negotiation outcomes. Users with Opus averaged about two more deals than those with Haiku.

When selling identical items, Opus also achieved higher prices. The average difference was $3.64.

Haiku sold a bicycle for $38, while Opus sold it for $65. Source: Anthropic

Participants did not always notice the discrepancy. Anthropic called this a potential issue for future AI-agent markets: users with weaker models might get worse terms without realizing they are at a disadvantage.

Prompts had little impact on the outcome

Researchers also tested whether initial human instructions influenced agent behavior. Some participants asked Claude to act friendly, others — to negotiate more aggressively.

According to Anthropic, such instructions had no statistically significant effect on the likelihood of a sale, the final price, or the ability to buy at a lower price.

The project team clarified that this is not necessarily due to weak adherence to instructions: Claude did reproduce the specified communication style, but it provided no noticeable commercial advantage.

Unforeseen results

Anthropic highlighted several unpredictable episodes. The agents started with limited data: participant interviews lasted less than 10 minutes, and after launch humans could no longer intervene in negotiations.

In one case, an employee's assistant bought him a snowboard identical to one he already owned. Researchers said the person would not have made such a purchase himself, but the agent had accurately inferred the participant's preferences.

To our amazement, another Claude agent modeled its human’s preferences so accurately that—based on only an offhand mention of an interest in skiing—Claude bought him the exact snowboard he already owned. (Here he is, duplicate snowboard in hand.) pic.twitter.com/SsAyeB9pcI

— Anthropic (@AnthropicAI) April 24, 2026

Another employee asked the bot to buy a gift "for himself." The deal took place in the real version of the experiment, and in the end a package of ping-pong balls was delivered to the office "on behalf of Claude."

Some agents bargained not for goods but for experiences. One offered a free day with a colleague’s dog. After discussion with another assistant, the parties agreed on a “dog date,” which employees later carried out.

Source: Anthropic

Anthropic emphasized that these cases are unlikely to recur in the future. However, the combination of human preferences and unpredictable AI behavior can lead to unexpected results.

Questions about reliability

The founder of an unnamed agricultural technology company reported on Reddit that on the morning of April 27, 110 employees simultaneously received notices that their access to Claude had been suspended, without prior warning.

ANTHROPIC JUST BANNED A 110 PERSON COMPANY OVERNIGHT WITHOUT WARNING

monday morning at an agricultural tech company, every single employee wakes up to an email saying their claude account has been suspended

110 people locked out at the same time with zero warning and the email… pic.twitter.com/qARizhgOXs

— Om Patel (@om_patel5) April 27, 2026

According to him, the email looked like an individual ban and linked to a personal appeal form, so the team did not initially realize that the entire organization had been restricted.

The entrepreneur emphasized that it was not possible to restore access quickly: 36 hours after the appeals were filed, Anthropic had provided no explanation.

Meanwhile, the company's API account continued to operate and charge for usage, while corporate administrators could not access the management panel to check payments and service consumption.

The founder also noted that a single user's actions may have triggered the ban on the entire organization: Claude offers no per-workspace restrictions, no mechanism for isolating a violation locally, and no administrative override to preserve access for the rest of the team.

In his view, such a moderation model casts doubt on using Claude as critical infrastructure for daily business operations.

Other companies are facing similar issues. One user shared a link to a service where, at the time of writing, 53 such cases had been registered.

Recall that on April 24, Google announced an investment of $40 billion in Anthropic.
