Anthropic tested a marketplace for trading between AI agents (ForkLog)
Anthropic created a test platform where AI agents act as buyers and sellers. The experiment was called Project Deal.
69 company employees participated in the project. Each was allocated a $100 budget in the form of gift cards.
Before starting, Claude interviewed each participant to find out which personal items they were willing to sell, what they wanted to buy, at what price, and what negotiation style their agent should use.
Based on those responses, a personalized system prompt was created for each participant. The market was then launched in Slack, where agents posted listings, made offers on others' items, bargained, and closed deals without human involvement.
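Anthropic has not published its actual pipeline, but the step described here, rendering short interview answers into a per-participant system prompt, can be sketched as follows. All field names and the template wording are illustrative assumptions, not Anthropic's implementation:

```python
# Illustrative sketch only: turning interview answers into a personalized
# system prompt for a trading agent. The template and fields are invented.
PROMPT_TEMPLATE = """\
You are a negotiation agent acting on behalf of {name}.
Items you may sell (with minimum acceptable prices):
{sell_lines}
Items to look for: {wants}.
Budget: ${budget}. Negotiation style: {style}.
Never exceed the budget, and never sell below an item's minimum price."""


def build_system_prompt(interview: dict) -> str:
    """Render one participant's interview answers into a system prompt."""
    sell_lines = "\n".join(
        f"- {item}: minimum ${price}"
        for item, price in interview["selling"].items()
    )
    return PROMPT_TEMPLATE.format(
        name=interview["name"],
        sell_lines=sell_lines,
        wants=", ".join(interview["wants"]),
        budget=interview["budget"],
        style=interview["style"],
    )


# Hypothetical participant, mirroring the $100 gift-card budget from the article.
interview = {
    "name": "Alex",
    "selling": {"used snowboard": 40, "desk lamp": 10},
    "wants": ["board games", "coffee gear"],
    "budget": 100,
    "style": "friendly but firm",
}
print(build_system_prompt(interview))
```

The point of the design is that the human's preferences are baked in once, up front; after that the agent negotiates autonomously within the stated limits.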
After the experiment, employees exchanged real items approved by their “AI representatives.”
Anthropic noted that participants were generally satisfied with the experiment’s results. Some expressed willingness to pay for a similar service in the future.
Four versions of the market
Anthropic launched four independent versions of the marketplace. One was the "real" market, whose deals employees actually carried out; the other three were run for research purposes, which was not disclosed to participants.
In two versions, all participants were represented by Claude Opus 4.5, at the time Anthropic's most advanced model. In the other two, participants were randomly assigned either Opus 4.5 or the less powerful Claude Haiku 4.5.
Model quality affected negotiation outcomes. Users with Opus averaged about two more deals than those with Haiku.
When selling identical items, Opus also achieved higher prices. The average difference was $3.64.
Prompts had little impact on the outcome
Researchers also tested whether the initial human instructions influenced agent behavior. Some participants asked Claude to act friendly, others to negotiate more aggressively.
According to Anthropic, such instructions had no statistically significant effect on the likelihood of a sale, the final price, or the ability to buy more cheaply.
The project team clarified that this is not necessarily due to weak instruction-following: Claude could indeed reproduce the specified communication style, but doing so gave no noticeable commercial advantage.
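Anthropic does not say which test it used. As an illustration of what "no statistically significant effect" means here, a minimal permutation test comparing sale prices between two prompt-style groups; every number below is invented, not the experiment's data:

```python
import random


def permutation_p_value(a, b, trials=10_000, seed=0):
    """Two-sided permutation test for a difference in group means.

    Repeatedly shuffles the pooled observations and counts how often a
    random split produces a mean gap at least as large as the observed one.
    A large p-value means the observed gap is easily explained by chance.
    """
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(trials):
        rng.shuffle(pooled)
        left, right = pooled[: len(a)], pooled[len(a):]
        gap = abs(sum(left) / len(left) - sum(right) / len(right))
        if gap >= observed:
            hits += 1
    return hits / trials


# Made-up sale prices for "friendly" vs "aggressive" prompt groups.
friendly = [22, 18, 25, 30, 19, 27, 24]
aggressive = [21, 26, 20, 28, 23, 25, 18]
print(permutation_p_value(friendly, aggressive))  # large p: no significant effect
```

With group means this close, the p-value comes out far above any conventional significance threshold, which is the shape of result Anthropic describes.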
Unforeseen results
Anthropic highlighted several unpredictable episodes. The agents launched with limited data: each participant interview lasted less than 10 minutes, and after launch humans could no longer intervene in the negotiations.
In one case, an employee's assistant bought him a snowboard identical to one he already owned. Researchers said the person would not have made such a purchase himself, yet the agent had accurately inferred the participant's preferences.
Another employee asked the bot to buy him a gift. The deal took place in the real version of the experiment, and a package of ping-pong balls later arrived at the office, left "on behalf of Claude."
Some agents bargained not over goods but over experiences. One offered a free day with a colleague's dog; after discussion with another assistant, the parties agreed on a "dog date," which the employees later carried out.
Questions about reliability
The founder of an unnamed agritech company reported on Reddit that on the morning of April 27, 110 employees simultaneously received notifications that their access to Claude had been suspended, with no prior warning.
According to him, each email looked like an individual ban and contained a link to a personal appeal form, so the team initially did not realize that the entire organization had been restricted.
The entrepreneur emphasized that access could not be restored quickly: 36 hours after the appeals were submitted, Anthropic had still not provided an explanation.
Meanwhile, the company's API account kept operating and charging for usage, while corporate administrators could not access the management console to check billing and service consumption.
The founder also noted that the organization-wide ban could have been triggered by a single user's actions: Claude, he said, offers no per-workspace restrictions, no mechanism for isolating a violation locally, and no administrative override to preserve access for the rest of the team.
In his view, such a moderation model casts doubt on using Claude as critical infrastructure for daily business operations.
Other companies are facing similar issues. One user shared a link to a service where, at the time of writing, 53 such cases had been registered.
Recall that on April 24, Google announced an investment of $40 billion in Anthropic.