Amazon wins the first round: Do intelligent agents need platform permission to start shopping on behalf of users?


21st Century Business Herald reporter Xiao Xiao reports

The “sovereignty struggle” between intelligent agents and internet platforms has begun to take shape.

On March 9 local time, a federal court in California granted Amazon a preliminary injunction, barring Perplexity’s AI agent from continuing to access the Amazon website.

The focus of the controversy is Perplexity’s Comet AI browser, which is a typical intelligent agent product: users only need to issue commands in the web chat box, and the AI can automatically complete a series of web operations, including writing emails, booking flights, and placing orders on e-commerce platforms.

In November of last year, Amazon took the lead in publicly opposing such behavior, accusing Perplexity of infringing by accessing the Amazon website through an AI agent and demanding an immediate stop to the related actions.

According to Amazon, Perplexity’s agent allows users to authorize the AI to browse Amazon pages and log into user accounts to complete tasks within password-protected areas. In this process, the agent can obtain private account information from Amazon and transmit the relevant data back to Perplexity’s servers.

Amazon contends that this conduct violates the U.S. Computer Fraud and Abuse Act: even with user consent, accessing the platform without its own authorization still constitutes “unauthorized access to protected computer systems.” To prove damages, Amazon also submitted evidence to the court, including multiple cease-and-desist letters sent to Perplexity, as well as the staffing and technical costs the company has incurred to block agent access.

According to media reports, Amazon also argued that Perplexity’s agent activity undermines its advertising business: Amazon has had to modify its advertising system and develop new detection mechanisms to filter out AI traffic, “because advertisers only pay for exposure to real human users.”

Based on these reasons, Amazon applied to the court for a preliminary injunction, requesting that Perplexity’s access be prohibited during the case proceedings.

Perplexity, however, presented a completely different position in court. The company described Amazon as an innovation “bully,” arguing that AI is merely a tool for users and should not be discussed separately from the user. The actions authorized by users should not be considered “unauthorized access.” If the court supports Amazon’s injunction, it would hinder the iterative development of intelligent agent products.

For now, the U.S. court has sided with Amazon. The ruling found that Amazon is likely to prevail on the merits and has provided sufficient evidence. The court specifically cited a landmark precedent, Facebook’s 2009 lawsuit against the aggregation site Power.com, holding that even with user consent, once a platform explicitly revokes authorization and issues a cease-and-desist notice, continued access by a third party still constitutes “unauthorized access.”

Therefore, the court ordered Perplexity to temporarily stop logging into Amazon member accounts on behalf of users and to destroy all obtained Amazon data. Perplexity filed an appeal this Tuesday.

Although this is only a preliminary injunction and the case still awaits a full hearing, it offers a glimpse of the U.S. courts’ attitude. More importantly, as intelligent agent products of this kind become popular, similar disputes will keep arising.

More and more AI systems are beginning to operate web pages directly on behalf of users, breaking the app-dominated internet order of the past decade. As we previously reported, since late 2024 some mobile AI assistants have been able to perform tasks across applications, such as ordering takeout, booking flights, or sending messages. This capability fascinates users but unsettles platforms, since intelligent agents may bypass apps’ ad traffic and recommendation algorithm systems.

When AI acts on behalf of users, is “dual authorization” from the platform also required in addition to the user’s consent? In China, no clear law or regulation yet answers this core dispute.

In June of last year, the China Software Industry Association released a non-mandatory group standard titled “Safety Requirements for Intelligent Agent Task Execution,” requiring intelligent agents to obtain authorization from both users and third-party apps. However, in the new version released in October, the requirement for “dual authorization” did not appear again.

Before there is a unified signal from regulators, many platforms are proactively setting boundaries. Amazon and eBay have explicitly prohibited AI agents from accessing their websites in their “User Agreements” and have blocked dozens of agents, including ChatGPT.

Domestically, several major apps have also moved to “ban” mobile intelligent agents. On March 10, Xiaohongshu released a community announcement explicitly prohibiting the use of AI to simulate real people, fabricate non-authentic content, or engage in fake interactions. If an account’s notes are entirely published and distributed by AI, the platform will ban it upon discovery.
