Case Study | AI-generated content is not grounds for exemption from liability: a publisher who fails to fulfill the verification obligation and infringes on others' right to reputation must bear responsibility
AI technology is developing rapidly and has become deeply integrated into traditional sectors such as entertainment and culture, finance, and advertising and marketing. As new and old scenarios intersect, civil disputes arising from this trend are also increasing. Recently, the Beijing Internet Court tried a case of infringement of the right to reputation triggered by the use of generative AI to publish content.
Basic Case Overview
The plaintiff is an organization engaged in the live-streaming business. After one of its contracted hosts unexpectedly passed away, the death drew public attention. The defendant published, through its social media account, a video titled "The host's death #Wellness #Health #WellnessIsNurturingHealth #Entrepreneurship #NutritionalProducts," stating that the host "had long been getting by on medication and alcohol, pushing through insomnia and anxiety," "streamed live for 15 hours every day and memorized lines late into the night," "kept anti-depression medication on him and passed it off as throat lozenges," "was told by his doctor to rest and seek treatment, but the team urged him to keep livestreaming," and so on. The plaintiff argued that this content fabricated facts and disparaged the organization's reputation, and filed suit, requesting that the court order the defendant to cease the infringement, publicly apologize and make amends to the plaintiff, and pay compensation for economic losses and reasonable rights-protection expenses totaling 30,000 yuan.
During the trial, the defendant, which had already deleted the video at issue, argued that the video script was not its own work but had been generated by AI. To support this claim, the defendant submitted a screen recording of the content-generation process. The recording shows that the defendant gave the AI the instruction: "The host has passed away; write a narration script about dying from illness and advocate for attention to health." After the AI produced a first draft, the defendant asked it to incorporate material from online reports. The AI then searched and revised the script based on multiple articles from several platforms, and the defendant used the revised script directly to record and publish the video at issue. The defendant therefore claimed that its content had a basis in online information and did not infringe the right to reputation.
The Court’s Findings
In this case, the "host" mentioned in the challenged video was a contracted host of the plaintiff, so the "team" referred to in the video can be determined to point to the plaintiff. As to whether the defendant's remarks had a factual basis, the defendant argued that the content at issue was generated by AI and that the AI-generated content drew on multiple public reports. The court held that, as a user of generative artificial intelligence services, the defendant bore a legal obligation, when using AI-generated content to produce and publish videos, to conduct necessary verification of the relevant information. When the defendant published the video at issue, it neither indicated the source of the information nor checked the authenticity and credibility of that source, and it failed to submit effective evidence proving that statements such as "streamed live for 15 hours every day and memorized lines late into the night" and "was told by his doctor to rest and seek treatment, yet the team kept urging him to livestream" were objectively true. After the video was published, multiple netizens posted negative evaluations of the plaintiff in the comment section, objectively causing damage in the form of a lowering of the plaintiff's social reputation. In sum, the defendant's publication of the video at issue infringed the plaintiff's right to reputation, and the defendant shall bear the corresponding tort liability according to law.
Judgment Outcome
The court ordered the defendant to apologize to the plaintiff by issuing an apology statement through its social media account, and to compensate the plaintiff for certain economic losses.
At present, the judgment in this case has taken effect.
Judge’s Comments
Generative artificial intelligence technology is spreading rapidly and has become deeply integrated into daily life, serving as an important tool for the public to obtain information, assist cognition, create, and make decisions. At the same time, AI-generated content has limitations: its outputs are prone to factual deviations, logical fallacies, and even "hallucinations." If such content is used or disseminated without verification, it can easily mislead the public and even lead users into erroneous judgments and decisions, giving rise to disputes and risks. When using, disseminating, or publishing AI-generated content, it is therefore essential to recognize that artificial intelligence is not absolutely reliable and cannot replace human review and fact-checking. Content users, publishers, and relevant operating entities all bear their own responsibilities and legal verification obligations, and may not invoke the excuse that "the content was generated by AI" to evade the duty of careful review of the authenticity, legality, and compliance of information. Whether the content takes the form of text, images, audio, video, or any other format, it should be verified before being published, reposted, or used, checking whether the facts are true and the source is legitimate, so as to ensure that it does not infringe on others' rights to reputation, privacy, copyright, or other lawful interests.
Judge Profile
Judge Wu Jiao
Wu Jiao is a judge of the Second Comprehensive Trial Division of the Beijing Internet Court. Many of the case analyses she has written have won awards in national court-system evaluations of excellent cases and have been selected as Chinese courts' annual cases, and she has published multiple articles in outlets such as the People's Court News.