I recently came across a report that is worth paying attention to. OpenAI has launched what it calls a "Child Safety Blueprint," aimed primarily at the problem of AI-generated child abuse content. Honestly, this is becoming an increasingly serious issue.
According to data from the Internet Watch Foundation, more than 8,000 reports of AI-generated child abuse content were recorded in the first half of 2025 alone, a 14% increase over the previous year. These cases include the use of AI tools to create fake explicit images for blackmail and to generate messages used to lure minors. The steady rise in these numbers points to a serious underlying problem.
Interestingly, OpenAI's recent moves appear to have been driven by outside pressure. Last November, seven lawsuits were filed in California accusing GPT-4 of psychological manipulation that allegedly contributed to multiple teen suicides; the complaints describe four deaths and three cases of severe delusions. Under public pressure, OpenAI has started to take the issue seriously.
The new blueprint focuses on three areas: first, updating laws to cover AI-generated or AI-manipulated child abuse material; second, improving reporting and coordination mechanisms among service providers; and third, building safety directly into AI systems to prevent and detect abuse. OpenAI is also collaborating with the U.S. National Center for Missing & Exploited Children (NCMEC), law enforcement agencies, and the nonprofit Thorn.
That said, OpenAI admits there is no single solution to this challenge. The framework aims to speed up response times, improve risk detection, maintain accountability, and help law enforcement keep pace with the technology. OpenAI has also updated its guidelines for users under 18, prohibiting the generation of inappropriate content and self-harm suggestions.
In essence, this reflects a dilemma facing the entire industry: as AI becomes more widespread, the methods used to abuse children evolve with it. Striking a balance between innovation and protection is a real challenge that every tech company has to confront.