OpenAI Unveils Child Safety Blueprint To Combat AI-Enabled Exploitation Amid Rising CSAM Reports
In Brief
OpenAI has launched a “Child Safety Blueprint” to combat AI-enabled child sexual exploitation. The initiative updates usage guidelines, strengthens safeguards, and promotes coordinated legal, technical, and operational measures amid rising reports of AI-generated CSAM.
The company described child sexual exploitation as one of the most pressing challenges in the digital era, noting that AI technologies are changing how such harms occur and how they can be mitigated at scale. OpenAI stated that it has implemented safeguards to prevent misuse of its systems and collaborates with partners including the National Center for Missing and Exploited Children (NCMEC) and law enforcement agencies to improve detection and reporting. This collaboration has highlighted areas where stronger, shared industry standards are needed.
The blueprint outlines a strategy for enhancing U.S. child protection frameworks in the context of AI. It incorporates input from organizations and experts across the child safety ecosystem, including NCMEC, the Attorney General Alliance with input from North Carolina Attorney General Jeff Jackson and Utah Attorney General Derek Brown, and the nonprofit Thorn. The framework is intended to guide coordinated efforts to prevent harm to children and strengthen collaboration across legal, operational, and technical domains.
The initiative focuses on three main priorities: updating laws to address AI-generated or manipulated child sexual abuse material (CSAM), improving reporting and coordination among providers to support more effective investigations, and integrating safety-by-design measures directly into AI systems to prevent and detect misuse. OpenAI emphasized that no single approach can address the challenge alone, and the framework aims to accelerate responses, improve risk identification, and maintain accountability while ensuring enforcement authorities can act as technology evolves.
The framework is intended to allow earlier intervention, reduce exploitation attempts, enhance the quality of information shared with law enforcement, and strengthen accountability across the ecosystem to protect children more effectively.
AI-Generated Child Exploitation Reports Rise 14% In 2025 As OpenAI Unveils Expanded Child Safety Blueprint
Recent data from the Internet Watch Foundation (IWF) indicates that over 8,000 reports of AI-generated child sexual abuse content were detected in the first half of 2025, representing a 14% increase from the previous year. These cases include the use of AI tools to generate fake explicit images for financial sextortion and to produce messages used in grooming. The blueprint’s release coincides with heightened attention from policymakers, educators, and child-safety advocates, particularly following incidents where young people died by suicide after allegedly interacting with AI chatbots.
In November 2025, the Social Media Victims Law Center and the Tech Justice Law Project filed seven lawsuits in California state courts, alleging that OpenAI released GPT-4o prematurely. The complaints claim that the AI system’s psychologically manipulative features contributed to wrongful deaths by suicide and assisted suicide, citing four individuals who died and three who experienced severe delusions after prolonged interactions.
OpenAI’s new child safety blueprint builds on previous measures, including updated guidelines for users under 18 that prohibit generating inappropriate content, providing advice that encourages self-harm, or offering guidance on concealing unsafe behavior from caregivers. The company also recently released a safety blueprint targeting teens in India.