Netflix open-sources video erasure model VOID: not only removing objects, but also recalculating the physical motion of the objects that remain
AIMPACT News, April 14 (UTC+8). Netflix's research division, in collaboration with INSAIT at Sofia University in Bulgaria, has developed VOID (Video Object and Interaction Deletion), an AI framework that removes objects from videos and re-simulates the physical behavior of the remaining scene. Released under the Apache 2.0 license on Hugging Face on April 3, it is the division's first publicly released AI tool.
Traditional video erasure tools excel at filling in backgrounds and correcting shadows and reflections, but they struggle with scenes where objects are in physical contact (collisions, supports, pushes). VOID's core ability is understanding physical causality: remove a middle domino from a row and the subsequent dominoes no longer fall; remove a person jumping into a pool and the splash disappears; remove someone holding a guitar and the guitar falls naturally.
The technical pipeline consists of three layers (illustrative sketches of each follow the descriptions below):
Meta's SAM2 performs object segmentation while Google's Gemini analyzes scene semantics; together they generate a four-value “quadmask” that labels the main object, overlapping areas, affected areas, and the background, telling the model not only what to erase but also what will change as a result.
In the first inference stage, a model fine-tuned from Alibaba's CogVideoX-Fun-V1.5-5b-InP (a 5-billion-parameter diffusion Transformer) generates physically plausible counterfactual trajectories.
An optional second stage, “optical-flow noise stabilization,” uses the motion predicted in the first stage to initialize temporally correlated noise, preventing object deformation in longer clips.
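The article names the quadmask's four regions but not how they are encoded. Below is a minimal illustrative sketch, assuming the four regions are stored as integer codes in a single per-frame mask; the constants and the `build_quadmask` helper are hypothetical, not from the VOID release.

```python
import numpy as np

# Hypothetical encoding of the four-value "quadmask": the integer codes and
# names are assumptions; only the four region types come from the article.
BACKGROUND, MAIN_OBJECT, OVERLAP, AFFECTED = 0, 1, 2, 3

def build_quadmask(obj_mask, contact_mask, affected_mask):
    """Combine per-frame boolean (H, W) masks into one labeled mask.
    obj_mask      - pixels of the object to erase (e.g. from SAM2)
    contact_mask  - pixels where other objects overlap or touch it
    affected_mask - pixels whose motion will change once the object is gone
                    (e.g. flagged by the Gemini scene analysis)."""
    quad = np.full(obj_mask.shape, BACKGROUND, dtype=np.uint8)
    quad[affected_mask] = AFFECTED            # regions that must be re-simulated
    quad[obj_mask] = MAIN_OBJECT              # the object being erased
    quad[obj_mask & contact_mask] = OVERLAP   # contact regions take precedence
    return quad
```

A single labeled mask like this lets one conditioning channel carry both the erase target and the re-simulation targets.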
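The first-stage model is described only at a high level, so the following shows generic mask-conditioned diffusion inpainting in diffusers-style code, not VOID's actual loop; `model` and its `mask=` argument are hypothetical stand-ins for the fine-tuned CogVideoX-Fun pipeline.

```python
import torch

def inpaint_video_latents(model, scheduler, known_latents, regen_mask, num_steps=50):
    """Sketch of a mask-conditioned denoising loop with RePaint-style pinning.
    known_latents: (T, C, H, W) latents of the input video.
    regen_mask:    (T, 1, H, W) float mask, 1 wherever the quadmask says
                   content must be regenerated (main object + affected areas)."""
    scheduler.set_timesteps(num_steps)
    x = torch.randn_like(known_latents)            # start from pure noise
    for t in scheduler.timesteps:
        noise_pred = model(x, t, mask=regen_mask)  # hypothetical mask-conditioned denoiser
        x = scheduler.step(noise_pred, t, x).prev_sample
        # pin the untouched background to a re-noised copy of the original
        noised = scheduler.add_noise(known_latents, torch.randn_like(x), t)
        x = regen_mask * x + (1 - regen_mask) * noised
    return x
```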
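For the optical-flow noise stabilization, the article gives only the one-sentence description above. A common way to realize temporally correlated noise is to warp each frame's starting noise along the predicted motion and blend in a little fresh noise; the sketch below assumes that approach (the `correlated_noise` name and the `keep` weight are illustrative).

```python
import numpy as np
import cv2

def correlated_noise(flows, shape, keep=0.8, seed=0):
    """flows: list of (H, W, 2) backward optical-flow fields, one per frame
    transition, e.g. derived from the first-stage motion prediction.
    Returns (T, H, W, C) Gaussian noise correlated along the flow."""
    rng = np.random.default_rng(seed)
    T, H, W, C = shape
    frames = [rng.standard_normal((H, W, C)).astype(np.float32)]
    grid = np.stack(np.meshgrid(np.arange(W), np.arange(H)), axis=-1).astype(np.float32)
    for f in flows[: T - 1]:
        coords = (grid + f).astype(np.float32)     # where each pixel came from
        warped = cv2.remap(frames[-1], coords[..., 0], coords[..., 1],
                           cv2.INTER_LINEAR, borderMode=cv2.BORDER_REFLECT)
        if warped.ndim == 2:                       # cv2 drops a singleton channel
            warped = warped[..., None]
        fresh = rng.standard_normal((H, W, C)).astype(np.float32)
        frames.append(keep * warped + np.sqrt(1.0 - keep ** 2) * fresh)
    return np.stack(frames)
```

The keep / sqrt(1 - keep²) blend keeps per-pixel variance near 1, which diffusion samplers expect; bilinear warping slightly smooths the noise, which fuller implementations correct for.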
Training data was generated from two sets of physics simulations: roughly 1,900 Kubric rigid-body dynamics sequences and about 4,500 HUMOTO human motion-capture sequences, with training run on 8 A100 80GB GPUs. In a preference test with 25 participants, VOID's outputs were chosen 64.8% of the time, far ahead of the commercial tool Runway at 18.4%. Inference requires over 40GB of VRAM (A100-class hardware). The paper has not yet undergone peer review, and Netflix has not announced plans to incorporate VOID into production workflows. (Source: GitHub)