NVIDIA launches Nemotron 3 Nano Omni: unified processing of video, audio, images, and text for more efficient multimodal reasoning
BlockBeats News, April 29: NVIDIA officially launched Nemotron 3 Nano Omni, the newest member of the Nemotron 3 series, which integrates unified multimodal reasoning into a single, efficient open-source model. NVIDIA says agentic systems typically need reasoning that runs a single perception-to-action loop across screens, documents, audio, video, and text, yet they still rely on fragmented chains of models, with separate stacks for vision, audio, and text. This increases the number of reasoning hops and the orchestration complexity, drives up inference cost, and weakens the consistency of cross-modal context. Nemotron 3 Nano Omni is designed to replace this fragmented vision-language-audio stack, acting as a multimodal perception and context sub-agent within agentic systems.
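The "reasoning hops" point above can be made concrete with a small sketch. The class and method names below are hypothetical (they are not NVIDIA's API): a fragmented pipeline makes one model call per modality, while an omni sub-agent handles all modalities in a single perception hop.

```python
# Hypothetical sketch of the two architectures the article contrasts.
# FragmentedPipeline and OmniSubAgent are illustrative stubs, not real APIs.
from dataclasses import dataclass


@dataclass
class FragmentedPipeline:
    """Separate model per modality: each model call is one reasoning hop."""
    hops: int = 0

    def perceive(self, frames, audio, query):
        captions = self._vision(frames)                   # hop 1: vision stack
        transcript = self._audio(audio)                   # hop 2: audio stack
        return self._llm(captions, transcript, query)     # hop 3: text stack

    def _vision(self, frames):
        self.hops += 1
        return f"captions({len(frames)} frames)"

    def _audio(self, audio):
        self.hops += 1
        return f"transcript({len(audio)} samples)"

    def _llm(self, captions, transcript, query):
        self.hops += 1
        return f"answer[{captions}; {transcript}; {query}]"


@dataclass
class OmniSubAgent:
    """One multimodal model: all modalities in a single perception hop,
    so cross-modal context stays in one place."""
    hops: int = 0

    def perceive(self, frames, audio, query):
        self.hops += 1
        return f"answer[video+audio+{query}]"


frag, omni = FragmentedPipeline(), OmniSubAgent()
frames, audio, query = [0] * 8, [0] * 16000, "summarize the clip"
frag.perceive(frames, audio, query)
omni.perceive(frames, audio, query)
print(frag.hops, omni.hops)  # fragmented chain: 3 hops; omni model: 1 hop
```

Fewer hops is what drives the cost and consistency claims: each extra hop adds orchestration code, serialization between models, and another place where cross-modal context can be lost.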
On accuracy, Nemotron 3 Nano Omni posts leading results on the document intelligence leaderboard and also leads the video and audio understanding leaderboards. On MediaPerf, an open industry benchmark for video understanding models, it achieves the highest throughput on every task and the lowest inference cost on video-level annotation tasks.
On performance, under a fixed per-user interactivity threshold, Nemotron 3 Nano Omni sustains higher overall system throughput for video inference, delivering up to roughly 9.2 times the effective system capacity of other open-source omni models, and up to roughly 7.4 times for multi-document reasoning. NVIDIA says the model is intended to replace traditional multi-model pipeline architectures, reduce inference complexity and cost, and drive adoption of multimodal AI in finance, healthcare, scientific research, media, and other scenarios.