A Chinese world model soars to the top of the global rankings, decisively ahead of Google and NVIDIA, with 3D accuracy approaching perfection.
GigaWorld-1 is an impressive world model from a Chinese team!
GigaWorld-1, the latest work from Jiga Vision, surpasses Google and NVIDIA to rank first globally on WorldArena.
Moreover, it is the only embodied world model to break the 60-point threshold in overall scores.
What does this mean? Taking just the three core dimensions as an example, it leads by a wide margin:
Physics Adherence: a full 16% improvement over second place.
3D Accuracy: nearly perfect.
Visual Quality: also significantly ahead.
In other words, GigaWorld-1 is a truly versatile embodied world model that is not only visually realistic but also geometrically precise and physically accurate.
This means that Jiga Vision, a Chinese team rooted in Tsinghua University and staffed with core talent from top companies such as Alibaba, Baidu, and Horizon Robotics, has pulled off a textbook technological leap.
Emerging from the Most Stringent “Touchstone”
As we know, WorldArena is recognized as the “touchstone” in the field of world models.
It was jointly developed by eight top universities and research institutions in China and abroad: Tsinghua University, Princeton University, the National University of Singapore, Peking University, the University of Hong Kong, the Chinese Academy of Sciences, Shanghai Jiao Tong University, and the University of Science and Technology of China.
Rather than narrowly testing a single dimension, it constructs a three-dimensional evaluation system of 16 core sub-indicators and 3 real application tasks, subjecting embodied world models to the most stringent stress tests of perceptual accuracy, understanding of physical laws, 3D spatial cognition, and action prediction and execution.
Because of this, WorldArena attracted almost all leading world model teams globally to compete, with the initial list of participants including Google, Nvidia, and others.
The final result surprised everyone: the winner was not a tech giant but a low-profile specialist, Jiga Vision.
Its latest GigaWorld-1 successfully claimed the top prize with its hardcore strength!
The Perfect Fusion of Explicit Action Modeling and Differentiable Physics Engine
So why did GigaWorld-1 achieve such impressive results?
Firstly, from a technical standpoint, GigaWorld-1 is an AC-WM (Action-Conditioned World Model) specifically designed for embodied scenarios.
Compared to traditional world models, GigaWorld-1 deeply inherits and advances the core architecture of EmbodieDreamer released by Jiga Vision last July.
This solution introduces an explicit action-modeling mechanism that fundamentally ensures geometric consistency during video generation, and it innovatively integrates a differentiable physics engine to obtain precise physical parameters for robotic arms, achieving realistic simulation and strict adherence to complex physical interactions.
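The core idea of an action-conditioned world model (AC-WM) can be illustrated with a toy sketch: the commanded action enters the transition explicitly, so predicted rollouts stay consistent with the motion that was actually commanded. All class and method names below are illustrative assumptions; this is not GigaWorld-1's actual API, and a stand-in linear transition replaces the learned video generator.

```python
import numpy as np

class ToyActionConditionedWorldModel:
    """Predicts the next latent state from the current state and an
    explicit action vector, so generation is conditioned on actions
    rather than inferring them implicitly from past frames."""

    def __init__(self, state_dim: int, action_dim: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        # Stand-ins for learned transition weights.
        self.A = np.eye(state_dim) * 0.95
        self.B = rng.normal(scale=0.1, size=(state_dim, action_dim))

    def step(self, state: np.ndarray, action: np.ndarray) -> np.ndarray:
        # Explicit action conditioning: the action enters the transition
        # directly, keeping the rollout tied to the commanded motion.
        return self.A @ state + self.B @ action

    def rollout(self, state: np.ndarray, actions: list) -> list:
        states = [state]
        for a in actions:
            states.append(self.step(states[-1], a))
        return states

wm = ToyActionConditionedWorldModel(state_dim=4, action_dim=2)
traj = wm.rollout(np.zeros(4), [np.ones(2)] * 3)
print(len(traj))  # initial state plus three predicted steps
```

In a real AC-WM, `step` would be a learned video or latent-dynamics model, and the differentiable physics engine would supply the physical parameters that constrain it.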
Building upon this cutting-edge architecture, Jiga Vision further incorporated over ten thousand hours of high-quality real robotic operation video data for training, significantly enhancing the model’s generalization ability and high-precision action adherence in open scenes.
Currently, the core code and some datasets of GigaWorld-1 have been open-sourced.
In the half month since the open-source release, GigaWorld-1's downloads on Hugging Face have surpassed 16,000, reflecting strong recognition of its technology in academia and industry, as well as its influence in the developer community.
GigaWorld-1 will also serve as the official baseline for the upcoming GigaBrain Challenge @ CVPR 2026 international competition, to be held in the United States in three months, supporting global developers and promoting the embodied-intelligence ecosystem.
This raises a critical question —
Who is Jiga Vision?
The First Domestic Company Specializing in World Models
In the industry, Jiga Vision is one of the few players strong on both fronts, working diligently on technology while also securing substantial financing.
Earlier this month, Jiga Vision announced the completion of nearly 1 billion yuan in Pre-B financing from an impressive lineup of investors:
leading chip and automotive industry funds such as Zhongxin Juyuan, Shanghai Semiconductor Investment Fund, Linxin Capital, Xingyuan Capital, and Wanlin International, along with major state-owned platforms like CICC Capital, Su Chuang Investment, and Huachuang Capital.
Moreover, this is not the first time Jiga Vision has attracted capital attention.
As early as November 2025, Huawei’s Hubble Investment, in conjunction with Huakong Fund, completed a hundred million-level strategic investment in Jiga Vision.
In fact, Huawei has long been focused on world models, listing them among its top ten technology trends for the intelligent world of 2035.
Rather than building world models directly like global giants Google, NVIDIA, and Tesla, however, it backed what it saw as the most promising target in the Chinese market through Hubble Investment: Jiga Vision.
Jiga Vision is the first Chinese company to focus on world models, with industry-leading depth in both model architecture and data engines.
The company’s positioning is quite clear, focusing on physical AI and dedicated to world model-driven general intelligence in the physical world. Its technological moat is built on the “world model × embodied brain” dual-driven strategy, successfully securing dual championships in embodied brain and world model categories in world-class authoritative assessments.
Its product matrix includes the world model platform GigaWorld, the embodied foundation model GigaBrain, and the general-purpose embodied robot platform Maker, among other full-stack hardware and software products for physical AI.
GigaWorld: The “Digital Sandbox” of the Physical World
GigaWorld is Jiga Vision’s self-developed world model platform that can simulate the operational rules of the physical world and generate high-fidelity synthetic data.
Compared with traditional simulators, GigaWorld generates high-fidelity, controllable, and diverse embodied interaction data with geometric consistency and physical accuracy, achieving data amplification.
This enables the trained VLA model to achieve nearly 300% performance improvement across three generalization dimensions: new textures, new perspectives, and new object positions.
More critically, GigaWorld can bring about 10-100 times efficiency improvement.
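The data-amplification idea above can be sketched in a few lines: each real demonstration is expanded into synthetic variants along the three generalization axes the article names (textures, viewpoints, object positions). The trajectory names, texture lists, and the `amplify` helper are all hypothetical; a real world model would re-render each trajectory rather than just tag it.

```python
from itertools import product

# Toy stand-ins for a handful of real robot demonstrations.
real_trajectories = ["pick_cup_demo_01", "pick_cup_demo_02"]

# The three generalization axes mentioned in the text.
textures = ["wood", "marble", "metal"]
viewpoints = ["front", "left", "top"]
positions = ["center", "edge"]

def amplify(traj: str) -> list:
    # A real world model would re-render the trajectory under each
    # condition; here each combination is simply tagged as a variant.
    return [f"{traj}|{t}|{v}|{p}"
            for t, v, p in product(textures, viewpoints, positions)]

synthetic = [s for traj in real_trajectories for s in amplify(traj)]
print(len(synthetic) // len(real_trajectories))  # 18 variants per demo
```

Even this toy combinatorial expansion shows how a small pool of real demonstrations can be multiplied many times over, which is the mechanism behind the claimed efficiency gains.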
In the embodied direction, GigaWorld-0 was the world's first to give an embodied world model a core role in a high-level embodied foundation model; its open-source code has garnered over 1.5k GitHub stars, laying the groundwork for technical validation.
This latest GigaWorld-1, which topped WorldArena, is also currently the most advanced AC-WM globally.
In the driving direction, the DriveDreamer series was among the world's earliest applications of world models to autonomous driving.
Additionally, GigaWorld-Policy is the first globally to achieve comprehensive breakthroughs in world-action model (WAM) real-time performance, success rates, and training efficiency, surpassing mainstream WAMs in inference efficiency and performance and bringing world-action models into a large-scale scaling phase.
Empirical data shows that GigaWorld-Policy has achieved a tenfold increase in inference speed and training efficiency, alongside a significant 30% increase in task success rates, marking the formal entry of embodied intelligence into a new era driven by world models.
GigaBrain: The “Universal Brain” for Robots
GigaBrain is the end-to-end vision-language-action foundational model developed by Jiga Vision, which surpassed many models including Pi0.5 to attain the top rank in the world’s largest real-machine evaluation competition.
The subsequent release of GigaBrain-0.5M* is the world’s first embodied foundational model based on world models to efficiently learn and self-evolve through reinforcement learning.
It proposes a reinforcement learning paradigm based on world models and employs an iterative four-phase closed-loop training process.
In high-difficulty long-duration tasks involving complex scenarios like folding paper boxes, coffee preparation, and clothing folding — which require multi-stage operations, fine perception, and continuous decision-making — GigaBrain-0.5M* achieved nearly 100% task success rates and can consistently reproduce results, showcasing exceptional strategy robustness.
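The world-model-based reinforcement learning paradigm described above can be caricatured as a closed loop: the policy is improved on rollouts imagined by the world model rather than only on real-robot data, iterated over several phases. Everything below is a deliberately tiny illustration under assumed toy dynamics; none of the names come from GigaBrain's actual training code.

```python
import random

random.seed(0)

def world_model(state: int, action: str) -> int:
    # Stand-in learned dynamics: "right" drifts toward the goal at 5.
    return state + (1 if action == "right" else -1)

def reward(state: int, goal: int = 5) -> int:
    return -abs(goal - state)

# A trivial two-action stochastic policy.
policy = {"right": 0.5, "left": 0.5}

def sample_action() -> str:
    return "right" if random.random() < policy["right"] else "left"

# Iterative closed loop: imagine rollouts with the world model,
# score them, then nudge the policy toward the better action.
for phase in range(4):
    returns = {"right": 0.0, "left": 0.0}
    for _ in range(200):
        s, a = 0, sample_action()
        for _ in range(5):           # short imagined rollout, no real robot
            s = world_model(s, a)
        returns[a] += reward(s)
    better = max(returns, key=returns.get)
    policy[better] = min(0.99, policy[better] + 0.1)
    policy["left" if better == "right" else "right"] = 1 - policy[better]

print(policy)  # the policy now strongly prefers "right"
```

The point of the sketch is the loop structure, not the toy dynamics: because rollouts are imagined, each improvement phase is cheap, which is what lets such systems self-evolve without wearing out real hardware.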
The “Dream Team” of Physical AI Gathers
Beyond technology and financing, what stands out even more about Jiga Vision is its core team:
Founder and CEO Huang Guan holds an engineering PhD from Tsinghua University's Department of Automation.
He previously served as the head of visual perception technology at Horizon Robotics, a partner & VP of algorithms at Jianzhi Robotics, and has experience working at top research institutions like Microsoft Research Asia and Samsung Research China.
He has fully experienced the technological and industrial development of physical AI over the past decade, leading teams to win world championships in global authoritative AI competitions multiple times and publishing numerous globally recognized AI achievements.
Co-founder and Chief Scientist Zhu Zheng, a youth scholar from the Zhiyuan Institute, has published over 70 top papers with nearly 20,000 citations.
Several of his works have had substantial impact: he has been selected for the global top 2% of scientists list for four consecutive years, received honors including the Wu Wenjun Natural Science Award, a Best Student Paper Award, and a CCF Outstanding Paper Award, chaired several top conferences, and won multiple competition championships.
Co-founder Sun Shaoyan, formerly the director of Alibaba Cloud and general manager of Horizon’s data closed-loop product line, possesses industry-leading experience in ultra-large-scale data closed-loop products and architecture in the physical world.
He spearheaded the implementation of the industry's first intelligent-driving data closed-loop system, significantly improving data-processing efficiency and providing vital infrastructure for the development of intelligent driving technology.
Partner and VP of Engineering Mao Jiming has over 16 years of experience in simulation/engineering/data/distributed architecture.
He has served as the head of simulation and engineering at Baidu Apollo and has held T10-level architect roles at Baidu and Yingche, leading the technical development and implementation of several core projects in autonomous driving and world models. He has a deep accumulation in high-quality data generation, end-to-end autonomous driving architecture design, and distributed system optimization.
Additionally, Jiga Vision's core team includes top world-model scientists with over ten first-authored papers at top conferences during their PhDs, industry experts with over ten years of full-stack physical-AI production experience, members of Huawei's "Genius Youth" program, and top algorithm and infrastructure experts in the linear scaling of large training clusters. It is one of the few teams in the industry with both cutting-edge innovation capability in next-generation physical AI and traditional full-stack physical-AI production experience.
It can be said that this team has fully experienced the development of physical AI over the past decade across CV, autonomous driving, embodied foundational models, and world models, consistently delivering industry-leading world-class results at each stage.
When they come together, they form a “dream team” that continuously leads the technological evolution of embodied world models.
From data engines to closed-loop simulators (AC-WM) to world-action models (WAM), Jiga Vision has always been at the forefront.
Whether in the current iteration of world models and embodied intelligence infrastructure or in the future of AGI, Jiga Vision will continue to build the most solid technological foundation.
Competition official website:
Open source code:
Open source models and data:
Source: Quantum Bit