Google Maps integrates Gemini, launching three major AI features focused on enterprise agents
At Google Cloud Next 2026, Google announced three Gemini-powered AI features for its Maps and Earth platforms, signaling that maps are no longer just navigation tools.
(Background summary: Google launches the eighth-generation TPU: two AI chips for training and inference that take aim at Nvidia’s pain points)
(Additional background: Google’s open-source design system Stitch: DESIGN.md enables Claude Code, Codex, and Antigravity to generate high-quality UI)
At Cloud Next in Las Vegas, Google announced that AI will generate real-world scenes directly on maps, cut satellite-imagery analysis from “weeks” to “minutes,” and open its AI models for identifying bridges and power lines to all enterprises.
These three developments seem independent but point in the same direction: Google is transforming maps from navigation tools into the perceptual foundation for enterprise AI agents.
Three keys, a new door
The three features announced at Cloud Next each target a geographic information task that previously required a lot of manual effort.
The first is “Map Grounding”. Enterprise users can enter a text prompt on the Gemini Enterprise Agent Platform to generate AI visualizations within real scenes from Google Street View. Advertising group WPP has tested the feature to create immersive customer ads. It is currently in private preview and limited to U.S. locations.
Simply put: brands no longer need to fly to New York first; they can see what an ad billboard would look like on a street corner in Times Square on their computer, with real buildings and sidewalks in the background, not a 3D rendered fake scene.
The second is “Aerial and Satellite Imagery Insights”. This new feature imports Google Earth satellite images into BigQuery for automated analysis. Urban planners can monitor construction progress in residential areas in real time, insurance companies can track post-disaster building damage, and Google claims this reduces manual image interpretation from “weeks” to minutes.
The third involves two “Earth AI Image Models”, which are now available in experimental access via Google Cloud Model Garden. These models are trained to recognize specific objects in satellite images, such as bridges, roads, and power lines.
In the past, enterprises wanting to do the same had to build and train AI systems themselves, a process that could take months. Partner Vantor has integrated these two models into its disaster recovery app Sentry, automatically marking damaged infrastructure after extreme weather.
Maps as the Perception Layer for AI Agents
These three features share a fundamental technical premise: geographic data is no longer just a record of where things are; it is a perceptual input that lets AI agents understand the physical world.
Earlier, Google released Maps Grounding Lite, available to all developers via the Model Context Protocol (MCP), enabling any large language model (LLM) to access Google Maps’ database of 300 million locations. FIFA World Cup 2026 and the Boston Marathon have adopted this grounding capability as the backend for AI-powered on-site event guides, and travel group TUI uses it to turn static itineraries into real-time personalized recommendations.
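To make the MCP integration concrete, here is a minimal sketch of the JSON-RPC 2.0 request an MCP client sends when an LLM agent invokes a tool on a server. The `tools/call` method and the envelope shape come from the MCP specification; the tool name `search_places`, its arguments, and the maps-grounding server it would target are assumptions for illustration only, since the article does not document Google's actual tool schema.

```python
import json

def build_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Serialize an MCP tool-call request as a JSON-RPC 2.0 message."""
    request = {
        "jsonrpc": "2.0",        # MCP is layered on JSON-RPC 2.0
        "id": request_id,
        "method": "tools/call",  # standard MCP method for invoking a server tool
        "params": {
            "name": tool_name,       # hypothetical tool name
            "arguments": arguments,  # hypothetical argument schema
        },
    }
    return json.dumps(request)

# Example: an agent asking a (hypothetical) maps-grounding MCP server
# for EV charging stations near Times Square.
payload = build_tool_call(
    request_id=1,
    tool_name="search_places",
    arguments={"query": "EV charging station", "lat": 40.758, "lng": -73.9855},
)
print(payload)
```

In a real deployment this payload would travel over an MCP transport (stdio or HTTP) to the grounding server, and the server's response would be fed back into the model's context as the "perceptual input" the article describes.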
This logic aligns with Gemini’s direction into consumer maps: Ask Maps allows users to query “Are there any available charging stations nearby?” through conversation, analyzing data contributed by 500 million community members; Gemini analyzes Street View and aerial images to generate realistic 3D route guidance with building facades.
From consumer to enterprise, the logic is the same: Gemini needs a map as a perceptual foundation to act in the physical world.
The moat around maps has never been just the density of Street View coverage; it is the depth of accumulated data and the number of enterprises that have built maps into their workflows as indispensable infrastructure.