Google DeepMind Releases Lyria 3 Pro: AI Music Transforms from "30-Second Preview" into Full Songs
Google DeepMind launched Lyria 3 Pro on March 25, just six weeks after the release of the previous version, Lyria 3. The upgrade centers on one core improvement: extending generation length from 30 seconds to 3 minutes, while teaching the model to genuinely understand the internal structure of a song.
This is not a minor update. Thirty seconds is enough for background sound, but not for a full song: no verses, no transitions, no climaxes. The new "structure-aware" capability in Lyria 3 Pro lets users specify sections such as intro, verse, chorus, and bridge in their prompts, and the model arranges the segments and dynamic changes accordingly. It marks a key step in the evolution of AI music tools from simple "generators" into true "creative tools."
Suno and Udio have been doing this for a year
Honestly, Suno and Udio already had this capability by early 2025, with longer generation lengths and more flexible structure control. Google catching up now signals that it is entering the music AI space in earnest: backed by the distribution power of the Gemini ecosystem, Lyria 3 Pro will reach far more users than any standalone music AI tool.
The simultaneous rollout on Vertex AI is another signal: Google aims not only to build consumer tools but also to embed Lyria in enterprise workflows.
What it can do specifically
It supports text, images, and video as input, and the model automatically matches the musical style to the content's mood. Generated output includes vocals, lyrics, and instrumentation, covering multiple languages. All output carries an embedded SynthID watermark to mark its AI origin, consistent with DeepMind's approach to content traceability.
Who can use it and how
Paid Gemini App users can access the service now, with daily quotas tiered by plan: AI Plus about 10 songs per day, Pro about 20, Ultra about 50. Free users remain limited to the 30-second version of Lyria 3.
Supported languages include English, Japanese, Korean, Hindi, Spanish, Portuguese, German, and French, among others; the feature is limited to users aged 18 and above. To use it: Gemini App → Create Music → select "Thinking" or "Pro" mode.
Developers can access it via Google AI Studio and the Gemini API; Vertex AI is now in public preview for enterprise on-demand generation scenarios. Google Vids and Google's music production tool ProducerAI have also begun integrating it, and enterprise Workspace support is expected within a few days.
Copyright issues remain unresolved
Google states that its training-data usage complies with relevant agreements with artists, but it has not disclosed specific sources or the scope of licensing. This sits in the same context as the copyright lawsuits facing Suno and Udio: the legal debate over AI music training data is still unresolved, and Google's statement reads more like a position declaration than a definitive answer.
Lyria 3 Pro is gradually opening to users, with some regions possibly experiencing delays.