Luma AI releases UNI-1 multimodal reasoning model
ME News Report, April 3 (UTC+8): Luma AI has announced the launch of UNI-1, a multimodal reasoning model whose core concept is "less manual effort, more intelligence." Built on Unified Intelligence, the model aims to understand user intent, respond to instructions, and collaborate with users to generate visual content. UNI-1's capabilities include common-sense scene completion, spatial reasoning, physically plausible transformations, multilingual text rendering (such as Chinese cursive script and Morse code), and plausible transformations guided by reference images (such as turning an apple into an apple pie or a cat into a lion). It can also generate images containing complex spatial information, such as infographics and 3D diagrams. The model further supports video generation, intelligent guidance, and cultural-awareness evaluation, and is available through free trials, paid plans, enterprise services, and API access. According to the article, this marks an important step for Luma in generative AI toward visual content generation that better understands the physical world and is more controllable and logically consistent. (Source: InfoQ)