Tether launches local mobile medical AI: 1.7B small model surpasses models 16 times larger, eliminating reliance on the cloud
According to Beating Monitoring, the AI research team at Tether, issuer of USDT, today announced the QVAC MedPsy series of medical language models: localized medical AI designed specifically for smartphones, wearables, and other low-compute devices. The models run without relying on cloud servers and, thanks to an efficient architecture, achieve performance well beyond what their size suggests. The 1.7B-parameter version averaged 62.62 across seven closed medical benchmarks, beating Google's MedGemma-4B by 11.42 points, and outperformed the parameter-heavy MedGemma-27B (nearly 16 times larger) in realistic clinical scenarios such as HealthBench Hard. The 4B-parameter version scored higher still at 70.54, fully surpassing larger models while cutting inference token consumption by up to 3.2x. Both are released in quantized GGUF format (about 1.2 GB for the 1.7B model), suitable for mobile and edge deployment.

The release challenges the traditional assumption that bigger models mean better performance, instead emphasizing efficiency through staged post-training medical fine-tuning (supervised training on clinical reasoning data plus reinforcement learning), enabling genuinely local, privacy-preserving, low-latency inference. Tether CEO Paolo Ardoino said this lets medical AI process sensitive data directly on hospital premises or on device endpoints without transmitting it to the cloud, reducing cost, latency, and privacy risk. He added that it has the potential to reshape medical AI infrastructure by promoting localized deployment, especially in developing regions worldwide.