Tether launches on-device medical AI that outperforms Google’s models in benchmark tests
Tether’s AI Research Group has released QVAC MedPsy-1.7B and MedPsy-4B, specialized text-only medical language models built to run directly on low-power devices such as smartphones and wearables.
According to the team, the models outperform several large medical AI systems, including Google's MedGemma family, on standard benchmarks, and perform on par with much larger systems on medical reasoning and knowledge tasks, all while running entirely on-device and keeping data private.
Traditional healthcare AI systems rely on large cloud-hosted models, which means sensitive data such as patient records and diagnostic inputs must be transmitted to external servers, creating privacy and compliance risks. That architecture is coming under growing pressure as the healthcare AI sector is projected to grow from roughly $36 billion today to potentially over $500 billion by 2033.
Tether’s team says QVAC MedPsy challenges the scaling paradigm by focusing on efficiency.
The smartphone-friendly 1.7B model scored 62.62 across seven standard medical benchmarks, beating Google's MedGemma-1.5-4B-it by more than 11 points despite being less than half its size, according to the researchers. It also outperformed MedGemma 27B on real-world clinical tasks such as HealthBench Hard.
The 4B model hit 70.54 on the same tests, surpassing MedGemma-27B, a model nearly seven times its size, and delivered strong performance on HealthBench, HealthBench Hard, and MedXpertQA.
The results span eight benchmark sets, including MedQA, MedMCQA, MMLU Health, PubMedQA, AfriMedQA, MedXpertQA, and HealthBench. The team attributes the performance to staged medical training that combines supervised training, curated clinical reasoning data, and reinforcement learning.
The researchers also stress practicality: the models respond quickly with concise but complete answers, saving time and battery life, and they ship in easy-to-use compressed formats that fit comfortably on mobile devices with little loss in quality.
On output length, the 4B model generates responses in roughly 909 tokens, compared with about 2,953 for comparable systems, a 3.2x reduction. The 1.7B model averages around 1,110 tokens versus 1,901, a 1.7x cut.
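Those reduction factors follow directly from the reported averages; here is a minimal sketch of the arithmetic, using only the figures quoted above:

```python
# Reduction factors implied by the reported average response lengths.
avg_tokens = {
    "MedPsy-4B":   {"model": 909,  "baseline": 2953},
    "MedPsy-1.7B": {"model": 1110, "baseline": 1901},
}

for name, t in avg_tokens.items():
    factor = t["baseline"] / t["model"]
    print(f"{name}: {factor:.1f}x fewer output tokens")
# MedPsy-4B: 3.2x fewer output tokens
# MedPsy-1.7B: 1.7x fewer output tokens
```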
Both models are being released in quantized GGUF format, with compressed versions weighing approximately 1.2 GB and 2.6 GB respectively.
The models are now available for free under an open license on Hugging Face.
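For readers curious what full local execution looks like in practice, the sketch below shows one common way to run a quantized GGUF model entirely on-device with llama-cpp-python. The repository ID and file name are placeholders rather than confirmed identifiers from Tether's release, and llama-cpp-python is simply one widely used runtime for GGUF files, not necessarily the tooling Tether recommends.

```python
# Hypothetical sketch: fully local inference with a quantized GGUF model.
# The repo_id and filename below are placeholders -- substitute the actual
# identifiers from Tether's Hugging Face release.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download the compressed weights once (the article cites ~1.2 GB for the 1.7B build).
model_path = hf_hub_download(
    repo_id="tether/QVAC-MedPsy-1.7B-GGUF",  # placeholder repository ID
    filename="medpsy-1.7b-q4_k_m.gguf",      # placeholder quantized file name
)

# Load the model locally; after this point no data leaves the device.
llm = Llama(model_path=model_path, n_ctx=2048, n_threads=4)

response = llm.create_chat_completion(
    messages=[{"role": "user",
               "content": "List common early symptoms of iron-deficiency anemia."}],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```

After the one-time download, inference runs offline, which is the privacy property the article highlights: no patient data or prompts ever leave the device.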