I came across an interesting development: last week, on April 21, Moondream launched a new service called "Lens," which specializes in improving the accuracy of visual language models (VLMs).
Until now, VLMs have performed well in laboratory settings, but their accuracy drops significantly in real-world scenarios. Lens is a fine-tuning service designed to close that gap, supporting both reinforcement learning and supervised fine-tuning. It operates as a pay-as-you-go API, so you pay only for what you use.
What's remarkable is that it achieves significant improvements with a small amount of data. For example, when applied to analyzing live NBA broadcast footage, the F1 score jumped from 28% to 79%, and false detections were also greatly reduced.
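For context on what that F1 jump means: F1 is the harmonic mean of precision and recall, so it penalizes both false positives and false negatives. A minimal sketch of the computation (the detection counts below are hypothetical, chosen only to illustrate scores near the reported 28% and 79%; they are not Moondream's data):

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1 = harmonic mean of precision and recall.

    tp: true positives, fp: false positives, fn: false negatives.
    """
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts: many false detections yield a low F1;
# after fine-tuning, fewer errors yield a high F1.
before = f1_score(tp=28, fp=90, fn=55)  # ≈ 0.28
after = f1_score(tp=79, fp=25, fn=17)   # ≈ 0.79
print(f"F1 before: {before:.2f}, after: {after:.2f}")
```

Because F1 is a harmonic mean, even a modest reduction in false detections can move the score substantially, which is why a small amount of fine-tuning data can produce such a visible jump.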
It is also said to outperform existing models on tasks like identifying countries from street-view images and processing medical images. It feels like a real step toward making visual language models practical.
Moondream's early partner, PTZOptics, plans to incorporate Lens to improve the accuracy of its target tracking and anomaly detection. Moondream previously released the Photon inference engine; Lens complements it, balancing speed and accuracy in VLM deployment.
Steady improvements like these, aimed at solving real-world deployment challenges, will likely drive broader adoption of VLMs.