Why are giants (Google, Microsoft, Meta) now racing to "shrink" models instead of enlarging them?



The era of Small Language Models (SLMs) has begun, and the results are astonishing. Here's why small is the new "big":

1- Privacy (Edge AI): Running a high-intelligence model on your phone or laptop means your data never touches the cloud.
2- Response time (Latency): Small models can be up to 10 times faster on specialized tasks...
3- Cost: Running these models locally (via Ollama) provides excellent power at minimal energy cost...
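For point 3, a minimal sketch of what running a small model locally with Ollama looks like; the model name `phi3` is just one example of a small model, and this assumes Ollama is already installed on the machine:

```shell
# Download a small model to the local machine (one-time, a few GB)
ollama pull phi3

# Run it fully offline: the prompt and response never leave your device
ollama run phi3 "Summarize the benefits of edge AI in one sentence."
```

After the pull, no network connection is needed; inference runs on your own CPU/GPU, which is exactly the privacy and cost argument above.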

Do you rely on large models (Cloud) in your daily work, or have you started shifting to local models?
$META $GOOGL $MSFT