Google releases Gemma 4


Four sizes: 1B, 13B, 27B, and a dense 31B version, all released under the Apache 2.0 license with no restrictions on commercial use.
The license change matters more than the model itself. Previous Gemma releases shipped under Google's own restrictive license; with Apache 2.0, Gemma now competes head-on with Meta's Llama.
Model highlights: multimodal, covering text, vision, and audio. The dense 31B version scores 89.2% on AIME 2026, 80% on LiveCodeBench v6, and a 2150 Codeforces Elo.
The 27B size is well suited to local deployment: it can run on a single RTX 4090.
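A quick back-of-envelope calculation shows why a 27B model fits on a 4090's 24 GB of VRAM once quantized. The numbers below are rough illustrations (weights only, ignoring activations and KV cache), not official requirements.

```python
# Rough VRAM estimate for model weights at different quantization levels.
# Assumption: a 4090 has 24 GB of VRAM; figures ignore activation and
# KV-cache overhead, so real headroom is smaller.

def weight_memory_gb(params_billions: float, bits_per_param: float) -> float:
    """Memory needed just for the weights, in gigabytes."""
    total_bytes = params_billions * 1e9 * bits_per_param / 8
    return total_bytes / 1e9

for bits, label in [(16, "fp16/bf16"), (8, "int8"), (4, "int4")]:
    gb = weight_memory_gb(27, bits)
    verdict = "fits" if gb < 24 else "does not fit"
    print(f"27B @ {label}: ~{gb:.1f} GB -> {verdict} in 24 GB VRAM")
```

At fp16 the weights alone take ~54 GB, and even int8 needs ~27 GB, so 4-bit quantization (~13.5 GB) is what makes single-GPU local deployment practical.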
E4B and E2B are edge versions designed for smartphones and IoT devices. Google is building the ecosystem for Gemini Nano 4.
Llama has dominated the open-source LLM community for a long time. This time, Google isn't just testing the waters; it's a full-scale attack, covering parameter ranges from 2B to 31B, removing commercial restrictions with an Apache license, and spanning cloud to edge.
This is good news for independent developers and small teams. The more intense the competition, the more free options there are.
For Meta, Llama’s competitive moat is narrowing.