Google DeepMind open-sources the Gemma 4 multimodal model family

ME News Update, April 3 (UTC+8): Google DeepMind has open-sourced the Gemma 4 multimodal model family. The models accept text and image inputs (the smaller models also accept audio) and generate text outputs. The family includes both pre-trained and instruction-tuned variants, offers context windows of up to 256K tokens, and supports more than 140 languages. It comes in two architectures, dense and mixture-of-experts (MoE), in four sizes: E2B, E4B, 26B A4B, and 31B.
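As a rough illustration of the text-plus-image input pattern described above, the sketch below loads a multimodal checkpoint with the Hugging Face transformers library. The repository id is a placeholder, and the processor classes and chat-template message format are assumptions for illustration; the released checkpoints may use different names and interfaces.

```python
# Minimal sketch: text + image input -> text output via Hugging Face transformers.
# The model id is a hypothetical placeholder, not a confirmed repository name.
from transformers import AutoProcessor, AutoModelForImageTextToText
from PIL import Image

model_id = "google/<gemma-multimodal-checkpoint>"  # placeholder repo id

processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(model_id, device_map="auto")

# Assumes the checkpoint ships a chat template that accepts interleaved image/text entries.
messages = [{
    "role": "user",
    "content": [
        {"type": "image", "image": Image.open("photo.jpg")},
        {"type": "text", "text": "Describe this image in one sentence."},
    ],
}]

inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt",
).to(model.device)

output = model.generate(**inputs, max_new_tokens=64)
print(processor.decode(output[0], skip_special_tokens=True))
```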

Its core capabilities include high-performance inference, scalable multimodal processing, on-device optimization, expanded context windows, improved coding and agent capabilities, and native system prompt support. On the technical side, the models use a hybrid attention mechanism in which the global layers use unified key-value pairs and a scaled RoPE (p-RoPE). The E2B and E4B models adopt per-layer embedding (PLE), so their effective parameter count is lower than their total parameter count. The 26B A4B MoE model activates only about 3.8B parameters at inference time, giving it a runtime speed close to that of a 4B-parameter model. (Source: InfoQ)
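The mixture-of-experts figure quoted above (26B total parameters, roughly 3.8B active per token) follows the standard top-k routing pattern: a router scores the experts for each token, and only the top-scoring few are actually run. The sketch below is a generic top-k MoE layer in PyTorch, not DeepMind's implementation; the hidden sizes, expert count, and k are illustrative values only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    """Generic top-k mixture-of-experts layer (illustrative sketch).

    Only k experts run per token, so the parameters touched at inference
    time are a small fraction of the layer's total parameters -- the same
    idea behind a 26B-parameter MoE model activating only a few billion
    parameters per token.
    """
    def __init__(self, d_model=512, d_ff=2048, num_experts=16, k=2):
        super().__init__()
        self.router = nn.Linear(d_model, num_experts, bias=False)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        ])
        self.k = k

    def forward(self, x):                        # x: [tokens, d_model]
        scores = self.router(x)                  # [tokens, num_experts]
        weights, idx = scores.topk(self.k, dim=-1)
        weights = F.softmax(weights, dim=-1)     # mixing weights over the chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.k):               # run only the selected experts per token
            for e in idx[:, slot].unique().tolist():
                mask = idx[:, slot] == e
                out[mask] += weights[mask, slot].unsqueeze(-1) * self.experts[e](x[mask])
        return out

layer = TopKMoE()
tokens = torch.randn(8, 512)
print(layer(tokens).shape)   # torch.Size([8, 512])
```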
