Google DeepMind open-sources the Gemma 4 multimodal model family


ME News, April 3rd (UTC+8): Google DeepMind has open-sourced the Gemma 4 multimodal model family. The models accept text and image inputs (the smaller models also accept audio) and generate text outputs. The family ships in both pre-trained and instruction-tuned variants, supports context windows of up to 256K tokens, and covers more than 140 languages. Two architectures are used, dense and Mixture of Experts (MoE), across four sizes: E2B, E4B, 26B A4B, and 31B. Core capabilities include high-performance inference, scalable multimodal processing, on-device optimization, longer context windows, strengthened coding and agentic capabilities, and native system-prompt support. On the technical side, the models use a hybrid attention mechanism in which the global layers use unified key-value pairs and scaled RoPE (p-RoPE). The E2B and E4B models use Per-Layer Embeddings (PLE), so their effective parameter count is lower than their total parameter count, while the 26B A4B MoE model activates only 3.8B parameters during inference and runs at a speed close to that of a 4B-parameter model. (Source: InfoQ)
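The article does not describe DeepMind's routing implementation, but the active-parameter figure follows from how sparse MoE layers generally work: a learned router selects a small subset of experts per token, so only that subset's weights participate in each forward pass, while the total parameter count grows with the number of experts. Below is a minimal, generic top-k MoE sketch in PyTorch; the class name `TopKMoE` and all dimensions are illustrative assumptions, not details of Gemma 4.

```python
import torch
import torch.nn as nn

class TopKMoE(nn.Module):
    """Generic sparse MoE layer (illustrative, not Gemma 4's design):
    each token is routed to k of n experts, so only a fraction of the
    layer's parameters is used per token."""

    def __init__(self, d_model=512, d_ff=2048, n_experts=8, k=2):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)  # scores each expert per token
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )
        self.k = k

    def forward(self, x):                              # x: (tokens, d_model)
        scores = self.router(x)                        # (tokens, n_experts)
        weights, idx = scores.topk(self.k, dim=-1)     # keep only the top k experts
        weights = weights.softmax(dim=-1)              # normalize over the chosen k
        out = torch.zeros_like(x)
        for slot in range(self.k):                     # dispatch tokens to experts
            for e in range(len(self.experts)):
                mask = idx[:, slot] == e               # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * self.experts[e](x[mask])
        return out

x = torch.randn(16, 512)       # 16 tokens
y = TopKMoE()(x)               # (16, 512); only 2 of 8 experts run per token
```

Under this scheme total parameters scale with the number of experts while per-token compute scales only with k, which is why a 26B-parameter MoE that activates 3.8B parameters can run at roughly the speed of a 4B dense model.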
