Tencent's move is genuinely impressive: a 1.8-billion-parameter translation model that reportedly performs close to 32B-class models, shipped alongside two lightweight quantized variants at 2-bit and 1.25-bit precision. At those sizes it runs comfortably on mobile phones, putting a large-model experience within reach of ordinary devices. Clearly, parameter count isn't the only yardstick; small models can pack real power, and AI applications on mobile devices look set to take off. Tencent's strategy is sharp: control cost while holding quality. The large-model race is starting to shift toward efficiency.
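To see why those bit widths matter for phones, here is a minimal back-of-envelope sketch of the weight memory a 1.8B-parameter model needs at different precisions. Only the parameter count and the 2-bit / 1.25-bit figures come from the post; the rest is a simplifying assumption (weights only, ignoring KV cache and activations):

```python
# Rough weight-storage estimate for a 1.8B-parameter model at
# different quantization levels. Weights only -- runtime memory
# (KV cache, activations) would add to this.

PARAMS = 1.8e9  # 1.8 billion parameters, per the post

def weight_memory_gib(bits_per_param: float) -> float:
    """Approximate weight storage in GiB for a given bit width."""
    return PARAMS * bits_per_param / 8 / 2**30

for bits in (16, 4, 2, 1.25):
    print(f"{bits:>5}-bit: ~{weight_memory_gib(bits):.2f} GiB")
```

At 16-bit the weights alone are about 3.35 GiB, beyond what most phones can spare for one app; at 2-bit they drop to roughly 0.42 GiB, and at 1.25-bit to about 0.26 GiB, which is why such aggressive quantization is what makes on-device deployment plausible.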
