Lenovo AI server achieves the industry's first local deployment of the full DeepSeek large model, using less than 1TB of memory and supporting 100 concurrent users.
On March 3, Jinshi Data reported that Lenovo Group recently announced the industry's first single-machine deployment of the DeepSeek-R1/V3 671B large model, running on a Lenovo Wentian WA7780 G3 server. The system delivers a smooth experience for 100 concurrent users while using less than the industry-recognized 1TB of memory (768GB in practice). According to Lenovo's test data, in a standard test environment with 512 tokens, the system sustains a stable output of 10 tokens per second for each of 100 concurrent users, while keeping the time to first token under 30 seconds.
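The reported figures imply an aggregate throughput and a memory margin that are easy to sanity-check. A minimal back-of-envelope sketch, using only the numbers stated in the article (100 users, 10 tokens/sec per user, 768GB actual vs. the 1TB = 1024GB benchmark):

```python
# Back-of-envelope check of the figures reported in the article.
concurrent_users = 100        # concurrent users in Lenovo's test
tokens_per_user_per_sec = 10  # stable per-user output rate
memory_used_gb = 768          # actual memory footprint
memory_threshold_gb = 1024    # the "1TB" industry benchmark, in GB

# Aggregate generation rate across all users.
aggregate_tokens_per_sec = concurrent_users * tokens_per_user_per_sec
print(aggregate_tokens_per_sec)  # 1000 tokens/sec in total

# Headroom under the 1TB memory threshold.
print(memory_threshold_gb - memory_used_gb)  # 256 GB to spare
```

So the claim amounts to roughly 1,000 tokens per second of total output, with about a quarter of the 1TB budget left unused.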