Tencent open-sourced Hy3 preview version, code benchmark tests improved by 40% over the previous generation


Tencent officially open-sourced the Hy3 preview version large language model on April 23 on GitHub, Hugging Face, and ModelScope, and also provides paid API services via Tencent Cloud. According to Decrypt's April 24 report, training of the Hy3 preview version began in late January, less than three months before publication.

Hy3 Model Architecture and Development Background

According to Tencent’s official announcement, the Hy3 preview version uses a mixture-of-experts architecture: each query is routed to a selected subset of expert subnetworks rather than activating all parameters at once, which reduces computational requirements.
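The routing idea described above can be sketched as a top-k gate. The expert count, dimensions, and gating scheme below are illustrative assumptions, since Tencent has not published Hy3’s exact configuration:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def moe_forward(token, experts, gate_weights, k=2):
    """Route a token to its top-k experts and mix their outputs.

    Only k expert subnetworks run per token, so compute scales with k
    rather than with the total number of experts."""
    logits = gate_weights @ token           # one gating score per expert
    topk = np.argsort(logits)[-k:]          # indices of the k best-scoring experts
    probs = softmax(logits[topk])           # normalize only over the chosen experts
    return sum(p * experts[i](token) for p, i in zip(probs, topk))

# Toy usage: 4 "experts" as random linear maps over an 8-dim token.
rng = np.random.default_rng(0)
experts = [lambda t, W=rng.standard_normal((8, 8)): W @ t for _ in range(4)]
gate_weights = rng.standard_normal((4, 8))
out = moe_forward(rng.standard_normal(8), experts, gate_weights, k=2)
print(out.shape)  # (8,)
```

With k=2 of 4 experts active, only half of the expert parameters are touched per token; a production model would use far more experts and a learned gate, but the compute-saving mechanism is the same.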

Tencent’s previous flagship model, Hy2, has over 400 billion parameters. Tencent’s official statement indicates that 295 billion parameters is the configuration optimized for inference efficiency; beyond this scale, the marginal benefit of additional parameters is no longer worthwhile.

According to Decrypt’s report, Hy3’s training was led by Tencent’s chief artificial intelligence scientist Yao Shunyu. Hy3 training officially began after he completed a ground-up rebuild of the infrastructure for Hy3’s pre-training and reinforcement learning stacks in February 2026.

Key Benchmark Test Data

Based on the benchmark test results disclosed in Tencent’s official announcement:

- SWE-bench Verified (fixing real GitHub code errors): Hy3 preview version 74.4%, Hy2 53.0%; for comparison, GLM-5 77.8%, Kimi-K2.5 76.8%, Claude Opus 4.6 80.8%
- Terminal-Bench 2.0 (autonomous command-line task execution): Hy3 preview version 54.4%, Hy2 23.2%
- BrowseComp (complex web search tasks): Hy3 preview version 67.1%, Hy2 28.7%
- WideSearch: Hy3 preview version 70.2%, higher than GLM-5 and Kimi-K2.5 but below Claude Opus 4.6’s 77.2%
- Tsinghua University mathematics PhD qualification exam (Spring 2026): average score over three runs (avg@3) of 88.4, the highest among Chinese models
- 2025 Chinese High School Biology Olympiad (CHSBO 2025): 87.8, the highest among comparable Chinese models
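The headline’s "40%" figure can be checked against the SWE-bench Verified numbers above as a relative gain over Hy2:

```python
# Relative improvement on SWE-bench Verified, from the scores quoted above.
hy2, hy3 = 53.0, 74.4
relative_gain = (hy3 - hy2) / hy2 * 100
print(f"{relative_gain:.1f}%")  # 40.4%
```

So the roughly 40% improvement in the headline matches the relative jump on the code-repair benchmark, not an absolute percentage-point difference.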

Deployment Platforms and API Pricing

According to Tencent’s official announcement, the Hy3 preview version has been deployed on the following platforms: Yuanbao, QQ, Tencent Docs, CodeBuddy, WorkBuddy, and OpenClaw.

Tencent Cloud’s API pricing is $0.18 per million input tokens and $0.59 per million output tokens; the monthly fee for the personal token plan starts at about $4.10. Tencent’s announcement also shows that Hy3’s first-token latency on CodeBuddy and WorkBuddy is 54% lower than the previous generation, end-to-end generation time is shortened by 47%, and it successfully completes a 495-step agent workflow.
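A back-of-envelope cost estimate using the per-token prices quoted above; the request sizes are made-up illustrative numbers, not Tencent figures:

```python
# Prices from the article: $0.18 per 1M input tokens, $0.59 per 1M output tokens.
INPUT_PRICE = 0.18 / 1_000_000   # USD per input token
OUTPUT_PRICE = 0.59 / 1_000_000  # USD per output token

def request_cost(input_tokens, output_tokens):
    """Cost in USD for a single API request."""
    return input_tokens * INPUT_PRICE + output_tokens * OUTPUT_PRICE

# e.g. 1,000 requests, each with a 2,000-token prompt and a 1,000-token reply:
total = 1000 * request_cost(2000, 1000)
print(f"${total:.2f}")  # $0.95
```

At these rates, output tokens dominate the bill for generation-heavy workloads, which is typical for coding-agent use cases like the 495-step workflow mentioned above.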

Frequently Asked Questions

When will Tencent Hy3 preview version be released, and on which platforms can it be obtained?

According to Tencent’s official announcement and Decrypt’s April 24, 2026 report, the Hy3 preview version was open-sourced on April 23, 2026 (Thursday) on GitHub, Hugging Face, and ModelScope, with Tencent Cloud also simultaneously providing paid API services.

Compared with the previous model Hy2, what are the main differences in Hy3 preview version’s benchmark test results?

According to Tencent’s official announcement, the SWE-bench Verified score rose from Hy2’s 53.0% to 74.4%; BrowseComp rose from 28.7% to 67.1%; and Terminal-Bench 2.0 rose from 23.2% to 54.4%.

What is the API pricing for the Hy3 preview version?

According to Tencent Cloud’s official pricing, the Hy3 preview version API costs $0.18 per million input tokens and $0.59 per million output tokens; the monthly fee for the personal token plan starts at about $4.10.
