CoinWorld News reports that the SWE-bench team has released a new benchmark, ProgramBench, on which 9 cutting-edge AI models all achieved a 0% pass rate at reconstructing real software. The benchmark, released jointly with the Meta AI research team, Stanford, and Harvard, requires AI agents to reimplement a complete codebase from scratch, given only a compiled binary and the user documentation, such that it reproduces the original program's behavior. The benchmark comprises 200 tasks, ranging from small CLI tools to large projects. In testing, no model met the main "full pass" metric; Claude Opus 4.7 led on the auxiliary "almost pass" metric with 3%, while every other model scored 0%.
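The article does not describe how ProgramBench judges whether a reconstruction "reproduces the behavior of the original program," but a standard approach to that kind of check is differential testing: run the reference binary and the rebuilt candidate on the same inputs and compare their observable outputs. The sketch below is a minimal, hypothetical illustration of that idea (the function name `behaviors_match` and the use of stdin/stdout/exit-code comparison are assumptions, not ProgramBench's actual harness):

```python
import subprocess
import sys

def behaviors_match(original_cmd, rebuilt_cmd, test_inputs, timeout=10):
    """Hypothetical differential test: feed each input to both programs
    and require identical stdout and exit codes."""
    for data in test_inputs:
        ref = subprocess.run(original_cmd, input=data,
                             capture_output=True, timeout=timeout)
        cand = subprocess.run(rebuilt_cmd, input=data,
                              capture_output=True, timeout=timeout)
        if ref.returncode != cand.returncode or ref.stdout != cand.stdout:
            return False
    return True

# Illustration: two tiny "programs" standing in for the reference binary
# and a rebuilt candidate. One upper-cases stdin, the other lower-cases it.
upper = [sys.executable, "-c",
         "import sys; sys.stdout.write(sys.stdin.read().upper())"]
lower = [sys.executable, "-c",
         "import sys; sys.stdout.write(sys.stdin.read().lower())"]

print(behaviors_match(upper, upper, [b"hello", b"SWE"]))  # identical behavior
print(behaviors_match(upper, lower, [b"Hello"]))          # divergent behavior
```

A real harness would of course cover far more surface (file I/O, command-line flags, error paths), which hints at why a strict "full pass" bar is so hard to clear.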
