Anthropic releases BioMysteryBench: biology questions that five experts could not answer, of which Claude Mythos solves 30%

AIMPACT News, April 30 (UTC+8), according to monitoring by Dongcha: Anthropic has released BioMysteryBench, a bioinformatics benchmark of 99 questions. The questions were created by domain experts from real datasets (DNA/RNA sequencing, proteomics, metabolomics, etc.), with answers derived from objective properties of the data or from experimentally verified metadata rather than from researchers’ subjective judgment. Typical questions include determining which gene was knocked out in the experimental group from RNA-seq data, or inferring parentage from whole-genome sequencing data. The evaluation environment gives Claude a container pre-installed with common bioinformatics tools, allows software installation via pip and conda, and provides access to public databases such as NCBI and Ensembl for reference genomes; only the final answer is graded, with no restrictions on the analysis method. Of the 99 questions, 76 were answered correctly by at least one human expert (human-solvable), while the remaining 23 remained unsolved after attempts by up to five domain experts (human-difficult).
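The benchmark deliberately leaves the analysis open-ended, but the knockout-identification example above gives a sense of the kind of work each question involves. Below is a minimal sketch, in Python, of one way such a question might be approached: file names, sample labels, and the expression threshold are hypothetical, and the benchmark does not prescribe this or any other method.

```python
# Hypothetical sketch: given RNA-seq counts for control vs. experimental
# samples, find the gene whose expression collapses in the experimental
# group, i.e. the likely knockout target. Layout and names are assumed.
import numpy as np
import pandas as pd

# counts.tsv: rows = genes, columns = samples (assumed file and layout)
counts = pd.read_csv("counts.tsv", sep="\t", index_col=0)
control = counts[["ctrl_1", "ctrl_2", "ctrl_3"]]
knockout = counts[["ko_1", "ko_2", "ko_3"]]


def cpm(df):
    # Counts-per-million normalization so library-size differences
    # between samples do not dominate the comparison.
    return df / df.sum(axis=0) * 1e6


ctrl_cpm, ko_cpm = cpm(control), cpm(knockout)

# A knocked-out gene should be well expressed in controls but near zero
# in the experimental group; rank genes by that contrast.
score = np.log2(ctrl_cpm.mean(axis=1) + 1) - np.log2(ko_cpm.mean(axis=1) + 1)
expressed = ctrl_cpm.mean(axis=1) > 10  # ignore genes silent everywhere
candidates = score[expressed].sort_values(ascending=False)

print(candidates.head(5))  # top-ranked gene is the likely knockout
```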
For the human-solvable questions, Claude Opus 4.6 achieves 77.4% accuracy, and Mythos Preview improves on that further. On the 23 human-difficult questions, Sonnet 4.6 and more advanced models solve a significant proportion, with Mythos Preview reaching 30%. Trajectory analysis shows two main strategies: drawing on cross-paper knowledge internalized during training to carry out reasoning that would require a meta-analysis by humans, and, when uncertain, running multiple analysis methods in parallel and taking the intersection of the resulting evidence chains.
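As a rough illustration of that second strategy, the sketch below combines candidate answers from several independent analyses and keeps the ones they agree on. The analysis names and candidate sets are invented for illustration; the report does not specify how the intersection is actually computed.

```python
# Illustration only: keep candidates supported by every independent method,
# and also rank candidates by how many methods support them.
from collections import Counter

evidence = {
    "differential_expression": {"GENE_A", "GENE_B"},
    "coverage_dropout": {"GENE_A"},
    "variant_calling": {"GENE_A", "GENE_C"},
}

# Strict intersection: candidates every evidence chain agrees on
consensus = set.intersection(*evidence.values())

# Softer variant: count how many independent methods support each candidate
support = Counter(g for genes in evidence.values() for g in genes)

print("consensus:", consensus)            # {'GENE_A'}
print("support:", support.most_common())  # [('GENE_A', 3), ('GENE_B', 1), ...]
```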
Reliability analysis reveals a subtler difference: on human-solvable questions, 86% of Opus 4.6’s correct answers are reproduced in at least 4 of 5 attempts, indicating stable performance; on human-difficult questions, that ratio drops to 44%, and nearly half of the correct answers are hit only once or twice in 5 attempts, suggesting chance success along some reasoning path. More than the accuracy gap, this reliability gap marks the boundary of the model’s capability.
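For concreteness, the stability statistic described above can be computed as below. This is a minimal sketch under the stated reading of the metric (a correct answer counts as stable if it recurs in at least 4 of 5 attempts); the per-question attempt records are invented.

```python
# Hypothetical attempt records: 1 = correct attempt, 0 = incorrect attempt.
attempts = {
    "q01": [1, 1, 1, 1, 1],
    "q02": [1, 1, 1, 1, 0],
    "q03": [1, 0, 0, 0, 0],  # one-off hit, likely a lucky reasoning path
}

# Restrict to questions the model ever answers correctly, then ask how many
# of those are correct in at least 4 of the 5 attempts.
ever_correct = {q: runs for q, runs in attempts.items() if any(runs)}
stable = [q for q, runs in ever_correct.items() if sum(runs) >= 4]

reliability = len(stable) / len(ever_correct)
print(f"{reliability:.0%} of correct answers are stable (>=4/5 attempts)")
```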
Genentech and Roche simultaneously released CompBioBench, a similarly designed benchmark of 100 computational biology questions, on which Claude Opus 4.6 scored 81% overall and 69% on the hardest questions, corroborating the conclusions of BioMysteryBench.
(Source: BlockBeats)
