Challenge
Reasoning models usually demand large post-training budgets and very long reasoning chains (32k–64k tokens) to deliver top performance.
Built at an affordable post-training cost, Fractal's Fathom-R1-14B, a 14-billion-parameter reasoning language model derived from DeepSeek-R1-Distilled-14B, delivers top performance.
Fathom-R1-14B performance
Reasoning + Test-time Compute
IIT JEE Advanced 2025 Maths
100% Accuracy (32/32)
Fathom-R1-14B achieves a perfect score on the latest, previously unseen IIT JEE Advanced 2025 Math questions (18th May paper).
All model responses and answers for the IIT JEE 2025 Math questions are available on Hugging Face.
*Evaluation is limited to text-based integer-answer questions.
*Results are based on an internal evaluation of the IIT JEE 2025 Math paper. Performance may vary and does not guarantee future outcomes.
In media