Under Experiment
GOALS: SOTA math reasoning for a sub-400M-parameter LLM
- Base Model: LiquidAI/LFM2-350M-Math
- Dataset: kreasof-ai/RSA-Distillation-Mix
Benchmark
Note: we use thinking token forcing because this model occasionally outputs a response directly without the thinking tag.
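Thinking token forcing here means appending the opening thinking tag to the end of the prompt so decoding is forced to start inside the reasoning block. Below is a minimal sketch using the standard transformers API; the `<think>` tag name, the sampling settings, and the assumption that the chat template does not already open the thinking block are illustrative choices, not details taken from this model card.

```python
# Minimal sketch of thinking token forcing (assumes a transformers version
# that supports LFM2 and that the model uses a "<think>"/"</think>" tag pair).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LiquidAI/LFM2-350M-Math"  # base model; swap in the fine-tuned checkpoint as needed
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

messages = [{"role": "user", "content": "What is 17 * 23?"}]
# Build the chat prompt, then append the opening thinking tag so generation
# cannot skip straight to the final answer.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
) + "<think>"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.6)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```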
Standard Decoding:
- AIME 2025: 28.3% (+1.2% over LiquidAI/LFM2-350M-Math) -> benchmark output: https://docs.google.com/spreadsheets/d/1Gr0AFT08tWQ8ocPK3TEwfwz1h336Pfe4TB_IQxKDt3A/
- HMMT 2025: TBA
- BRUMO 2025: TBA
- CMIMC 2025: TBA
Recursive Self-Aggregation:
- AIME 2025: TBA
- HMMT 2025: TBA
- BRUMO 2025: TBA
- CMIMC 2025: TBA
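Recursive Self-Aggregation (RSA) is a test-time scaling loop: sample a population of candidate solutions, then repeatedly have the model aggregate small subsets of candidates into improved ones before producing a final answer. The sketch below is only a generic illustration of that loop, a minimal sketch under assumed settings: the population size, subset size, round count, prompt wording, and the `generate` callable are all assumptions, not the exact recipe behind the TBA numbers above.

```python
# Generic sketch of a Recursive Self-Aggregation loop (hyperparameters and
# prompt wording are illustrative, not the authors' exact configuration).
import random
from typing import Callable


def rsa(problem: str, generate: Callable[[str], str],
        population: int = 8, subset: int = 4, rounds: int = 3) -> str:
    # Round 0: sample an initial population of independent candidate solutions.
    candidates = [generate(problem) for _ in range(population)]
    for _ in range(rounds):
        next_candidates = []
        for _ in range(population):
            # Pick a random subset and ask the model to merge them into one
            # improved solution; the new solutions form the next population.
            chosen = random.sample(candidates, k=min(subset, len(candidates)))
            agg_prompt = (
                f"Problem:\n{problem}\n\n"
                + "\n\n".join(f"Candidate solution {i + 1}:\n{c}" for i, c in enumerate(chosen))
                + "\n\nCombine the correct ideas above into one improved solution."
            )
            next_candidates.append(generate(agg_prompt))
        candidates = next_candidates
    # Final step: one last aggregation over (part of) the surviving population.
    final_prompt = (
        f"Problem:\n{problem}\n\n"
        + "\n\n".join(f"Candidate solution {i + 1}:\n{c}" for i, c in enumerate(candidates[:subset]))
        + "\n\nWrite the single best final solution."
    )
    return generate(final_prompt)
```

In practice, `generate` would wrap an inference call such as the forced-thinking generation shown earlier, so the same thinking-token forcing applies inside every aggregation step.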