Small foundation models

Scaling

In this session, our readings cover:

Required Readings:

rStar-Math: Small LLMs Can Master Math Reasoning with Self-Evolved Deep Thinking

Phi-4 Technical Report

Smaller, Weaker, Yet Better: Training LLM Reasoners via Compute-Optimal Sampling

More Readings:

Synthetica: Large Scale Synthetic Data for Robot Perception

Small Language Models (SLMs) Can Still Pack a Punch: A Survey