More on LLM-Based Agents

Scaling

In this session, our readings cover:

Required Readings:

Advances and Challenges in Foundation Agents: From Brain-Inspired Intelligence to Evolutionary, Collaborative, and Safe Systems

A Review of Prominent Paradigms for LLM-Based Agents: Tool Use (Including RAG), Planning, and Feedback Learning

rStar-Math: Small LLMs Can Master Math Reasoning with Self-Evolved Deep Thinking

More Readings:

Agent Laboratory: Using LLM Agents as Research Assistants

Phi-4 Technical Report

Smaller, Weaker, Yet Better: Training LLM Reasoners via Compute-Optimal Sampling

Synthetica: Large Scale Synthetic Data for Robot Perception

Small Language Models (SLMs) Can Still Pack a Punch: A Survey