Agents Optimization
In this session, our readings cover:
Required Readings: MODEL TRAINING & OPTIMIZATION
Core Component: Improving the Agent Brain - Training, Fine-tuning, and Optimization
Techniques for improving model capabilities and efficiency.
Key Concepts: Evaluation frameworks, guardrails, alignment (RLHF, PPO, DPO), risk assessment, jailbreaking defense, fairness, bias mitigation, toxicity prevention, agent safety protocols
Key Concepts: Data preparation, instruction tuning, LoRA/DoRA, parameter-efficient fine-tuning, scaling laws, efficiency optimization
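To ground the LoRA / parameter-efficient fine-tuning concepts above, here is a minimal sketch of a LoRA-style adapter wrapped around a frozen linear layer, assuming PyTorch. The layer width, rank, and scaling values are illustrative assumptions, not taken from any of the readings; only the two small low-rank matrices receive gradients, which is what makes the method parameter-efficient.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Minimal LoRA adapter: y = W x + (alpha / r) * B(A x), with W frozen."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():      # freeze the pretrained weight
            p.requires_grad = False
        self.lora_A = nn.Linear(base.in_features, r, bias=False)   # down-projection
        self.lora_B = nn.Linear(r, base.out_features, bias=False)  # up-projection
        nn.init.zeros_(self.lora_B.weight)    # adapter starts as a no-op update
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scaling * self.lora_B(self.lora_A(x))

# Usage: only the two low-rank adapter matrices are trainable.
layer = LoRALinear(nn.Linear(768, 768), r=8, alpha=16)
out = layer(torch.randn(4, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(out.shape, trainable)  # torch.Size([4, 768]), 2 * 8 * 768 = 12288 params
```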
| Topic | Slide Deck | Previous Semester |
|---|---|---|
| Platform - Model Customization (Instruction Tuning/LoRA) | W8.1-LoRA-Team5 | 25course |
| LLM Alignment - PPO | W11.2-team6-PPO | 25course |
| LLM Post-training | W14.3.DPO | 25course |
| Open Source LLM - Mistral Data Preparation | W4-OpenSourceLLM | 24course |
| Scaling Law and Efficiency | W11-ScalinglawEfficientLLM | 24course |
| LLM Fine Tuning | W14-LLM-FineTuning | 24course |
| Model Editing and Disgorgement | W10-T5-ModelEditing | 24course |
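As a companion to the alignment rows in the table above (PPO, DPO), here is a minimal sketch of the DPO objective, assuming PyTorch. It operates on summed per-sequence log-probabilities; the tensor names and the beta value are illustrative assumptions, and a full implementation would compute these log-probs from the trainable policy and a frozen reference model over tokenized chosen/rejected pairs.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Direct Preference Optimization loss.

    Each argument is a batch of summed per-sequence log-probabilities,
    log p(y | x), for the chosen (preferred) and rejected completions
    under the trainable policy and the frozen reference model.
    """
    # Implicit reward of each completion, measured relative to the reference model.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Maximize the chosen-vs-rejected margin through a logistic loss.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

# Toy usage with random log-probabilities for a batch of 4 preference pairs.
logps = [torch.randn(4) for _ in range(4)]
print(dpo_loss(*logps))
```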
2025 HIGH-IMPACT PAPERS on this topic
- The Landscape of Agentic Reinforcement Learning for LLMs (September 2025)
- Referenced in: https://github.com/zjunlp/LLMAgentPapers
- Taxonomy of agentic RL approaches
- Training methods: GRPO, PPO variations, RLVR
- Policy optimization: Group-in-Group, Stepwise Progress Attribution (SPA-RL)
- Challenges: Reward hacking, sample efficiency, exploration-exploitation
- Applications: Reasoning, planning, multi-agent coordination
- Key Papers Covered:
- GRPO (Group Relative Policy Optimization); see the sketch after this list
- SRPO (two-Staged history-Resampling Policy Optimization)
- PVPO (Pre-Estimated Value-Based Policy Optimization)
- Two papers on discrete diffusion models for text generation (a small decoding-order sketch follows this list):
- A Reparameterized Discrete Diffusion Model for Text Generation (COLM 2024) - Abstract: This work studies discrete diffusion probabilistic models with applications to natural language generation. We derive an alternative yet equivalent formulation of the sampling from discrete diffusion processes and leverage this insight to develop a family of reparameterized discrete diffusion models. The derived generic framework is highly flexible, offers a fresh perspective of the generation process in discrete diffusion models, and features more effective training and decoding techniques. We conduct extensive experiments to evaluate the text generation capability of our model, demonstrating significant improvements over existing diffusion models.
- Train for the Worst, Plan for the Best: Understanding Token Ordering in Masked Diffusions - Abstract: In recent years, masked diffusion models (MDMs) have emerged as a promising alternative approach for generative modeling over discrete domains. Compared to autoregressive models (ARMs), MDMs trade off complexity at training time with flexibility at inference time. At training time, they must learn to solve an exponentially large number of infilling problems, but at inference time, they can decode tokens in essentially arbitrary order. In this work, we closely examine these two competing effects. On the training front, we theoretically and empirically demonstrate that MDMs indeed train on computationally intractable subproblems compared to their autoregressive counterparts. On the inference front, we show that a suitable strategy for adaptively choosing the token decoding order significantly enhances the capabilities of MDMs, allowing them to sidestep hard subproblems. On logic puzzles like Sudoku, we show that adaptive inference can boost solving accuracy in pretrained MDMs from <7% to ≈90%, even outperforming ARMs with 7× as many parameters that were explicitly trained via teacher forcing to learn the right order of decoding.
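To make the adaptive decoding-order idea from the masked-diffusion papers above concrete, here is a small sketch that fills masked positions greedily by model confidence, assuming PyTorch. The `model` interface, mask id, and tensor shapes are hypothetical placeholders, not the papers' actual code or algorithm.

```python
import torch

MASK_ID = 0  # hypothetical mask-token id

@torch.no_grad()
def confidence_ordered_decode(model, tokens: torch.Tensor) -> torch.Tensor:
    """Fill masked positions one at a time, always committing the position
    where the model is currently most confident (a simple adaptive order).

    `model(tokens)` is assumed to return logits of shape [seq_len, vocab_size]
    for every position; this interface is a placeholder, not a real API.
    """
    tokens = tokens.clone()
    while (tokens == MASK_ID).any():
        logits = model(tokens)                  # [seq_len, vocab_size]
        probs = logits.softmax(dim=-1)
        top_probs, top_ids = probs.max(dim=-1)  # best token and its prob per position
        top_probs[tokens != MASK_ID] = -1.0     # only consider still-masked slots
        pos = int(top_probs.argmax())           # most confident masked position
        tokens[pos] = top_ids[pos]              # commit that single token
    return tokens
```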
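Separately, to make the GRPO entry in the agentic-RL list above concrete, here is a minimal sketch of GRPO's group-relative advantage, assuming PyTorch: rewards for a group of completions sampled from the same prompt are standardized against the group mean and standard deviation, so no learned critic is needed. In a full trainer these advantages would feed a PPO-style clipped objective; the reward values below are toy numbers, not from the paper.

```python
import torch

def grpo_advantages(group_rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Group-relative advantages used by GRPO.

    `group_rewards` holds scalar rewards for G completions sampled from the
    same prompt; each completion's advantage is its reward standardized
    against the group, replacing a learned value model.
    """
    mean = group_rewards.mean()
    std = group_rewards.std()
    return (group_rewards - mean) / (std + eps)

# Toy example: 4 sampled answers to one prompt, scored by a verifier (RLVR-style).
rewards = torch.tensor([1.0, 0.0, 0.0, 1.0])
print(grpo_advantages(rewards))  # positive for correct answers, negative otherwise
```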
