LLM Post-training
In this session, our readings cover:
Model Serving Readings:
Readings:
Beyond Memorization: Violating Privacy Via Inference with Large Language Models
TextGrad: Automatic “Differentiation” via Text
The WMDP Benchmark: Measuring and Reducing Malicious Use With Unlearning. Nathaniel Li, Alexander Pan, Anjali Gopal, Summer Yue, Daniel Berrios, Alice Gatt...
KV Caching in LLMs:
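Before the readings, a brief illustration of the idea: KV caching stores the attention keys and values of already-processed tokens so that each decoding step only computes attention for the newest token, instead of re-encoding the whole sequence. The following is a minimal single-head numpy sketch of that mechanism (all names here are illustrative, not taken from any of the papers below):

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class KVCache:
    """Append-only store of past keys/values for one attention head."""
    def __init__(self, d):
        self.keys = np.empty((0, d))
        self.values = np.empty((0, d))

    def append(self, k, v):
        # Each new token contributes one (key, value) row.
        self.keys = np.vstack([self.keys, k])
        self.values = np.vstack([self.values, v])

def decode_step(q, k, v, cache):
    """One autoregressive step: attend from the new token's query
    over all cached keys/values (including the new token's own)."""
    cache.append(k, v)
    scores = cache.keys @ q / np.sqrt(q.shape[-1])  # shape (t,)
    weights = softmax(scores)
    return weights @ cache.values                   # shape (d,)

# Usage: three decoding steps; with the cache, each step costs
# O(t * d) instead of recomputing all O(t^2 * d) pairwise scores.
rng = np.random.default_rng(0)
d = 8
cache = KVCache(d)
for _ in range(3):
    q, k, v = rng.standard_normal((3, d))
    out = decode_step(q, k, v, cache)
print(cache.keys.shape)  # (3, 8)
```

The memory cost grows linearly with sequence length, which is why KV-cache management (paging, eviction, quantization) is a serving concern in its own right.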
Required Readings:
Training language models to follow instructions with human feedback URL ...
Stable Diffusion URL “High-Resolution Image Synthesis with Latent Diffusion Models”
Emergent Abilities of Large Language Models URL “an ability to be emergent if it is not present in smaller models but is present in larger models. Thus...
Evolutionary-scale prediction of atomic level protein structure with a language mo...
Decision Transformer: Reinforcement Learning via Sequence Modeling. Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Piet...