Agent Safety / More Autonomous Agents


In this session, our readings cover:

Required Readings:

AgentHarm: A Benchmark for Measuring Harmfulness of LLM Agents

UDora: A Unified Red Teaming Framework against LLM Agents by Dynamically Hijacking Their Own Reasoning

Jiawei Zhang, Shuang Yang, Bo Li (submitted 28 Feb 2025)

Large Language Model (LLM) agents equipped with external tools have become increasingly powerful for handling complex tasks such as web shopping, automated email replies, and financial trading. However, these advancements also amplify the risks of adversarial attacks, particularly when LLM agents can access sensitive external functionalities. Moreover, because LLM agents engage in extensive reasoning or planning before executing final actions, manipulating them into performing targeted malicious actions or invoking specific tools remains a significant challenge. Consequently, directly embedding adversarial strings in malicious instructions or injecting malicious prompts into tool interactions has become less effective against modern LLM agents. In this work, we present UDora, a unified red teaming framework for LLM agents that dynamically leverages the agent’s own reasoning processes to compel it toward malicious behavior. Specifically, UDora first samples the model’s reasoning for the given task, then automatically identifies multiple optimal positions within these reasoning traces to insert targeted perturbations. Subsequently, it uses the modified reasoning as the objective to optimize the adversarial strings. By iteratively applying this process, the LLM agent is induced to undertake designated malicious actions or to invoke specific malicious tools. Our approach demonstrates superior effectiveness compared to existing methods across three LLM agent datasets.
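The abstract describes a three-step loop: sample the agent’s reasoning, insert the target action at the most influential positions in the trace, and optimize an adversarial string using that modified trace as the objective. The sketch below restates this loop in Python purely to make the control flow concrete; it is a minimal illustration under our own assumptions, not the paper’s implementation, and every helper passed in (`sample_reasoning`, `insertion_score`, `optimize_suffix`) is a hypothetical placeholder for model access and optimizers a real attack would need.

```python
from typing import Callable

def udora_style_attack(
    task: str,
    target_action: str,                                  # action/tool call the attacker wants emitted
    sample_reasoning: Callable[[str], str],              # hypothetical: agent's reasoning trace for a prompt
    insertion_score: Callable[[str, int, str], float],   # hypothetical: how well an insertion at position i steers the trace
    optimize_suffix: Callable[[str, str], str],          # hypothetical: fits the suffix to an objective trace
    n_positions: int = 3,
    n_iters: int = 10,
) -> str:
    """Illustrative loop: hijack an agent's own reasoning to elicit a target action.

    All callables are assumed placeholders, not part of any released UDora API.
    """
    suffix = "! ! ! ! !"  # arbitrary initial adversarial string
    for _ in range(n_iters):
        # 1. Sample the agent's reasoning for the current perturbed task.
        trace = sample_reasoning(f"{task} {suffix}")
        sentences = trace.split(". ")

        # 2. Pick the positions in the trace where inserting the target
        #    action most strongly steers subsequent reasoning toward it.
        best = sorted(
            range(len(sentences)),
            key=lambda i: insertion_score(trace, i, target_action),
            reverse=True,
        )[:n_positions]

        # 3. Build the modified reasoning (target inserted at the chosen
        #    spots, back to front so indices stay valid) and optimize the
        #    suffix against it as the objective.
        modified = list(sentences)
        for i in sorted(best, reverse=True):
            modified.insert(i, target_action)
        suffix = optimize_suffix(". ".join(modified), suffix)
    return suffix
```

In a real attack, `optimize_suffix` would plausibly be a gradient-based adversarial string optimizer and `insertion_score` the paper’s position-selection criterion; both are left abstract here since the abstract does not specify them.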

More Readings:

The Emerged Security and Privacy of LLM Agent: A Survey with Case Studies

Safety Fine-Tuning at (Almost) No Cost: A Baseline for Vision Large Language Models

Unique Security and Privacy Threats of Large Language Model: A Comprehensive Survey

Large Language Model Safety: A Holistic Survey

MobileSafetyBench: Evaluating Safety of Autonomous Agents in Mobile Device Control

Privacy-Preserving Large Language Models: Mechanisms, Applications, and Future Directions