Category: LLM

  • Jailbreaking LLM-Controlled Robots

    Surprising no one, it’s easy to trick an LLM-controlled robot into ignoring its safety instructions.

  • Trust Issues in AI

    For a technology that seems startling in its modernity, AI sure has a long history. Google Translate, OpenAI chatbots, and Meta AI image generators are built on decades of advancements in linguistics, signal processing, statistics, and other fields going back to the early days of computing—and, often, on seed funding from…

  • Race Condition Attacks against LLMs

    These are two attacks against the system components surrounding LLMs: We propose that LLM Flowbreaking, following jailbreaking and prompt injection, joins as the third on the growing list of LLM attack types. Flowbreaking is less about whether prompt or response guardrails can be bypassed, and more about whether user inputs…
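
    A minimal sketch of the timing issue this class of attack points at, assuming a setup where the model streams tokens to the client while a moderation guardrail runs concurrently: if the guardrail finishes after streaming has begun, its decision to retract the answer arrives too late, because the client has already captured the tokens. Everything below (function names, token list, delays) is hypothetical and illustrative, not the researchers' code.

    ```python
    # Toy simulation of the race window between token streaming and a
    # post-hoc moderation guardrail. All names and timings are hypothetical.
    import asyncio

    TOKENS = ["Step", "1:", "do", "the", "restricted", "thing", "..."]

    async def stream_response(client_buffer: list) -> None:
        """Simulated model streaming tokens straight to the client."""
        for tok in TOKENS:
            client_buffer.append(tok)      # client sees the token immediately
            await asyncio.sleep(0.01)      # per-token latency

    async def moderation_guardrail() -> bool:
        """Simulated guardrail that completes after streaming has started."""
        await asyncio.sleep(0.05)          # slower than the first few tokens
        return True                        # verdict: retract the response

    async def main() -> None:
        client_buffer = []
        stream_task = asyncio.create_task(stream_response(client_buffer))
        must_retract = await moderation_guardrail()
        if must_retract:
            stream_task.cancel()           # server "withdraws" the answer
        # The retraction is too late: tokens already left the server.
        print("retracted by guardrail:", must_retract)
        print("tokens captured before retraction:", client_buffer)

    if __name__ == "__main__":
        asyncio.run(main())
    ```

    The toy timings only make the general point: any guardrail that renders its verdict after streaming begins leaves a window in which "withdrawn" output has already reached the user, which is a property of the system around the LLM rather than of the prompt or response filters themselves.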