arxiv:2604.07835

Silencing the Guardrails: Inference-Time Jailbreaking via Dynamic Contextual Representation Ablation

Published on Apr 9

Abstract

AI-generated summary: Contextual Representation Ablation (CRA) is a novel inference-time framework that dynamically disables model guardrails by identifying and suppressing refusal-inducing activation patterns in hidden states, demonstrating superior performance over existing methods in bypassing safety constraints.

While Large Language Models (LLMs) have achieved remarkable performance, they remain vulnerable to jailbreak attacks that circumvent safety constraints. Existing strategies, ranging from heuristic prompt engineering to computationally intensive optimization, often face significant trade-offs between effectiveness and efficiency. In this work, we propose Contextual Representation Ablation (CRA), a novel inference-time intervention framework designed to dynamically silence model guardrails. Predicated on the geometric insight that refusal behaviors are mediated by specific low-rank subspaces within the model's hidden states, CRA identifies and suppresses these refusal-inducing activation patterns during decoding without requiring expensive parameter updates or training. Empirical evaluation across multiple safety-aligned open-source LLMs demonstrates that CRA significantly outperforms baselines. These results expose the intrinsic fragility of current alignment mechanisms, revealing that safety constraints can be surgically ablated from internal representations, and underscore the urgent need for more robust defenses that secure the model's latent space.
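
The abstract describes the mechanism only in prose. As a point of reference, assuming the refusal subspace reduces to a single unit direction r in hidden-state space (a simplification; the symbols h, r, and the projection below are illustrative and not taken from the paper), the kind of inference-time ablation described would replace each intervened hidden state h with its component orthogonal to r:

    h' = h - (r^T h) r

Suppressing activation along r at each decoding step removes the refusal-inducing pattern while leaving the model's parameters untouched, consistent with the paper's claim of requiring no training or parameter updates. For a rank-k subspace with an orthonormal basis matrix R, the same idea generalizes to h' = (I - R R^T) h.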


Get this paper in your agent:

    hf papers read 2604.07835

Don't have the latest CLI? Install it with:

    curl -LsSf https://hf.co/cli/install.sh | bash
