arxiv:2503.13690

Atyaephyra at SemEval-2025 Task 4: Low-Rank Negative Preference Optimization

Published on May 7, 2025
Abstract

A method for removing sensitive content from large language models that combines negative preference optimization with low-rank adaptation to improve the stability and performance of unlearning.
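A minimal sketch of the low-rank adaptation (LoRA) idea referenced in the abstract, not the authors' implementation: a frozen weight matrix W is augmented with a trainable low-rank update ΔW = (α/r)·B·A, so only r·(d_in + d_out) parameters are trained instead of d_out·d_in. All function and variable names here are illustrative.

```python
def matvec(M, x):
    """Multiply a matrix (given as a list of rows) by a vector."""
    return [sum(m_ij * x_j for m_ij, x_j in zip(row, x)) for row in M]

def lora_forward(x, W, A, B, alpha, r):
    """h = W x + (alpha / r) * B (A x).

    W: frozen (d_out x d_in) weight matrix
    A: trainable (r x d_in) down-projection
    B: trainable (d_out x r) up-projection, usually initialized to zero
       so training starts from the frozen model's behavior
    """
    base = matvec(W, x)
    low_rank = matvec(B, matvec(A, x))
    scale = alpha / r
    return [b + scale * l for b, l in zip(base, low_rank)]
```

With B initialized to zero, the adapted layer initially reproduces the frozen layer exactly, which is one reason LoRA fine-tuning tends to be stable.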

AI-generated summary

We present a submission to the SemEval-2025 shared task on unlearning sensitive content from LLMs. Our approach applies negative preference optimization (NPO) with low-rank adaptation (LoRA). We show that this combination lets us compute additional regularization terms efficiently, which stabilizes unlearning. Our results significantly exceed the shared-task baselines.
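As a sketch of the objective named above: a common form of the NPO loss on a forget-set example penalizes the model for still assigning high likelihood to that example relative to a frozen reference model, via L = (2/β)·log(1 + (π_θ/π_ref)^β). The exact variant and hyperparameters used in this submission may differ; the code below is only an illustration of that standard form.

```python
import math

def npo_loss(logp_theta, logp_ref, beta=0.1):
    """NPO loss for one forget-set example.

    logp_theta: sequence log-probability under the model being unlearned
    logp_ref:   sequence log-probability under the frozen reference model
    beta:       temperature; as beta -> 0 the loss recovers plain
                gradient ascent on the forget-set NLL

    (2/beta) * log(1 + (pi_theta / pi_ref)**beta) computed in log space:
    log(1 + exp(z)) with z = beta * (logp_theta - logp_ref).
    """
    z = beta * (logp_theta - logp_ref)
    return (2.0 / beta) * math.log1p(math.exp(z))
```

Minimizing this loss pushes logp_theta below logp_ref on forget data, but the softplus shape saturates instead of diverging, which is the stabilization property NPO is known for compared with naive gradient ascent.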

