---
license: apache-2.0
tags:
- adversarial-attack
- ai-generated-image-stealth
- deepfake-evasion
- pytorch
---
<a id="top"></a>
<div align="center">
<h1>πŸ•΅οΈβ€β™‚οΈ ERASE: Bypassing Collaborative Detection of AI Counterfeit (Model Weights)</h1>
<p>
<b>Qianyun Yang</b><sup>1</sup>&nbsp;
<b>Peizhuo Lv</b><sup>2</sup>&nbsp;
<b>Yingjiu Li</b><sup>3</sup>&nbsp;
<b>Shengzhi Zhang</b><sup>4</sup>&nbsp;
<b>Yuxuan Chen</b><sup>1</sup>&nbsp;
<b>Zixu Li</b><sup>1</sup>&nbsp;
<b>Yupeng Hu</b><sup>1</sup>
</p>
<p>
<sup>1</sup>Shandong University&nbsp;
<sup>2</sup>Nanyang Technological University&nbsp;
<sup>3</sup>University of Oregon&nbsp;
<sup>4</sup>Boston University
</p>
</div>
These are the official pre-trained model weights for **ERASE**, an optimization framework designed to bypass single and collaborative detection of AI-Generated Images (AIGI) by comprehensively eliminating multi-dimensional generative artifacts.
πŸ”— **Paper:** [Accepted by IEEE TDSC 2026] (Coming Soon)
πŸ”— **GitHub Repository:** [iLearn-Lab/TDSC26-ERASE](https://github.com/iLearn-Lab/TDSC26-ERASE)
---
## πŸ“Œ Model Information
### 1. Model Name
**ERASE** (Comprehensive Counterfeit Artifacts Elimination) checkpoints.
### 2. Task Type & Applicable Tasks
- **Task Type:** Adversarial Attack / AI-Generated Image Stealth (AIGI-S) / Image-to-Image
- **Applicable Tasks:** Bypassing AI-generated image detectors (both single detectors and collaborative multi-detector environments) while maintaining exceptionally high visual fidelity.
### 3. Project Introduction
With the rapid development of generative AI, the issue of deepfakes has become increasingly severe. Existing AI-Generated Image Stealth (AIGI-S) methods typically optimize against a single detector and often fail when facing real-world "Collaborative Detection". Moreover, they often introduce obvious artifacts visible to human observers.
**ERASE** is a stealth optimization framework that innovatively combines:
- 🎯 **Sensitive Feature Attack**
- ⛓️ **Diffusion Chain Attack** (Optimization-free)
- πŸ“» **Decoupled Frequency Domain Processing**
This Hugging Face repository hosts the pre-trained weights required to run the Decoupled Frequency Domain Processing and the Surrogate Classifiers, specifically `noise_prototype_VAE.pt`, `dncnn_color_blind.pth`, and the `ckpt_ori` surrogate weights.
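The decoupled frequency-domain idea can be illustrated with a generic FFT-based low-/high-frequency split. Note this is *not* the ERASE implementation (which relies on the VAE and DnCNN weights above); the function name `split_frequency` and the circular `radius` cutoff are illustrative assumptions only:

```python
import numpy as np

def split_frequency(img, radius):
    """Decompose a single-channel image into low- and high-frequency parts
    using a circular mask in the centered 2-D FFT spectrum."""
    F = np.fft.fftshift(np.fft.fft2(img))           # centered spectrum
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    # Boolean mask: True inside a circle of `radius` around the DC component.
    mask = (yy - h // 2) ** 2 + (xx - w // 2) ** 2 <= radius ** 2
    low = np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))
    high = np.real(np.fft.ifft2(np.fft.ifftshift(F * ~mask)))
    return low, high

img = np.arange(64, dtype=float).reshape(8, 8)
low, high = split_frequency(img, 2)
# The two bands partition the spectrum, so they sum back to the image.
assert np.allclose(low + high, img)
```

Because the low and high masks are complementary, the decomposition is lossless; a stealth method can then process the two bands separately before recombining them.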
### 4. Training Data Source
The surrogate classifiers and related components were primarily trained and evaluated on the **[GenImage](https://github.com/GenImage-Dataset/GenImage)** dataset, following the standard task settings of AIGI-S evaluation.
---
## πŸš€ Usage & Basic Inference
These weights are designed to work out of the box with the official ERASE GitHub repository.
### Step 1: Prepare the Environment
Clone the GitHub repository and install dependencies:
```bash
git clone https://github.com/iLearn-Lab/TDSC26-ERASE
cd TDSC26-ERASE
conda create -n erase python=3.9 -y
conda activate erase
pip install -r requirements.txt
```
### Step 2: Download Model Weights
Download the files from this Hugging Face repository (`ckpt_ori` folder, `noise_prototype_VAE.pt`, `dncnn_color_blind.pth`) and place them in the `checkpoints/` directory of your cloned GitHub repo. Your structure should look like this:
```text
ERASE/
└── checkpoints/
β”œβ”€β”€ ckpt_ori/ # Surrogate model weights (E/R/D/S)
β”œβ”€β”€ noise_prototype_VAE.pt # Frequency VAE weights
└── dncnn_color_blind.pth # Denoising/Frequency weights
```
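Before running the attack, you can sanity-check that the downloaded files landed in the expected places. This is a small helper sketch of ours, not part of the repo; `missing_checkpoints` and its output are illustrative:

```python
from pathlib import Path

# Files and folders expected under checkpoints/, per the layout above.
EXPECTED = [
    "ckpt_ori",                # surrogate model weights (E/R/D/S)
    "noise_prototype_VAE.pt",  # frequency VAE weights
    "dncnn_color_blind.pth",   # denoising/frequency weights
]

def missing_checkpoints(root="checkpoints"):
    """Return the expected entries that are not present under `root`."""
    root = Path(root)
    return [name for name in EXPECTED if not (root / name).exists()]

missing = missing_checkpoints()
if missing:
    print("Missing:", ", ".join(missing))
else:
    print("All checkpoint files found.")
```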
### Step 3: Run the Attack
Use `main.py` from the code repository to perform basic inference and generate adversarial images:
```bash
python main.py \
--images_root ./input_images \
--save_dir ./output \
--model_name E,R,D,S \
--diffusion_steps 20 \
--start_step 18 \
--iterations 10 \
--is_encoder 1 \
--encoder_weights ./checkpoints/noise_prototype_VAE.pt \
--eps 4 \
--batch_size 4 \
--device cuda:0
```
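For sweeps over the perturbation budget, the command above can be assembled programmatically. This sketch only builds argument lists (flag names are taken verbatim from the command above; the sweep values and per-run output folders are our illustrative choices):

```python
def erase_cmd(eps, images_root="./input_images", save_dir="./output",
              device="cuda:0"):
    """Build the main.py argument list for one ERASE run."""
    return [
        "python", "main.py",
        "--images_root", images_root,
        "--save_dir", f"{save_dir}/eps{eps}",  # one output folder per budget
        "--model_name", "E,R,D,S",
        "--diffusion_steps", "20",
        "--start_step", "18",
        "--iterations", "10",
        "--is_encoder", "1",
        "--encoder_weights", "./checkpoints/noise_prototype_VAE.pt",
        "--eps", str(eps),
        "--batch_size", "4",
        "--device", device,
    ]

for eps in (2, 4, 8):  # illustrative budget sweep
    cmd = erase_cmd(eps)
    print(" ".join(cmd))  # or: subprocess.run(cmd, check=True)
```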
---
## ⚠️ Limitations & Notes
**Disclaimer:** This tool and its associated model weights are strictly intended for **academic research, AI security evaluation, and robustness testing**.
- It is strictly **prohibited** to use this repository for any malicious forgery, fraud, or other illegal/unethical purposes.
- Users bear full legal responsibility for any consequences arising from improper use.
---
## πŸ“β­οΈ Citation
If you find our weights or code useful for your research, please consider leaving a **Star** ⭐️ on our GitHub repo and citing our paper:
```bibtex
@article{yang2026erase,
title={ERASE: Bypassing Collaborative Detection of AI Counterfeit via Comprehensive Artifacts Elimination},
author={Yang, Qianyun and Lv, Peizhuo and Li, Yingjiu and Zhang, Shengzhi and Chen, Yuxuan and Chen, Zhiwei and Li, Zixu and Hu, Yupeng},
journal={IEEE Transactions on Dependable and Secure Computing},
year={2026},
publisher={IEEE}
}
```