EasyDUB Dataset

Precomputed CIFAR-10 data for KLOM (KL-divergence of Margins) evaluation of data-unlearning methods.

This dataset contains:

  • 200 pretrain models: ResNet9 models trained on the full CIFAR-10 training set (50,000 samples).
  • 200 oracle models per forget set: ResNet9 models retrained on the retain set (train minus forget) for each of 10 forget sets.
  • Logits and margins: Precomputed logits and margins for all models on train/val/forget/retain splits.

All models are checkpointed at epoch 23 (out of 24 total training epochs).
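
A checkpoint can be inspected directly with PyTorch. The snippet below is a minimal sketch; it assumes the .pt files load with torch.load and that the ResNet9 architecture itself lives in the companion EasyDUB-code repository (the exact serialization format is not specified here, so verify it against that code):

import torch

# Illustrative path to one pretrain checkpoint (model id 0, epoch 23)
ckpt_path = "EasyDUB-dataset/models/cifar10/pretrain/resnet9/id_0_epoch_23.pt"

# Load on CPU and inspect the contents; whether this is a state_dict or a
# fully serialized module is an assumption to check against EasyDUB-code.
ckpt = torch.load(ckpt_path, map_location="cpu")
if isinstance(ckpt, dict):
    print("state_dict-like object with", len(ckpt), "entries")
else:
    print("serialized object of type", type(ckpt))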

Directory structure

The on-disk layout is:

EasyDUB-dataset/
├── models/
│   └── cifar10/
│       ├── pretrain/
│       │   └── resnet9/
│       │       └── id_X_epoch_23.pt          # 200 models (X = 0–199)
│       └── oracle/
│           └── forget_Z/
│               └── resnet9/
│                   └── id_X_epoch_23.pt      # 200 models per forget set
│
├── logits/
│   └── cifar10/
│       ├── pretrain/
│       │   ├── retain/
│       │   │   └── resnet9/
│       │   │       └── id_X_epoch_23.npy     # Full train set logits
│       │   ├── val/
│       │   │   └── resnet9/
│       │   │       └── id_X_epoch_23.npy     # Validation logits
│       │   └── forget_Z/
│       │       └── resnet9/
│       │           └── id_X_epoch_23.npy     # Forget-set logits
│       └── oracle/
│           └── forget_Z/
│               ├── retain/
│               │   └── resnet9/
│               │       └── id_X_epoch_23.npy # Retain logits
│               ├── forget/
│               │   └── resnet9/
│               │       └── id_X_epoch_23.npy # Forget logits
│               └── val/
│                   └── resnet9/
│                       └── id_X_epoch_23.npy # Validation logits
│
├── margins/
│   └── cifar10/
│       └── [same structure as logits/]
│
└── forget_sets/
    └── cifar10/
        └── forget_set_Z.npy                  # Indices into CIFAR-10 train set

File naming

  • Models: id_{MODEL_ID}_epoch_{EPOCH}.pt (e.g. id_42_epoch_23.pt)
  • Logits / margins: id_{MODEL_ID}_epoch_{EPOCH}.npy
  • Forget sets: forget_set_{SET_ID}.npy (e.g. forget_set_1.npy)
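
The layout and naming scheme can be wrapped in a small path helper. The function below is an illustrative sketch only; the helper name and its arguments are hypothetical and not part of EasyDUB-code:

from pathlib import Path

def margin_path(root: str, group: str, split: str, model_id: int,
                forget_set: int | None = None, epoch: int = 23) -> Path:
    # group: "pretrain" or "oracle"; split: "retain", "val", "forget" (oracle)
    # or "forget_Z" (pretrain).
    base = Path(root) / "margins" / "cifar10" / group
    if group == "oracle":
        base = base / f"forget_{forget_set}"
    return base / split / "resnet9" / f"id_{model_id}_epoch_{epoch}.npy"

# Margins of pretrain model 0 on the validation split
print(margin_path("EasyDUB-dataset", "pretrain", "val", 0))
# Margins of oracle model 0 for forget set 1 on the forget split
print(margin_path("EasyDUB-dataset", "oracle", "forget", 0, forget_set=1))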

Shapes and dtypes

  • Logits: (n_samples, 10) NumPy arrays of float32 containing raw model outputs for the 10 CIFAR-10 classes.
  • Margins: (n_samples,) NumPy arrays of float32 containing scalar margins (see formula below).
  • Forget sets: (n_forget_samples,) NumPy arrays of integer indices into the CIFAR-10 training set, in [0, 49_999].

Typical sizes:

  • Train set: 50_000 samples
  • Validation set: 10_000 samples
  • Forget sets: 10–1000 samples (varies by set)
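
As a quick sanity check, the shapes and dtypes above can be verified after loading. The paths below are illustrative and assume the dataset root shown in the directory structure:

import numpy as np

root = "EasyDUB-dataset"

logits = np.load(f"{root}/logits/cifar10/pretrain/val/resnet9/id_0_epoch_23.npy")
forget = np.load(f"{root}/forget_sets/cifar10/forget_set_1.npy")

assert logits.shape == (10_000, 10) and logits.dtype == np.float32
assert forget.ndim == 1 and 0 <= forget.min() and forget.max() < 50_000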

Margin definition

For each sample with logit vector logits and true label true_label, the margin is computed as:

import torch

def compute_margin(logits: torch.Tensor, true_label: int) -> torch.Tensor:
    # logits: 1D tensor of shape (num_classes,) for a single sample
    logit_other = logits.clone()
    logit_other[true_label] = -torch.inf  # exclude the true class from the log-sum-exp
    # margin = true-class logit minus log-sum-exp over all other classes
    return logits[true_label] - logit_other.logsumexp(dim=-1)

Higher margins indicate higher confidence in the correct class relative to all others (via log-sum-exp).
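
Applied to a stored (n_samples, 10) logits array, the same definition can be vectorized. The sketch below is equivalent to the per-sample function above; the labels would come from the corresponding CIFAR-10 split, which is an assumption of this example:

import torch

def compute_margins_batch(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    # logits: (n_samples, num_classes); labels: (n_samples,) integer class labels
    true_logit = logits.gather(1, labels.view(-1, 1)).squeeze(1)
    other = logits.clone()
    other.scatter_(1, labels.view(-1, 1), float("-inf"))  # mask out the true class
    return true_logit - other.logsumexp(dim=-1)

# Example with random data standing in for a loaded .npy file
logits = torch.randn(50_000, 10)
labels = torch.randint(0, 10, (50_000,))
print(compute_margins_batch(logits, labels).shape)  # torch.Size([50000])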

Forget sets

The dataset includes 10 CIFAR-10 forget sets:

  • Forget set 1: 10 random samples
  • Forget set 2: 100 random samples
  • Forget set 3: 1_000 random samples
  • Forget set 4: 10 samples with highest projection onto the 1st principal component
  • Forget set 5: 100 samples with highest projection onto the 1st principal component
  • Forget set 6: 250 samples with highest + 250 with lowest projection onto the 1st principal component
  • Forget set 7: 10 samples with highest projection onto the 2nd principal component
  • Forget set 8: 100 samples with highest projection onto the 2nd principal component
  • Forget set 9: 250 samples with highest + 250 with lowest projection onto the 2nd principal component
  • Forget set 10: 100 samples closest in CLIP image space to a reference cassowary image

Each forget_set_Z.npy is a 1D array of training indices.
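
Since oracle models are trained on the retain set (train minus forget), the matching retain indices can be derived directly from a forget set. A minimal sketch, using forget_set_1.npy as an example:

import numpy as np

forget_idx = np.load("EasyDUB-dataset/forget_sets/cifar10/forget_set_1.npy")
all_idx = np.arange(50_000)                     # CIFAR-10 train indices
retain_idx = np.setdiff1d(all_idx, forget_idx)  # train minus forget

print(len(forget_idx), len(retain_idx))         # e.g. 10 and 49_990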

Quick start

The companion EasyDUB-code repository provides utilities and unlearning methods on top of this dataset.

Here is a minimal example using only NumPy:

import numpy as np

root = "EasyDUB-dataset"

# Load margins for a single pretrain model on the validation set
margins = np.load(f"{root}/margins/cifar10/pretrain/val/resnet9/id_0_epoch_23.npy")

# Load oracle margins for the same model index and forget set (example: forget_set_1)
oracle_margins = np.load(
    f"{root}/margins/cifar10/oracle/forget_1/val/resnet9/id_0_epoch_23.npy"
)

print(margins.shape, oracle_margins.shape)

For a higher-level end-to-end demo (including unlearning methods and KLOM computation), see the EasyDUB-code GitHub repository. In particular, strong_test.py in EasyDUB-code runs a reproducible noisy-SGD unlearning experiment comparing:

  • KLOM(pretrain, oracle)
  • KLOM(noisy_descent, oracle)
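
As a rough illustration of what such a comparison involves, the sketch below estimates a per-sample KL divergence between the margin distributions of two model populations using shared histograms. The actual KLOM estimator and aggregation used by EasyDUB-code may differ, so treat this only as an assumption-laden approximation:

import numpy as np

def histogram_kl(p_samples: np.ndarray, q_samples: np.ndarray, bins: int = 20) -> float:
    # KL(P || Q) between two 1D samples via a shared histogram with smoothing
    lo = min(p_samples.min(), q_samples.min())
    hi = max(p_samples.max(), q_samples.max())
    p, _ = np.histogram(p_samples, bins=bins, range=(lo, hi))
    q, _ = np.histogram(q_samples, bins=bins, range=(lo, hi))
    p = (p + 1e-3) / (p + 1e-3).sum()  # additive smoothing to avoid log(0)
    q = (q + 1e-3) / (q + 1e-3).sum()
    return float(np.sum(p * np.log(p / q)))

# margins_a, margins_b: (n_models, n_samples) arrays stacked from the per-model
# .npy files of two populations (e.g. unlearned models vs. oracle models);
# random data is used here as a stand-in.
rng = np.random.default_rng(0)
margins_a = rng.normal(0.0, 1.0, size=(200, 100))
margins_b = rng.normal(0.2, 1.0, size=(200, 100))
per_sample_kl = [histogram_kl(margins_a[:, i], margins_b[:, i]) for i in range(100)]
print(np.mean(per_sample_kl))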

Training procedure (summary)

All pretrain and oracle models share the same training setup:

  • Optimizer: SGD with momentum
  • Learning rate: 0.4 (triangular schedule peaking at epoch 5)
  • Momentum: 0.9
  • Weight decay: 5e-4
  • Epochs: 24 total, checkpoint used here is epoch 23
  • Mixed precision: enabled (FP16)
  • Label smoothing: 0.0

Pretrain models are trained on the full CIFAR-10 training set. Oracle models are trained on the retain set (training set minus the corresponding forget set) for each forget set.
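
For reference, the hyperparameters above correspond roughly to the following PyTorch setup. This is only a sketch of the listed settings; the model construction, data loading, and exact shape of the triangular schedule are assumptions, not the EasyDUB training script:

import torch

def make_optimizer_and_schedule(model: torch.nn.Module, steps_per_epoch: int):
    epochs, peak_epoch, peak_lr = 24, 5, 0.4
    opt = torch.optim.SGD(model.parameters(), lr=peak_lr,
                          momentum=0.9, weight_decay=5e-4)
    # Triangular LR: linear warm-up to the peak at epoch 5, then linear decay
    sched = torch.optim.lr_scheduler.OneCycleLR(
        opt, max_lr=peak_lr, total_steps=epochs * steps_per_epoch,
        pct_start=peak_epoch / epochs, anneal_strategy="linear",
        cycle_momentum=False,
    )
    scaler = torch.cuda.amp.GradScaler()  # mixed-precision (FP16) training
    return opt, sched, scaler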

Citation

If you use EasyDUB in your work, please cite:

@inproceedings{rinberg2025dataunlearnbench,
  title     = {Data-Unlearn-Bench: Making Evaluating Data Unlearning Easy},
  author    = {Rinberg, Roy and Puigdemont, Pol and Pawelczyk, Martin and Cevher, Volkan},
  booktitle = {MUGEN Workshop at ICML},
  year      = {2025},
}

EasyDUB builds on the KLOM metric introduced in:

@misc{georgiev2024attributetodeletemachineunlearningdatamodel,
  title         = {Attribute-to-Delete: Machine Unlearning via Datamodel Matching},
  author        = {Kristian Georgiev and Roy Rinberg and Sung Min Park and Shivam Garg and Andrew Ilyas and Aleksander Madry and Seth Neel},
  year          = {2024},
  eprint        = {2410.23232},
  archivePrefix = {arXiv},
  primaryClass  = {cs.LG},
  url           = {https://arxiv.org/abs/2410.23232},
}