Purrturbed but Stable: Human-Cat Invariant Representations Across CNNs, ViTs and Self-Supervised ViTs
Official dataset for the paper:
Purrturbed but Stable: Human-Cat Invariant Representations Across CNNs, ViTs and Self-Supervised ViTs
Arya Shah et al. · arXiv:2511.02404
This dataset contains 346,400 paired video frames: each pair comprises a frame rendered under human vision and the same frame rendered under optics simulating cat (Felis catus) vision. It was used to benchmark cross-species representational alignment across CNNs, supervised ViTs, windowed transformers, and self-supervised ViTs (DINO) using centered kernel alignment (CKA) and representational similarity analysis (RSA).
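As a reference for the alignment metric, here is a minimal NumPy sketch of linear CKA between two feature matrices (e.g. layer activations extracted from the human-rendered and cat-rendered frames of the same pairs). This is an illustrative implementation of the standard formula, not necessarily the paper's exact code.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear Centered Kernel Alignment between feature matrices.

    X: (n_samples, d1), Y: (n_samples, d2) -- activations for the
    same n_samples stimuli under two conditions or two models.
    """
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    # ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    num = np.linalg.norm(Y.T @ X, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 64))
Y = rng.normal(size=(100, 32))
print(linear_cka(X, X))  # identical features: CKA = 1 (up to float error)
print(linear_cka(X, Y))  # unrelated random features: much lower
```

CKA is invariant to orthogonal transformations and isotropic scaling of either feature space, which is why it is a common choice for comparing representations of different dimensionality.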
| Column | Type | Description |
|---|---|---|
| `pair_id` | string | Unique identifier for each frame pair |
| `video_id` | string | Source video (video1–video191) |
| `frame_filename` | string | Original frame filename |
| `human_frame` | image | Frame as seen by humans |
| `cat_frame` | image | Frame simulated for Felis catus vision |
| Stat | Value |
|---|---|
| Videos | 191 |
| Frame pairs | 346,400 |
| Format | Parquet (one file per video, Snappy compression) |
| Image encoding | JPEG |
```python
from datasets import load_dataset

ds = load_dataset("aryashah00/CatVision")
sample = ds["train"][0]

# Image columns are decoded as PIL.Image objects
human_img = sample["human_frame"]
cat_img = sample["cat_frame"]
```
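For the second alignment metric named above, RSA can be sketched as follows: build a representational dissimilarity matrix (RDM) per condition from extracted features, then rank-correlate the RDMs' upper triangles. This pure-NumPy version (correlation distance, Spearman via rank transform) is an illustrative sketch, not the paper's exact pipeline.

```python
import numpy as np

def rdm(X):
    """Correlation-distance RDM: 1 - Pearson correlation between rows.

    X: (n_stimuli, features) -- one activation vector per stimulus.
    """
    Xc = X - X.mean(axis=1, keepdims=True)
    Xc /= np.linalg.norm(Xc, axis=1, keepdims=True)
    return 1.0 - Xc @ Xc.T

def rsa(X, Y):
    """Spearman correlation between the upper triangles of two RDMs."""
    iu = np.triu_indices(len(X), k=1)
    a, b = rdm(X)[iu], rdm(Y)[iu]
    # Rank transform (ties are vanishingly likely for continuous features)
    ra = a.argsort().argsort().astype(float)
    rb = b.argsort().argsort().astype(float)
    return np.corrcoef(ra, rb)[0, 1]

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 128))
print(rsa(X, X))  # identical representations: correlation of 1.0
```

In practice `X` and `Y` would hold activations from the same model layer for matched `human_frame` and `cat_frame` batches, so the RSA score measures how similarly the two renderings structure the stimulus set.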
```bibtex
@misc{shah2025purrturbed,
  title         = {Purrturbed but Stable: Human-Cat Invariant Representations
                   Across CNNs, ViTs and Self-Supervised ViTs},
  author        = {Arya Shah and others},
  year          = {2025},
  eprint        = {2511.02404},
  archivePrefix = {arXiv},
  primaryClass  = {cs.CV},
  url           = {https://arxiv.org/abs/2511.02404}
}
```
Released under CC BY 4.0.