Update model card: add Primus paper link and update pipeline tag

#1 · opened by nielsr (HF Staff)
Files changed (1)
  1. README.md +31 -4
README.md CHANGED
```diff
@@ -1,18 +1,21 @@
 ---
-license: cc-by-4.0
 datasets:
 - AnonRes/OpenMind
-pipeline_tag: image-feature-extraction
+license: cc-by-4.0
+pipeline_tag: image-segmentation
 tags:
 - medical
 ---
 
 # OpenMind Benchmark 3D SSL Models
 
-> **Model from the paper**: [An OpenMind for 3D medical vision self-supervised learning](https://arxiv.org/abs/2412.17041)
+> **Models from the papers**:
+> - [An OpenMind for 3D medical vision self-supervised learning](https://arxiv.org/abs/2412.17041)
+> - [Primus: Enforcing Attention Usage for 3D Medical Image Segmentation](https://huggingface.co/papers/2503.01835)
 > **Pre-training codebase used to create checkpoint**: [MIC-DKFZ/nnssl](https://github.com/MIC-DKFZ/nnssl)
 > **Dataset**: [AnonRes/OpenMind](https://huggingface.co/datasets/AnonRes/OpenMind)
 > **Downstream (segmentation) fine-tuning**: [TaWald/nnUNet](https://github.com/TaWald/nnUNet)
+> **Official Code Documentation**: [Primus in nnU-Net](https://github.com/MIC-DKFZ/nnUNet/blob/master/documentation/primus.md)
 
 ---
 
@@ -24,6 +27,10 @@ This repository hosts pre-trained checkpoints from the **OpenMind** benchmark:
 📄 **An OpenMind for 3D medical vision self-supervised learning** (Wald, T., Ulrich, C., Suprijadi, J., Ziegler, S., Nohel, M., Peretzke, R., ... & Maier-Hein, K. H. (2024).)
 ([arXiv:2412.17041](https://arxiv.org/abs/2412.17041)) — the first extensive benchmark study for **self-supervised learning (SSL)** on **3D medical imaging** data.
 
+It also features the **Primus** architecture:
+📄 **Primus: Enforcing Attention Usage for 3D Medical Image Segmentation** (Wald, T., Roy, S., Isensee, F., Ulrich, C., Ziegler, S., Trofimova, D., ... & Maier-Hein, K. H. (2025).)
+([Hugging Face Papers](https://huggingface.co/papers/2503.01835)) — introduction of Transformer-centric segmentation architectures that achieve state-of-the-art results.
+
 Each model was pre-trained using a particular SSL method on the [OpenMind Dataset](https://huggingface.co/datasets/AnonRes/OpenMind), a large-scale, standardized collection of public brain MRI datasets.
 
 **These models are not recommended to be used as-is for feature extraction.** Instead we recommend using the downstream fine-tuning frameworks for **segmentation** and **classification** adaptation, available in the [adaptation repository](https://github.com/TaWald/nnUNet).
@@ -36,7 +43,7 @@ Each model was pre-trained using a particular SSL method on the [OpenMind Datase
 We release SSL checkpoints for two backbone architectures:
 
 - **ResEnc-L**: A CNN-based encoder [[a](https://arxiv.org/abs/2410.23132), [b](https://arxiv.org/abs/2404.09556)]
-- **Primus-M**: A transformer-based encoder [[Primus paper](https://arxiv.org/abs/2503.01835)]
+- **Primus-M**: A transformer-based encoder [[Primus paper](https://huggingface.co/papers/2503.01835)]
 
 Each encoder has been pre-trained using one of the following SSL techniques:
 
@@ -50,3 +57,23 @@ Each encoder has been pre-trained using one of the following SSL techniques:
 | [SimMIM](https://openaccess.thecvf.com/content/CVPR2022/html/Xie_SimMIM_A_Simple_Framework_for_Masked_Image_Modeling_CVPR_2022_paper.html) | Simple masked reconstruction based pretraining method (TR only) |
 | [SwinUNETR SSL](https://arxiv.org/abs/2111.14791) | Rotation, Contrastive and Reconstruction based pre-training method. |
 | [SimCLR](https://arxiv.org/abs/2002.05709) | Transfer of 2D Contrastive learning baseline method to 3D |
+
+## Citation
+
+If you use these models or the Primus architecture, please cite:
+
+```bibtex
+@article{wald2025primus,
+  title={Primus: Enforcing Attention Usage for 3D Medical Image Segmentation},
+  author={Wald, Tassilo and Roy, Saikat and Isensee, Fabian and Ulrich, Constantin and Ziegler, Sebastian and Trofimova, Dasha and Stock, Raphael and Baumgartner, Michael and Köhler, Gregor and Maier-Hein, Klaus},
+  journal={arXiv preprint arXiv:2503.01835},
+  year={2025}
+}
+
+@article{wald2024openmind,
+  title={An OpenMind for 3D medical vision self-supervised learning},
+  author={Wald, Tassilo and Ulrich, Constantin and Suprijadi, J and Ziegler, Sebastian and Nohel, M and Peretzke, R and others},
+  journal={arXiv preprint arXiv:2412.17041},
+  year={2024}
+}
+```
```
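For anyone trying these checkpoints after this update, the files can be fetched from the Hub before being handed to the nnssl pre-training or nnU-Net adaptation tooling linked in the card. Below is a minimal sketch using `huggingface_hub`; the `repo_id` is a hypothetical placeholder, not the actual id of this model repository, so substitute the real value.

```python
# Minimal sketch: download an OpenMind SSL checkpoint from the Hugging Face Hub.
# Assumes `huggingface_hub` is installed; the repo_id below is a placeholder --
# replace it with the id of the actual checkpoint repository.
from pathlib import Path

from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="AnonRes/OpenMind-Models")  # hypothetical id

# List the downloaded files so the desired checkpoint can be located and
# passed to the downstream segmentation/classification adaptation frameworks.
for path in sorted(Path(local_dir).rglob("*")):
    if path.is_file():
        print(path.relative_to(local_dir))
```

As the card itself notes, these weights are intended as initializations for downstream fine-tuning rather than as standalone feature extractors, so the downloaded checkpoint would be pointed at the adaptation repository's entry points rather than used directly.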