---
license: apache-2.0
tags:
- medical-imaging
- image-registration
- torchscript
- impact
- pretrained
- segmentation
---

# 🧠 TorchScript Models for the IMPACT Semantic Similarity Metric

This repository provides a collection of **TorchScript-exported pretrained models** designed for use with the **IMPACT** similarity metric, enabling semantic medical image registration through feature-level comparison.
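
In practice, each exported network is loaded and called like any other TorchScript module. Below is a minimal sketch, assuming PyTorch is installed; the file name, input shape, and output structure (a single feature map or a list of them) are illustrative and depend on the model:

```python
import torch

# Load an exported feature extractor (the file name here is illustrative).
model = torch.jit.load("TS_M730.pt").eval()

# IMPACT compares images in feature space rather than intensity space.
# A 3D patch is shaped (N, C, D, H, W); intensities are assumed to be
# already preprocessed as listed in the table below.
patch = torch.rand(1, 1, 64, 64, 64)

with torch.no_grad():
    features = model(patch)  # feature map(s) consumed by the similarity metric
```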

The IMPACT metric is introduced in the following preprint, currently under review:

> **IMPACT: A Generic Semantic Loss for Multimodal Medical Image Registration**
> *V. Boussot, C. Hémon, J.-C. Nunes, J. Dowling, S. Rouzé, C. Lafond, A. Barateau, J.-L. Dillenseger*
> [arXiv:2503.24121 [cs.CV]](https://arxiv.org/abs/2503.24121)

🔧 The full implementation of IMPACT, along with its integration into the **Elastix** framework, is available in the repository:
➡️ [github.com/vboussot/ImpactLoss](https://github.com/vboussot/ImpactLoss)

This repository also includes example parameter maps, TorchScript model handling utilities, and a ready-to-use Docker environment for quick experimentation and reproducibility.

---

## 📚 Pretrained Models

The TorchScript models provided in this repository were exported from publicly available pretrained networks, including:

- **TotalSegmentator (TS)** — U-Net models trained for full-body anatomical segmentation
- **MRSegmentator (MRSeg)** — U-Net models trained for full-body anatomical segmentation in MRI and CT
- **Segment Anything 2.1 (SAM2.1)** — Foundation model for segmentation on natural images
- **DINOv2** — Self-supervised vision transformer trained on diverse datasets
- **Anatomix** — General-purpose feature extractor with anatomical priors for 3D medical images

Each model exposes multiple feature-extraction layers; the layers used by IMPACT are selected through the `LayerMask` parameter in the IMPACT configuration, as in the sketch below.
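
The example parameter maps shipped with the ImpactLoss repository show the full configuration. The hypothetical excerpt below uses Elastix parameter-file syntax; only `LayerMask` is named in this card, and the other parameter names are placeholders:

```text
// Hypothetical Elastix parameter-map excerpt for the IMPACT metric.
// Parameter names other than LayerMask are placeholders; see
// github.com/vboussot/ImpactLoss for the actual example maps.
(Metric "IMPACT")
(ModelPath "TS/M730.pt")   // TorchScript feature extractor to load
(LayerMask "0 1 1 0")      // which feature-extraction layers are compared
```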

In addition, the repository includes:

- **MIND** — A handcrafted descriptor, wrapped in TorchScript

| Model | Specialization | Paper / Reference | Field of View | License | Preprocessing |
|-------|----------------|-------------------|---------------|---------|---------------|
| **MIND** | Handcrafted descriptor | [Heinrich et al., 2012](https://doi.org/10.1016/j.media.2012.05.008) | `2*r*d + 1` (r: radius, d: dilation) | Apache 2.0 | Normalize intensities to [0, 1] |
| **SAM2.1** | General segmentation (natural images) | [Ravi et al., 2024](https://arxiv.org/abs/2408.00714) | 29 | Apache 2.0 | Normalize intensities to [0, 1], then standardize with mean 0.485 and std 0.229 |
| **TS Models** | CT/MRI segmentation | [Wasserthal et al., 2022](https://arxiv.org/abs/2208.05868) | `2^l + 3` (l: layer number) | Apache 2.0 | Canonical orientation for all models. For MRI models (e.g., TS/M730–M733, TS/M850–M853), standardize intensities to zero mean and unit variance. For CT models (e.g., TS/M258, TS/M291), clip and normalize intensities (model-dependent) |
| **MRSegmentator** | CT/MRI segmentation | [Häntze et al., 2024](https://arxiv.org/abs/2405.06463) | `2^l + 3` (l: layer number) | Apache 2.0 | Standardize intensities to zero mean and unit variance |
| **Anatomix** | Anatomy-aware feature encoder | [Dey et al., 2024](https://arxiv.org/abs/2411.02372) | Global (static mode) | MIT | Normalize intensities to [0, 1] |
| **DINOv2** | Self-supervised vision transformer | [Oquab et al., 2023](https://arxiv.org/abs/2304.07193) | 14 | Apache 2.0 | Normalize intensities to [0, 1], then standardize with mean 0.485 and std 0.229 |
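
As a reference for the Preprocessing column, the sketch below implements the recurring intensity pipelines in plain PyTorch. It assumes single-channel tensors and uses min-max scaling for the [0, 1] normalization; the authoritative implementations live in the ImpactLoss repository:

```python
import torch

def normalize_01(img: torch.Tensor) -> torch.Tensor:
    # Rescale intensities to [0, 1] via min-max scaling
    # (MIND, Anatomix, and the first step for SAM2.1 / DINOv2).
    return (img - img.min()) / (img.max() - img.min() + 1e-8)

def standardize_sam_dino(img: torch.Tensor) -> torch.Tensor:
    # Normalize to [0, 1], then standardize with mean 0.485 and std 0.229
    # (SAM2.1 and DINOv2 rows of the table).
    return (normalize_01(img) - 0.485) / 0.229

def zscore(img: torch.Tensor) -> torch.Tensor:
    # Standardize to zero mean and unit variance
    # (TS MRI models and MRSegmentator).
    return (img - img.mean()) / (img.std() + 1e-8)
```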

---