OmniRad: A Radiological Foundation Model for Multi-Task Medical Image Analysis
Abstract
OmniRad is a self-supervised radiological foundation model pretrained on 1.2 million medical images that, by emphasizing representation reuse and cross-task transferability, achieves improved performance on downstream classification and segmentation tasks.
Radiological analysis increasingly benefits from pretrained visual representations that can support heterogeneous downstream tasks across imaging modalities. In this work, we introduce OmniRad, a self-supervised radiological foundation model pretrained on 1.2 million medical images, designed with radiology-inspired principles emphasizing representation reuse and cross-task transferability. We evaluate the pretrained encoder under multiple downstream adaptation regimes, including lightweight task-specific adapters with a frozen backbone as well as full end-to-end fine-tuning for classification, allowing us to assess both representation quality and task-specific performance. OmniRad is evaluated on a broad suite of public benchmarks spanning classification and segmentation across multiple modalities. On the MedMNISTv2 collection, OmniRad improves classification F1 by up to 2.05% over competing foundation models. For dense prediction, OmniRad attains mean Dice score improvements across six MedSegBench datasets when using frozen representations. Qualitative analyses and latent-space visualizations suggest improved feature clustering and modality-related separation.
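The two adaptation regimes mentioned in the abstract can be illustrated with a short sketch. The snippet below is not the authors' code: it uses a torchvision ResNet-18 as a stand-in for the OmniRad encoder and a small two-layer head as the "lightweight task-specific adapter", assuming a standard PyTorch setup; the actual backbone architecture, adapter design, and training hyperparameters may differ.

```python
# Minimal sketch (not the authors' implementation) of the two adaptation regimes:
# a frozen pretrained encoder with a lightweight task-specific adapter, versus
# full end-to-end fine-tuning. The ResNet-18 backbone is a placeholder for the
# OmniRad encoder.
import torch
import torch.nn as nn
from torchvision.models import resnet18


def build_classifier(num_classes: int, freeze_backbone: bool) -> nn.Module:
    backbone = resnet18(weights=None)      # placeholder for the pretrained OmniRad encoder
    feat_dim = backbone.fc.in_features
    backbone.fc = nn.Identity()            # expose pooled features instead of logits

    if freeze_backbone:
        for p in backbone.parameters():    # frozen-representation regime
            p.requires_grad = False

    adapter = nn.Sequential(               # lightweight task-specific head
        nn.Linear(feat_dim, 256),
        nn.GELU(),
        nn.Linear(256, num_classes),
    )
    return nn.Sequential(backbone, adapter)


# Frozen-backbone probe: only the adapter parameters are optimized.
probe = build_classifier(num_classes=9, freeze_backbone=True)
opt_probe = torch.optim.AdamW((p for p in probe.parameters() if p.requires_grad), lr=1e-3)

# Full end-to-end fine-tuning: every parameter, backbone included, is updated.
finetune = build_classifier(num_classes=9, freeze_backbone=False)
opt_ft = torch.optim.AdamW(finetune.parameters(), lr=1e-4)

x = torch.randn(2, 3, 224, 224)            # dummy batch
print(probe(x).shape)                      # torch.Size([2, 9])
```

Under the frozen regime only the adapter parameters are trained, so downstream performance reflects the quality of the pretrained representations rather than the capacity of the fine-tuning procedure.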
Community
OmniRad introduces a self-supervised radiological foundation model pretrained on 1.2M medical images, designed for representation reuse across classification, segmentation, and vision–language tasks. The paper shows consistent gains over prior medical foundation models on MedMNISTv2 and multiple MedSegBench segmentation datasets, and releases code on GitHub (https://github.com/unica-visual-intelligence-lab/OmniRad) and pretrained backbones on Hugging Face (https://huggingface.co/collections/Snarcy/omnirad).
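Fetching one of the released backbones should be possible with the standard huggingface_hub client, as sketched below. The repository id "Snarcy/omnirad-base" is a placeholder, not a confirmed model name; check the linked collection page for the actual repositories and checkpoint formats.

```python
# Minimal sketch for downloading a released backbone from the Hugging Face collection.
# The repo_id below is a placeholder; see the collection page for real repository names.
from pathlib import Path
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="Snarcy/omnirad-base")  # placeholder repo id
for f in sorted(Path(local_dir).rglob("*")):
    if f.is_file():
        print(f.relative_to(local_dir))    # inspect the downloaded checkpoint files
```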