arXiv:2602.04547

OmniRad: A Radiological Foundation Model for Multi-Task Medical Image Analysis

Published on Feb 4 · Submitted by Luca Zedda on Feb 5
Abstract

AI-generated summary: OmniRad is a self-supervised radiological foundation model pretrained on 1.2 million medical images that demonstrates improved performance in classification and segmentation tasks through representation reuse and cross-task transferability.

Radiological analysis increasingly benefits from pretrained visual representations that can support heterogeneous downstream tasks across imaging modalities. In this work, we introduce OmniRad, a self-supervised radiological foundation model pretrained on 1.2 million medical images, designed with radiology-inspired principles emphasizing representation reuse and cross-task transferability. We evaluate the pretrained encoder under multiple downstream adaptation regimes, including lightweight task-specific adapters with a frozen backbone as well as full end-to-end fine-tuning for classification, allowing us to assess both representation quality and task-specific performance. OmniRad is evaluated on a broad suite of public benchmarks spanning classification and segmentation across multiple modalities. On the MedMNISTv2 collection, OmniRad improves classification F1 by up to 2.05% over competing foundation models. For dense prediction, OmniRad attains mean Dice score improvements across six MedSegBench datasets when using frozen representations. Qualitative analyses and latent-space visualizations suggest improved feature clustering and modality-related separation.
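The frozen-backbone regime described in the abstract corresponds to a standard linear-probe/adapter setup. Below is a minimal PyTorch sketch of that idea; the encoder, its feature dimension (768), and the single-linear-layer adapter are placeholder assumptions for illustration, not the paper's exact configuration:

```python
import torch
import torch.nn as nn

class FrozenBackboneClassifier(nn.Module):
    """Lightweight task adapter on top of a frozen pretrained encoder."""
    def __init__(self, encoder: nn.Module, feat_dim: int, num_classes: int):
        super().__init__()
        self.encoder = encoder
        for p in self.encoder.parameters():
            p.requires_grad = False                   # freeze the backbone
        self.head = nn.Linear(feat_dim, num_classes)  # trainable task adapter

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():                         # no gradients through the encoder
            feats = self.encoder(x)
        return self.head(feats)

# Placeholder encoder standing in for the OmniRad backbone; feat_dim=768 is assumed.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, 768))
model = FrozenBackboneClassifier(encoder, feat_dim=768, num_classes=9)
logits = model(torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 9])
```

Only the adapter's parameters receive gradients here, which is what makes this regime a probe of representation quality rather than of fine-tuning capacity.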

Community

Paper submitter

OmniRad introduces a self-supervised radiological foundation model pretrained on 1.2M medical images, designed for representation reuse across classification, segmentation, and vision–language tasks. The paper shows consistent gains over prior medical foundation models on MedMNISTv2 and multiple MedSegBench segmentation datasets, and provides code on GitHub (https://github.com/unica-visual-intelligence-lab/OmniRad) and pretrained backbones on Hugging Face (https://huggingface.co/collections/Snarcy/omnirad).
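For readers who want to fetch a backbone locally, checkpoints published in the linked collection can be pulled with huggingface_hub. A minimal sketch; the repo id below is a placeholder assumption, so substitute an actual model id from the Snarcy/omnirad collection:

```python
from huggingface_hub import snapshot_download

# Hypothetical repo id -- replace with a real model id from the
# Snarcy/omnirad collection linked above.
local_dir = snapshot_download(repo_id="Snarcy/omnirad-base")
print(local_dir)  # local path containing the downloaded checkpoint files
```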

