Curia: A Multi-Modal Foundation Model for Radiology
Paper: arXiv 2509.06830
Blog Post | Original Curia | Curia Paper Link
We introduce Curia-2, a follow-up to Curia that significantly improves the original pre-training strategy and representation quality to better capture the specificities of radiological data. Curia-2 excels on vision-focused tasks and fares competitively with vision-language models on clinically complex tasks such as finding detection.
Research paper coming soon.
To load the model, use the `AutoModel` class from the Hugging Face `transformers` library:

```python
from transformers import AutoModel

model = AutoModel.from_pretrained("raidium/curia-2")
```
You can also load the image preprocessor:

```python
from transformers import AutoImageProcessor

processor = AutoImageProcessor.from_pretrained("raidium/curia-2", trust_remote_code=True)
```
Then, to forward an image:

```python
import numpy as np

img = 2048 * np.random.rand(256, 256) - 1024  # single axial slice, in PL orientation
model_input = processor(img)
features = model(**model_input)
```
The input must be a NumPy array of shape `(H, W)`, with slices in the following orientations:

- PL for axial
- IL for coronal
- IP for sagittal

For CT, apply no windowing: pass raw Hounsfield units or a normalized image. For MRI, likewise pass raw intensity values or a normalized image.
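As a sketch of preparing inputs under these conventions, the snippet below extracts single 2-D slices from a hypothetical CT volume stored in raw Hounsfield units (the volume, its axis order, and the slice indices are assumptions for illustration; no windowing is applied):

```python
import numpy as np

# Hypothetical CT volume in raw Hounsfield units, axes ordered (z, y, x);
# values span roughly -1024 (air) to ~3000 (dense bone). No windowing is
# applied, matching the raw-HU convention described above.
volume = np.random.randint(-1024, 3000, size=(64, 256, 256)).astype(np.float32)

# Each input must be a 2-D array of shape (H, W); the orientation of each
# plane (PL / IL / IP) is assumed to be handled when the volume is loaded.
axial = volume[32, :, :]      # axial slice
coronal = volume[:, 128, :]   # coronal slice
sagittal = volume[:, :, 128]  # sagittal slice

for name, sl in [("axial", axial), ("coronal", coronal), ("sagittal", sagittal)]:
    assert sl.ndim == 2, f"{name} slice must be (H, W)"
    print(name, sl.shape)
```

Each resulting 2-D array can then be passed to the processor as in the example above.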
The model is released under the RESEARCH-ONLY RAIL-M license. https://huggingface.co/raidium/curia/blob/main/LICENSE
```bibtex
@article{dancette2025curia,
  title={Curia: A Multi-Modal Foundation Model for Radiology},
  author={Dancette, Corentin and Khlaut, Julien and Saporta, Antoine and Philippe, Helene and Ferreres, Elodie and Callard, Baptiste and Danielou, Th{\'e}o and Alberge, L{\'e}o and Machado, L{\'e}o and Tordjman, Daniel and others},
  journal={arXiv preprint arXiv:2509.06830},
  year={2025}
}
```