Model Overview
Description
This model performs visual feature extraction. For instance, RADIO generates image embeddings that can be used by a downstream model to classify images.
C-RADIOv4 models are available in multiple sizes:
- Shape-Optimized (431M parameters).
- Huge (653M parameters).
C-RADIOv4 was trained using an updated set of teacher models.
This model is ready for commercial/non-commercial use.
License/Terms of Use
GOVERNING TERMS: Use of this model is governed by the NVIDIA Open Model License Agreement.
Deployment Geography
Global
Use Case
The embeddings generated by this model are expected to be used by a downstream application. For example:
- Image-level understanding (image classification, curation, etc.); a minimal classification sketch follows this list.
- Dense processing (semantic segmentation, depth estimation, etc.).
- Integration into a Vision-Language Model.
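As a minimal illustration of the image-level use case above, the sketch below fits a linear classifier (a linear probe) on frozen summary embeddings. The embedding dimension, class count, and training loop are illustrative assumptions, not part of the released model; the embeddings themselves are obtained as shown in the Usage section below.

```python
import torch
import torch.nn as nn

# Hypothetical linear probe over frozen RADIO summary embeddings.
# summary_dim must match the channel dimension C of the (B, C) summary tensor
# returned by the model (see Usage); num_classes is a placeholder.
summary_dim, num_classes = 1024, 10  # placeholder values
classifier = nn.Linear(summary_dim, num_classes).cuda()
optimizer = torch.optim.AdamW(classifier.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(summary_batch: torch.Tensor, labels: torch.Tensor) -> float:
    # summary_batch: (B, C) embeddings produced by the frozen RADIO encoder.
    logits = classifier(summary_batch)
    loss = criterion(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```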
Release Date
Hugging Face: 01/27/2026 via RADIO Collection of Models.
References
- AM-RADIO: Agglomerative Vision Foundation Model -- Reduce All Domains Into One
- PHI-S: Distribution Balancing for Label-Free Multi-Teacher Distillation
- RADIOv2.5: Improved Baselines for Agglomerative Vision Foundation Models
- FeatSharp: Your Vision Model Features, Sharper
- C-RADIOv4 (Tech Report)
Model Architecture
Architecture Type: Neural Network
Network Architecture: Vision Transformer
Number of model parameters: 431M (-SO400M size), 653M (-H size)
Input
Input Type(s): Image
Input Format(s): Red, Green, Blue (RGB)
Input Parameters: Two Dimensional (2D)
Other Properties Related to Input: Image resolutions up to 2048x2048 in increments of 16 pixels
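Because tested resolutions range from 256 to 2048 pixels per side in increments of 16, a small helper like the hedged sketch below can snap an arbitrary target size onto that supported grid. The function name and clamping behavior are illustrative assumptions, not part of the released preprocessing.

```python
def snap_resolution(height: int, width: int, patch: int = 16,
                    lo: int = 256, hi: int = 2048) -> tuple[int, int]:
    """Illustrative helper: round each side down to a multiple of the patch
    size and clamp it to the tested 256-2048 range."""
    def snap(v: int) -> int:
        return min(hi, max(lo, (v // patch) * patch))
    return snap(height), snap(width)

# e.g. snap_resolution(1000, 750) -> (992, 736)
```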
Output
Output Type(s): Embeddings
Output Format: Tensor
Output Parameters: Two Dimensional (2D)
Other Properties Related to Output: Downstream model required to leverage image features. Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA’s hardware (e.g. GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions.
Usage:
RADIO will return a tuple with two tensors.
The summary is similar to the cls_token in ViT and is meant to represent the general concept of the entire image.
It has shape (B,C) with B being the batch dimension, and C being some number of channels.
The spatial_features represent more localized content which should be suitable for dense tasks such as semantic segmentation, or for integration into an LLM.
```python
import torch
from PIL import Image
from transformers import AutoModel, CLIPImageProcessor

hf_repo = "nvidia/C-RADIOv4-H"

# Load the image preprocessor and the model.
image_processor = CLIPImageProcessor.from_pretrained(hf_repo)
model = AutoModel.from_pretrained(hf_repo, trust_remote_code=True)
model.eval().cuda()

# Preprocess the input image and move it to the GPU.
image = Image.open('./assets/radio.png').convert('RGB')
pixel_values = image_processor(images=image, return_tensors='pt', do_resize=True).pixel_values
pixel_values = pixel_values.cuda()

# The model returns a (summary, spatial_features) tuple.
summary, spatial_features = model(pixel_values)
```
Spatial features have shape (B,T,D), with T being the number of flattened spatial tokens and D being the number of channels for spatial features. Note that C != D in general.
Converting to a spatial tensor format can be done using the downsampling size of the model, combined with the input tensor shape. For RADIO, the patch size is 16.
```python
from einops import rearrange

patch_size = 16  # RADIO's downsampling factor
spatial_features = rearrange(spatial_features, 'b (h w) d -> b d h w',
                             h=pixel_values.shape[-2] // patch_size, w=pixel_values.shape[-1] // patch_size)
```
The resulting tensor will have shape (B,D,H,W), as is typically seen with computer vision models.
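As a hedged illustration of a dense downstream use (mirroring the linear-probe segmentation evaluation described under Evaluation Datasets), the sketch below applies a per-token linear head to the reshaped spatial features and upsamples the logits to the input resolution. The head, its weights, and num_classes are assumptions rather than part of the released model, and the tensors continue from the Usage snippet above.

```python
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical 1x1-conv (per-token linear) segmentation head on the (B, D, H, W) features.
num_classes = 150  # ADE20K has 150 classes; placeholder for other datasets
seg_head = nn.Conv2d(spatial_features.shape[1], num_classes, kernel_size=1).cuda()

logits = seg_head(spatial_features.float())  # (B, num_classes, H, W)
logits = F.interpolate(logits, size=pixel_values.shape[-2:], mode='bilinear', align_corners=False)
pred = logits.argmax(dim=1)  # (B, H_in, W_in) per-pixel class indices
```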
Software Integration
Runtime Engine(s):
- TAO 6.1
Supported Hardware Microarchitecture Compatibility:
- NVIDIA Ampere
- NVIDIA Blackwell
- NVIDIA Jetson
- NVIDIA Hopper
- NVIDIA Lovelace
- NVIDIA Pascal
- NVIDIA Turing
- NVIDIA Volta
Supported Operating System(s):
- Linux
- Linux 4 Tegra
- QNX
- Windows
The integration of foundation and fine-tuned models into AI systems requires additional testing using use-case-specific data to ensure safe and effective deployment. Following the V-model methodology, iterative testing and validation at both unit and system levels are essential to mitigate risks, meet technical and functional requirements, and ensure compliance with safety and ethical standards before deployment.
This AI model can be embedded as an Application Programming Interface (API) call into the software environment described above.
Model Version(s)
- C-RADIOv4-SO400M (431M parameters).
- C-RADIOv4-H (653M parameters).
Training and Evaluation Datasets
Training Dataset
NV-CC-Img-Text-Dataset
Data Modality: Image
Image Training Data Size: 1 Million to 1 Billion Images
Data Collection Method by dataset: Automated
Labeling Method by dataset: Not Applicable (no labels are needed)
Properties: 700 Million Images
Evaluation Datasets
ImageNet
Link: ImageNet
Data Collection: Automated
Labeling Method: Human
Training Images: 1,281,167
Validation Images: 50,000
Test Images: 100,000
To perform the semantic segmentation evaluation, we used the training sets from ADE20K and Pascal VOC to train a linear layer, and subsequently performed evaluations on the corresponding validation sets. See below for further details:
ADE20K
Link: ADE20K
Data Collection: Human
Labeling Method: Human
Training Images: 25,574
Validation Images: 2,000
Pascal VOC
Link: Pascal VOC
Data Collection: Human
Labeling Method: Human
Training Images: 1,464
Validation Images: 1,449
| Benchmark | C-RADIOv3-B | C-RADIOv3-L | C-RADIOv4-SO400M | C-RADIOv3-H | C-RADIOv4-H |
|---|---|---|---|---|---|
| ImageNet Classification (Top-1 accuracy) | | | | | |
| Zero-Shot | 71.30 | 79.95 | 82.01 | 82.65 | 83.09 |
| KNN | 81.22 | 84.33 | 85.75 | 86.23 | 86.68 |
| ADE20k Semantic Segmentation (mIoU) | 49.79 | 51.87 | 55.14 | 52.75 | 55.20 |
| Pascal VOC Semantic Segmentation (mIoU) | 84.68 | 86.12 | 87.22 | 86.41 | 87.24 |
Inference
Acceleration Engine: TensorRT, TensorRT-LLM
Engine: PyTorch
Test Hardware: H100
Ethical Considerations
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
Please make sure you have proper rights and permissions for all input image and video content; if image or video includes people, personal health information, or intellectual property, the image or video generated will not blur or maintain proportions of image subjects included.
For more detailed information on ethical considerations for this model, please see the Model Card++ Explainability, Bias, Safety & Security, and Privacy Subcards below.
Please report model quality, risk, security vulnerabilities or NVIDIA AI Concerns here.
Bias
| Field | Response |
|---|---|
| Participation considerations from adversely impacted groups protected classes in model design and testing: | None |
| Measures taken to mitigate against unwanted bias: | None |
Explainability
| Field | Response |
|---|---|
| Intended Task/Domain: | Visual Feature Extraction |
| Model Type: | Vision Transformer |
| Intended Users: | Developers of downstream vision applications |
| Output: | Image embeddings |
| Describe how the model works: | The model takes an image as input, processes the image through multiple transformer blocks, and outputs summary and patch embeddings. |
| Name the adversely impacted groups this has been tested to deliver comparable outcomes regardless of: | Not Applicable |
| Technical Limitations: | This model generates image embeddings that can be used by a downstream model to, for example, classify images. The downstream model must be trained to leverage the visual embeddings. This model is only tested on input resolutions ranging from 256 to 2048, in increments of 16 pixels. This model may fail to surface information about the orientation of objects (e.g. whether a traffic sign points left/right). |
| Verified to have met prescribed NVIDIA quality standards: | Yes |
| Performance Metrics: | Image classification accuracy, semantic segmentation mean intersection-over-union (mIoU). |
| Potential Known Risks: | This model may not perform well on visual domains that are not represented in the training data. The generated embeddings might fail to disambiguate differences that appear evident to humans (e.g. two images showing different breeds of dogs might in fact produce very similar embeddings). Domain-specific evaluation is required for the target application. |
| Licensing: | NVIDIA Open Model License |
Privacy
| Field | Response |
|---|---|
| Generatable or reverse engineerable personal data? | No |
| Personal data used to create this model? | None Known |
| How often is dataset reviewed? | Before Every Release |
| Is there provenance for all datasets used in training? | Yes |
| Does data labeling (annotation, metadata) comply with privacy laws? | Yes |
| Is data compliant with data subject requests for data correction or removal, if such a request was made? | Yes |
| Was data from user interactions with the AI model (e.g. user input and prompts) used to train the model? | No |
| Applicable Privacy Policy | https://www.nvidia.com/en-us/about-nvidia/privacy-policy/ |
Safety
| Field | Response |
|---|---|
| Model Application Field(s): | Generation of visual embeddings |
| Describe the life critical impact (if present). | Not Applicable |
| Use Case Restrictions: | Abide by NVIDIA Open Model License Agreement |
| Model and dataset restrictions: | The principle of least privilege (PoLP) is applied, limiting access for dataset generation and model development. Access to the dataset during training was restricted, and dataset license constraints were adhered to. |