How to use MLLabIISc/ModHiFi-ResNet50-ImageNet-Tiny with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline(
    "image-classification",
    model="MLLabIISc/ModHiFi-ResNet50-ImageNet-Tiny",
    trust_remote_code=True,
)
pipe("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/parrots.png")
```

```python
# Load model directly
from transformers import AutoImageProcessor, AutoModelForImageClassification

processor = AutoImageProcessor.from_pretrained("MLLabIISc/ModHiFi-ResNet50-ImageNet-Tiny", trust_remote_code=True)
model = AutoModelForImageClassification.from_pretrained("MLLabIISc/ModHiFi-ResNet50-ImageNet-Tiny", trust_remote_code=True)
```
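The `AutoImageProcessor` above handles resizing and normalization for you. As a rough illustration of what that preprocessing step typically does, here is a minimal sketch assuming the common ImageNet recipe for ResNet-50 (224x224 center crop, ImageNet mean/std); the values shipped in this repo's own preprocessor config are authoritative and may differ:

```python
import numpy as np

# Standard ImageNet normalization constants (assumption: this checkpoint
# follows the usual ResNet-50 recipe; check the repo's preprocessor config).
IMAGENET_MEAN = np.array([0.485, 0.456, 0.406])
IMAGENET_STD = np.array([0.229, 0.224, 0.225])

def preprocess(img: np.ndarray, size: int = 224) -> np.ndarray:
    """Center-crop an HxWx3 uint8 image to size x size and normalize to CHW float32."""
    h, w, _ = img.shape
    top, left = (h - size) // 2, (w - size) // 2
    crop = img[top:top + size, left:left + size].astype(np.float32) / 255.0
    crop = (crop - IMAGENET_MEAN) / IMAGENET_STD
    return crop.transpose(2, 0, 1)  # HWC -> CHW, as PyTorch models expect

x = preprocess(np.zeros((256, 256, 3), dtype=np.uint8))
print(x.shape)  # (3, 224, 224)
```

Feeding the model tensors preprocessed with different constants or crop sizes can silently degrade accuracy, which is why using the bundled processor is the safest path.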
This results in a model that is natively **smaller, faster, and reduces FLOPs**:

| Model | Params Removed | Top-1 Acc. | Top-5 Acc. | Params (M) | FLOPs (G) | Size (MB) |
|---|---|---|---|---|---|---|
| **Original ResNet-50** | 0% | 76.13% | 92.86% | 25.56 | 4.12 | ~98 |
| **ModHiFi-Tiny** | **~67%** | **73.85%** | **91.83%** | **8.38** | **1.13** | **~33** |

> **Note:** "FLOPs" measures the number of floating-point operations required for a single inference pass. Lower is better for latency and battery life.

On the hardware we test on (detailed in our [Paper](https://arxiv.org/abs/2511.19566)), we observe speedups of **2.42x on CPUs** and **2.38x on GPUs**.
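A quick sanity check of the reported numbers shows why the wall-clock speedup is smaller than the raw FLOP reduction (the names below are illustrative; only the numeric values come from the table and text above):

```python
# Reproduce the headline compression ratios from the reported table.
orig_params, tiny_params = 25.56, 8.38  # parameters, in millions
orig_flops, tiny_flops = 4.12, 1.13     # GFLOPs per forward pass

params_removed = 1 - tiny_params / orig_params
flop_reduction = orig_flops / tiny_flops

print(f"params removed: {params_removed:.1%}")   # ~67%, matching the table
print(f"FLOP reduction: {flop_reduction:.2f}x")  # ~3.65x theoretical
# The observed speedups (2.42x CPU, 2.38x GPU) fall short of the ~3.65x
# theoretical FLOP reduction because real inference is partly memory-bound
# rather than purely compute-bound.
```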
## ⚠️ Critical Note on Preprocessing & Accuracy