Instructions for using MLLabIISc/ModHiFi-ResNet50-ImageNet-Tiny with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use MLLabIISc/ModHiFi-ResNet50-ImageNet-Tiny with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline(
    "image-classification",
    model="MLLabIISc/ModHiFi-ResNet50-ImageNet-Tiny",
    trust_remote_code=True,
)
pipe("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/parrots.png")
```

```python
# Load model directly
from transformers import AutoImageProcessor, AutoModelForImageClassification

processor = AutoImageProcessor.from_pretrained("MLLabIISc/ModHiFi-ResNet50-ImageNet-Tiny", trust_remote_code=True)
model = AutoModelForImageClassification.from_pretrained("MLLabIISc/ModHiFi-ResNet50-ImageNet-Tiny", trust_remote_code=True)
```

- Notebooks
- Google Colab
- Kaggle
This results in a model that is natively **smaller, faster, and reduces FLOPs**.

> **Note:** "FLOPs" measures the number of floating-point operations required for a single inference pass. Lower is better for latency and battery life.

On the hardware we test on (detailed in our [Paper](https://arxiv.org/abs/2511.19566)), we observe speedups of 2.42x on CPUs and 2.38x on GPUs.
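To make the FLOPs note above concrete, here is a back-of-the-envelope estimate of a single convolution layer's cost (an illustrative sketch, not the accounting used in the paper): each output element needs one multiply-accumulate per input-channel tap, and one MAC is conventionally counted as 2 FLOPs.

```python
# Rough FLOPs estimate for one conv layer (assumptions: square kernel,
# bias ignored, 1 MAC = 2 FLOPs; not the paper's exact accounting).
def conv_flops(c_in, c_out, k, h_out, w_out):
    macs = c_in * k * k * c_out * h_out * w_out  # multiply-accumulates
    return 2 * macs  # one multiply + one add per MAC

# Example: the 7x7 stem conv of a ResNet-50 on a 224x224 input,
# which produces a 112x112x64 feature map:
print(conv_flops(3, 64, 7, 112, 112))  # → 236027904 (~0.24 GFLOPs)
```

Summing such terms over every layer gives the per-inference FLOP count that the speedup numbers above reduce.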

## ⚠️ Critical Note on Preprocessing & Accuracy

**Please Read Before Evaluating:** This model was trained and evaluated using standard PyTorch `torchvision.transforms`. The Hugging Face `pipeline` uses `PIL` (Pillow) for image resizing by default.