---
library_name: litert
base_model: timm/deit_tiny_patch16_224.fb_in1k
tags:
- vision
- image-classification
datasets:
- imagenet-1k
---

# deit_tiny_patch16_224

Converted TIMM image-classification model for LiteRT.

- Source architecture: deit_tiny_patch16_224
- File: model.tflite
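As a minimal usage sketch, the converted file can be run with the TFLite interpreter that ships with TensorFlow (LiteRT models use the same `.tflite` format). The normalization constants and NHWC float input layout below are assumptions based on timm's default DeiT preprocessing; check the model's actual input details before relying on them.

```python
import os
import numpy as np

# Assumed: timm's default ImageNet mean/std normalization for DeiT.
IMAGENET_MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
IMAGENET_STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess(image_uint8: np.ndarray) -> np.ndarray:
    """Turn a 224x224x3 uint8 image into a normalized 1x224x224x3 float32 batch."""
    x = image_uint8.astype(np.float32) / 255.0
    x = (x - IMAGENET_MEAN) / IMAGENET_STD
    return x[np.newaxis, ...]  # add batch dimension

if os.path.exists("model.tflite"):
    # tf.lite.Interpreter can execute .tflite flatbuffers produced for LiteRT.
    import tensorflow as tf

    interpreter = tf.lite.Interpreter(model_path="model.tflite")
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]

    dummy = np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8)
    interpreter.set_tensor(inp["index"], preprocess(dummy))
    interpreter.invoke()
    logits = interpreter.get_tensor(out["index"])
    print("output shape:", logits.shape)
```

On-device deployments would instead load the same file through the LiteRT runtime for their platform; the preprocessing is identical either way.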

## Model Details

- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 5.7
  - GMACs: 1.3
  - Activations (M): 6.0
  - Image size: 224 x 224
- **Papers:**
  - Training data-efficient image transformers & distillation through attention: https://arxiv.org/abs/2012.12877
- **Original:** https://github.com/facebookresearch/deit
- **Dataset:** ImageNet-1k
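Since the model is trained on ImageNet-1k, its classification head emits 1000 scores. A small numpy sketch of turning those scores into top-k predictions, assuming the output is a `(1, 1000)` logit tensor (verify against the interpreter's output details):

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def top_k(logits: np.ndarray, k: int = 5):
    """Return (class indices, probabilities) of the k highest-scoring classes."""
    probs = softmax(logits)[0]
    idx = np.argsort(probs)[::-1][:k]
    return idx, probs[idx]

# Example with random logits shaped like the model's ImageNet-1k head.
indices, probs = top_k(np.random.randn(1, 1000).astype(np.float32))
print(indices, probs)
```

The indices map into the standard ImageNet-1k label list in the usual timm/torchvision ordering.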

## Citation

```bibtex
@InProceedings{pmlr-v139-touvron21a,
  title = {Training data-efficient image transformers & distillation through attention},
  author = {Touvron, Hugo and Cord, Matthieu and Douze, Matthijs and Massa, Francisco and Sablayrolles, Alexandre and Jegou, Herve},
  booktitle = {International Conference on Machine Learning},
  pages = {10347--10357},
  year = {2021},
  volume = {139},
  month = {July}
}
```

```bibtex
@misc{rw2019timm,
  author = {Ross Wightman},
  title = {PyTorch Image Models},
  year = {2019},
  publisher = {GitHub},
  journal = {GitHub repository},
  doi = {10.5281/zenodo.4414861},
  howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```