---
license: apache-2.0
pipeline_tag: image-classification
---
# HardNet

## **Use case**: `Image classification`

# Model description


Harmonic DenseNet (HardNet) is a memory-efficient variant of DenseNet that optimizes for both **computational efficiency and memory access cost**. It introduces a harmonic pattern in the dense connections to reduce redundant feature computations.

HardNet features **harmonic dense connections** that reduce connection patterns to minimize memory bandwidth, while maintaining the benefits of DenseNet's feature reuse. The architecture combines **depthwise separable convolutions** with dense blocks for enhanced efficiency.

Designed for practical hardware deployment, HardNet provides DenseNet-like feature richness with lower memory cost on edge devices.

(source: https://arxiv.org/abs/1909.00948)

The model is quantized to **int8** using **ONNX Runtime** and exported for efficient deployment.

## Network information


| Network Information | Value |
|--------------------|-------|
| Framework          | Torch |
| MParams            | ~3.43 M |
| Quantization       | Int8 |
| Provenance         | https://github.com/PingoLH/Pytorch-HarDNet |
| Paper              | https://arxiv.org/abs/1909.00948 |

## Network inputs / outputs


For an image resolution of N×M and P classes:

| Input Shape | Description |
| ----- | ----------- |
| (1, N, M, 3) | Single NxM RGB image with UINT8 values between 0 and 255 |

| Output Shape | Description |
| ----- | ----------- |
| (1, P) | Per-class confidence for the P classes, in FLOAT32 |
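
The I/O contract above (a single NHWC UINT8 image in, a FLOAT32 score vector out) can be exercised with ONNX Runtime. This is a minimal sketch, not the model zoo's official pipeline: the model path in the commented example is hypothetical (download the `.onnx` file from the links below first), and any normalization baked into the graph is assumed to be handled by the quantized model itself since the table specifies raw 0-255 UINT8 input.

```python
import numpy as np


def preprocess(image: np.ndarray) -> np.ndarray:
    """Add a batch dimension to an HxWx3 uint8 image -> (1, H, W, 3)."""
    assert image.dtype == np.uint8 and image.ndim == 3 and image.shape[2] == 3
    return image[np.newaxis, ...]


def classify(model_path: str, image: np.ndarray):
    """Run the int8 ONNX model and return (top-1 class index, its score)."""
    import onnxruntime as ort  # pip install onnxruntime

    session = ort.InferenceSession(model_path)
    input_name = session.get_inputs()[0].name
    # Output is (1, P) FLOAT32 per the I/O table above.
    scores = session.run(None, {input_name: preprocess(image)})[0]
    idx = int(np.argmax(scores))
    return idx, float(scores[0, idx])


# Example call (model path is hypothetical; fetch the file from the links below):
# top1, score = classify("hardnet39ds_pt_224_qdq_int8.onnx",
#                        np.zeros((224, 224, 3), dtype=np.uint8))
```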


## Recommended platforms


| Platform | Supported | Recommended |
|----------|-----------|-----------|
| STM32L0  |[]|[]|
| STM32L4  |[]|[]|
| STM32U5  |[]|[]|
| STM32H7  |[]|[]|
| STM32MP1 |[]|[]|
| STM32MP2 |[]|[]|
| STM32N6  |[x]|[x]|

# Performances

## Metrics

- Measurements are taken with the default STEdgeAI Core configuration, with the input/output allocated option enabled.
- All models are trained from scratch on the ImageNet dataset.

### Reference **NPU** memory footprint on Imagenet dataset (see Accuracy for details on dataset)
| Model | Dataset | Format | Resolution | Series | Internal RAM (KiB) | External RAM (KiB) | Weights Flash (KiB) | STEdgeAI Core version |
|-------|---------|--------|------------|--------|--------------|--------------|---------------|----------------------|
| [hardnet39ds_pt_224](https://github.com/STMicroelectronics/stm32ai-modelzoo/tree/main/image_classification/hardnet_pt/Public_pretrainedmodel_public_dataset/Imagenet/hardnet39ds_pt_224/hardnet39ds_pt_224_qdq_int8.onnx) | Imagenet | Int8 | 224×224×3 | STM32N6 | 1476.12 | 0 | 3516.67 | 3.0.0 |



### Reference **NPU** inference time on Imagenet dataset (see Accuracy for details on dataset)
| Model | Dataset  | Format | Resolution | Board | Execution Engine | Inference time (ms) | Inf / sec | STEdgeAI Core version |
|-------|---------|--------|--------|------------|-------|-----------------|-------------------|---------------------|
| [hardnet39ds_pt_224](https://github.com/STMicroelectronics/stm32ai-modelzoo/tree/main/image_classification/hardnet_pt/Public_pretrainedmodel_public_dataset/Imagenet/hardnet39ds_pt_224/hardnet39ds_pt_224_qdq_int8.onnx) | Imagenet | Int8 | 224×224×3 | STM32N6570-DK | NPU/MCU | 65.81 | 15.19 | 3.0.0  |



### Accuracy with Imagenet dataset

| Model | Format | Resolution | Top 1 Accuracy |
| --- | --- | --- | --- |
| [hardnet39ds_pt](https://github.com/STMicroelectronics/stm32ai-modelzoo/tree/main/image_classification/hardnet_pt/Public_pretrainedmodel_public_dataset/Imagenet/hardnet39ds_pt_224/hardnet39ds_pt_224.onnx) | Float | 224x224x3 | 74.38 % |
| [hardnet39ds_pt](https://github.com/STMicroelectronics/stm32ai-modelzoo/tree/main/image_classification/hardnet_pt/Public_pretrainedmodel_public_dataset/Imagenet/hardnet39ds_pt_224/hardnet39ds_pt_224_qdq_int8.onnx) | Int8 | 224x224x3 | 73.61 % |


- Dataset details: [link](https://www.image-net.org)
- Number of classes: 1000
- To perform the quantization, we calibrated the activations with a random subset of the training set.
- For the sake of simplicity, the accuracy reported here was estimated on the 50,000 labelled images of the validation set.
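
The calibration step described above can be sketched with ONNX Runtime's static quantization API. This is an illustrative sketch only, not the exact recipe used for the published model: the input tensor name, image count, and file paths are assumptions, and random images stand in for the real training subset mentioned above (in practice you would feed preprocessed training images).

```python
import numpy as np


class RandomCalibrationReader:
    """Feeds a few calibration batches to the quantizer.

    ONNX Runtime's quantizer repeatedly calls get_next(), which must return
    a feed dict {input_name: array} or None when calibration data is exhausted.
    The input name and shape below are assumptions for illustration.
    """

    def __init__(self, input_name="input", num_samples=8, shape=(1, 224, 224, 3)):
        self._batches = iter(
            {input_name: np.random.randint(0, 256, size=shape, dtype=np.uint8)}
            for _ in range(num_samples)
        )

    def get_next(self):
        return next(self._batches, None)


# Quantization call (requires onnxruntime; paths are hypothetical).
# QDQ format matches the "_qdq_int8" naming of the published model file:
# from onnxruntime.quantization import quantize_static, QuantFormat
# quantize_static("hardnet39ds_pt_224.onnx",
#                 "hardnet39ds_pt_224_qdq_int8.onnx",
#                 RandomCalibrationReader(),
#                 quant_format=QuantFormat.QDQ)
```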



## Retraining and integration in a simple example

Please refer to the [stm32ai-modelzoo-services GitHub repository](https://github.com/STMicroelectronics/stm32ai-modelzoo-services).



# References

<a id="1">[1]</a> - **Dataset**: Imagenet (ILSVRC 2012) — https://www.image-net.org/

<a id="2">[2]</a> - **Model**: HarDNet — https://github.com/PingoLH/Pytorch-HarDNet