---
title: README
emoji:
colorFrom: red
colorTo: gray
sdk: static
pinned: false
---
<style>
.st-header-banner {
width: 100%;
height: 120px;
margin: 0 0 12px 0;
padding: 0 24px;
box-sizing: border-box;
background: #03234B;
display: flex;
align-items: center;
justify-content: flex-start;
}
.st-logo-right {
height: 72px;
width: auto;
object-fit: contain;
}
img.rotating-content {
width: 300px;
height: 150px;
object-fit:cover;
object-position:center;
}
.icon {
width: 1.5em;
height: 1.5em;
vertical-align: -.7em;
padding: .25em .5em .25em .25em;
}
ul.social {
list-style: none;
padding-left: 0;
}
ul.social li {
padding-left: .5em;
display: flex;
}
.architectureImage {
border: 0.5px solid black;
}
</style>
<div class="st-header-banner">
<img src="assets/ST_logo_2024_white.png" alt="ST Logo" class="st-logo-right"/>
</div>
_**Innovating with edge AI on STM32 and Hugging Face.**_
STMicroelectronics is a global semiconductor leader pushing artificial intelligence down to the most resource-constrained microcontrollers. With the **STM32 AI ecosystem**, ST provides an end-to-end pipeline — from pre-trained models in the **Model Zoo** to bare-metal optimized deployment — enabling embedded developers to build intelligent applications without deep ML expertise.
Models are optimized, quantized, and validated to run directly on the ST Neural-ART NPU as well as on Arm Cortex-M4, M7, M33, and M85 cores.
---
## End-to-End AI Pipeline
```
+----------------------------+
| EXPLORE |
+----------------------------+
| STM32 AI Model Zoo |
+----------------------------+
|
v
+----------------------------+
| TRAIN |
+----------------------------+
| STM32 AI Model Zoo |
| Services |
+----------------------------+
|
v
+----------------------------+
| OPTIMIZE / QUANTIZE |
+----------------------------+
| STM32 AI Model Zoo |
| Services |
+----------------------------+
|
v
+----------------------------+
| EVALUATE / PREDICT |
+----------------------------+
| STM32 AI Model Zoo |
| Services |
+----------------------------+
|
v
+----------------------------+
| BENCHMARK |
+----------------------------+
| STM32Cube AI Studio |
| STM32 Developer Cloud |
+----------------------------+
|
v
+----------------------------+
| CONVERT |
+----------------------------+
| STM32Cube AI Studio |
| ST Edge AI Core |
+----------------------------+
|
v
+----------------------------+
| DEPLOY |
+----------------------------+
| STM32Cube ecosystem |
| (tools, middleware, BSP) |
+----------------------------+
```
This diagram summarizes the typical STM32 edge AI workflow from model discovery to on-device deployment:
1. **Explore**: Start from the STM32 AI Model Zoo to browse available architectures, pretrained checkpoints, and application examples.
2. **Train**: Use Model Zoo Services to retrain an existing model or build a task-specific pipeline on your own dataset.
3. **Optimize / Quantize**: Reduce model size and compute cost so the network fits embedded constraints while preserving the best possible accuracy.
4. **Evaluate / Predict**: Validate accuracy, inspect predictions, and compare tradeoffs before moving to hardware execution.
5. **Benchmark**: Measure latency, memory footprint, and target compatibility with STM32Cube AI Studio and STM32 Developer Cloud.
6. **Convert**: Transform the trained model into STM32-ready artifacts using STM32Cube AI Studio and ST Edge AI Core.
7. **Deploy**: Integrate the generated code into the STM32Cube ecosystem, including firmware, middleware, and board support components.
In short, the flow shows how a model moves from selection and training to optimization, hardware validation, and final integration on STM32 devices.
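As a rough illustration of the optimize/quantize step above, here is a minimal pure-Python sketch of int8 affine quantization, the general scheme behind most 8-bit MCU deployments. The helper names and sample values are illustrative only; ST's tooling applies such schemes automatically (including per-channel variants) and should be used for real models.

```python
def quantize_params(values):
    """Derive int8 affine quantization parameters (scale, zero_point).

    The representable range must include zero so that padding and ReLU
    behave exactly under quantization.
    """
    lo, hi = min(min(values), 0.0), max(max(values), 0.0)
    scale = (hi - lo) / 255.0 or 1.0  # avoid zero scale for constant tensors
    zero_point = round(-lo / scale) - 128  # maps real 0.0 onto an int8 code
    return scale, zero_point

def quantize(x, scale, zp):
    """Map a float to an int8 code, clamped to [-128, 127]."""
    return max(-128, min(127, round(x / scale) + zp))

def dequantize(q, scale, zp):
    """Recover the approximate float value from an int8 code."""
    return (q - zp) * scale

# Illustrative weights: the round trip loses at most ~scale/2 per value.
weights = [-0.51, 0.0, 0.27, 1.3]
scale, zp = quantize_params(weights)
restored = [dequantize(quantize(w, scale, zp), scale, zp) for w in weights]
```

The quantization error is bounded by half the scale per element, which is why narrowing the dynamic range (e.g. via per-channel parameters) improves accuracy.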
## Build, Optimize and Deploy AI/ML on STM32
- **STM32 AI Model Zoo**: A GitHub collection of reference machine learning models optimized for STM32 microcontrollers.
- **Application-Oriented Model Library**: A large set of models ready for re-training across multiple use cases.
- **Pre-trained Models Across Frameworks**: Reference model variants available for PyTorch, TensorFlow, and ONNX workflows.
- **End-to-End Scripts & Services**: Tools to retrain, quantize, evaluate, and benchmark models on custom datasets, plus auto-generated application code examples via [stm32ai-modelzoo-services](https://github.com/STMicroelectronics/stm32ai-modelzoo-services/tree/main).
- **Fast Deployment + Full Customization**: Use pretrained categories for quick deployment, or apply transfer learning / full training from scratch on your own data.
- **Reference Performance Metrics**: Results provided on STM32 MCU, NPU, and MPU targets for both float and quantized models.
- **Expanded Framework Support**: Comprehensive PyTorch support complements TensorFlow and ONNX in unified end-to-end workflows (train, evaluate, quantize, benchmark, deploy).
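To make the reference performance metrics concrete, here is a back-of-envelope sketch of how int8 memory footprints relate to model shape. The function name and example sizes are hypothetical; authoritative flash/RAM figures come from the ST benchmarking tools, which account for code size, alignment, and buffer reuse.

```python
# Rough flash/RAM sizing for an int8-quantized model (illustrative
# arithmetic only; real figures come from ST's benchmarking tools).
def int8_footprint(n_params, peak_activation_elems, overhead_bytes=0):
    """Estimate flash (weights) and peak RAM (activations) in bytes.

    int8 stores one byte per weight and per activation element; peak RAM
    is driven by the largest live activation tensor, not their sum.
    """
    flash = n_params          # 1 byte per int8 weight
    ram = peak_activation_elems + overhead_bytes
    return flash, ram

# e.g. a ~100k-parameter classifier with a 32x32x16 peak activation tensor
flash, ram = int8_footprint(100_000, 32 * 32 * 16)
print(f"flash ~ {flash / 1024:.0f} KiB, peak RAM ~ {ram / 1024:.0f} KiB")
```

This kind of estimate helps shortlist candidate models before running a full benchmark on target hardware.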
---
## Key Tools & Ecosystem
- **STEdgeAI Core**: Converts trained neural networks into optimized C code for STM32.
- **STM32 AI Model Zoo services**: A repository of scripts and workflows that ease end-to-end AI model training and integration on ST devices, providing a solid foundation for adding AI capabilities to STM32-based projects.
- **STM32 AI Model Zoo**: A repository of reference pre-trained machine learning models optimized for STM32 microcontrollers, generated with the STM32 AI Model Zoo services.
- **Integration with Popular Frameworks**:
- TensorFlow / Keras
- PyTorch (via ONNX export)
- ONNX Runtime pipelines
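Whichever framework the model comes from, conversion to C code requires every operator in the graph to be supported on the target. A minimal sketch of such a pre-conversion check follows; the operator set and helper below are illustrative assumptions, not STEdgeAI Core's actual supported-operator list, which is documented by ST.

```python
# Hypothetical supported-operator check before conversion (illustrative
# only; consult the STEdgeAI Core documentation for the real operator set).
SUPPORTED_OPS = {"Conv", "Relu", "MaxPool", "Gemm", "Softmax", "Add", "Reshape"}

def unsupported_ops(model_ops):
    """Return the operators in the model that the toolchain cannot convert."""
    return sorted(set(model_ops) - SUPPORTED_OPS)

# A small CNN-style graph: every op is in the supported set.
model = ["Conv", "Relu", "MaxPool", "Conv", "Relu", "Gemm", "Softmax"]
print(unsupported_ops(model))  # → []
```

Running such a check early, right after ONNX export, avoids discovering an unsupported layer only at code-generation time.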
---
## Links
- **[STM32 AI Model Zoo services](https://github.com/STMicroelectronics/stm32ai-modelzoo-services/tree/main)**
- **[STEdgeAI Core](https://www.st.com/en/development-tools/stedgeai-core.html)**
- **[STM32 Developer Cloud](https://stm32ai-cs.st.com/home)**
- **[STM32 AI Model Zoo](https://github.com/STMicroelectronics/stm32ai-modelzoo/tree/main)**
- **[STM32Cube AI Studio](https://www.st.com/en/development-tools/stedgeai-cubeai.html)**
---
## 🤝 Contact & Contributions
- For technical questions: [ST EdgeAI Community](https://community.st.com/t5/edge-ai/bd-p/edge-ai)
- For issues or feature requests, use the **Issues** or **Discussions** tabs in the respective repos.
- Contributions and feedback on models, pipelines, and docs are welcome.