---
license: mit
pipeline_tag: object-detection
library_name: ultralytics
base_model: yolo11n
datasets:
- lisa-traffic-sign-dataset
tags:
- computer-vision
- object-detection
- yolov11
- traffic-sign-detection
metrics:
- precision
- recall
- map50
- map50-95
---

# Traffic Sign Detection Model (YOLOv11)

## Model Description

This object detection model was trained with the Ultralytics YOLOv11 framework. It detects and classifies multiple types of traffic signs in road images by predicting bounding boxes and class labels.

The model was trained by fine-tuning the pretrained **YOLOv11n** architecture on a dataset of annotated traffic signs derived from the **LISA Traffic Sign Dataset**.

### Training Approach

- Base model: YOLOv11n
- Framework: Ultralytics YOLO
- Training method: transfer learning / fine-tuning
- Task: object detection

Transfer learning lets the model start from pretrained visual features learned on large datasets and then specialize those features for traffic sign detection.

---

# Intended Use

This model is designed for:

- traffic sign detection research
- computer vision experimentation
- academic coursework projects
- demonstrations of object detection systems

Possible applications include:

- driver assistance research
- automated traffic sign recognition
- road scene analysis

This model **should not be used in safety-critical systems, such as autonomous vehicles, without extensive additional testing and validation**.

---

# Training Data

## Dataset Source

The model was trained on images derived from the **LISA Traffic Sign Dataset**, a publicly available dataset created for traffic sign detection and classification research. It contains traffic sign images captured in real driving environments in the United States.

Dataset link:
https://cvrr.ucsd.edu/LISA/lisa-traffic-sign-dataset.html

### Dataset Citation

Mogelmose, A., Trivedi, M. M., & Moeslund, T. B. (2012). **Vision-based traffic sign detection and analysis for intelligent driver assistance systems: Perspectives and survey.** IEEE Transactions on Intelligent Transportation Systems.

---

## Classes

The model detects the following traffic sign classes:

- doNotEnter
- pedestrianCrossing
- speedLimit15
- speedLimit25
- speedLimit30
- speedLimit35
- speedLimit40
- speedLimit45
- speedLimit50
- speedLimit65
- stop
- yield

Each object instance is annotated with bounding boxes.
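For programmatic use, the class labels above can be kept as a simple lookup list. This is a sketch: the index order shown here is assumed (list order as printed above) and may differ from the order defined in the dataset's actual `data.yaml`.

```python
# Hypothetical class-name list for mapping predicted class indices back to
# labels; the real index order is defined by the dataset's data.yaml.
CLASS_NAMES = [
    "doNotEnter", "pedestrianCrossing", "speedLimit15", "speedLimit25",
    "speedLimit30", "speedLimit35", "speedLimit40", "speedLimit45",
    "speedLimit50", "speedLimit65", "stop", "yield",
]

def class_name(index: int) -> str:
    """Translate a predicted class index into its human-readable label."""
    return CLASS_NAMES[index]
```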
---

## Data Collection Methodology

Images in the dataset were collected using vehicle-mounted cameras capturing real road scenes. These images include a variety of:

- lighting conditions
- road environments
- viewing angles
- traffic sign scales

This diversity helps improve the model's ability to generalize to new images.

---

## Annotation Process

Bounding box annotations were created using **Roboflow annotation tools**.

Annotation workflow:

1. Images were uploaded to Roboflow
2. Bounding boxes were drawn around each traffic sign
3. Each object was labeled with its correct class
4. Annotations were reviewed and corrected
5. Dataset exported in YOLO format for training
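YOLO-format export (step 5) stores one text line per annotated object: a class index followed by a center-point box normalized by image size. A minimal parsing sketch, using a made-up example line rather than a real annotation from this dataset:

```python
def parse_yolo_label(line: str):
    """Parse one YOLO annotation line: '<class_id> <xc> <yc> <w> <h>',
    where box coordinates are normalized to [0, 1] by image width/height."""
    fields = line.split()
    class_id = int(fields[0])
    xc, yc, w, h = (float(v) for v in fields[1:5])
    return class_id, xc, yc, w, h

# Hypothetical label line: class 0, box centered at (0.4, 0.55),
# spanning 10% of the image width and 12% of its height.
print(parse_yolo_label("0 0.40 0.55 0.10 0.12"))  # (0, 0.4, 0.55, 0.1, 0.12)
```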
---

## Train / Validation / Test Split

The dataset was divided into three sets:

| Dataset Split | Percentage |
|---------------|------------|
| Training      | ~70%       |
| Validation    | ~20%       |
| Test          | ~10%       |
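A split like the one above can be reproduced for any file list with a small helper. This is only a sketch: the actual split was produced during dataset export, and the file names below are placeholders.

```python
import random

def split_dataset(items, train_frac=0.70, val_frac=0.20, seed=42):
    """Shuffle items deterministically, then cut into train/val/test."""
    items = list(items)
    random.Random(seed).shuffle(items)
    n_train = int(len(items) * train_frac)
    n_val = int(len(items) * val_frac)
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])

train, val, test = split_dataset([f"img_{i}.jpg" for i in range(100)])
print(len(train), len(val), len(test))  # 70 20 10
```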
---

## Data Augmentation

During training, several augmentation techniques were applied to improve generalization:

- horizontal flipping
- mosaic augmentation
- image scaling
- color adjustments (HSV)

These augmentations help the model learn to detect objects under different visual conditions.
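In the Ultralytics framework, these augmentations map to named training hyperparameters. The values below are the framework's usual defaults, shown only as an illustrative sketch; the exact values used in this run were not recorded here.

```python
# Illustrative Ultralytics augmentation hyperparameters covering the
# techniques listed above (assumed defaults, not the verified config):
augmentation = {
    "fliplr": 0.5,   # probability of a horizontal flip
    "mosaic": 1.0,   # probability of mosaic augmentation
    "scale": 0.5,    # random image-scaling gain
    "hsv_h": 0.015,  # hue jitter fraction
    "hsv_s": 0.7,    # saturation jitter fraction
    "hsv_v": 0.4,    # value (brightness) jitter fraction
}

# These could be passed straight to the trainer, e.g.:
# model.train(data="data.yaml", epochs=300, **augmentation)
```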
---

# Training Procedure

## Framework

Training was performed using the **Ultralytics YOLO training framework**.

---

## Hardware

Training environment:

- GPU: Tesla T4
- Platform: Google Colab
- Training time: ~1 hour

---

## Hyperparameters

| Parameter     | Value  |
|---------------|--------|
| Epochs        | 300    |
| Batch Size    | 16     |
| Image Size    | 640    |
| Learning Rate | 0.01   |
| Weight Decay  | 0.0005 |

Early stopping patience was set to **100 epochs**.
---

# Evaluation Results

## Overall Model Performance

| Metric        | Score |
|---------------|-------|
| Precision     | 0.99  |
| Recall        | ~0.99 |
| mAP@0.5       | 0.994 |
| mAP@0.5–0.95  | ~0.89 |
| Best F1 Score | 0.99  |

These results indicate that the model performs extremely well on the validation dataset, detecting traffic signs with high accuracy and minimal false positives.
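As a quick sanity check, the reported F1 score follows directly from the precision and recall above, since F1 is their harmonic mean:

```python
precision, recall = 0.99, 0.99

# F1 = 2 * P * R / (P + R): the harmonic mean of precision and recall
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 2))  # 0.99
```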
| 195 |
+
|
| 196 |
+
---
|
| 197 |
+
|
| 198 |
+
## Per-Class Performance
|
| 199 |
+
|
| 200 |
+
| Class | Average Precision |
|
| 201 |
+
|------|------|
|
| 202 |
+
| doNotEnter | 0.995 |
|
| 203 |
+
| pedestrianCrossing | 0.985 |
|
| 204 |
+
| speedLimit15 | 0.995 |
|
| 205 |
+
| speedLimit25 | 0.995 |
|
| 206 |
+
| speedLimit30 | 0.995 |
|
| 207 |
+
| speedLimit35 | 0.994 |
|
| 208 |
+
| speedLimit40 | 0.995 |
|
| 209 |
+
| speedLimit45 | 0.995 |
|
| 210 |
+
| speedLimit50 | 0.995 |
|
| 211 |
+
| speedLimit65 | 0.995 |
|
| 212 |
+
| stop | 0.995 |
|
| 213 |
+
| yield | 0.995 |
|
| 214 |
+
|
| 215 |
+
Most classes achieved extremely high detection accuracy.
|
| 216 |
+
|
| 217 |
+
The slightly lower performance for **pedestrianCrossing** may be due to higher variation in appearance and background conditions.
|
| 218 |
+
|
| 219 |
+
---
|

# Confusion Matrix Analysis

The confusion matrix shows that most predictions fall along the diagonal, indicating that the model correctly classifies the majority of traffic sign instances.

Examples of strong performance include:

- pedestrianCrossing: 146 correct detections
- speedLimit35: 76 correct detections
- speedLimit25: 57 correct detections
- stop: 118 correct detections
- yield: 41 correct detections

Misclassifications are rare and usually occur between visually similar traffic signs.

---

# Key Visualizations

## Precision-Recall Curve

The precision-recall curve demonstrates that the model maintains high precision across most recall values. This indicates that the model produces very few false positives while still detecting most objects.

---

## F1-Confidence Curve

The F1-confidence curve shows that the optimal detection confidence threshold is approximately **0.73**, where the model achieves an F1 score of about **0.99**. This threshold provides the best balance between precision and recall.
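In practice this means keeping only detections whose confidence is at least 0.73. A minimal post-filtering sketch with made-up detections (with the Ultralytics API, the equivalent is passing `conf=0.73` at prediction time):

```python
CONF_THRESHOLD = 0.73  # operating point suggested by the F1-confidence curve

# Hypothetical raw detections as (class_name, confidence) pairs
detections = [("stop", 0.95), ("yield", 0.41), ("speedLimit35", 0.88)]

kept = [(name, conf) for name, conf in detections if conf >= CONF_THRESHOLD]
print(kept)  # [('stop', 0.95), ('speedLimit35', 0.88)]
```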
---

# Performance Analysis

The model performs extremely well on the validation dataset due to several factors:

1. Transfer learning from a pretrained YOLO model.
2. Consistent visual characteristics of traffic signs.
3. Data augmentation during training.
4. Clear visual differences between most traffic sign classes.

However, these results reflect performance on the validation dataset and may not fully represent real-world performance in different environments.

---

# Limitations and Biases

## Visually Similar Classes

Speed limit signs such as **25 mph, 30 mph, and 35 mph** have similar shapes and layouts. If the number on the sign is partially obscured or blurred, the model may confuse these classes.

---

## Environmental Limitations

Model performance may degrade under certain conditions:

- poor lighting
- nighttime driving scenes
- motion blur
- heavy shadows
- extreme viewing angles

---

## Dataset Bias

The dataset primarily contains traffic signs captured in specific geographic and environmental conditions. This may introduce bias related to:

- geographic location
- road environment
- weather conditions

Performance may vary in unfamiliar environments.

---

# Ethical Considerations

This model should be used responsibly and should not be deployed in safety-critical systems without rigorous real-world testing and validation.

---

# Reproducibility

Training command used:

```python
from ultralytics import YOLO

# Load the pretrained YOLOv11n checkpoint as the starting point
model = YOLO("yolo11n.pt")

# Fine-tune on the exported traffic sign dataset
model.train(
    data="/content/dataset/data.yaml",
    epochs=300,
    imgsz=640,
    batch=16,
    device=0,
)
```