# models/
Feature extraction modules and model training scripts.
## 1. Feature Extraction
Root-level modules form the real-time inference pipeline:
| Module | Input | Output |
|--------|-------|--------|
| `face_mesh.py` | BGR frame | 478 MediaPipe landmarks |
| `head_pose.py` | Landmarks, frame size | yaw, pitch, roll, face/eye score, gaze offset, head deviation |
| `eye_scorer.py` | Landmarks | EAR (left/right/avg), gaze ratio (h/v), MAR |
| `eye_crop.py` | Landmarks, frame | Cropped eye region images |
| `eye_classifier.py` | Eye crops or landmarks | Eye open/closed prediction (geometric fallback) |
| `collect_features.py` | BGR frame | 17-d feature vector + temporal features (PERCLOS, blink rate, etc.) |
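To make the table concrete, here is a minimal sketch of the EAR and PERCLOS computations that `eye_scorer.py` and `collect_features.py` expose, using the standard definitions. The landmark coordinates and the 0.2 closed-eye threshold below are illustrative assumptions, not values taken from this repo.

```python
import math

def eye_aspect_ratio(p1, p2, p3, p4, p5, p6):
    """EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|).

    p1/p4 are the horizontal eye corners; p2/p3 (upper lid) pair with
    p6/p5 (lower lid). A low EAR indicates a closed eye.
    """
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

def perclos(ear_history, threshold=0.2):
    """Fraction of recent frames with the eye 'closed' (EAR below a
    threshold). The 0.2 threshold here is an assumption."""
    closed = sum(1 for e in ear_history if e < threshold)
    return closed / len(ear_history)

# Wide-open eye: lid points far apart relative to the corner distance.
ear = eye_aspect_ratio((0, 0), (1, 1), (2, 1), (3, 0), (2, -1), (1, -1))
frac_closed = perclos([0.30, 0.10, 0.30, 0.10])
```

In the real pipeline the six points per eye come from fixed MediaPipe face-mesh landmark indices rather than hand-picked coordinates.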
## 2. Training Scripts
| Folder | Model | Command |
|--------|-------|---------|
| `mlp/` | PyTorch MLP (64→32, 2-class) | `python -m models.mlp.train` |
| `xgboost/` | XGBoost (600 trees, depth 8) | `python -m models.xgboost.train` |
### mlp/
- `train.py` – training loop with early stopping, ClearML opt-in
- `sweep.py` – hyperparameter search (Optuna: lr, batch_size)
- `eval_accuracy.py` – load checkpoint and print test metrics
- Saves to **`checkpoints/mlp_best.pt`**
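The 64→32 two-class architecture can be sketched as a plain NumPy forward pass. Layer sizes come from the table above and the 17-d input size from the feature table; the ReLU activation, the random weights, and the class semantics (alert vs. drowsy) are assumptions here — the trained model lives in `checkpoints/mlp_best.pt`.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# 17-d feature vector -> 64 -> 32 -> 2 classes, per the tables above.
# Random weights for illustration only.
W1, b1 = rng.normal(size=(17, 64)), np.zeros(64)
W2, b2 = rng.normal(size=(64, 32)), np.zeros(32)
W3, b3 = rng.normal(size=(32, 2)), np.zeros(2)

def forward(x):
    h1 = relu(x @ W1 + b1)
    h2 = relu(h1 @ W2 + b2)
    return softmax(h2 @ W3 + b3)

probs = forward(rng.normal(size=(4, 17)))  # batch of 4 feature vectors
```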
### xgboost/
- `train.py` – training with eval-set logging
- `sweep.py` / `sweep_local.py` – hyperparameter search (Optuna + ClearML)
- `eval_accuracy.py` – load checkpoint and print test metrics
- Saves to **`checkpoints/xgboost_face_orientation_best.json`**
## 3. Data Loading
All training scripts import from `data_preparation.prepare_dataset`:
```python
from data_preparation.prepare_dataset import get_numpy_splits # XGBoost
from data_preparation.prepare_dataset import get_dataloaders # MLP (PyTorch)
```
## 4. Results
| Model | Test Accuracy | F1 | ROC-AUC |
|-------|--------------|-----|---------|
| XGBoost | 95.87% | 0.959 | 0.991 |
| MLP | 92.92% | 0.929 | 0.971 |
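For reference, accuracy and F1 of the kind reported above can be recomputed from raw test predictions in a few lines. This helper is a sketch, not the repo's `eval_accuracy.py` implementation, and assumes binary 0/1 labels with 1 as the positive class.

```python
def binary_metrics(y_true, y_pred):
    """Accuracy and F1 for binary 0/1 labels (1 = positive class)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return accuracy, f1

acc, f1 = binary_metrics([1, 1, 0, 0, 1], [1, 0, 0, 0, 1])
```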