# ui/
Live camera demo and real-time inference pipeline.
## 1. Files
| File | Description |
|------|-------------|
| `pipeline.py` | Inference pipelines: `FaceMeshPipeline`, `MLPPipeline`, `XGBoostPipeline` |
| `live_demo.py` | OpenCV webcam window with mesh overlay and focus classification |
## 2. Pipelines
| Pipeline | Features | Model | Source |
|----------|----------|-------|--------|
| `FaceMeshPipeline` | Head pose + eye geometry | Rule-based fusion | `models/head_pose.py`, `models/eye_scorer.py` |
| `MLPPipeline` | 10 selected features | PyTorch MLP | `checkpoints/model_best.joblib` |
| `XGBoostPipeline` | 10 selected features | XGBoost | `models/xgboost/checkpoints/face_orientation_best.json` |
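All three pipelines take a camera frame and return a focus classification, so they can be swapped at runtime. A minimal sketch of that shared call pattern, where the class and field names (`BasePipeline`, `FocusResult`, `predict`) are illustrative assumptions, not the repo's actual API:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class FocusResult:
    """Hypothetical result type; the real pipelines may return richer data."""
    label: str          # e.g. "focused" / "distracted"
    confidence: float

class BasePipeline(ABC):
    """Sketch of the interface the three pipelines presumably share."""

    @abstractmethod
    def predict(self, frame) -> FocusResult:
        """Run inference on a single BGR frame."""

class DummyPipeline(BasePipeline):
    """Stand-in used here only to demonstrate the call pattern."""
    def predict(self, frame) -> FocusResult:
        return FocusResult(label="focused", confidence=1.0)

result = DummyPipeline().predict(frame=None)
```

Because the demo switches pipelines with a keypress, a uniform `predict()` signature like this is what makes the hot-swap possible.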
## 3. Running
```bash
# default mode (cycles through available pipelines)
python ui/live_demo.py
# start directly in XGBoost mode
python ui/live_demo.py --xgb
```
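The `--xgb` flag implies a small command-line parser in `live_demo.py`; a sketch with stdlib `argparse`, assuming only the one documented flag (any other options the script accepts are not shown here):

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """CLI sketch mirroring the documented --xgb flag."""
    parser = argparse.ArgumentParser(
        description="Live focus-classification demo")
    parser.add_argument(
        "--xgb", action="store_true",
        help="start directly in the XGBoost pipeline instead of cycling")
    return parser

args = build_parser().parse_args(["--xgb"])
```

With no flags, the demo falls back to its default pipeline-cycling mode.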
### Controls
| Key | Action |
|-----|--------|
| `m` | Cycle mesh overlay (full β†’ contours β†’ off) |
| `p` | Switch pipeline (FaceMesh β†’ MLP β†’ XGBoost) |
| `q` | Quit |
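Both `m` and `p` cycle through a fixed list and wrap back to the start. That wrap-around logic can be sketched without OpenCV; the variable names below are illustrative, not the demo's actual internals:

```python
MESH_MODES = ["full", "contours", "off"]
PIPELINES = ["FaceMesh", "MLP", "XGBoost"]

def cycle(options: list, current: str) -> str:
    """Return the next option, wrapping to the first after the last."""
    return options[(options.index(current) + 1) % len(options)]

# In the real demo this would run inside the cv2.waitKey() loop
# whenever 'm' (mesh) or 'p' (pipeline) is pressed:
mode = "full"
mode = cycle(MESH_MODES, mode)   # 'm' pressed -> "contours"
mode = cycle(MESH_MODES, mode)   # 'm' pressed -> "off"
mode = cycle(MESH_MODES, mode)   # 'm' pressed -> wraps to "full"
```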
## 4. Integration
The same pipelines are used by the FastAPI backend (`main.py`) for WebSocket-based video inference in the React app.
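For the WebSocket path, each frame's result presumably has to be serialized for the React client; a sketch using stdlib `json`, where the message field names are assumptions rather than the backend's actual schema:

```python
import json

def encode_result(pipeline_name: str, label: str, confidence: float) -> str:
    """Serialize one frame's inference result as a WebSocket text message.
    Field names are illustrative; the real main.py may use a different schema."""
    return json.dumps({
        "pipeline": pipeline_name,
        "label": label,
        "confidence": round(confidence, 3),
    })

msg = encode_result("XGBoostPipeline", "focused", 0.9176)
decoded = json.loads(msg)
```

Keeping the wire format a plain JSON object lets the React app consume results from any of the three pipelines without client-side changes.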