Yingtao-Zheng committed on
Commit 24a5e7e · 1 Parent(s): 8bbb872

Add other files and folders, including data, notebooks, tests, and evaluation
data/CNN/eye_crops/val/open/.gitkeep ADDED
@@ -0,0 +1 @@
data/README.md ADDED
@@ -0,0 +1,47 @@
# data/

Raw collected session data used for model training and evaluation.

## 1. Contents

Each `collected_<name>/` folder contains `.npz` files for one participant:

| Folder | Participant | Samples |
|--------|-------------|---------|
| `collected_Abdelrahman/` | Abdelrahman | 15,870 |
| `collected_Jarek/` | Jarek | 14,829 |
| `collected_Junhao/` | Junhao | 8,901 |
| `collected_Kexin/` | Kexin | 32,312 (2 sessions) |
| `collected_Langyuan/` | Langyuan | 15,749 |
| `collected_Mohamed/` | Mohamed | 13,218 |
| `collected_Yingtao/` | Yingtao | 17,591 |
| `collected_ayten/` | Ayten | 17,621 |
| `collected_saba/` | Saba | 8,702 |
| **Total** | **9 participants** | **144,793** |

## 2. File Format

Each `.npz` file contains:

| Key | Shape | Description |
|-----|-------|-------------|
| `features` | (N, 17) | 17-dimensional feature vectors (float32) |
| `labels` | (N,) | Binary labels: 0 = unfocused, 1 = focused |
| `feature_names` | (17,) | Column names for the 17 features |

## 3. Feature List

`ear_left`, `ear_right`, `ear_avg`, `h_gaze`, `v_gaze`, `mar`, `yaw`, `pitch`, `roll`, `s_face`, `s_eye`, `gaze_offset`, `head_deviation`, `perclos`, `blink_rate`, `closure_duration`, `yawn_duration`

10 of these are selected for training (see `data_preparation/prepare_dataset.py`).

## 4. Collection

```bash
python -m models.collect_features --name yourname
```

1. Webcam opens with live overlay
2. Press **1** = focused, **0** = unfocused (switch every 10–30 sec)
3. Press **p** to pause/resume
4. Press **q** to stop and save
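A minimal sketch of reading one session file in the format documented above. Since the real session files are large LFS objects, this example first writes a tiny synthetic file in the same layout (the filename and the three-feature subset are illustrative only):

```python
import numpy as np

# Write a tiny stand-in file in the documented .npz layout.
names = np.array(["ear_left", "ear_right", "ear_avg"])  # subset of the 17 features
np.savez(
    "demo_session.npz",
    features=np.random.rand(5, 3).astype(np.float32),
    labels=np.array([0, 1, 1, 0, 1], dtype=np.int64),
    feature_names=names,
)

# Read it back the same way the loaders in data_preparation/ do.
data = np.load("demo_session.npz", allow_pickle=True)
X = data["features"]               # (N, D) float32 feature matrix
y = data["labels"]                 # (N,) binary: 0 = unfocused, 1 = focused
cols = list(data["feature_names"])  # column names aligned with X's columns
print(X.shape, y.shape, cols)
```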
data/collected_Abdelrahman/abdelrahman_20260306_023035.npz ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e2c48532150182c8933d4595e0a0711365645b699647e99976575b7c2adffaf8
size 1207980
data/collected_Jarek/Jarek_20260225_012931.npz ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0fa68f4d587eee8d645b23b463a9f1c848b9bacc2adb68603d5fa9cd8cb744c7
size 1128864
data/collected_Junhao/Junhao_20260303_113554.npz ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ec321ee79800c04fdc0f999690d07970445aeca61f977bf6537880bbc996b5e5
size 678336
data/collected_Kexin/kexin2_20260305_180229.npz ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0e96fe17571fa1fcccc1b4bd0c8838270498883e4db6a608c4d4d4c3a8ac1d0d
size 1129700
data/collected_Kexin/kexin_20260224_151043.npz ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8d402ca4e66910a2e174c4f4beec5d7b3db6a04213d29673b227ce6ef04b39c4
size 1329732
data/collected_Langyuan/Langyuan_20260303_153145.npz ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5c679cdba334b2f3f0953b7e44f7209056277c826e2b7b5cfcf2b8b750898400
size 1198784
data/collected_Mohamed/session_20260224_010131.npz ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0a784f703c13b83911f47ec507d32c25942a07572314b8a77cbf40ca8cdff16f
size 1006428
data/collected_Yingtao/Yingtao_20260306_023937.npz ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7a75af17e25dca5f06ea9e7443ea5fee9db638f68a5910e014ee7cb8b7ae80fd
size 1338776
data/collected_ayten/ayten_session_1.npz ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fbecdbffa1c1b03b3b0fb5f715dcb4ff885ecc67da4aff78e6952b8847a96014
size 1341056
data/collected_saba/saba_20260306_230710.npz ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:db1cab5ddcf9988856c5bdca1183c8eba4647365e675a1d8a200d12f6b5d2097
size 663212
data_preparation/README.md ADDED
@@ -0,0 +1,75 @@
# data_preparation/

Shared data loading, cleaning, and exploratory analysis.

## 1. Files

| File | Description |
|------|-------------|
| `prepare_dataset.py` | Central data loading module used by all training scripts and notebooks |
| `data_exploration.ipynb` | EDA notebook: feature distributions, class balance, correlations |

## 2. prepare_dataset.py

Provides a consistent pipeline for loading raw `.npz` data from `data/`:

| Function | Purpose |
|----------|---------|
| `load_all_pooled(model_name)` | Load all participants, clean, select features, concatenate |
| `load_per_person(model_name)` | Load grouped by person (for LOPO cross-validation) |
| `get_numpy_splits(model_name)` | Load + stratified 70/15/15 split + StandardScaler |
| `get_dataloaders(model_name)` | Same as above, wrapped in PyTorch DataLoaders |
| `_split_and_scale(features, labels, ...)` | Reusable split + optional scaling |

### Cleaning rules

- `yaw` clipped to [-45, 45], `pitch`/`roll` to [-30, 30]
- `ear_left`, `ear_right`, `ear_avg` clipped to [0, 0.85]

### Selected features (face_orientation)

`head_deviation`, `s_face`, `s_eye`, `h_gaze`, `pitch`, `ear_left`, `ear_avg`, `ear_right`, `gaze_offset`, `perclos`

## 3. data_exploration.ipynb

Run from this folder or from the project root. Covers:

1. Per-feature statistics (mean, std, min, max)
2. Class distribution (focused vs unfocused)
3. Feature histograms and box plots
4. Correlation matrix

## 4. How to run

`prepare_dataset.py` is a **library module**, not a standalone script. You don’t run it directly; you import it from code that needs data.

**From repo root:**

```bash
# Optional: quick test that loading works
python -c "
from data_preparation.prepare_dataset import load_all_pooled
X, y, names = load_all_pooled('face_orientation')
print(f'Loaded {X.shape[0]} samples, {X.shape[1]} features: {names}')
"
```

**Used by:**

- `python -m models.mlp.train`
- `python -m models.xgboost.train`
- `notebooks/mlp.ipynb`, `notebooks/xgboost.ipynb`
- `data_preparation/data_exploration.ipynb`

## 5. Usage (in code)

```python
from data_preparation.prepare_dataset import load_all_pooled, get_numpy_splits

# pooled data
X, y, names = load_all_pooled("face_orientation")

# ready-to-train splits
splits, n_features, n_classes, scaler = get_numpy_splits("face_orientation")
X_train, y_train = splits["X_train"], splits["y_train"]
```
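The Leave-One-Person-Out (LOPO) pattern that `load_per_person` supports can be sketched generically. This is a hedged, self-contained sketch: `by_person` below is synthetic stand-in data mimicking the `{person: (X, y)}` dict the loader returns, and the model-fitting step is left as a comment:

```python
import numpy as np

rng = np.random.default_rng(42)
# Stand-in for load_per_person() output: {person_name: (X, y)}.
by_person = {
    p: (rng.standard_normal((20, 10)).astype(np.float32), rng.integers(0, 2, 20))
    for p in ["Abdelrahman", "Jarek", "Saba"]
}

for held_out in sorted(by_person):
    # Evaluate on the held-out participant, train on everyone else.
    X_test, y_test = by_person[held_out]
    train = [by_person[p] for p in by_person if p != held_out]
    X_train = np.concatenate([t[0] for t in train])
    y_train = np.concatenate([t[1] for t in train])
    # fit model on (X_train, y_train), evaluate on (X_test, y_test)
    print(held_out, X_train.shape, X_test.shape)
```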
data_preparation/__init__.py ADDED
File without changes
data_preparation/data_exploration.ipynb ADDED
The diff for this file is too large to render. See raw diff
 
data_preparation/prepare_dataset.py ADDED
@@ -0,0 +1,232 @@
import os
import glob

import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

try:
    import torch
    from torch.utils.data import Dataset, DataLoader
except ImportError:  # pragma: no cover
    torch = None

    class Dataset:  # type: ignore
        pass

    class _MissingTorchDataLoader:  # type: ignore
        def __init__(self, *args, **kwargs):
            raise ImportError(
                "PyTorch not installed"
            )

    DataLoader = _MissingTorchDataLoader  # type: ignore

DATA_DIR = os.path.join(os.path.dirname(__file__), "..", "data")

SELECTED_FEATURES = {
    "face_orientation": [
        'head_deviation', 's_face', 's_eye', 'h_gaze', 'pitch',
        'ear_left', 'ear_avg', 'ear_right', 'gaze_offset', 'perclos'
    ],
    "eye_behaviour": [
        'ear_left', 'ear_right', 'ear_avg', 'mar',
        'blink_rate', 'closure_duration', 'perclos', 'yawn_duration'
    ]
}


class FeatureVectorDataset(Dataset):
    def __init__(self, features: np.ndarray, labels: np.ndarray):
        # Fail early with a clear error when torch is missing; otherwise
        # the torch.tensor calls below surface as an AttributeError.
        if torch is None:
            raise ImportError("PyTorch not installed")
        self.features = torch.tensor(features, dtype=torch.float32)
        self.labels = torch.tensor(labels, dtype=torch.long)

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        return self.features[idx], self.labels[idx]


# ── Low-level helpers ────────────────────────────────────────────────────

def _clean_npz(raw, names):
    """Apply clipping rules in-place. Shared by all loaders."""
    for col, lo, hi in [('yaw', -45, 45), ('pitch', -30, 30), ('roll', -30, 30)]:
        if col in names:
            raw[:, names.index(col)] = np.clip(raw[:, names.index(col)], lo, hi)
    for feat in ['ear_left', 'ear_right', 'ear_avg']:
        if feat in names:
            raw[:, names.index(feat)] = np.clip(raw[:, names.index(feat)], 0, 0.85)
    return raw


def _load_one_npz(npz_path, target_features):
    """Load a single .npz file, clean and select features. Returns (X, y, selected_feature_names)."""
    data = np.load(npz_path, allow_pickle=True)
    raw = data['features'].astype(np.float32)
    labels = data['labels'].astype(np.int64)
    names = list(data['feature_names'])
    raw = _clean_npz(raw, names)
    selected = [f for f in target_features if f in names]
    idx = [names.index(f) for f in selected]
    return raw[:, idx], labels, selected


# ── Public data loaders ──────────────────────────────────────────────────

def load_all_pooled(model_name: str = "face_orientation", data_dir: str = None):
    """Load all collected_*/*.npz, clean, select features, concatenate.

    Returns (X_all, y_all, all_feature_names).
    """
    data_dir = data_dir or DATA_DIR
    target_features = SELECTED_FEATURES.get(model_name, SELECTED_FEATURES["face_orientation"])
    pattern = os.path.join(data_dir, "collected_*", "*.npz")
    npz_files = sorted(glob.glob(pattern))

    if not npz_files:
        print("[DATA] Warning: No .npz files found. Falling back to synthetic.")
        X, y = _generate_synthetic_data(model_name)
        return X, y, target_features

    all_X, all_y = [], []
    all_names = None
    for npz_path in npz_files:
        X, y, names = _load_one_npz(npz_path, target_features)
        if all_names is None:
            all_names = names
        all_X.append(X)
        all_y.append(y)
        print(f"[DATA] + {os.path.basename(npz_path)}: {X.shape[0]} samples")

    X_all = np.concatenate(all_X, axis=0)
    y_all = np.concatenate(all_y, axis=0)
    print(f"[DATA] Loaded {len(npz_files)} file(s) for '{model_name}': "
          f"{X_all.shape[0]} total samples, {X_all.shape[1]} features")
    return X_all, y_all, all_names


def load_per_person(model_name: str = "face_orientation", data_dir: str = None):
    """Load collected_*/*.npz grouped by person (folder name).

    Returns dict { person_name: (X, y) } where X/y are per-person numpy arrays.
    Also returns (X_all, y_all) as pooled data.
    """
    data_dir = data_dir or DATA_DIR
    target_features = SELECTED_FEATURES.get(model_name, SELECTED_FEATURES["face_orientation"])
    pattern = os.path.join(data_dir, "collected_*", "*.npz")
    npz_files = sorted(glob.glob(pattern))

    if not npz_files:
        raise FileNotFoundError(f"No .npz files matching {pattern}")

    by_person = {}
    all_X, all_y = [], []
    for npz_path in npz_files:
        folder = os.path.basename(os.path.dirname(npz_path))
        person = folder.replace("collected_", "", 1)
        X, y, _ = _load_one_npz(npz_path, target_features)
        all_X.append(X)
        all_y.append(y)
        if person not in by_person:
            by_person[person] = []
        by_person[person].append((X, y))
        print(f"[DATA] + {person}/{os.path.basename(npz_path)}: {X.shape[0]} samples")

    for person, chunks in by_person.items():
        by_person[person] = (
            np.concatenate([c[0] for c in chunks], axis=0),
            np.concatenate([c[1] for c in chunks], axis=0),
        )

    X_all = np.concatenate(all_X, axis=0)
    y_all = np.concatenate(all_y, axis=0)
    print(f"[DATA] {len(by_person)} persons, {X_all.shape[0]} total samples, {X_all.shape[1]} features")
    return by_person, X_all, y_all


def load_raw_npz(npz_path):
    """Load a single .npz without cleaning or feature selection. For exploration notebooks."""
    data = np.load(npz_path, allow_pickle=True)
    features = data['features'].astype(np.float32)
    labels = data['labels'].astype(np.int64)
    names = list(data['feature_names'])
    return features, labels, names


# ── Legacy helpers (used by models/mlp/train.py and models/xgboost/train.py) ─

def _load_real_data(model_name: str):
    X, y, _ = load_all_pooled(model_name)
    return X, y


def _generate_synthetic_data(model_name: str):
    target_features = SELECTED_FEATURES.get(model_name, SELECTED_FEATURES["face_orientation"])
    n = 500
    d = len(target_features)
    c = 2
    rng = np.random.RandomState(42)
    features = rng.randn(n, d).astype(np.float32)
    labels = rng.randint(0, c, size=n).astype(np.int64)
    print(f"[DATA] Using synthetic data for '{model_name}': {n} samples, {d} features, {c} classes")
    return features, labels


def _split_and_scale(features, labels, split_ratios, seed, scale):
    """Split data into train/val/test (stratified) and optionally scale."""
    test_ratio = split_ratios[2]
    val_ratio = split_ratios[1] / (split_ratios[0] + split_ratios[1])

    X_train_val, X_test, y_train_val, y_test = train_test_split(
        features, labels, test_size=test_ratio, random_state=seed, stratify=labels,
    )
    X_train, X_val, y_train, y_val = train_test_split(
        X_train_val, y_train_val, test_size=val_ratio, random_state=seed, stratify=y_train_val,
    )

    scaler = None
    if scale:
        scaler = StandardScaler()
        X_train = scaler.fit_transform(X_train)
        X_val = scaler.transform(X_val)
        X_test = scaler.transform(X_test)
        print("[DATA] Applied StandardScaler (fitted on training split)")

    splits = {
        "X_train": X_train, "y_train": y_train,
        "X_val": X_val, "y_val": y_val,
        "X_test": X_test, "y_test": y_test,
    }

    print(f"[DATA] Split (stratified): train={len(y_train)}, val={len(y_val)}, test={len(y_test)}")
    return splits, scaler


def get_numpy_splits(model_name: str, split_ratios=(0.7, 0.15, 0.15), seed: int = 42, scale: bool = True):
    """Return raw numpy arrays for non-PyTorch models (e.g. XGBoost)."""
    features, labels = _load_real_data(model_name)
    num_features = features.shape[1]
    num_classes = int(labels.max()) + 1
    splits, scaler = _split_and_scale(features, labels, split_ratios, seed, scale)
    return splits, num_features, num_classes, scaler


def get_dataloaders(model_name: str, batch_size: int = 32, split_ratios=(0.7, 0.15, 0.15), seed: int = 42, scale: bool = True):
    """Return PyTorch DataLoaders for neural-network models."""
    features, labels = _load_real_data(model_name)
    num_features = features.shape[1]
    num_classes = int(labels.max()) + 1
    splits, scaler = _split_and_scale(features, labels, split_ratios, seed, scale)

    train_ds = FeatureVectorDataset(splits["X_train"], splits["y_train"])
    val_ds = FeatureVectorDataset(splits["X_val"], splits["y_val"])
    test_ds = FeatureVectorDataset(splits["X_test"], splits["y_test"])

    train_loader = DataLoader(train_ds, batch_size=batch_size, shuffle=True)
    val_loader = DataLoader(val_ds, batch_size=batch_size, shuffle=False)
    test_loader = DataLoader(test_ds, batch_size=batch_size, shuffle=False)

    return train_loader, val_loader, test_loader, num_features, num_classes, scaler
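The clipping rules in `_clean_npz` can be exercised standalone. This sketch re-implements the same logic on synthetic data (importing the module itself would pull in sklearn/torch), so the array values here are illustrative only:

```python
import numpy as np

names = ["yaw", "pitch", "ear_avg"]
raw = np.array([[90.0, -50.0, 1.2],
                [-60.0, 10.0, 0.3]], dtype=np.float32)

# Same clipping rules as _clean_npz: yaw to [-45, 45],
# pitch/roll to [-30, 30], EAR columns to [0, 0.85].
for col, lo, hi in [("yaw", -45, 45), ("pitch", -30, 30), ("roll", -30, 30)]:
    if col in names:
        i = names.index(col)
        raw[:, i] = np.clip(raw[:, i], lo, hi)
for col in ["ear_left", "ear_right", "ear_avg"]:
    if col in names:
        i = names.index(col)
        raw[:, i] = np.clip(raw[:, i], 0, 0.85)

print(raw)  # yaw clipped to [45, -45], pitch to [-30, 10], ear_avg to [0.85, 0.3]
```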
evaluation/README.md ADDED
@@ -0,0 +1,46 @@
# evaluation/

Training logs and performance metrics.

## 1. Contents

```
logs/
├── face_orientation_training_log.json          # MLP (latest run)
├── mlp_face_orientation_training_log.json      # MLP (alternate)
└── xgboost_face_orientation_training_log.json  # XGBoost
```

## 2. Log Format

Each JSON file records the full training history:

**MLP logs:**
```json
{
  "config": { "epochs": 30, "lr": 0.001, "batch_size": 32, ... },
  "history": {
    "train_loss": [0.287, 0.260, ...],
    "val_loss": [0.256, 0.245, ...],
    "train_acc": [0.889, 0.901, ...],
    "val_acc": [0.905, 0.909, ...]
  },
  "test": { "accuracy": 0.929, "f1": 0.929, "roc_auc": 0.971 }
}
```

**XGBoost logs:**
```json
{
  "config": { "n_estimators": 600, "max_depth": 8, "learning_rate": 0.149, ... },
  "train_losses": [0.577, ...],
  "val_losses": [0.576, ...],
  "test": { "accuracy": 0.959, "f1": 0.959, "roc_auc": 0.991 }
}
```

## 3. Generated By

- `python -m models.mlp.train` → writes MLP log
- `python -m models.xgboost.train` → writes XGBoost log
- Notebooks in `notebooks/` also save logs here
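A log in the MLP shape above can be consumed like this. Hedged sketch: it writes a stub log with a few made-up epoch values rather than assuming one of the real log files exists, then picks the best-validation epoch:

```python
import json

# Stub log in the documented MLP shape (values are illustrative).
log = {
    "config": {"epochs": 3, "lr": 0.001},
    "history": {"val_loss": [0.256, 0.245, 0.251],
                "val_acc": [0.905, 0.909, 0.907]},
    "test": {"accuracy": 0.929},
}
with open("stub_log.json", "w") as f:
    json.dump(log, f)

# Reload and find the epoch with the best validation accuracy.
with open("stub_log.json") as f:
    loaded = json.load(f)
val_acc = loaded["history"]["val_acc"]
best = max(range(len(val_acc)), key=val_acc.__getitem__)
print(f"best epoch {best + 1}, val_acc {val_acc[best]}")
```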
notebooks/README.md ADDED
@@ -0,0 +1,42 @@
# notebooks/

Training and evaluation notebooks for MLP and XGBoost models.

## 1. Files

| Notebook | Model | Description |
|----------|-------|-------------|
| `mlp.ipynb` | PyTorch MLP | Training, evaluation, and LOPO cross-validation |
| `xgboost.ipynb` | XGBoost | Training, evaluation, and LOPO cross-validation |

## 2. Structure (both notebooks)

Each notebook follows the same layout:

1. **Imports and CFG** — single config dict, project root setup
2. **ClearML (optional)** — opt-in experiment tracking
3. **Data loading** — uses `data_preparation.prepare_dataset` for consistent loading
4. **Random split training** — 70/15/15 stratified split with per-epoch/round logging
5. **Loss curves** — train vs validation loss plots
6. **Test evaluation** — accuracy, F1, ROC-AUC, classification report, confusion matrix
7. **Checkpoint saving** — model weights + JSON training log
8. **LOPO evaluation** — Leave-One-Person-Out cross-validation across all 9 participants
9. **LOPO summary** — per-person accuracy table + bar chart

## 3. Running

Open in Jupyter or VS Code with the Python kernel set to the project venv:

```bash
source venv/bin/activate
jupyter notebook notebooks/mlp.ipynb
```

Make sure the kernel's working directory is either the project root or `notebooks/` — the path resolution handles both.

## 4. Results

| Model | Random Split Accuracy | Random Split F1 | LOPO (mean) |
|-------|-----------------------|-----------------|-------------|
| XGBoost | 95.87% | 0.959 | see notebook |
| MLP | 92.92% | 0.929 | see notebook |
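The 70/15/15 stratified split both notebooks use can be sketched as follows (synthetic data; mirrors the two-stage logic in `_split_and_scale`, where the second split takes 15/85 of the remainder as validation):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Synthetic balanced dataset: 100 samples, 2 classes.
y = np.array([0, 1] * 50)
X = np.arange(200, dtype=np.float32).reshape(100, 2)

# Stage 1: carve off 15% as test; stage 2: 15/85 of the rest as val.
X_tv, X_test, y_tv, y_test = train_test_split(
    X, y, test_size=0.15, random_state=42, stratify=y)
X_train, X_val, y_tr, y_val = train_test_split(
    X_tv, y_tv, test_size=0.15 / 0.85, random_state=42, stratify=y_tv)

print(len(y_tr), len(y_val), len(y_test))  # roughly 70/15/15
```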
notebooks/mlp.ipynb ADDED
@@ -0,0 +1,571 @@
{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# MLP Training (ClearML-compatible)\n",
    "\n",
    "PyTorch MLP for focus classification.\n",
    "- Single CFG dict (ClearML `task.connect(CFG)`)\n",
    "- 70/15/15 stratified random split\n",
    "- Per-epoch train/val loss + accuracy table\n",
    "- Test evaluation: accuracy, F1, ROC-AUC\n",
    "- ClearML scalar logging (opt-in)\n",
    "- LOPO comparison at the end"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 1. Imports and CFG"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import json\n",
    "import os\n",
    "import sys\n",
    "import random\n",
    "\n",
    "import numpy as np\n",
    "import torch\n",
    "import torch.nn as nn\n",
    "import torch.optim as optim\n",
    "from torch.utils.data import DataLoader, TensorDataset\n",
    "from sklearn.preprocessing import StandardScaler\n",
    "from sklearn.model_selection import train_test_split\n",
    "from sklearn.metrics import (\n",
    "    accuracy_score, f1_score, roc_auc_score,\n",
    "    classification_report, confusion_matrix, ConfusionMatrixDisplay,\n",
    ")\n",
    "import matplotlib.pyplot as plt\n",
    "import warnings\n",
    "warnings.filterwarnings(\"ignore\")\n",
    "\n",
    "# Add project root to sys.path\n",
    "_cwd = os.getcwd()\n",
    "PROJECT_ROOT = _cwd if os.path.isdir(os.path.join(_cwd, \"models\")) else os.path.abspath(os.path.join(_cwd, \"..\"))\n",
    "if PROJECT_ROOT not in sys.path:\n",
    "    sys.path.insert(0, PROJECT_ROOT)\n",
    "\n",
    "from data_preparation.prepare_dataset import load_per_person, SELECTED_FEATURES, _split_and_scale\n",
    "\n",
    "CFG = {\n",
    "    \"model_name\": \"face_orientation\",\n",
    "    \"seed\": 42,\n",
    "    \"split_ratios\": (0.7, 0.15, 0.15),\n",
    "    \"scale\": True,\n",
    "    \"batch_size\": 32,\n",
    "    \"epochs\": 30,\n",
    "    \"lr\": 1e-3,\n",
    "    \"hidden_sizes\": [64, 32],\n",
    "    \"checkpoints_dir\": os.path.join(PROJECT_ROOT, \"checkpoints\"),\n",
    "    \"logs_dir\": os.path.join(PROJECT_ROOT, \"evaluation\", \"logs\"),\n",
    "}\n",
    "\n",
    "print(f\"Project root: {PROJECT_ROOT}\")\n",
    "print(f\"Device: {'cuda' if torch.cuda.is_available() else 'cpu'}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 2. ClearML (optional)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "USE_CLEARML = False  # set True when ClearML credentials are configured\n",
    "task = None\n",
    "\n",
    "if USE_CLEARML:\n",
    "    from clearml import Task\n",
    "    task = Task.init(\n",
    "        project_name=\"FocusGuards Large Group Project\",\n",
    "        task_name=\"MLP Model Training\",\n",
    "        tags=[\"training\", \"mlp\"]\n",
    "    )\n",
    "    task.connect(CFG)\n",
    "    print(\"[ClearML] Connected\")\n",
    "else:\n",
    "    print(\"[ClearML] Disabled (set USE_CLEARML = True to enable)\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 3. Load data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def set_seed(seed):\n",
    "    random.seed(seed)\n",
    "    np.random.seed(seed)\n",
    "    torch.manual_seed(seed)\n",
    "    if torch.cuda.is_available():\n",
    "        torch.cuda.manual_seed_all(seed)\n",
    "\n",
    "set_seed(CFG[\"seed\"])\n",
    "\n",
    "by_person, X_all, y_all = load_per_person(CFG[\"model_name\"])\n",
    "person_names = sorted(by_person.keys())\n",
    "num_features = X_all.shape[1]\n",
    "num_classes = int(y_all.max()) + 1\n",
    "print(f\"\\nPersons: {person_names}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 4. Random split (70/15/15) and scaling"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "splits, scaler = _split_and_scale(X_all, y_all, CFG[\"split_ratios\"], CFG[\"seed\"], CFG[\"scale\"])\n",
    "X_train, y_train = splits[\"X_train\"], splits[\"y_train\"]\n",
    "X_val, y_val = splits[\"X_val\"], splits[\"y_val\"]\n",
    "X_test, y_test = splits[\"X_test\"], splits[\"y_test\"]\n",
    "\n",
    "print(f\"Features: {num_features}, Classes: {num_classes}\")\n",
    "\n",
    "def make_loader(X, y, batch_size, shuffle=False):\n",
    "    ds = TensorDataset(torch.tensor(X, dtype=torch.float32), torch.tensor(y, dtype=torch.long))\n",
    "    return DataLoader(ds, batch_size=batch_size, shuffle=shuffle)\n",
    "\n",
    "train_loader = make_loader(X_train, y_train, CFG[\"batch_size\"], shuffle=True)\n",
    "val_loader = make_loader(X_val, y_val, CFG[\"batch_size\"])\n",
    "test_loader = make_loader(X_test, y_test, CFG[\"batch_size\"])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 5. Model definition"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "class MLP(nn.Module):\n",
    "    def __init__(self, in_features, hidden_sizes, num_classes):\n",
    "        super().__init__()\n",
    "        layers = []\n",
    "        prev = in_features\n",
    "        for h in hidden_sizes:\n",
    "            layers += [nn.Linear(prev, h), nn.ReLU()]\n",
    "            prev = h\n",
    "        layers.append(nn.Linear(prev, num_classes))\n",
    "        self.network = nn.Sequential(*layers)\n",
    "\n",
    "    def forward(self, x):\n",
    "        return self.network(x)\n",
    "\n",
    "\n",
    "device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n",
    "model = MLP(num_features, CFG[\"hidden_sizes\"], num_classes).to(device)\n",
    "criterion = nn.CrossEntropyLoss()\n",
    "optimizer = optim.Adam(model.parameters(), lr=CFG[\"lr\"])\n",
    "\n",
    "param_count = sum(p.numel() for p in model.parameters())\n",
    "print(f\"Model: MLP {[num_features] + CFG['hidden_sizes'] + [num_classes]}\")\n",
    "print(f\"Parameters: {param_count:,}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 6. Training loop (per-epoch logging)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "history = {\n",
    "    \"model_name\": f\"mlp_{CFG['model_name']}\",\n",
    "    \"param_count\": param_count,\n",
    "    \"epochs\": [],\n",
    "    \"train_loss\": [],\n",
    "    \"train_acc\": [],\n",
    "    \"val_loss\": [],\n",
    "    \"val_acc\": [],\n",
    "}\n",
    "best_val_acc = 0.0\n",
    "best_ckpt_path = os.path.join(CFG[\"checkpoints_dir\"], \"mlp_best.pt\")\n",
    "os.makedirs(CFG[\"checkpoints_dir\"], exist_ok=True)\n",
    "\n",
    "print(f\"{'Epoch':>6} | {'Train Loss':>10} | {'Train Acc':>9} | {'Val Loss':>10} | {'Val Acc':>9}\")\n",
    "print(\"-\" * 60)\n",
    "\n",
    "for epoch in range(1, CFG[\"epochs\"] + 1):\n",
    "    model.train()\n",
    "    t_loss, t_correct, t_total = 0.0, 0, 0\n",
    "    for xb, yb in train_loader:\n",
    "        xb, yb = xb.to(device), yb.to(device)\n",
    "        optimizer.zero_grad()\n",
    "        out = model(xb)\n",
    "        loss = criterion(out, yb)\n",
    "        loss.backward()\n",
    "        optimizer.step()\n",
    "        t_loss += loss.item() * xb.size(0)\n",
    "        t_correct += (out.argmax(1) == yb).sum().item()\n",
    "        t_total += xb.size(0)\n",
    "    train_loss = t_loss / t_total\n",
    "    train_acc = t_correct / t_total\n",
    "\n",
    "    model.eval()\n",
    "    v_loss, v_correct, v_total = 0.0, 0, 0\n",
    "    with torch.no_grad():\n",
    "        for xb, yb in val_loader:\n",
    "            xb, yb = xb.to(device), yb.to(device)\n",
    "            out = model(xb)\n",
    "            loss = criterion(out, yb)\n",
    "            v_loss += loss.item() * xb.size(0)\n",
    "            v_correct += (out.argmax(1) == yb).sum().item()\n",
    "            v_total += xb.size(0)\n",
    "    val_loss = v_loss / v_total\n",
    "    val_acc = v_correct / v_total\n",
    "\n",
    "    history[\"epochs\"].append(epoch)\n",
    "    history[\"train_loss\"].append(round(train_loss, 4))\n",
    "    history[\"train_acc\"].append(round(train_acc, 4))\n",
    "    history[\"val_loss\"].append(round(val_loss, 4))\n",
    "    history[\"val_acc\"].append(round(val_acc, 4))\n",
    "\n",
    "    if task is not None:\n",
    "        task.logger.report_scalar(\"Loss\", \"Train\", float(train_loss), iteration=epoch)\n",
    "        task.logger.report_scalar(\"Loss\", \"Val\", float(val_loss), iteration=epoch)\n",
    "        task.logger.report_scalar(\"Accuracy\", \"Train\", float(train_acc), iteration=epoch)\n",
    "        task.logger.report_scalar(\"Accuracy\", \"Val\", float(val_acc), iteration=epoch)\n",
    "        task.logger.report_scalar(\"Learning Rate\", \"LR\", float(optimizer.param_groups[0][\"lr\"]), iteration=epoch)\n",
    "        task.logger.flush()\n",
    "\n",
    "    marker = \"\"\n",
    "    if val_acc > best_val_acc:\n",
    "        best_val_acc = val_acc\n",
    "        torch.save(model.state_dict(), best_ckpt_path)\n",
    "        marker = \" *\"\n",
    "\n",
    "    print(f\"{epoch:>6} | {train_loss:>10.4f} | {train_acc:>8.2%} | {val_loss:>10.4f} | {val_acc:>8.2%}{marker}\")\n",
    "\n",
    "print(f\"\\nBest val accuracy: {best_val_acc:.2%}\")\n",
    "print(f\"Checkpoint: {best_ckpt_path}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 7. Loss and accuracy curves"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 4))\n",
    "epochs = history[\"epochs\"]\n",
    "ax1.plot(epochs, history[\"train_loss\"], label=\"Train\")\n",
    "ax1.plot(epochs, history[\"val_loss\"], label=\"Val\")\n",
    "ax1.set_xlabel(\"Epoch\"); ax1.set_ylabel(\"Loss\"); ax1.set_title(\"Loss\"); ax1.legend()\n",
    "ax2.plot(epochs, history[\"train_acc\"], label=\"Train\")\n",
    "ax2.plot(epochs, history[\"val_acc\"], label=\"Val\")\n",
    "ax2.set_xlabel(\"Epoch\"); ax2.set_ylabel(\"Accuracy\"); ax2.set_title(\"Accuracy\"); ax2.legend()\n",
    "plt.suptitle(f\"MLP Training — {CFG['model_name']}\")\n",
    "plt.tight_layout()\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 8. Test evaluation (random split)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "model.load_state_dict(torch.load(best_ckpt_path, weights_only=True))\n",
    "model.eval()\n",
    "\n",
    "all_preds, all_labels, all_probs = [], [], []\n",
    "test_loss_sum, test_total = 0.0, 0\n",
    "with torch.no_grad():\n",
    "    for xb, yb in test_loader:\n",
    "        xb, yb = xb.to(device), yb.to(device)\n",
    "        out = model(xb)\n",
    "        test_loss_sum += criterion(out, yb).item() * xb.size(0)\n",
    "        test_total += xb.size(0)\n",
    "        probs = torch.softmax(out, dim=1)\n",
    "        all_preds.extend(out.argmax(1).cpu().numpy())\n",
    "        all_labels.extend(yb.cpu().numpy())\n",
    "        all_probs.extend(probs.cpu().numpy())\n",
    "\n",
    "test_loss = test_loss_sum / test_total\n",
    "test_preds = np.array(all_preds)\n",
    "test_labels = np.array(all_labels)\n",
    "test_probs = np.array(all_probs)\n",
    "\n",
    "test_acc = float(accuracy_score(test_labels, test_preds))\n",
    "test_f1 = float(f1_score(test_labels, test_preds, average=\"weighted\"))\n",
    "if num_classes > 2:\n",
    "    test_auc = float(roc_auc_score(test_labels, test_probs, multi_class=\"ovr\", average=\"weighted\"))\n",
    "else:\n",
    "    test_auc = float(roc_auc_score(test_labels, test_probs[:, 1]))\n",
    "\n",
    "print(f\"[TEST] Loss: {test_loss:.4f}\")\n",
    "print(f\"[TEST] Accuracy: {test_acc:.2%}\")\n",
354
+ "print(f\"[TEST] F1: {test_f1:.4f}\")\n",
355
+ "print(f\"[TEST] ROC-AUC: {test_auc:.4f}\")\n",
356
+ "\n",
357
+ "if task is not None:\n",
358
+ " task.logger.report_single_value(\"test_accuracy\", test_acc)\n",
359
+ " task.logger.report_single_value(\"test_f1\", test_f1)\n",
360
+ " task.logger.report_single_value(\"test_auc\", test_auc)\n",
361
+ " task.logger.flush()\n",
362
+ "\n",
363
+ "print(\"\\nClassification report:\")\n",
364
+ "print(classification_report(test_labels, test_preds, target_names=[\"Unfocused (0)\", \"Focused (1)\"]))"
365
+ ]
366
+ },
367
+ {
368
+ "cell_type": "markdown",
369
+ "metadata": {},
370
+ "source": [
371
+ "## 9. Confusion matrix"
372
+ ]
373
+ },
374
+ {
375
+ "cell_type": "code",
376
+ "execution_count": null,
377
+ "metadata": {},
378
+ "outputs": [],
379
+ "source": [
380
+ "fig, ax = plt.subplots(figsize=(5, 4))\n",
381
+ "cm = confusion_matrix(test_labels, test_preds)\n",
382
+ "ConfusionMatrixDisplay(cm, display_labels=[\"Unfocused\", \"Focused\"]).plot(ax=ax, cmap=\"Blues\")\n",
383
+ "ax.set_title(f\"MLP confusion matrix — test acc {test_acc:.2%}\")\n",
384
+ "plt.tight_layout()\n",
385
+ "plt.show()"
386
+ ]
387
+ },
388
+ {
389
+ "cell_type": "markdown",
390
+ "metadata": {},
391
+ "source": [
392
+ "## 10. Save checkpoint and JSON log"
393
+ ]
394
+ },
395
+ {
396
+ "cell_type": "code",
397
+ "execution_count": null,
398
+ "metadata": {},
399
+ "outputs": [],
400
+ "source": [
401
+ "history[\"test_loss\"] = round(test_loss, 4)\n",
402
+ "history[\"test_acc\"] = round(test_acc, 4)\n",
403
+ "history[\"test_f1\"] = round(test_f1, 4)\n",
404
+ "history[\"test_auc\"] = round(test_auc, 4)\n",
405
+ "\n",
406
+ "os.makedirs(CFG[\"logs_dir\"], exist_ok=True)\n",
407
+ "log_path = os.path.join(CFG[\"logs_dir\"], f\"mlp_{CFG['model_name']}_training_log.json\")\n",
408
+ "with open(log_path, \"w\") as f:\n",
409
+ " json.dump(history, f, indent=2)\n",
410
+ "\n",
411
+ "print(f\"[CKPT] Best model: {best_ckpt_path}\")\n",
412
+ "print(f\"[LOG] History: {log_path}\")"
413
+ ]
414
+ },
415
+ {
416
+ "cell_type": "markdown",
417
+ "metadata": {},
418
+ "source": [
419
+ "## 11. LOPO comparison (MLP)\n",
420
+ "\n",
421
+ "Train+test with Leave-One-Person-Out so we can compare fairly with XGBoost/RF under LOPO."
422
+ ]
423
+ },
424
+ {
425
+ "cell_type": "code",
426
+ "execution_count": null,
427
+ "metadata": {},
428
+ "outputs": [],
429
+ "source": [
430
+ "def train_mlp_on_splits(X_train, y_train, X_test, y_test, cfg, n_features, n_classes):\n",
431
+ " device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n",
432
+ " sc = StandardScaler()\n",
433
+ " X_tr = sc.fit_transform(X_train)\n",
434
+ " X_te = sc.transform(X_test)\n",
435
+ "\n",
436
+ " tr_ds = TensorDataset(torch.tensor(X_tr, dtype=torch.float32), torch.tensor(y_train, dtype=torch.long))\n",
437
+ " te_ds = TensorDataset(torch.tensor(X_te, dtype=torch.float32), torch.tensor(y_test, dtype=torch.long))\n",
438
+ " tr_loader = DataLoader(tr_ds, batch_size=cfg[\"batch_size\"], shuffle=True)\n",
439
+ " te_loader = DataLoader(te_ds, batch_size=cfg[\"batch_size\"])\n",
440
+ "\n",
441
+ " net = MLP(n_features, cfg[\"hidden_sizes\"], n_classes).to(device)\n",
442
+ " opt = optim.Adam(net.parameters(), lr=cfg[\"lr\"])\n",
443
+ " crit = nn.CrossEntropyLoss()\n",
444
+ "\n",
445
+ " for _ in range(cfg[\"epochs\"]):\n",
446
+ " net.train()\n",
447
+ " for xb, yb in tr_loader:\n",
448
+ " xb, yb = xb.to(device), yb.to(device)\n",
449
+ " opt.zero_grad()\n",
450
+ " crit(net(xb), yb).backward()\n",
451
+ " opt.step()\n",
452
+ "\n",
453
+ " net.eval()\n",
454
+ " preds_list, probs_list, labels_list = [], [], []\n",
455
+ " with torch.no_grad():\n",
456
+ " for xb, yb in te_loader:\n",
457
+ " xb = xb.to(device)\n",
458
+ " out = net(xb)\n",
459
+ " preds_list.extend(out.argmax(1).cpu().numpy())\n",
460
+ " probs_list.extend(torch.softmax(out, dim=1).cpu().numpy())\n",
461
+ " labels_list.extend(yb.numpy())\n",
462
+ "\n",
463
+ " preds = np.array(preds_list)\n",
464
+ " probs = np.array(probs_list)\n",
465
+ " labels = np.array(labels_list)\n",
466
+ " acc = accuracy_score(labels, preds)\n",
467
+ " f1 = f1_score(labels, preds, average=\"weighted\")\n",
468
+ " auc = roc_auc_score(labels, probs[:, 1]) if n_classes == 2 else roc_auc_score(labels, probs, multi_class=\"ovr\", average=\"weighted\")\n",
469
+ " return {\"accuracy\": acc, \"f1\": f1, \"roc_auc\": auc}"
470
+ ]
471
+ },
472
+ {
473
+ "cell_type": "code",
474
+ "execution_count": null,
475
+ "metadata": {},
476
+ "outputs": [],
477
+ "source": [
478
+ "print(\"MLP LOPO evaluation\")\n",
479
+ "print(\"-\" * 60)\n",
480
+ "\n",
481
+ "lopo_results = []\n",
482
+ "for test_person in person_names:\n",
483
+ " train_persons = [p for p in person_names if p != test_person]\n",
484
+ " X_tr = np.concatenate([by_person[p][0] for p in train_persons], axis=0)\n",
485
+ " y_tr = np.concatenate([by_person[p][1] for p in train_persons], axis=0)\n",
486
+ " X_te, y_te = by_person[test_person]\n",
487
+ "\n",
488
+ " set_seed(CFG[\"seed\"])\n",
489
+ " metrics = train_mlp_on_splits(X_tr, y_tr, X_te, y_te, CFG, num_features, num_classes)\n",
490
+ " metrics[\"test_person\"] = test_person\n",
491
+ " metrics[\"n_test\"] = len(y_te)\n",
492
+ " lopo_results.append(metrics)\n",
493
+ " print(f\" test={test_person}: acc={metrics['accuracy']:.2%} F1={metrics['f1']:.4f} AUC={metrics['roc_auc']:.4f} (n={len(y_te)})\")\n",
494
+ "\n",
495
+ "print(\"\\nMLP LOPO summary (mean +/- std):\")\n",
496
+ "for m in [\"accuracy\", \"f1\", \"roc_auc\"]:\n",
497
+ " vals = [r[m] for r in lopo_results]\n",
498
+ " print(f\" {m}: {np.mean(vals):.4f} +/- {np.std(vals):.4f}\")"
499
+ ]
500
+ },
501
+ {
502
+ "cell_type": "markdown",
503
+ "metadata": {},
504
+ "source": [
505
+ "## 12. Random split vs LOPO summary"
506
+ ]
507
+ },
508
+ {
509
+ "cell_type": "code",
510
+ "execution_count": null,
511
+ "metadata": {},
512
+ "outputs": [],
513
+ "source": [
514
+ "import pandas as pd\n",
515
+ "\n",
516
+ "lopo_acc = np.mean([r[\"accuracy\"] for r in lopo_results])\n",
517
+ "lopo_f1 = np.mean([r[\"f1\"] for r in lopo_results])\n",
518
+ "lopo_auc = np.mean([r[\"roc_auc\"] for r in lopo_results])\n",
519
+ "\n",
520
+ "summary = pd.DataFrame([\n",
521
+ " {\"Method\": \"Random split (70/15/15)\", \"Accuracy\": f\"{test_acc:.2%}\", \"F1\": f\"{test_f1:.4f}\", \"ROC-AUC\": f\"{test_auc:.4f}\"},\n",
522
+ " {\"Method\": \"LOPO (mean)\", \"Accuracy\": f\"{lopo_acc:.2%}\", \"F1\": f\"{lopo_f1:.4f}\", \"ROC-AUC\": f\"{lopo_auc:.4f}\"},\n",
523
+ "])\n",
524
+ "display(summary)\n",
525
+ "\n",
526
+ "print(\"\\nCompare these MLP LOPO numbers with XGBoost (from xgboost.ipynb).\")\n",
527
+ "print(\"If XGB LOPO > MLP LOPO, XGB generalises better across unseen persons.\")"
528
+ ]
529
+ },
530
+ {
531
+ "cell_type": "markdown",
532
+ "metadata": {},
533
+ "source": [
534
+ "## 13. Per-person accuracy bar chart"
535
+ ]
536
+ },
537
+ {
538
+ "cell_type": "code",
539
+ "execution_count": null,
540
+ "metadata": {},
541
+ "outputs": [],
542
+ "source": [
543
+ "fig, ax = plt.subplots(figsize=(10, 4))\n",
544
+ "names_sorted = [r[\"test_person\"] for r in lopo_results]\n",
545
+ "accs = [r[\"accuracy\"] for r in lopo_results]\n",
546
+ "ax.bar(names_sorted, accs, color=\"steelblue\", edgecolor=\"black\")\n",
547
+ "ax.axhline(y=lopo_acc, color=\"red\", linestyle=\"--\", label=f\"Mean = {lopo_acc:.2%}\")\n",
548
+ "ax.set_ylabel(\"Accuracy\")\n",
549
+ "ax.set_xlabel(\"Left-out person\")\n",
550
+ "ax.set_title(\"MLP LOPO: test accuracy per left-out person\")\n",
551
+ "ax.legend()\n",
552
+ "plt.xticks(rotation=45, ha=\"right\")\n",
553
+ "plt.tight_layout()\n",
554
+ "plt.show()"
555
+ ]
556
+ }
557
+ ],
558
+ "metadata": {
559
+ "kernelspec": {
560
+ "display_name": "Python 3",
561
+ "language": "python",
562
+ "name": "python3"
563
+ },
564
+ "language_info": {
565
+ "name": "python",
566
+ "version": "3.13.0"
567
+ }
568
+ },
569
+ "nbformat": 4,
570
+ "nbformat_minor": 4
571
+ }
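Section 11 of the notebook above builds each LOPO fold by concatenating every participant's data except the held-out person's. A minimal, self-contained sketch of that split logic — `by_person` here is a small synthetic stand-in for the dict the notebook gets from `load_per_person`:

```python
import numpy as np

def lopo_folds(by_person):
    """Yield (test_person, X_train, y_train, X_test, y_test) per fold."""
    names = sorted(by_person)
    for test_person in names:
        train = [p for p in names if p != test_person]
        # Training data is every other person's samples concatenated.
        X_tr = np.concatenate([by_person[p][0] for p in train], axis=0)
        y_tr = np.concatenate([by_person[p][1] for p in train], axis=0)
        X_te, y_te = by_person[test_person]
        yield test_person, X_tr, y_tr, X_te, y_te

# Tiny synthetic stand-in for the notebook's `by_person` dict:
# person name -> (features, labels).
rng = np.random.default_rng(0)
by_person = {n: (rng.normal(size=(k, 10)), rng.integers(0, 2, size=k))
             for n, k in [("a", 30), ("b", 20), ("c", 25)]}

folds = list(lopo_folds(by_person))
```

Each fold holds out exactly one person, so train and test sizes always sum to the full dataset — the property that makes LOPO a fair cross-person generalisation test.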
notebooks/xgboost.ipynb ADDED
@@ -0,0 +1,475 @@
1
+ {
2
+ "cells": [
3
+ {
4
+ "cell_type": "markdown",
5
+ "metadata": {},
6
+ "source": [
7
+ "# XGBoost Training + LOPO evaluation (ClearML-compatible)\n",
8
+ "\n",
9
+ "XGBoost for focus classification.\n",
10
+ "- Single CFG dict (ClearML `task.connect(CFG)`)\n",
11
+ "- 70/15/15 stratified random split with per-round loss logging\n",
12
+ "- Test evaluation: accuracy, F1, ROC-AUC\n",
13
+ "- ClearML scalar logging (opt-in)\n",
14
+ "- LOPO comparison at the end"
15
+ ]
16
+ },
17
+ {
18
+ "cell_type": "markdown",
19
+ "metadata": {},
20
+ "source": [
21
+ "## 1. Imports and CFG"
22
+ ]
23
+ },
24
+ {
25
+ "cell_type": "code",
26
+ "execution_count": null,
27
+ "metadata": {},
28
+ "outputs": [],
29
+ "source": [
30
+ "import json\n",
31
+ "import os\n",
32
+ "import sys\n",
33
+ "import random\n",
34
+ "\n",
35
+ "import numpy as np\n",
36
+ "from xgboost import XGBClassifier\n",
37
+ "from sklearn.model_selection import train_test_split\n",
38
+ "from sklearn.metrics import (\n",
39
+ " accuracy_score, f1_score, roc_auc_score,\n",
40
+ " classification_report, confusion_matrix, ConfusionMatrixDisplay,\n",
41
+ ")\n",
42
+ "import matplotlib.pyplot as plt\n",
43
+ "import warnings\n",
44
+ "warnings.filterwarnings(\"ignore\")\n",
45
+ "\n",
46
+ "# Add project root to sys.path\n",
47
+ "_cwd = os.getcwd()\n",
48
+ "PROJECT_ROOT = _cwd if os.path.isdir(os.path.join(_cwd, \"models\")) else os.path.abspath(os.path.join(_cwd, \"..\"))\n",
49
+ "if PROJECT_ROOT not in sys.path:\n",
50
+ " sys.path.insert(0, PROJECT_ROOT)\n",
51
+ "\n",
52
+ "from data_preparation.prepare_dataset import load_per_person, SELECTED_FEATURES, _split_and_scale\n",
53
+ "\n",
54
+ "CFG = {\n",
55
+ " \"model_name\": \"face_orientation\",\n",
56
+ " \"seed\": 42,\n",
57
+ " \"split_ratios\": (0.7, 0.15, 0.15),\n",
58
+ " \"scale\": False, # tree-based model — scaling unnecessary\n",
59
+ " \"n_estimators\": 600,\n",
60
+ " \"max_depth\": 8,\n",
61
+ " \"learning_rate\": 0.149,\n",
62
+ " \"subsample\": 0.9625,\n",
63
+ " \"colsample_bytree\": 0.9013,\n",
64
+ " \"reg_alpha\": 1.1407,\n",
65
+ " \"reg_lambda\": 2.4181,\n",
66
+ " \"eval_metric\": \"logloss\",\n",
67
+ " \"checkpoints_dir\": os.path.join(PROJECT_ROOT, \"models\", \"xgboost\", \"checkpoints\"),\n",
68
+ " \"logs_dir\": os.path.join(PROJECT_ROOT, \"evaluation\", \"logs\"),\n",
69
+ "}\n",
70
+ "\n",
71
+ "print(f\"Project root: {PROJECT_ROOT}\")"
72
+ ]
73
+ },
74
+ {
75
+ "cell_type": "markdown",
76
+ "metadata": {},
77
+ "source": [
78
+ "## 2. ClearML (optional)"
79
+ ]
80
+ },
81
+ {
82
+ "cell_type": "code",
83
+ "execution_count": null,
84
+ "metadata": {},
85
+ "outputs": [],
86
+ "source": [
87
+ "USE_CLEARML = False # set True when ClearML credentials are configured\n",
88
+ "task = None\n",
89
+ "\n",
90
+ "if USE_CLEARML:\n",
91
+ " from clearml import Task\n",
92
+ " task = Task.init(\n",
93
+ " project_name=\"FocusGuards Large Group Project\",\n",
94
+ " task_name=\"XGBoost Training + LOPO\",\n",
95
+ " tags=[\"training\", \"xgboost\"]\n",
96
+ " )\n",
97
+ " task.connect(CFG)\n",
98
+ " print(\"[ClearML] Connected\")\n",
99
+ "else:\n",
100
+ " print(\"[ClearML] Disabled (set USE_CLEARML = True to enable)\")"
101
+ ]
102
+ },
103
+ {
104
+ "cell_type": "markdown",
105
+ "metadata": {},
106
+ "source": [
107
+ "## 3. Load data"
108
+ ]
109
+ },
110
+ {
111
+ "cell_type": "code",
112
+ "execution_count": null,
113
+ "metadata": {},
114
+ "outputs": [],
115
+ "source": [
116
+ "def set_seed(seed):\n",
117
+ " random.seed(seed)\n",
118
+ " np.random.seed(seed)\n",
119
+ "\n",
120
+ "set_seed(CFG[\"seed\"])\n",
121
+ "\n",
122
+ "by_person, X_all, y_all = load_per_person(CFG[\"model_name\"])\n",
123
+ "person_names = sorted(by_person.keys())\n",
124
+ "num_features = X_all.shape[1]\n",
125
+ "num_classes = int(y_all.max()) + 1\n",
126
+ "print(f\"\\nPersons: {person_names}\")"
127
+ ]
128
+ },
129
+ {
130
+ "cell_type": "markdown",
131
+ "metadata": {},
132
+ "source": [
133
+ "## 4. Random split (70/15/15)"
134
+ ]
135
+ },
136
+ {
137
+ "cell_type": "code",
138
+ "execution_count": null,
139
+ "metadata": {},
140
+ "outputs": [],
141
+ "source": [
142
+ "splits, _ = _split_and_scale(X_all, y_all, CFG[\"split_ratios\"], CFG[\"seed\"], CFG[\"scale\"])\n",
143
+ "X_train, y_train = splits[\"X_train\"], splits[\"y_train\"]\n",
144
+ "X_val, y_val = splits[\"X_val\"], splits[\"y_val\"]\n",
145
+ "X_test, y_test = splits[\"X_test\"], splits[\"y_test\"]\n",
146
+ "\n",
147
+ "print(f\"Features: {num_features}, Classes: {num_classes}\")"
148
+ ]
149
+ },
150
+ {
151
+ "cell_type": "markdown",
152
+ "metadata": {},
153
+ "source": [
154
+ "## 5. Model definition and training"
155
+ ]
156
+ },
157
+ {
158
+ "cell_type": "code",
159
+ "execution_count": null,
160
+ "metadata": {},
161
+ "outputs": [],
162
+ "source": [
163
+ "model = XGBClassifier(\n",
164
+ " n_estimators=CFG[\"n_estimators\"],\n",
165
+ " max_depth=CFG[\"max_depth\"],\n",
166
+ " learning_rate=CFG[\"learning_rate\"],\n",
167
+ " subsample=CFG[\"subsample\"],\n",
168
+ " colsample_bytree=CFG[\"colsample_bytree\"],\n",
169
+ " reg_alpha=CFG[\"reg_alpha\"],\n",
170
+ " reg_lambda=CFG[\"reg_lambda\"],\n",
171
+ " eval_metric=CFG[\"eval_metric\"],\n",
172
+ " use_label_encoder=False,\n",
173
+ " random_state=CFG[\"seed\"],\n",
174
+ " verbosity=1,\n",
175
+ ")\n",
176
+ "\n",
177
+ "model.fit(\n",
178
+ " X_train, y_train,\n",
179
+ " eval_set=[(X_train, y_train), (X_val, y_val)],\n",
180
+ " verbose=10,\n",
181
+ ")\n",
182
+ "\n",
183
+ "print(f\"\\n[TRAIN] Training complete: {CFG['n_estimators']} rounds\")"
184
+ ]
185
+ },
186
+ {
187
+ "cell_type": "markdown",
188
+ "metadata": {},
189
+ "source": [
190
+ "## 6. Per-round loss logging"
191
+ ]
192
+ },
193
+ {
194
+ "cell_type": "code",
195
+ "execution_count": null,
196
+ "metadata": {},
197
+ "outputs": [],
198
+ "source": [
199
+ "evals = model.evals_result()\n",
200
+ "train_losses = evals[\"validation_0\"][CFG[\"eval_metric\"]]\n",
201
+ "val_losses = evals[\"validation_1\"][CFG[\"eval_metric\"]]\n",
202
+ "rounds = list(range(1, len(train_losses) + 1))\n",
203
+ "\n",
204
+ "if task is not None:\n",
205
+ " for i, (tl, vl) in enumerate(zip(train_losses, val_losses)):\n",
206
+ " task.logger.report_scalar(\"Loss\", \"Train\", tl, iteration=i + 1)\n",
207
+ " task.logger.report_scalar(\"Loss\", \"Val\", vl, iteration=i + 1)\n",
208
+ " task.logger.flush()\n",
209
+ "\n",
210
+ "print(f\"Final train logloss: {train_losses[-1]:.4f}\")\n",
211
+ "print(f\"Final val logloss: {val_losses[-1]:.4f}\")"
212
+ ]
213
+ },
214
+ {
215
+ "cell_type": "markdown",
216
+ "metadata": {},
217
+ "source": [
218
+ "## 7. Loss curve"
219
+ ]
220
+ },
221
+ {
222
+ "cell_type": "code",
223
+ "execution_count": null,
224
+ "metadata": {},
225
+ "outputs": [],
226
+ "source": [
227
+ "fig, ax = plt.subplots(figsize=(8, 4))\n",
228
+ "ax.plot(rounds, train_losses, label=\"Train\")\n",
229
+ "ax.plot(rounds, val_losses, label=\"Val\")\n",
230
+ "ax.set_xlabel(\"Boosting round\")\n",
231
+ "ax.set_ylabel(\"Log loss\")\n",
232
+ "ax.set_title(f\"XGBoost Training — {CFG['model_name']}\")\n",
233
+ "ax.legend()\n",
234
+ "plt.tight_layout()\n",
235
+ "plt.show()"
236
+ ]
237
+ },
238
+ {
239
+ "cell_type": "markdown",
240
+ "metadata": {},
241
+ "source": [
242
+ "## 8. Test evaluation (random split)"
243
+ ]
244
+ },
245
+ {
246
+ "cell_type": "code",
247
+ "execution_count": null,
248
+ "metadata": {},
249
+ "outputs": [],
250
+ "source": [
251
+ "test_preds = model.predict(X_test)\n",
252
+ "test_probs = model.predict_proba(X_test)\n",
253
+ "test_acc = float(accuracy_score(y_test, test_preds))\n",
254
+ "test_f1 = float(f1_score(y_test, test_preds, average=\"weighted\"))\n",
255
+ "if num_classes > 2:\n",
256
+ " test_auc = float(roc_auc_score(y_test, test_probs, multi_class=\"ovr\", average=\"weighted\"))\n",
257
+ "else:\n",
258
+ " test_auc = float(roc_auc_score(y_test, test_probs[:, 1]))\n",
259
+ "\n",
260
+ "print(f\"[TEST] Accuracy: {test_acc:.2%}\")\n",
261
+ "print(f\"[TEST] F1: {test_f1:.4f}\")\n",
262
+ "print(f\"[TEST] ROC-AUC: {test_auc:.4f}\")\n",
263
+ "\n",
264
+ "if task is not None:\n",
265
+ " task.logger.report_single_value(\"test_accuracy\", test_acc)\n",
266
+ " task.logger.report_single_value(\"test_f1\", test_f1)\n",
267
+ " task.logger.report_single_value(\"test_auc\", test_auc)\n",
268
+ " task.logger.flush()\n",
269
+ "\n",
270
+ "print(\"\\nClassification report:\")\n",
271
+ "print(classification_report(y_test, test_preds, target_names=[\"Unfocused (0)\", \"Focused (1)\"]))"
272
+ ]
273
+ },
274
+ {
275
+ "cell_type": "markdown",
276
+ "metadata": {},
277
+ "source": [
278
+ "## 9. Confusion matrix"
279
+ ]
280
+ },
281
+ {
282
+ "cell_type": "code",
283
+ "execution_count": null,
284
+ "metadata": {},
285
+ "outputs": [],
286
+ "source": [
287
+ "fig, ax = plt.subplots(figsize=(5, 4))\n",
288
+ "cm = confusion_matrix(y_test, test_preds)\n",
289
+ "ConfusionMatrixDisplay(cm, display_labels=[\"Unfocused\", \"Focused\"]).plot(ax=ax, cmap=\"Blues\")\n",
290
+ "ax.set_title(f\"XGBoost confusion matrix — test acc {test_acc:.2%}\")\n",
291
+ "plt.tight_layout()\n",
292
+ "plt.show()"
293
+ ]
294
+ },
295
+ {
296
+ "cell_type": "markdown",
297
+ "metadata": {},
298
+ "source": [
299
+ "## 10. Save checkpoint and JSON log"
300
+ ]
301
+ },
302
+ {
303
+ "cell_type": "code",
304
+ "execution_count": null,
305
+ "metadata": {},
306
+ "outputs": [],
307
+ "source": [
308
+ "os.makedirs(CFG[\"checkpoints_dir\"], exist_ok=True)\n",
309
+ "model_path = os.path.join(CFG[\"checkpoints_dir\"], f\"{CFG['model_name']}_best.json\")\n",
310
+ "model.save_model(model_path)\n",
311
+ "\n",
312
+ "history = {\n",
313
+ " \"model_name\": f\"xgboost_{CFG['model_name']}\",\n",
314
+ " \"n_estimators\": CFG[\"n_estimators\"],\n",
315
+ " \"max_depth\": CFG[\"max_depth\"],\n",
316
+ " \"epochs\": rounds,\n",
317
+ " \"train_loss\": [round(v, 4) for v in train_losses],\n",
318
+ " \"val_loss\": [round(v, 4) for v in val_losses],\n",
319
+ " \"test_acc\": round(test_acc, 4),\n",
320
+ " \"test_f1\": round(test_f1, 4),\n",
321
+ " \"test_auc\": round(test_auc, 4),\n",
322
+ "}\n",
323
+ "\n",
324
+ "os.makedirs(CFG[\"logs_dir\"], exist_ok=True)\n",
325
+ "log_path = os.path.join(CFG[\"logs_dir\"], f\"xgboost_{CFG['model_name']}_training_log.json\")\n",
326
+ "with open(log_path, \"w\") as f:\n",
327
+ " json.dump(history, f, indent=2)\n",
328
+ "\n",
329
+ "print(f\"[CKPT] Model: {model_path}\")\n",
330
+ "print(f\"[LOG] History: {log_path}\")"
331
+ ]
332
+ },
333
+ {
334
+ "cell_type": "markdown",
335
+ "metadata": {},
336
+ "source": [
337
+ "## 11. LOPO comparison (XGBoost)\n",
338
+ "\n",
339
+ "Train+test with Leave-One-Person-Out so we can compare fairly with MLP/RF under LOPO."
340
+ ]
341
+ },
342
+ {
343
+ "cell_type": "code",
344
+ "execution_count": null,
345
+ "metadata": {},
346
+ "outputs": [],
347
+ "source": [
348
+ "def train_xgb_on_splits(X_train, y_train, X_test, y_test, cfg):\n",
349
+ " m = XGBClassifier(\n",
350
+ " n_estimators=cfg[\"n_estimators\"],\n",
351
+ " max_depth=cfg[\"max_depth\"],\n",
352
+ " learning_rate=cfg[\"learning_rate\"],\n",
353
+ " subsample=cfg[\"subsample\"],\n",
354
+ " colsample_bytree=cfg[\"colsample_bytree\"],\n",
355
+ " reg_alpha=cfg[\"reg_alpha\"],\n",
356
+ " reg_lambda=cfg[\"reg_lambda\"],\n",
357
+ " eval_metric=cfg[\"eval_metric\"],\n",
358
+ " use_label_encoder=False,\n",
359
+ " random_state=cfg[\"seed\"],\n",
360
+ " verbosity=0,\n",
361
+ " )\n",
362
+ " m.fit(X_train, y_train, verbose=False)\n",
363
+ "\n",
364
+ " preds = m.predict(X_test)\n",
365
+ " probs = m.predict_proba(X_test)\n",
366
+ " n_cls = probs.shape[1]\n",
367
+ " acc = accuracy_score(y_test, preds)\n",
368
+ " f1 = f1_score(y_test, preds, average=\"weighted\")\n",
369
+ " auc = roc_auc_score(y_test, probs[:, 1]) if n_cls == 2 else roc_auc_score(y_test, probs, multi_class=\"ovr\", average=\"weighted\")\n",
370
+ " return {\"accuracy\": acc, \"f1\": f1, \"roc_auc\": auc}"
371
+ ]
372
+ },
373
+ {
374
+ "cell_type": "code",
375
+ "execution_count": null,
376
+ "metadata": {},
377
+ "outputs": [],
378
+ "source": [
379
+ "print(\"XGBoost LOPO evaluation\")\n",
380
+ "print(\"-\" * 60)\n",
381
+ "\n",
382
+ "lopo_results = []\n",
383
+ "for test_person in person_names:\n",
384
+ " train_persons = [p for p in person_names if p != test_person]\n",
385
+ " X_tr = np.concatenate([by_person[p][0] for p in train_persons], axis=0)\n",
386
+ " y_tr = np.concatenate([by_person[p][1] for p in train_persons], axis=0)\n",
387
+ " X_te, y_te = by_person[test_person]\n",
388
+ "\n",
389
+ " set_seed(CFG[\"seed\"])\n",
390
+ " metrics = train_xgb_on_splits(X_tr, y_tr, X_te, y_te, CFG)\n",
391
+ " metrics[\"test_person\"] = test_person\n",
392
+ " metrics[\"n_test\"] = len(y_te)\n",
393
+ " lopo_results.append(metrics)\n",
394
+ " print(f\" test={test_person}: acc={metrics['accuracy']:.2%} F1={metrics['f1']:.4f} AUC={metrics['roc_auc']:.4f} (n={len(y_te)})\")\n",
395
+ "\n",
396
+ "print(\"\\nXGBoost LOPO summary (mean +/- std):\")\n",
397
+ "for m in [\"accuracy\", \"f1\", \"roc_auc\"]:\n",
398
+ " vals = [r[m] for r in lopo_results]\n",
399
+ " print(f\" {m}: {np.mean(vals):.4f} +/- {np.std(vals):.4f}\")"
400
+ ]
401
+ },
402
+ {
403
+ "cell_type": "markdown",
404
+ "metadata": {},
405
+ "source": [
406
+ "## 12. Random split vs LOPO summary"
407
+ ]
408
+ },
409
+ {
410
+ "cell_type": "code",
411
+ "execution_count": null,
412
+ "metadata": {},
413
+ "outputs": [],
414
+ "source": [
415
+ "import pandas as pd\n",
416
+ "\n",
417
+ "lopo_acc = np.mean([r[\"accuracy\"] for r in lopo_results])\n",
418
+ "lopo_f1 = np.mean([r[\"f1\"] for r in lopo_results])\n",
419
+ "lopo_auc = np.mean([r[\"roc_auc\"] for r in lopo_results])\n",
420
+ "\n",
421
+ "summary = pd.DataFrame([\n",
422
+ " {\"Method\": \"Random split (70/15/15)\", \"Accuracy\": f\"{test_acc:.2%}\", \"F1\": f\"{test_f1:.4f}\", \"ROC-AUC\": f\"{test_auc:.4f}\"},\n",
423
+ " {\"Method\": \"LOPO (mean)\", \"Accuracy\": f\"{lopo_acc:.2%}\", \"F1\": f\"{lopo_f1:.4f}\", \"ROC-AUC\": f\"{lopo_auc:.4f}\"},\n",
424
+ "])\n",
425
+ "display(summary)\n",
426
+ "\n",
427
+ "print(\"\\nPer-fold LOPO results:\")\n",
428
+ "display(pd.DataFrame(lopo_results))\n",
429
+ "\n",
430
+ "print(\"\\nCompare these XGBoost LOPO numbers with MLP (from mlp.ipynb).\")\n",
431
+ "print(\"If XGB LOPO > MLP LOPO, XGB generalises better across unseen persons.\")"
432
+ ]
433
+ },
434
+ {
435
+ "cell_type": "markdown",
436
+ "metadata": {},
437
+ "source": [
438
+ "## 13. Per-person accuracy bar chart"
439
+ ]
440
+ },
441
+ {
442
+ "cell_type": "code",
443
+ "execution_count": null,
444
+ "metadata": {},
445
+ "outputs": [],
446
+ "source": [
447
+ "fig, ax = plt.subplots(figsize=(10, 4))\n",
448
+ "names_sorted = [r[\"test_person\"] for r in lopo_results]\n",
449
+ "accs = [r[\"accuracy\"] for r in lopo_results]\n",
450
+ "ax.bar(names_sorted, accs, color=\"steelblue\", edgecolor=\"black\")\n",
451
+ "ax.axhline(y=lopo_acc, color=\"red\", linestyle=\"--\", label=f\"Mean = {lopo_acc:.2%}\")\n",
452
+ "ax.set_ylabel(\"Accuracy\")\n",
453
+ "ax.set_xlabel(\"Left-out person\")\n",
454
+ "ax.set_title(\"XGBoost LOPO: test accuracy per left-out person\")\n",
455
+ "ax.legend()\n",
456
+ "plt.xticks(rotation=45, ha=\"right\")\n",
457
+ "plt.tight_layout()\n",
458
+ "plt.show()"
459
+ ]
460
+ }
461
+ ],
462
+ "metadata": {
463
+ "kernelspec": {
464
+ "display_name": "Python 3",
465
+ "language": "python",
466
+ "name": "python3"
467
+ },
468
+ "language_info": {
469
+ "name": "python",
470
+ "version": "3.13.0"
471
+ }
472
+ },
473
+ "nbformat": 4,
474
+ "nbformat_minor": 4
475
+ }
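Sections 11–12 above compute the LOPO summary (mean ± std per metric) inline over the list of per-fold dicts. The same aggregation, factored into a small helper — the fold values below are made up purely for illustration:

```python
import numpy as np

# Hypothetical per-fold results in the shape the notebook's LOPO loop
# produces (one dict per left-out person; numbers are illustrative).
lopo_results = [
    {"test_person": "a", "accuracy": 0.81, "f1": 0.80, "roc_auc": 0.88},
    {"test_person": "b", "accuracy": 0.74, "f1": 0.73, "roc_auc": 0.82},
    {"test_person": "c", "accuracy": 0.69, "f1": 0.68, "roc_auc": 0.77},
]

# metric -> (mean, std) over folds, matching the notebook's printout.
summary = {m: (float(np.mean([r[m] for r in lopo_results])),
               float(np.std([r[m] for r in lopo_results])))
           for m in ("accuracy", "f1", "roc_auc")}

for metric, (mean, std) in summary.items():
    print(f"{metric}: {mean:.4f} +/- {std:.4f}")
```

The std across folds is worth reporting alongside the mean: a large spread signals that the model generalises well to some unseen people but poorly to others.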
tests/test_data_preparation.py ADDED
@@ -0,0 +1,39 @@
1
+ import os
2
+ import sys
3
+ import numpy as np
4
+
5
+
6
+ PROJECT_ROOT = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
7
+ if PROJECT_ROOT not in sys.path:
8
+ sys.path.insert(0, PROJECT_ROOT)
9
+
10
+ from data_preparation.prepare_dataset import (
11
+ SELECTED_FEATURES,
12
+ _generate_synthetic_data,
13
+ get_numpy_splits,
14
+ )
15
+
16
+
17
+ def test_generate_synthetic_data_shape():
18
+ X, y = _generate_synthetic_data("face_orientation")
19
+ assert X.shape[0] == 500
20
+ assert y.shape[0] == 500
21
+ assert X.shape[1] == len(SELECTED_FEATURES["face_orientation"])
22
+
23
+
24
+ def test_get_numpy_splits_consistency():
25
+ splits, num_features, num_classes, scaler = get_numpy_splits("face_orientation")
26
+
27
+ # each of the train/val/test splits should be non-empty
28
+ n_train = len(splits["y_train"])
29
+ n_val = len(splits["y_val"])
30
+ n_test = len(splits["y_test"])
31
+ assert n_train > 0
32
+ assert n_val > 0
33
+ assert n_test > 0
34
+
35
+ # the feature dimension should match num_features
36
+ assert splits["X_train"].shape[1] == num_features
37
+
38
+ assert num_classes >= 2
39
+
tests/test_health_endpoint.py ADDED
@@ -0,0 +1,18 @@
2
+ from fastapi.testclient import TestClient
3
+
4
+ from main import app
5
+
6
+
7
+ client = TestClient(app)
8
+
9
+
10
+ def test_health_endpoint_ok():
11
+ resp = client.get("/health")
12
+ assert resp.status_code == 200
13
+ data = resp.json()
14
+ assert "status" in data
15
+ assert isinstance(data["status"], str)
16
+ assert "models_loaded" in data
17
+ assert isinstance(data["models_loaded"], list)
18
+
tests/test_models_clip_features.py ADDED
@@ -0,0 +1,47 @@
1
+ import os
2
+ import sys
3
+
4
+ import numpy as np
5
+
6
+ PROJECT_ROOT = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
7
+ if PROJECT_ROOT not in sys.path:
8
+ sys.path.insert(0, PROJECT_ROOT)
9
+
10
+ from ui.pipeline import _clip_features
11
+ from models.collect_features import FEATURE_NAMES
12
+
13
+
14
+ def test_clip_features_clamps_ranges():
15
+ idx = {name: i for i, name in enumerate(FEATURE_NAMES)}
16
+ vec = np.zeros(len(FEATURE_NAMES), dtype=np.float32)
17
+
18
+ # Set values beyond the valid ranges so clipping is exercised
19
+ vec[idx["yaw"]] = 90.0
20
+ vec[idx["pitch"]] = -90.0
21
+ vec[idx["roll"]] = 90.0
22
+ vec[idx["head_deviation"]] = 999.0
23
+ for name in ("ear_left", "ear_right", "ear_avg"):
24
+ vec[idx[name]] = 2.0
25
+ vec[idx["mar"]] = 5.0
26
+ vec[idx["gaze_offset"]] = 1.0
27
+ vec[idx["perclos"]] = 2.0
28
+ vec[idx["blink_rate"]] = 100.0
29
+ vec[idx["closure_duration"]] = 50.0
30
+ vec[idx["yawn_duration"]] = 50.0
31
+
32
+ out = _clip_features(vec)
33
+
34
+ assert -45.0 <= out[idx["yaw"]] <= 45.0
35
+ assert -30.0 <= out[idx["pitch"]] <= 30.0
36
+ assert -30.0 <= out[idx["roll"]] <= 30.0
37
+
38
+ for name in ("ear_left", "ear_right", "ear_avg"):
39
+ assert 0.0 <= out[idx[name]] <= 0.85
40
+
41
+ assert 0.0 <= out[idx["mar"]] <= 1.0
42
+ assert 0.0 <= out[idx["gaze_offset"]] <= 0.5
43
+ assert 0.0 <= out[idx["perclos"]] <= 0.8
44
+ assert 0.0 <= out[idx["blink_rate"]] <= 30.0
45
+ assert 0.0 <= out[idx["closure_duration"]] <= 10.0
46
+ assert 0.0 <= out[idx["yawn_duration"]] <= 10.0
47
+
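The assertions above double as a specification of the clipping ranges. A sketch of a `_clip_features`-style function consistent with those ranges — the actual implementation lives in `ui/pipeline.py` and may differ; only features with asserted bounds are included here:

```python
import numpy as np

# Clipping ranges implied by the assertions in the test above.
CLIP_RANGES = {
    "yaw": (-45.0, 45.0), "pitch": (-30.0, 30.0), "roll": (-30.0, 30.0),
    "ear_left": (0.0, 0.85), "ear_right": (0.0, 0.85), "ear_avg": (0.0, 0.85),
    "mar": (0.0, 1.0), "gaze_offset": (0.0, 0.5), "perclos": (0.0, 0.8),
    "blink_rate": (0.0, 30.0), "closure_duration": (0.0, 10.0),
    "yawn_duration": (0.0, 10.0),
}

def clip_features(vec, feature_names):
    """Clamp each named feature into its valid range; others pass through."""
    out = vec.astype(np.float32).copy()
    for i, name in enumerate(feature_names):
        if name in CLIP_RANGES:
            lo, hi = CLIP_RANGES[name]
            out[i] = np.clip(out[i], lo, hi)
    return out

names = ["yaw", "mar", "blink_rate"]
clipped = clip_features(np.array([90.0, 5.0, 100.0]), names)
# clipped is [45.0, 1.0, 30.0]
```

Clamping features to physically plausible ranges before inference guards the classifiers against landmark-detection glitches producing extreme outlier values.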