import React, { useState } from 'react';

function Help() {
  const [clearMsg, setClearMsg] = useState('');

  const clearAllHistory = async () => {
    if (!window.confirm('Delete all saved sessions? My Records and My Achievement will reset.')) return;
    setClearMsg('');
    try {
      const res = await fetch('/api/history', { method: 'DELETE' });
      const data = await res.json().catch(() => ({}));
      if (res.ok && data.status === 'success') {
        setClearMsg('Session history cleared.');
      } else {
        setClearMsg(data.message || 'Could not clear history.');
      }
    } catch (e) {
      setClearMsg('Request failed.');
    }
  };

  return (

Help

How to Use Focus Guard

  1. Navigate to the Focus page from the menu
  2. Allow camera access when prompted
  3. Click the green "Start" button to begin monitoring
  4. Position yourself in front of the camera
  5. The system will track your focus in real time using face mesh analysis
  6. Use the model selector to switch between detection models (MLP, XGBoost, Geometric, Hybrid)
  7. Click "Stop" when you're done to save the session

What is "Focused"?

The system considers you focused when:

  • Your face is detected and visible in the camera frame
  • Your head is oriented toward the screen (low yaw/pitch deviation)
  • Your eyes are open and gaze is directed forward
  • You are not yawning

The system uses MediaPipe Face Mesh to extract 478 facial landmarks, then computes features like head pose, eye aspect ratio (EAR), gaze offset, PERCLOS, and blink rate to determine focus.
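The eye aspect ratio (EAR) mentioned above has a compact standard formulation. The sketch below follows that textbook formula (Soukupová & Čech); the helper names and landmark ordering are illustrative, not this app's internal code:

```javascript
// Sketch: eye aspect ratio (EAR) from six eye landmarks, where each
// landmark is an {x, y} point in the classic p1..p6 order.
function dist(a, b) {
  return Math.hypot(a.x - b.x, a.y - b.y);
}

function eyeAspectRatio([p1, p2, p3, p4, p5, p6]) {
  // EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|)
  // High when the eye is open, near zero when closed.
  return (dist(p2, p6) + dist(p3, p5)) / (2 * dist(p1, p4));
}

// An open eye: vertical gaps comparable to the horizontal span.
const openEye = [
  { x: 0, y: 0 }, { x: 1, y: 1 }, { x: 2, y: 1 },
  { x: 3, y: 0 }, { x: 2, y: -1 }, { x: 1, y: -1 },
];
console.log(eyeAspectRatio(openEye).toFixed(2)); // "0.67"
```

An EAR near zero indicates a closed eye; temporal features such as blink rate and PERCLOS are then derived from how the EAR changes over recent frames.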

Available Models

MLP: Neural network trained on extracted facial features. Good balance of speed and accuracy.

XGBoost: Gradient-boosted tree model using 10 selected features. Strong on tabular data with fast inference.

Geometric: Rule-based scoring using head pose and eye openness. Lightweight; no trained model required.

Hybrid: Combines MLP predictions with geometric scoring for robust results.
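To illustrate how rule-based and hybrid scoring can work, here is a minimal sketch; the thresholds, weights, and blend factor are assumptions, not the app's tuned values:

```javascript
// Illustrative rule-based "geometric" scoring: start from a full score
// and penalize head turns and closed eyes. All numbers are assumptions.
function geometricScore({ yawDeg, pitchDeg, ear }) {
  let score = 1.0;
  if (Math.abs(yawDeg) > 25) score -= 0.4;   // looking away sideways
  if (Math.abs(pitchDeg) > 20) score -= 0.3; // looking up or down
  if (ear < 0.2) score -= 0.3;               // eyes mostly closed
  return Math.max(score, 0);
}

// Hybrid idea: blend an ML probability with the geometric score.
function hybridScore(mlpProb, geo, alpha = 0.6) {
  return alpha * mlpProb + (1 - alpha) * geo;
}

console.log(geometricScore({ yawDeg: 5, pitchDeg: 3, ear: 0.3 })); // 1
```

The blend makes the hybrid model robust: if the ML model is confused by unusual lighting, the geometric rules still anchor the result.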

Adjusting Settings

Frame Rate: Controls how many frames per second are sent for analysis. Recommended: 15-30 FPS. Minimum is 10 FPS to ensure temporal features (blink rate, PERCLOS) remain accurate.
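PERCLOS, one of the temporal features the frame rate affects, is simply the proportion of recent frames in which the eyes are closed. A minimal sketch, assuming one EAR sample per analyzed frame (the 0.2 threshold is illustrative):

```javascript
// PERCLOS = fraction of frames in a sliding window with eyes "closed"
// (EAR below a threshold). Too few samples makes the estimate noisy,
// which is why a minimum analysis rate matters.
function perclos(earSamples, closedThreshold = 0.2) {
  if (earSamples.length === 0) return 0;
  const closed = earSamples.filter((ear) => ear < closedThreshold).length;
  return closed / earSamples.length;
}

const recentWindow = [0.3, 0.31, 0.05, 0.04, 0.3, 0.29, 0.28, 0.3, 0.06, 0.3];
console.log(perclos(recentWindow)); // 0.3
```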

Model Selection: Switch models in real time using the pill buttons above the timeline. Different models may perform better depending on your lighting and setup.

Privacy & Data

Video frames are processed in real time on the server and are never stored. Only focus-status metadata (timestamps, confidence scores) is saved to the session database. View past runs under My Records; stats and badges live under My Achievement.
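For illustration, a persisted detection record might look like the sketch below; the field names are assumptions, but the key point holds either way: only metadata, never pixel data:

```javascript
// Hypothetical shape of one saved detection result. Note there is no
// image or frame field — nothing visual is ever written to storage.
const record = {
  sessionId: 42,
  timestamp: "2024-05-01T10:15:00Z",
  focused: true,
  confidence: 0.91,
  model: "hybrid",
};
console.log(Object.keys(record).includes("frame")); // false — no image data stored
```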

<button onClick={clearAllHistory}>Clear All History</button>
{clearMsg && <p>{clearMsg}</p>}

FAQ

Why is my focus score low?

Ensure good lighting so the face mesh can detect your landmarks clearly. Face the camera directly and avoid large head movements. Try switching to a different model if one isn't working well for your setup.

Can I use this without a camera?

No, camera access is required. The system relies on real-time face landmark detection to determine focus.

Does this work on mobile?

Yes, it works on mobile browsers that support camera access and WebSocket connections. Performance depends on your device and network speed.

Is my data private?

Yes. No video frames are stored. Processing happens in real time, and only metadata (focused/unfocused status, confidence, timestamps) is saved.

Why does the face mesh lag behind my movements?

The face mesh overlay updates each time the server returns a detection result. The camera feed itself renders at 60fps locally. Any visible lag depends on network latency and server processing time.
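A rough back-of-envelope estimate of the overlay update interval can make this concrete; the function and parameter names here are illustrative, not measured values from the app:

```javascript
// Simple upper-bound estimate of the time between overlay redraws:
// one analysis period plus the network round trip and server processing.
// The camera preview itself is unaffected — it renders locally.
function overlayUpdateIntervalMs(analysisFps, networkLatencyMs, serverMs) {
  return 1000 / analysisFps + networkLatencyMs + serverMs;
}

// e.g. 15 FPS analysis, 40 ms network round trip, 25 ms server time
console.log(overlayUpdateIntervalMs(15, 40, 25)); // ≈ 131.7 ms per update
```

In practice pipelining can hide some of the round-trip cost, but the overlay can never update faster than results arrive.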

Technical Info

Face Detection: MediaPipe Face Mesh (478 landmarks)

Feature Extraction: Head pose (yaw/pitch/roll), EAR, MAR, gaze offset, PERCLOS, blink rate

ML Models: MLP (scikit-learn), XGBoost, Geometric, Hybrid

Storage: SQLite database

Framework: FastAPI + React (Vite) + WebSocket

  );
}

export default Help;