<p align="center">
<img src="https://github.com/Ahmednull/Storage/blob/main/gaze.gif" alt="animated" />
</p>
___
# L2CS-Net
The official PyTorch implementation of L2CS-Net for gaze estimation and tracking.
## Installation
<img src="https://img.shields.io/badge/python%20-%2314354C.svg?&style=for-the-badge&logo=python&logoColor=white"/> <img src="https://img.shields.io/badge/PyTorch%20-%23EE4C2C.svg?&style=for-the-badge&logo=PyTorch&logoColor=white" />
Install the package directly from GitHub:
```
pip install git+https://github.com/Ahmednull/L2CS-Net.git@main
```
Alternatively, clone the repo and install it locally (add `-e` for an editable install):
```
pip install [-e] .
```
You should now be able to import the package:
```
$ python
>>> import l2cs
```
## Usage
Detect faces and predict gaze from a webcam frame:
```python
import pathlib

import cv2
import torch

from l2cs import Pipeline, render

CWD = pathlib.Path.cwd()

gaze_pipeline = Pipeline(
    weights=CWD / 'models' / 'L2CSNet_gaze360.pkl',
    arch='ResNet50',
    device=torch.device('cpu')  # or torch.device('cuda') for GPU
)

# Grab a single frame from the default webcam
cap = cv2.VideoCapture(0)
_, frame = cap.read()

# Process frame and visualize
results = gaze_pipeline.step(frame)
frame = render(frame, results)
```
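For continuous webcam processing, the two calls above can be wrapped in a capture loop. The sketch below is illustrative and not part of the l2cs API: `cap`, `pipeline`, and `render_fn` stand in for the `cv2.VideoCapture`, `Pipeline`, and `render` objects from the snippet above (any objects with the same `read()`/`step()` interface work), and `gaze_loop` is a hypothetical helper name.

```python
def gaze_loop(cap, pipeline, render_fn, max_frames=None):
    """Read frames until the capture ends, yielding rendered frames."""
    count = 0
    while max_frames is None or count < max_frames:
        ok, frame = cap.read()
        if not ok:  # camera closed or stream ended
            break
        results = pipeline.step(frame)   # detect faces + predict gaze
        yield render_fn(frame, results)  # draw gaze vectors on the frame
        count += 1
```

In a real application you would display each yielded frame with `cv2.imshow` and break out of the loop on a key press.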
## Demo
* Download the pre-trained models from [here](https://drive.google.com/drive/folders/17p6ORr-JQJcw-eYtG2WGNiuS_qVKwdWd?usp=sharing) and store them in *models/*.
* Run:
```
python demo.py \
--snapshot models/L2CSNet_gaze360.pkl \
--gpu 0 \
--cam 0
```
This runs the demo with the *L2CSNet_gaze360.pkl* pretrained model, using GPU 0 and camera 0.
## Community Contributions
- [Gaze Detection and Eye Tracking: A How-To Guide](https://blog.roboflow.com/gaze-direction-position/): Use L2CS-Net through an HTTP interface with the open source Roboflow Inference project.
## MPIIGaze
We provide code to train and test on the MPIIGaze dataset with leave-one-person-out evaluation.
### Prepare datasets
* Download **MPIIFaceGaze dataset** from [here](https://www.mpi-inf.mpg.de/departments/computer-vision-and-machine-learning/research/gaze-based-human-computer-interaction/its-written-all-over-your-face-full-face-appearance-based-gaze-estimation).
* Apply data preprocessing from [here](http://phi-ai.buaa.edu.cn/Gazehub/3D-dataset/).
* Store the dataset in *datasets/MPIIFaceGaze*.
### Train
```
python train.py \
--dataset mpiigaze \
--snapshot output/snapshots \
--gpu 0 \
--num_epochs 50 \
--batch_size 16 \
--lr 0.00001 \
--alpha 1
```
This performs leave-one-person-out training automatically and stores the models in *output/snapshots*.
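The leave-one-person-out protocol can be pictured as follows. This is an illustrative sketch, not code from *train.py*: it assumes the 15 MPIIFaceGaze subjects are labelled p00–p14, and `loo_splits` is a hypothetical helper name.

```python
subjects = [f"p{i:02d}" for i in range(15)]  # MPIIFaceGaze participants

def loo_splits(subjects):
    """Yield (train_subjects, held_out_subject) pairs, one per person."""
    for held_out in subjects:
        yield [s for s in subjects if s != held_out], held_out

# One model is trained per split; the held-out person is used only for testing.
```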
### Test
```
python test.py \
--dataset mpiigaze \
--snapshot output/snapshots/snapshot_folder \
--evalpath evaluation/L2CS-mpiigaze \
--gpu 0
```
This performs leave-one-person-out testing automatically and stores the results in *evaluation/L2CS-mpiigaze*.
To get the average leave-one-person-out accuracy, run:
```
python leave_one_out_eval.py \
--evalpath evaluation/L2CS-mpiigaze \
--respath evaluation/L2CS-mpiigaze
```
This reads the per-person results from the evaluation path and writes the average leave-one-out gaze accuracy to *evaluation/L2CS-mpiigaze*.
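Gaze accuracy here is the mean 3D angular error between predicted and ground-truth gaze directions. Below is a minimal sketch of that metric, assuming predictions and labels are (yaw, pitch) pairs in radians; the angle-to-vector convention and helper names are illustrative, not taken from the repo.

```python
import math

def gaze_to_vector(yaw, pitch):
    """Convert (yaw, pitch) in radians to a 3D unit gaze vector."""
    return (-math.cos(pitch) * math.sin(yaw),
            -math.sin(pitch),
            -math.cos(pitch) * math.cos(yaw))

def angular_error_deg(pred, true):
    """Angle in degrees between two (yaw, pitch) gaze directions."""
    a, b = gaze_to_vector(*pred), gaze_to_vector(*true)
    dot = sum(x * y for x, y in zip(a, b))
    dot = max(-1.0, min(1.0, dot))  # guard against rounding outside [-1, 1]
    return math.degrees(math.acos(dot))

def mean_angular_error(preds, trues):
    """Average angular error over a set of predictions."""
    return sum(angular_error_deg(p, t) for p, t in zip(preds, trues)) / len(preds)
```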
## Gaze360
We provide code to train and test on the Gaze360 dataset with a train-val-test split.
### Prepare datasets
* Download **Gaze360 dataset** from [here](http://gaze360.csail.mit.edu/download.php).
* Apply data preprocessing from [here](http://phi-ai.buaa.edu.cn/Gazehub/3D-dataset/).
* Store the dataset in *datasets/Gaze360*.
### Train
```
python train.py \
--dataset gaze360 \
--snapshot output/snapshots \
--gpu 0 \
--num_epochs 50 \
--batch_size 16 \
--lr 0.00001 \
--alpha 1
```
This trains the model and stores snapshots in *output/snapshots*.
### Test
```
python test.py \
--dataset gaze360 \
--snapshot output/snapshots/snapshot_folder \
--evalpath evaluation/L2CS-gaze360 \
--gpu 0
```
This tests every snapshot in *snapshot_folder* and stores the results in *evaluation/L2CS-gaze360*.