| --- |
| license: apache-2.0 |
| --- |
| Output Video 1: |
| https://youtu.be/cWDXapOfiyk |
|
|
| Output Video 2: |
| https://youtu.be/7rsJmHuLuKk |
|
|
I used one of Roboflow's public drone datasets with a YOLOv8 detector. The dataset I chose has 7,000 images containing drones with bounding-box labels, along with other objects so the detector can learn to distinguish them from drones. Training ran for 20 epochs with an image size of 640, a batch size of 16, and 64 dataloader workers.
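The training settings above can be sketched as an Ultralytics training call. This is an illustrative configuration, not the exact script used; the `data.yaml` path and the choice of the `yolov8n.pt` base weights are assumptions.

```python
# Hypothetical training configuration matching the settings above
# (assumes the Ultralytics package and a Roboflow-exported data.yaml).
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # pretrained YOLOv8 weights (variant assumed)
model.train(
    data="data.yaml",       # Roboflow dataset config (path assumed)
    epochs=20,              # 20 epochs
    imgsz=640,              # 640x640 input images
    batch=16,               # batch size 16
    workers=64,             # 64 dataloader workers
)
```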
|
|
The Kalman filter was designed using the filterpy library, with state vector
x = [cx, cy, vx, vy] and measurement vector
z = [cx, cy]
|
|
Noise Parameters:
kf.P = uncertainty of the drone's starting position = high = 100
kf.Q = process noise (how unpredictable the drone's movement is) = low = 0.1
kf.R = measurement noise (detector accuracy) = high = 10
|
|
Overall, the tracker follows the drone smoothly, but it is slow to react to sudden movements and struggles when the drone appears small in the frame.
|
|
Every frame we collect goes through the Kalman filter. We handle missing detections with an initialized boolean, a missing_frames counter,
and a max missing-frame limit. The initialized boolean marks the first time the drone is detected, and the filter's initial position is set
to that spot. When a frame has no drone detection, missing_frames is incremented, but we still keep that frame in the final output video. Only
when the missing_frames count exceeds max_missing do we ignore the frame. The counter resets every time we detect the drone, so it takes at least 5
consecutive frames with no drone detection before the track is dropped. I did this to account for occasional misreads by the detector and to produce a smoother output video.
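The bookkeeping above can be sketched as a small state machine. The variable names (`initialized`, `missing_frames`, `MAX_MISSING`) follow the description; the detector and Kalman filter calls are stand-ins shown as comments.

```python
# Sketch of the missing-detection handling described above. Each frame's
# detection is either an (cx, cy) tuple or None; the returned list shows
# how each frame was handled.
MAX_MISSING = 5              # drop the track after 5 consecutive misses

def track_frames(detections):
    initialized = False
    missing_frames = 0
    states = []
    for det in detections:
        if det is not None:
            if not initialized:
                initialized = True    # first detection seeds the filter
                # kf.x[:2] = det      # (initial position set here)
            missing_frames = 0        # any detection resets the counter
            states.append("update")   # kf.predict(); kf.update(det)
        elif initialized and missing_frames < MAX_MISSING:
            missing_frames += 1       # coast on the prediction,
            states.append("predict")  # but keep the frame in the output
        else:
            states.append("skip")     # not started yet, or track lost
    return states
```

For example, a track only reaches "skip" after five misses in a row; a single detection anywhere in between resets the count.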
|
|
|
|