I have a video of simple moving dots (that sometimes overlap) saved as a sequence of images. In each image I detect all the dots and save their coordinates:
(snapshot 1 -> snapshot 2)
I would like to infer the trajectory of each dot. The dots move smoothly and not too fast from one frame to the next, but if, for each point in the first image, I simply match it to the closest point in the next image, the trajectory reconstruction often fails.
I tried OpenCV's MultiTracker, but the trackers very quickly lose their targets by jumping to a different dot whenever dots overlap. The detection itself works very nicely, though.
The video and the objects to track are simple, so I find it hard to believe that I need something much more technical to track these dots accurately. That is why I decided to ask here; I am out of ideas. Any tip or advice is appreciated. Thanks.


Since your detections are accurate and what you are looking for is standard Multi-Object Tracking (MOT), an easy go-to solution is SORT (Simple Online and Realtime Tracking).
SORT is well suited to your use case because it is designed exactly for real-time tracking of multiple objects. The only odd thing is that SORT expects detections represented as bounding boxes, while yours are point coordinates.
If you represent each coordinate with a small bounding box centered on it, the code can be as simple as the sketch below.
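This is a minimal sketch assuming the reference SORT implementation from https://github.com/abewley/sort (its `Sort` class and `update` method); `detections_per_frame`, the box half-size `r`, and the dummy confidence score are placeholders you will need to adapt to your own data:

```python
import numpy as np
from sort import Sort  # reference implementation: https://github.com/abewley/sort

tracker = Sort(max_age=5, min_hits=1)  # keep lost tracks alive a few frames to survive overlaps
r = 3  # half-size (pixels) of the artificial box around each dot; tune to your dot size

trajectories = {}  # track_id -> list of (frame_idx, x, y)

# detections_per_frame: your saved coordinates, one (N, 2) array of dot centers per frame
for frame_idx, points in enumerate(detections_per_frame):
    # Convert each (x, y) point into a small box [x1, y1, x2, y2, score] for SORT
    dets = np.array([[x - r, y - r, x + r, y + r, 1.0] for x, y in points])
    if len(dets) == 0:
        dets = np.empty((0, 5))  # SORT expects an empty (0, 5) array when nothing is detected
    track_ids = tracker.update(dets)  # rows: [x1, y1, x2, y2, track_id]
    for x1, y1, x2, y2, tid in track_ids:
        cx, cy = (x1 + x2) / 2, (y1 + y2) / 2  # recover the dot center from the box
        trajectories.setdefault(int(tid), []).append((frame_idx, cx, cy))
```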
`track_ids` is a NumPy array where each row contains a valid bounding box plus the track ID in the last column, so collecting the box centers per ID gives you the trajectory of each dot.