Feature tracking not working correctly on low-resolution images


I am using SIFT for feature detection and calcOpticalFlowPyrLK for feature tracking between images. I am working with low-resolution images (590x375 after cropping) taken from a Microsoft Kinect.

// feature detection
vector<KeyPoint> keypoints_1;
vector<Point2f> points1, points2;
cv::Ptr<Feature2D> detector = cv::xfeatures2d::SIFT::create();
detector->detect(img_1, keypoints_1);
KeyPoint::convert(keypoints_1, points1, vector<int>());

// feature tracking
vector<uchar> status;
vector<float> err;
Size winSize = Size(21, 21);
TermCriteria termcrit = TermCriteria(TermCriteria::COUNT + TermCriteria::EPS, 30, 0.01);
calcOpticalFlowPyrLK(img_1, img_2, points1, points2, status, err, winSize, 1, termcrit, 0, 0.001);

I ran this on consecutive images of a steady scene (just to get an idea), taken from the same camera position at 30 fps. To the eye the images look identical, but somehow calcOpticalFlowPyrLK is not able to track the same features from one image to the next. The position (x, y coordinates) of a detected feature and its tracked counterpart should be the same, but it isn't.

As per AldurDisciple's suggestion, I think I am detecting noise as features. The black images below are the differences between consecutive frames and show the noise. Next are the original images, and then the images with detected features.

My goal is to use this information to find the change in the robot's position over time.

I used

GaussianBlur(currImageDepth, currImageDepth, Size(9, 9), 0, 0);

to suppress the noise, but it didn't help.

Find the complete code here.


1 Answer
I think there are two factors you should take into account:

  1. Your scene basically consists of three homogeneous regions, so the points detected in these regions will likely be generated by image noise. Since the noise pattern can be completely different in two successive images, the best match for some of the points may be at a completely different position in the image.

  2. Your image already has quite a low resolution, and the 3 in the parameter list of the calcOpticalFlowPyrLK function means that you require the function to track the points using a pyramid of 4 levels (levels 0 through 3). This means that the points will first be tracked in the image downscaled by a factor of 2^3 = 8 (i.e. a ~73x46 image), then in the image downscaled by a factor of 2^2 = 4 (i.e. a ~147x93 image), and so on. An initial resolution of 73x46 is much too low for an image with almost no texture.

To address this, you can try to use only two pyramid levels (i.e. pass 1 instead of 3) or even a single level (i.e. pass 0 instead of 3). But keep in mind that because of the noise issue you will generally still have a few false matches.

On the other hand, tracking points in a static scene with a static camera is a somewhat artificial problem. In real scenarios, you will probably be more interested in tracking motion in the scene, or a static scene observed by a moving camera, in which case using several pyramid levels will be useful.