I am trying to implement feature matching across multiple images, with the goal of tracking a set of features through an image dataset. I am using mexopencv in MATLAB, and the basic steps of the algorithm are listed below (a minimal sketch of the pipeline follows the list):
1. Feature Detection using SIFT or SURF
2. Feature Description using SIFT or SURF
3. Feature matching using the FLANN-based or brute-force matcher
4. Filtering matches using RANSAC
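For reference, here is a minimal sketch of steps 1-4 on a pair of images. It assumes the contrib class `cv.SURF` from mexopencv is available (`cv.SIFT` works the same way); `img1`, `img2`, the Hessian threshold, and the 3-pixel reprojection threshold are just placeholders, not values from my actual code:

```matlab
% Minimal two-image sketch of the pipeline
detector = cv.SURF('HessianThreshold', 400);       % steps 1-2: detect + describe
[kpts1, desc1] = detector.detectAndCompute(img1);
[kpts2, desc2] = detector.detectAndCompute(img2);

matcher = cv.DescriptorMatcher('BruteForce');       % step 3: match (or 'FlannBased')
matches = matcher.match(desc1, desc2);              % struct array of DMatch

% step 4: filter the matches with RANSAC via a homography fit
% (mexopencv match indices are 0-based, hence the +1)
pts1 = cat(1, kpts1([matches.queryIdx] + 1).pt);
pts2 = cat(1, kpts2([matches.trainIdx] + 1).pt);
[H, mask] = cv.findHomography(pts1, pts2, ...
    'Method', 'Ransac', 'RansacReprojThreshold', 3);
tracked = matches(logical(mask));                   % matches that survive RANSAC
```

The mask returned by `cv.findHomography` flags the matches that RANSAC keeps, and those are the features shown as "tracked" in the images below.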
My problem is the following: with a single object in the scene, all of the tracked features lie on that object. However, when I add another object to the scene, the tracked features exist only on the new object and none remain on the first object. Is there an explanation for why this is happening?
Image 1
Image 2
P.S.: The features shown on each image are the ones that are tracked across the whole dataset (8 images).
I think I found the reason why features are tracked on only one object. As I mentioned in a comment, RANSAC tries to find the single best model when filtering the matches. Since the two objects lie at different depths, there are essentially two models to be fitted. I searched for multi-model fitting and found that Sequential RANSAC and Multi-RANSAC address this. I tried sequential RANSAC with the number of models set to 2 and got a nice result; a rough sketch of the idea is below.
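This is roughly what the sequential part looks like, reusing `matches`, `kpts1`, and `kpts2` from the pipeline sketch above. The number of models (2 here) and the reprojection threshold are placeholders; the point is simply: fit a homography with RANSAC, remove its inliers, and fit again on what is left.

```matlab
% Rough sketch of sequential RANSAC: fit a model, remove its inliers, repeat.
nModels   = 2;
remaining = matches;
models    = cell(1, nModels);
inlierSet = cell(1, nModels);
for k = 1:nModels
    if numel(remaining) < 4, break; end             % a homography needs >= 4 pairs
    p1 = cat(1, kpts1([remaining.queryIdx] + 1).pt);
    p2 = cat(1, kpts2([remaining.trainIdx] + 1).pt);
    [H, mask] = cv.findHomography(p1, p2, ...
        'Method', 'Ransac', 'RansacReprojThreshold', 3);
    mask = logical(mask);
    models{k}    = H;                               % homography for object k
    inlierSet{k} = remaining(mask);                 % matches explained by this model
    remaining    = remaining(~mask);                % fit the next model on the rest
end
```

Each pass explains one object: the first pass keeps the inliers on the dominant object, and the second pass recovers the features on the other one, which is why features now appear on both objects.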