How can I get disparity map and depth map from the feature matching result?


I get a good result from the paper: LoFTR: Detector-Free Local Feature Matching with Transformers.

My result looked like this: [image of the feature matches]

Now I want to get a depth map from the feature matching result.

So I really hope that maybe someone can give me a link or code to reach this goal. Thank you so much.


Answer by BHawk:

You will not be able to get a reliable depth or disparity map from feature matches alone, especially with the example you posted. A general algorithm to get you started on a low-quality depth map would be:

  • Find a rotation for one or both images that minimizes the sum of the y-offsets across all feature matches, so that matched points lie on (roughly) the same rows.
  • Iterate through the matches and record each feature's x-offset from image A to image B. These offsets give you a sparse disparity map.
  • Now the difficult part: use an inpainting method (there are many; look them up) to fill in the missing pixel values from the existing ones. (This would give an unreliable result even if your initial images were well aligned, but it's your only option given your starting point.)
  • Now you have a dense disparity map. Conversion from disparity to depth is a simple calculation (depth = focal length × baseline / disparity), but it requires knowing each camera's position, rotation, and intrinsics (focal length, sensor size, etc.) when the images were taken. You can make those values up to produce a fake depth map, but that will reduce the accuracy even further.
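A minimal NumPy sketch of steps 2–4, under stated assumptions: the matched keypoints are given as (N, 2) arrays of (x, y) pixel coordinates (LoFTR's output has this shape; the names `kpts_a`/`kpts_b` here are mine), a crude nearest-neighbor row fill stands in for a real inpainting method (use something like OpenCV's `cv2.inpaint` in practice), and the focal length and baseline are made-up values, so the resulting depth map is fake in exactly the sense described above.

```python
import numpy as np

def sparse_disparity(kpts_a, kpts_b, shape):
    """Record each match's x-offset into an otherwise-empty (NaN) map.
    kpts_a, kpts_b: (N, 2) arrays of matched (x, y) pixel coordinates."""
    disp = np.full(shape, np.nan)
    for (xa, ya), (xb, yb) in zip(kpts_a, kpts_b):
        disp[int(ya), int(xa)] = xa - xb  # x-offset = disparity
    return disp

def fill_nearest(disp):
    """Crude stand-in for inpainting: fill each missing pixel with the
    nearest known disparity along its row."""
    out = disp.copy()
    for r in range(out.shape[0]):
        row = out[r]
        known = np.flatnonzero(~np.isnan(row))
        if known.size == 0:
            continue  # no matches on this row; leave it empty
        missing = np.flatnonzero(np.isnan(row))
        # index of the nearest known column for every missing column
        nearest = known[np.abs(missing[:, None] - known[None, :]).argmin(axis=1)]
        row[missing] = row[nearest]
    return out

def disparity_to_depth(disp, focal_px, baseline_m):
    """Standard stereo relation: depth = f * B / disparity."""
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(disp > 0, focal_px * baseline_m / disp, np.nan)

# Two toy matches on a 2x4 image; focal length and baseline are invented.
kpts_a = np.array([[2.0, 0.0], [1.0, 1.0]])
kpts_b = np.array([[0.0, 0.0], [0.0, 1.0]])
disp = fill_nearest(sparse_disparity(kpts_a, kpts_b, (2, 4)))
depth = disparity_to_depth(disp, focal_px=700.0, baseline_m=0.1)
```

Note this sketch skips step 1 (the rectifying rotation); without it, the x-offsets are not true disparities, which is one more reason the result is only a rough approximation.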