OpenCV Python/C++: Feature Matching in Images with Different Lighting, etc.

I am trying to detect and match features across two sets of images of the same scene taken from different sources (with different lighting, contrast, etc.).
So far I have tried several feature detection/description methods (SURF, SIFT, ORB) as well as a few simple pre-processing steps (downscaling the images, histogram equalization) without satisfactory results.
I am using the brute-force matcher with either a ratio test or cross-checking, followed by homography estimation with RANSAC. However, I get no (or very few) matches in almost all cases.
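
For reference, the ratio-test variant I mention is the usual knnMatch + Lowe ratio pattern, roughly like this (a minimal sketch; the 0.75 threshold is the common default rather than something I tuned, and SIFT is just one of the descriptors I tried, created via xfeatures2d like SURF in the full script below):

import cv2

# img1, img2: the same grayscale images loaded in the full script below
sift = cv2.xfeatures2d.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# crossCheck must stay False when asking for the two nearest neighbours
bf = cv2.BFMatcher(cv2.NORM_L2, crossCheck=False)
pairs = bf.knnMatch(des1, des2, k=2)

# keep a match only if it is clearly better than the second-best candidate
good = [m for m, n in (p for p in pairs if len(p) == 2)
        if m.distance < 0.75 * n.distance]
print("ratio-test matches:", len(good))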

What pre-processing can I do to make the images better suitable for feature matching?
What algorithms are best suited for these conditions?
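
For context, the pre-processing I have tried so far amounts to downscaling plus histogram equalization, roughly along these lines (a sketch; the scale factor and the CLAHE parameters are guesses, not tuned values):

import cv2

def preprocess(img, scale=0.5):
    # downscale to reduce resolution mismatch and fine detail
    small = cv2.resize(img, None, fx=scale, fy=scale,
                       interpolation=cv2.INTER_AREA)
    # plain global equalization:
    # small = cv2.equalizeHist(small)
    # or local contrast equalization (CLAHE):
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(small)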

Here are two sample images: Image 1, Image 2

Here is the code I've got so far:

import cv2
import matplotlib.pyplot as plt
import numpy as np


img1 = cv2.imread('test1.png', cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread('test2.png', cv2.IMREAD_GRAYSCALE)


extended = False           # 64-dimensional SURF descriptors (128 if True)
hessian_threshold = 300    # SURF keypoint detection threshold
mask = None                # no detection mask
d_lim = 100                # max allowed pixel distance between matched keypoint locations


# a single SURF detector can be reused for both images
surf = cv2.xfeatures2d.SURF_create(
    hessian_threshold,
    upright=True,
    extended=extended)

kp1, des1 = surf.detectAndCompute(img1, mask)
kp2, des2 = surf.detectAndCompute(img2, mask)
print("keypoints found:\nimg1:\t",
      len(kp1), "\nimg2:\t", len(kp2))


# brute-force matching with cross-checking (mutual nearest neighbours only)
bf = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
matches = bf.match(des1, des2)
print("bruteforce matches:\t\t", len(matches))

matches = sorted(matches, key=lambda x: x.distance)

# keep only matches whose keypoints lie close together in image coordinates
# (this implicitly assumes the two images are roughly aligned)
better = []
for m in matches:
    if np.linalg.norm(
            np.array(kp1[m.queryIdx].pt) -
            np.array(kp2[m.trainIdx].pt)) < d_lim:
        better.append(m)
print("distance test matches:\t\t", len(better))

# collect the matched keypoint coordinates as Nx2 arrays
uv1 = np.array([kp1[m.queryIdx].pt for m in better])
uv2 = np.array([kp2[m.trainIdx].pt for m in better])

# estimate a homography with RANSAC and keep only the inlier correspondences
M, mask = cv2.findHomography(uv1, uv2, cv2.RANSAC, 5.0)
if mask is not None:
    inliers = mask.flatten().astype('bool')
    uv1 = uv1[inliers]
    uv2 = uv2[inliers]
    print("homography test matches:\t", mask.sum())


# show the surviving match locations side by side on both images
plt.subplots(1, 2, figsize=(9, 9))
plt.subplot(1, 2, 1)
plt.imshow(img1, cmap='gray')
plt.scatter(uv1[:, 0], uv1[:, 1], marker='x', color='red', s=3)

plt.subplot(1, 2, 2)
plt.imshow(img2, cmap='gray')
plt.scatter(uv2[:, 0], uv2[:, 1], marker='x', color='red', s=3)

plt.show()

Output:

keypoints found:
img1:    3609
img2:    387
bruteforce matches:              202
distance test matches:           14
homography test matches:         6

Here, two points seem to match up approximately, but the other 'matches' do not. (Removing outliers afterwards is not the problem; there simply aren't any other good candidates.)

Hopefully someone can provide some insight or at least point me in the direction of relevant literature!
Thanks!
