Image preparation before SIFT feature extraction, or how to make SIFT matching work reliably


I have a set of satellite images that I'm going to index using SIFT, plus images from another source (a drone). I extract features from an image taken by the drone and try to find similar features in the index built from the satellite images, but unfortunately it does not work.

Drone image: [image]

Keypoints: [image]

Satellite image: [image]

Keypoints: [image]

Matching result: [image]

As you can see, the matching lines cross each other, so I assume the matcher failed to find the right pairs (the RANSAC-verification sketch after the code below is one way to check this).

Code:

import cv2

detector = cv2.SIFT_create()
image = cv2.imread('search_block.png', cv2.IMREAD_GRAYSCALE)

# Denoise and boost contrast before feature extraction
image = cv2.medianBlur(image, 7)
image = cv2.equalizeHist(image)
query_key_points, query_descriptors = detector.detectAndCompute(image, None)
# Draw keypoints on the query image
image_with_keypoints = cv2.drawKeypoints(image, query_key_points, None,
                                         flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
# Display the image with keypoints
cv2.imshow("Query image with SIFT keypoints", image_with_keypoints)

bf = cv2.BFMatcher()
# Read and equalize the satellite (index) image; note: no median blur here
index_image = cv2.imread('search_block2.png', cv2.IMREAD_GRAYSCALE)
index_image = cv2.equalizeHist(index_image)

index_key_points, index_descriptors = detector.detectAndCompute(index_image, None)
image_with_keypoints = cv2.drawKeypoints(index_image, index_key_points, None,
                                         flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
# Display the image with keypoints
cv2.imshow("Index image with SIFT keypoints", image_with_keypoints)
# Match query descriptors against index descriptors (two nearest neighbours each)
matches = bf.knnMatch(query_descriptors, index_descriptors, k=2)

# Apply Lowe's ratio test to filter ambiguous matches
good_matches = []
for pair in matches:
    if len(pair) == 2:  # knnMatch can return fewer than k neighbours
        m, n = pair
        if m.distance < 0.75 * n.distance:
            good_matches.append(m)

# Draw matched keypoints on the matched image
matched_image = cv2.drawMatches(image, query_key_points, index_image, index_key_points, good_matches, None,
                                flags=cv2.DRAW_MATCHES_FLAGS_NOT_DRAW_SINGLE_POINTS)
cv2.imshow('Matched Keypoints', matched_image)
cv2.waitKey(0)
cv2.destroyAllWindows()
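
For reference, here is a geometric-verification step I could bolt on after the ratio test. This is only a sketch, not part of my current script: it reuses the variables from the code above, and the 5.0-pixel reprojection threshold is just a starting guess.

import numpy as np

# Sketch: keep only matches consistent with a single homography.
# Assumes image, index_image, query_key_points, index_key_points and
# good_matches from the script above.
if len(good_matches) >= 4:  # findHomography needs at least 4 point pairs
    src_pts = np.float32([query_key_points[m.queryIdx].pt for m in good_matches]).reshape(-1, 1, 2)
    dst_pts = np.float32([index_key_points[m.trainIdx].pt for m in good_matches]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)
    if H is not None:
        inliers = [m for m, keep in zip(good_matches, mask.ravel()) if keep]
        verified = cv2.drawMatches(image, query_key_points, index_image, index_key_points,
                                   inliers, None,
                                   flags=cv2.DRAW_MATCHES_FLAGS_NOT_DRAW_SINGLE_POINTS)
        cv2.imshow('RANSAC-verified matches', verified)
        cv2.waitKey(0)

If only a handful of inliers survive this step, the descriptors themselves are not repeatable across the two sources, which would point back at the preprocessing.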

I pre-processed the drone shot: blurred it with a median filter (to remove insignificant noise) and equalized the histogram. My main idea was to keep only the most significant features, i.e. geometric shapes and the like, but it seems the whole pipeline simply does not work. Yet if you look at these two photos, they are practically identical. Please help me understand what I'm doing wrong: should I choose a different algorithm for feature extraction (one based on geometric shapes), or do I just need to 'cook' the search image differently?
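
For completeness, this is the kind of alternative 'cooking' I mean. It is only a sketch: it applies the identical pipeline to both images (my current code blurs only the drone shot) and swaps the global equalizeHist for CLAHE; the clipLimit and tileGridSize values are illustrative guesses, not tuned.

# Sketch: symmetric preprocessing with CLAHE instead of global equalization.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))

def preprocess(path):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img = cv2.medianBlur(img, 5)  # same mild denoising on both inputs
    return clahe.apply(img)

image = preprocess('search_block.png')         # drone (query) image
index_image = preprocess('search_block2.png')  # satellite (index) image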
