Computer vision question about the Harris corner detector: line endpoints are not detected properly


Input image:

[input image]

Code used:


import cv2
import numpy as np
from google.colab.patches import cv2_imshow

# Read the image and convert to grayscale
image = cv2.imread('input.png')
gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Binarise, then find the outer contours
ret, thresh = cv2.threshold(gray_image, 127, 255, cv2.THRESH_BINARY)
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

for i, cnt in enumerate(contours):
    hull = cv2.convexHull(cnt)  # computed but never used
    epsilon = 0.02 * cv2.arcLength(cnt, True)
    approx = cv2.approxPolyDP(cnt, epsilon, True)

    # Draw the vertices of the approximated polygon
    for ap in approx:
        cv2.circle(image, tuple(ap[0]), 5, (255, 0, 0), -1)

# Display the result
cv2_imshow(image)

Output:

[output image: the marked points are wrong]

I've tried other corner detection methods and some filtering, such as a Gaussian blur, without any improvement. Can somebody help me detect the endpoints and the intersection points correctly?
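For reference, here is a minimal sketch of the kind of Harris-based attempt described above (the filename and the blockSize, ksize and k parameters are assumptions, not values from the question):

import cv2
import numpy as np

# Hypothetical Harris attempt; parameters are common tutorial defaults
image = cv2.imread('input.png')
gray = np.float32(cv2.cvtColor(image, cv2.COLOR_BGR2GRAY))
harris = cv2.cornerHarris(gray, blockSize=2, ksize=3, k=0.04)

# Mark strong responses in red. Harris responds to two-edge corners,
# which is why free line endpoints tend to be missed.
image[harris > 0.01 * harris.max()] = [0, 0, 255]
cv2.imwrite('harris-attempt.png', image)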


2 Answers

Lukas S (score: 6)

For a point on the lines, i.e. a point that is black in your picture, imagine counting the points that are slightly closer to it than the lines are thick. You get the biggest count if you're at a crossing and the smallest if you're at the end of a line.

An efficient way to count the points near every point of an image is to take a convolution with a kernel of ones.
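As a toy illustration of this counting idea (the 5x5 array below is made up, not taken from the images in this thread):

import numpy as np
from scipy.signal import convolve2d

# 1 = white background, 0 = black line, matching the convention below
tiny = np.array([[1, 1, 1, 1, 1],
                 [1, 1, 0, 1, 1],
                 [1, 1, 0, 1, 1],
                 [1, 1, 0, 1, 1],
                 [1, 1, 1, 1, 1]])

# Convolving with a 3x3 block of ones sums each neighbourhood, i.e.
# counts the white pixels around every position
counts = convolve2d(tiny, np.ones((3, 3)), mode='same', fillvalue=1)
print(counts)  # interior line pixels score 6, the two ends score 7

The darker the convolution, the more line pixels are nearby, so among the line pixels crossings give the lowest values and endpoints the highest.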

I put some white space around the image:

import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import convolve2d

# Load the input (the filename is an assumption); plt.imread returns
# floats in [0, 1] for PNGs, so white pixels equal exactly 1
img = plt.imread('input.png')

# True (1) = white background, False (0) = black line
mask = img[:, :, 0] == 1

# Pad with a 50-pixel white border on every side
widened_img = np.ones([mask.shape[0] + 100, mask.shape[1] + 100])
widened_img[50:-50, 50:-50] = mask

[picture with whitespace]

Then mark the points where the convolution is very dark, i.e. the crossings:

def scatter(y, x):
    # np.where returns (rows, cols), so swap to (x, y) for plotting
    plt.scatter(x, y)

# Sum 15x15 neighbourhoods; a low sum means many black pixels nearby
conv = convolve2d(widened_img, np.ones([15, 15]), mode='same', fillvalue=1)
scatter(*np.where(conv < 140))   # 140 out of a maximum of 225
plt.imshow(conv, cmap='gray')

[convolution with dark points marked]

Finally, I plot the convolution and mark the points that are very light in the convolution but not white in the original image, i.e. the line endpoints:

# Endpoints are line pixels (not white) with a mostly white neighbourhood
conv = convolve2d(widened_img, np.ones([15, 15]), mode='same', fillvalue=1)
scatter(*np.where((conv > 180) & (widened_img < 1)))
plt.imshow(conv, cmap='gray')

[endpoints]

Mark Setchell (score: 5)

I just implemented @CrisLuengo's suggestion from the comments. It is pretty similar to Lukas's answer, but I think it should be more tolerant of different line thicknesses because it skeletonises first:

#!/usr/bin/env python3

import numpy as np
import cv2 as cv
from skimage.morphology import skeletonize
from scipy.signal import convolve2d

# Load image, greyscale and threshold
orig   = cv.imread("ySvxY.png")
grey   = cv.cvtColor(orig, cv.COLOR_BGR2GRAY)
_, thr = cv.threshold(grey, 127, 255, cv.THRESH_BINARY_INV)

# Skeletonise image (skeletonize() wants a boolean/0-1 image)
skel = skeletonize(thr > 0)
cv.imwrite('DEBUG-skeleton.png', skel.astype(np.uint8) * 255)

# Define 3x3 kernel for counting neighbours
kernel = np.ones((3, 3), np.uint8)
print(f'{kernel=}')

# Convolve with the kernel
conv = convolve2d(skel, kernel, mode='same', fillvalue=1)
scaling = 255 / conv.max()
cv.imwrite('result.png', (conv * scaling).astype(np.uint8))

maxval = conv.max()   # renamed from "max" to avoid shadowing the builtin
print(f'{maxval=}')
print(np.where(conv == maxval))
# Make all pixels with "max" neighbours red
orig[conv == maxval] = [0, 0, 255]
# Make all pixels with "max-1" neighbours green
orig[conv == (maxval - 1)] = [0, 255, 0]
cv.imwrite('coloured.png', orig)

Here is DEBUG-skeleton.png:

[DEBUG-skeleton.png]

And here is result.png, which shows pixels with brightness proportional to their number of neighbours, i.e. intersections are brighter:

[result.png]

And here is the original image, coloured red where pixels have the most neighbours and green where they have nearly the most:

[coloured.png]


For some reason, the results are not as expected, so I am still looking at that. If anyone has any bright ideas for corrections, please edit them in or add a comment.
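One possible correction, offered as an assumption rather than a verified fix: pad with background (fillvalue=0, since the skeleton's lines are 1s), zero the centre of the kernel so a pixel does not count itself, and only inspect skeleton pixels. On a one-pixel-wide skeleton, endpoints then have exactly one neighbour and junctions have three or more:

import numpy as np
from scipy.signal import convolve2d

# Assumes "skel" is the boolean skeleton computed in the answer above
ring = np.ones((3, 3), np.uint8)
ring[1, 1] = 0                  # a pixel should not count itself

# fillvalue=0 pads with background, so border pixels gain no fake neighbours
neighbours = convolve2d(skel.astype(np.uint8), ring, mode='same', fillvalue=0)

# Restrict to skeleton pixels: 1 neighbour = endpoint, >= 3 = junction
endpoints = np.where(skel & (neighbours == 1))
junctions = np.where(skel & (neighbours >= 3))
print(f'{endpoints=}')
print(f'{junctions=}')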