How to increase contrast and detect edges in a 16-bit image?


My problem is with contrast enhancement and edge detection. I would like to get good contrast stretching and make the edge detection work.

First of all, I read my images using rawpy because I have .nef files (Nikon raw). I know that my images have a bit depth of 14 bits per color channel, but I can't find a function or code to read 14-bit images, so I open them as 16-bit with rawpy.
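For what it's worth, rawpy exposes the raw sensor data and its saturation level, so the effective bit depth can be checked before postprocessing (a small sketch using the init file from my code; white_level and raw_image are standard rawpy attributes):

import numpy as np
import rawpy

with rawpy.imread("initialisation/2023-09-19_19-02-33.473.nef") as raw:
    # white_level is the sensor saturation value, e.g. 16383 for 14-bit data
    print("white level:", raw.white_level)
    print("max raw value:", np.max(raw.raw_image))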

Then I use OpenCV to convert my images to grayscale and compute (I(x,y) - Io(x,y)) / Io(x,y), where I(x,y) is "brut" and Io(x,y) is "init". I apply a mask to hide the uninteresting region.
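In code, that ratio looks roughly like this (a sketch in float32, assuming brut_grayscale and init_grayscale are the grayscale arrays from the code below; the small epsilon is only there to avoid dividing by zero):

import numpy as np

brut_f = brut_grayscale.astype(np.float32)
init_f = init_grayscale.astype(np.float32)
# Relative difference (I - Io) / Io, kept in float so negative values
# are preserved instead of wrapping around as unsigned integers
ratio = (brut_f - init_f) / (init_f + 1e-6)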

My masked image (with the edges I want to detect) and my "contrasted image":

and my masked image with my edge detection:


My two images ("brut" and "init") are here.

I don't understand why it doesn't work. Why aren't my image sizes (adjusted_image and edges) correct? If you have any comments or ideas, thank you.

My code:

import numpy as np
import cv2
import rawpy
import rawpy.enhance
import matplotlib.pyplot as plt
import glob
import imutils

####################
# Reading Nikon RAW (NEF) images
init="initialisation/2023-09-19_19-02-33.473.nef"
brut="DT10/16-45-31_2023-09-06.nef"

####################
# This uses rawpy library
print("reading init file using rawpy.")
raw_init = rawpy.imread(init)

image_init = raw_init.postprocess(use_camera_wb=True, output_bps=16)
print("Size of init image read:" + str(image_init.shape))

print("reading brut file using rawpy.")
raw_brut = rawpy.imread(brut)
image_brut = raw_brut.postprocess(use_camera_wb=True, output_bps=16)
print("Size of brut image read:" + str(image_brut.shape))

####################
# Convert to grayscale with OpenCV
init_grayscale = cv2.cvtColor(image_init, cv2.COLOR_RGB2GRAY)
brut_grayscale = cv2.cvtColor(image_brut, cv2.COLOR_RGB2GRAY)

test = cv2.divide((brut_grayscale-init_grayscale),(init_grayscale))

print("test image max =" + str(np.max(test)))

# Step 1: Create an empty mask of the same shape as your image
mask = np.zeros_like(test)
mask = mask.astype(np.uint8)
# Step 2: Create a circle in the mask
height, width = mask.shape
center_y, center_x = height // 2, width // 2
radius = 3 * min(height, width) // 6  # Adjust the radius as needed
cv2.circle(mask, (center_x, center_y), radius, 1, thickness=-1)
# Step 3: Apply the mask to your image
masked_image = cv2.bitwise_and(test, test, mask=mask)
print("masked image max =" + str(np.max(masked_image)))
print("masked image type =" + str((masked_image.dtype)))

####################
# Adjust contrast
alpha = 10
adjusted_image = cv2.multiply(test, alpha)
#adjusted_image = np.clip(adjusted_image, 0, 65535)

print(masked_image.dtype, adjusted_image.dtype)

# Display the original image and the contrast-enhanced image
cv2.imshow('Original image', imutils.resize(masked_image * 65535, width=1080))
cv2.imshow('Increased contrast', imutils.resize(adjusted_image * 65535, width=1080))
cv2.waitKey(0)
cv2.destroyAllWindows()

###Edge detection
# Apply the Canny operator to detect edges
seuil_min = 0  # Lower threshold for weak edges
seuil_max = 0.1  # Upper threshold for strong edges
edges = cv2.Canny(masked_image.astype(np.uint8), seuil_min, seuil_max)

# Display the original image and the edge image
cv2.imshow("Original image", imutils.resize(masked_image * 65535, width=1080))
cv2.imshow("Detected edges (Canny)", imutils.resize(edges * 65535, width=1080))

cv2.waitKey(0)
cv2.destroyAllWindows()

1 Answer


If masked_image is a 16-bit array, then masked_image.astype(np.uint8), which you use as the input to Canny, keeps only the lower 8 bits of each value and throws away the upper 8 bits, which likely contain the most important information. It's equivalent to computing mod(masked_image, 256).

Instead, first divide by 256 before casting to 8 bits: (masked_image // 256).astype(np.uint8).
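For example, a minimal sketch of that fix applied to your Canny call (the thresholds here are just illustrative values; once the image is 8-bit, Canny expects them on the 0-255 scale):

import cv2
import numpy as np

# Scale the 16-bit values down to 8 bits instead of truncating them
masked_8bit = (masked_image // 256).astype(np.uint8)

# Illustrative thresholds on the 0-255 scale
edges = cv2.Canny(masked_8bit, 50, 150)

cv2.imshow("Detected edges (Canny)", edges)
cv2.waitKey(0)
cv2.destroyAllWindows()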