Image processing: exposure-fused images are washed out


I am trying to replicate the exposure fusion paper by T. Mertens et al. [1], in which the authors present a method for fusing multiple pictures captured at different camera exposures into a single, better-exposed picture. Matlab demo code for the paper is also available [2]. The method is quite simple: you compute a per-pixel weight map for each input image, and then combine the images according to those weight maps using Laplacian/Gaussian pyramid blending to prevent blending artifacts.
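For reference, the pyramid construction at the heart of the method can be sketched in a few lines. This is not the authors' code, just a minimal illustration assuming OpenCV and CV_32F images normalized to [0, 1]; the function names are mine.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Gaussian pyramid: level 0 is the full-resolution image, each further
// level is blurred and downsampled by a factor of 2.
std::vector<cv::Mat> gaussianPyramid(const cv::Mat& img, int levels) {
    std::vector<cv::Mat> pyr{img};
    for (int l = 1; l < levels; ++l) {
        cv::Mat down;
        cv::pyrDown(pyr.back(), down);
        pyr.push_back(down);
    }
    return pyr;
}

// Laplacian pyramid: each level stores the detail lost by downsampling,
// i.e. the difference between a Gaussian level and the upsampled next one.
std::vector<cv::Mat> laplacianPyramid(const cv::Mat& img, int levels) {
    std::vector<cv::Mat> gauss = gaussianPyramid(img, levels);
    std::vector<cv::Mat> lap(levels);
    for (int l = 0; l < levels - 1; ++l) {
        cv::Mat up;
        cv::pyrUp(gauss[l + 1], up, gauss[l].size());
        lap[l] = gauss[l] - up;
    }
    lap[levels - 1] = gauss[levels - 1]; // coarsest level keeps the residual
    return lap;
}
```

The fusion step then weights each image's Laplacian levels by the Gaussian pyramid of its normalized weight map, sums across images per level, and collapses the result back to full resolution.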

I have basically ported the Matlab code to C++, but the resulting images look washed out compared to the Matlab implementation (images: http://imageshack.us/photo/my-images/204/exposuresample.jpg/).

I have already compared different steps in the processing workflow of my C++ port against the Matlab version, and those seem to be okay, so I suspect something is wrong with my pyramid processing.
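One cheap way to isolate a pyramid bug (assuming the gaussianPyramid/laplacianPyramid helpers from the sketch above) is a round trip: build and collapse the Laplacian pyramid of a single image with no blending at all. It should reproduce the input almost exactly; if it does not, the bug is in the pyramid code itself rather than in the weighting.

```cpp
// Collapse a Laplacian pyramid back to a full-resolution image by
// repeatedly upsampling and adding the detail levels.
cv::Mat collapse(const std::vector<cv::Mat>& lap) {
    cv::Mat acc = lap.back().clone();
    for (int l = static_cast<int>(lap.size()) - 2; l >= 0; --l) {
        cv::Mat up;
        cv::pyrUp(acc, up, lap[l].size());
        acc = up + lap[l];
    }
    return acc;
}

// Round-trip test: decompose and reconstruct one image, then measure the
// worst-case pixel error. For CV_32F it should be near machine precision.
void roundTripTest(const cv::Mat& img, int levels) {
    cv::Mat rec = collapse(laplacianPyramid(img, levels));
    double err = cv::norm(img, rec, cv::NORM_INF);
    CV_Assert(err < 1e-4);
}
```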

Does someone with an image-processing background have a suggestion or an idea of what could cause the washed-out result?

Regards,

[1] http://research.edm.uhasselt.be/%7Etmertens/exposure_fusion/
[2] http://research.edm.uhasselt.be/%7Etmertens/exposure_fusion/exposure_fusion.zip

1 Answer

It appears as though the second image is either offset by some constant, which effectively makes it appear brighter and saturates the very bright areas, or multiplied by a constant, which saturates it in some areas. You can test this by checking the values of a few pixels you expect to be black: multiplication by a constant preserves zero, so if expected black is indeed black, the error is multiplicative; if black is lifted above zero, it is an additive offset. I cannot make it out in the image you attached.

My bet would be on the first case (the additive offset), though.
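If you have both outputs at hand, you can make the additive-versus-multiplicative question quantitative with a crude moment-matching estimate of result ≈ gain · reference + offset. This is only a sketch, assuming CV_32F grayscale images of equal size; saturated regions will bias it, but it usually points in the right direction.

```cpp
#include <opencv2/opencv.hpp>
#include <iostream>

// Estimate gain from the ratio of standard deviations (contrast) and
// offset from the remaining difference in means (brightness).
void diagnoseShift(const cv::Mat& reference, const cv::Mat& result) {
    cv::Scalar meanRef, stdRef, meanRes, stdRes;
    cv::meanStdDev(reference, meanRef, stdRef);
    cv::meanStdDev(result, meanRes, stdRes);
    double gain   = stdRes[0] / stdRef[0];
    double offset = meanRes[0] - gain * meanRef[0];
    std::cout << "gain ~ " << gain << ", offset ~ " << offset << std::endl;
}
```

A gain near 1 with a positive offset points to the additive case; an offset near 0 with a gain above 1 points to the multiplicative one.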

To debug this, I would check throughout the algorithm whether any pixel operation produces values above the valid maximum (255 if you work with integers, 1.0 if you work with doubles) and work back from there. Or, as a quick-and-dirty fix, check whether you can correct the final image by subtracting a constant or dividing by a small factor (1.3 or so).
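A sketch of such a range check, again assuming OpenCV and CV_32F data in [0, 1]. Call it after each stage; note that Laplacian detail levels are legitimately signed, so apply it to the inputs, the weight maps, and the collapsed result rather than to the detail levels themselves.

```cpp
#include <opencv2/opencv.hpp>
#include <iostream>
#include <string>

// Report if any channel value leaves the expected [0, 1] range.
// reshape(1) flattens multi-channel data so minMaxLoc can handle it.
void checkRange(const cv::Mat& img, const std::string& stage) {
    double mn, mx;
    cv::minMaxLoc(img.reshape(1), &mn, &mx);
    if (mn < 0.0 || mx > 1.0)
        std::cerr << stage << ": values outside [0, 1]: ["
                  << mn << ", " << mx << "]" << std::endl;
}
```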