In my case I have some images captured by a CMOS camera (global shutter) during non-accelerated motion, with fixed illumination and focus and known velocity and exposure time, so the field of view travels 210 px during acquisition, and I want to remove the motion blur. To estimate the blur kernel I used the DeconvolutionLab2 plugin in ImageJ and performed inverse filtering (naive Wiener filter, NIF) between a motion-blurred image and a still image of the same field of view. I expected the kernel to be a straight line, but instead I got a few points overlaid on the line. When I deconvolve the motion-blurred image with this experimentally obtained kernel using Richardson-Lucy iterative deconvolution, the result is satisfactory; but when I use a binary image of a line instead (I tried all variants: 210 px, the length of the line in the kernel, and the distance between points in the kernel), the results are much worse. Could you please tell me:
- Is it right to use a line-shaped deconvolution kernel?
- How should I interpret the shape of the experimental kernel?
- Which approach would you recommend for restoring motion-blurred images?