08-10-2008, 05:07 AM
Hello-
Disclaimer: I have no background in optics or image signal
processing, so I apologize in advance for abusing any terminology. I
just discovered this list via Google, and I hope that some of you
might be able to help point me in the right direction. Here goes...
I'm working on a software project that involves recognizing a certain
pattern in extremely blurry images. I have experimented with standard
deconvolution techniques in Photoshop, GIMP, ImageMagick, etc., but
none have sharpened the image enough for the recognition algorithm to
work. So instead, I thought I might try it the other way around --
that is, distort the pattern that I'm looking for and then compare the
distorted pattern to the actual image. I already have a series of
specimen images taken with a particular camera, and for each one, I
can precisely define what the image signal is supposed to be. So I
assume that the next step is to determine the point-spread function or
convolution kernel that would best translate my model signal into the
specimen images.
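To make the idea concrete, here is the kind of estimation I have in mind, as a rough NumPy sketch. It assumes the blur is a circular convolution, y ≈ h * x, so the least-squares kernel estimate in the Fourier domain is H = Y·conj(X) / (|X|² + eps); the eps term is my own guess at regularizing frequencies where the model spectrum is near zero. The synthetic Gaussian blur at the bottom is just a stand-in for a real specimen/model pair.

```python
import numpy as np

def estimate_psf(model, observed, eps=1e-3):
    """Estimate the kernel h such that observed ~= h (*) model (circular)."""
    X = np.fft.fft2(model)
    Y = np.fft.fft2(observed)
    # Regularized least-squares solution in the frequency domain
    H = (Y * np.conj(X)) / (np.abs(X) ** 2 + eps)
    return np.real(np.fft.ifft2(H))

# --- tiny self-check with a synthetic blur ---
rng = np.random.default_rng(0)
model = rng.random((64, 64))  # stand-in for the known "true" signal

# A small Gaussian PSF laid out with wrap-around (origin-centered) indexing
yy, xx = np.meshgrid(np.fft.fftfreq(64) * 64, np.fft.fftfreq(64) * 64)
true_psf = np.exp(-(xx**2 + yy**2) / (2 * 2.0**2))
true_psf /= true_psf.sum()

# Blur the model with the true PSF (circular convolution via FFT)
observed = np.real(np.fft.ifft2(np.fft.fft2(model) * np.fft.fft2(true_psf)))

est = estimate_psf(model, observed)
print(np.max(np.abs(est - true_psf)))  # residual should be small
```

Once a kernel like this is in hand, the same FFT machinery would blur the search pattern before comparing it against a new image, which is what I meant by going "the other way around."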
So my questions are:
Does this sort of recognition-by-reverse-deconvolution approach sound
sane?
What sort of parameters do I need to know about my camera and the
images to compute the PSF?
Can the PSF be estimated empirically, perhaps through some sort of
regression analysis?
Is there a particular book or body of research that you think would be
most helpful to me?
Thanks for your time.
-Jeff