How does one "determine" the point spread function?


08-10-2008, 05:07 AM
Hello-

Disclaimer: I have no background in optics or image signal
processing, so I apologize in advance for abusing any terminology. I
just discovered this list via Google, and I hope that some of you
might be able to help point me in the right direction. Here goes...

I'm working on a software project that involves recognizing a certain
pattern in extremely blurry images. I have experimented with standard
deconvolution techniques in Photoshop, GIMP, ImageMagick, etc., but
none have sharpened the image enough for the recognition algorithm to
work. So instead, I thought I might try it the other way around --
that is, distort the pattern that I'm looking for and then compare the
distorted pattern to the actual image. I already have a series of
specimen images taken with a particular camera, and for each one, I
can precisely define what the image signal is supposed to be. So I
assume that the next step is to determine the point-spread function or
convolution kernel that would best translate my model signal into the
specimen images.

So my questions are:

Does this sort of recognition-by-reverse-deconvolution approach sound
sane?

What sort of parameters do I need to know about my camera and the
images to compute the PSF?

Can the PSF be estimated empirically, perhaps through some sort of
regression analysis?

Is there a particular book or body of research that you think would be
most helpful to me?

Thanks for your time.

-Jeff

Rune Allnor
08-10-2008, 09:27 AM
On 10 Aug, 06:07, [email protected] wrote:
> [...]
>
> So my questions are:
>
> Does this sort of recognition-by-reverse-deconvolution approach sound
> sane?

It *sounds* sane. Whether it *is* sane depends on the resources at
your disposal; these kinds of things might not be trivial to
implement.

> What sort of parameters do I need to know about my camera and the
> images to compute the PSF?

In principle, all you need is a complete and total knowledge of
every tiny detail of the camera. Of course, *obtaining* this
complete and total knowledge of every tiny detail of the camera
might prove to be quite a hassle.

> Can the PSF be estimated empirically, perhaps through some sort of
> regression analysis?

The PSF can be estimated, but only within limits. In time series
analysis it is usually no problem to estimate a power spectrum,
but one needs the phase terms as well if one wants an accurate
PSF. Obtaining the phase spectrum is not necessarily easy.
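
A rough numpy sketch of that limitation (untested; the array names are
placeholders for the poster's blurred specimens and the corresponding
model signals): averaging power spectra only recovers the *magnitude*
of the transfer function, not its phase.

import numpy as np

def otf_magnitude(blurred, ideal, eps=1e-8):
    # blurred, ideal: lists of same-sized 2-D float arrays (observed vs. model)
    num = np.zeros(blurred[0].shape)
    den = np.zeros(ideal[0].shape)
    for b, g in zip(blurred, ideal):
        num += np.abs(np.fft.fft2(b)) ** 2   # power spectrum of observed image
        den += np.abs(np.fft.fft2(g)) ** 2   # power spectrum of model signal
    return np.sqrt(num / (den + eps))        # |H(f)|; the phase of H is not recovered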

> Is there a particular book or body of research that you think would be
> most helpful to me?

Gonzalez & Woods, "Digital Image Processing", 3rd ed. (2008)

http://www.amazon.com/Digital-Image-Processing-Rafael-Gonzalez/dp/013168728X/ref=pd_bbs_1?ie=UTF8&s=books&qid=1218356784&sr=8-1

is the standard intro to these sorts of problems. It shows a
comparison between a number of image reconstruction algorithms.

Chapter 5 deals with image restoration and reconstruction techniques.
Section 5.6 deals with estimating the degrading function.

Rune

John E. Hadstate
08-10-2008, 02:09 PM
<[email protected]> wrote in message
news:4d628ada-cbb1-48a5-ad26-3d421e9018d4@i24g2000prf.googlegroups.com...
> ... none have sharpened the image enough for
> the recognition algorithm to work. So instead,
> I thought I might try it the other way around --
> that is, distort the pattern that I'm looking
> for and then compare the distorted pattern
> to the actual image.
>
> Does this sort of recognition-by-reverse-
> deconvolution approach sound sane?
>

It seems to me that if the output of your program is "Yes the
image is present" or "No, the image is not present", you might
have to convince someone by letting them find the image with
their own eyes. The "standard of proof" might be very high,
depending on what else might be done with the knowledge that
the image is or is not present. In other words, no matter what
techniques your program uses, you might have to sharpen the
actual image enough so that a human can convince himself that
your identification is correct.

aruzinsky
08-10-2008, 03:57 PM
On Aug 9, 10:07 pm, [email protected] wrote:
> [...]
>
> What sort of parameters do I need to know about my camera and the
> images to compute the PSF?
>
> Can the PSF be estimated empirically, perhaps through some sort of
> regression analysis?

Your PSF will have the same shape as the iris aperture of your
camera.

The PSF can be estimated from an image of a point source of light,
such as a star. Maybe a laser pointer will work, but I have never tried it.

08-11-2008, 03:05 AM
On Aug 10, 6:09 am, "John E. Hadstate" <[email protected]> wrote:

> It seems to me that if the output of your program is "Yes the
> image is present" or "No, the image is not present", you might
> have to convince someone by letting them find the image with
> their own eyes. The "standard of proof" might be very high,
> depending on what else might be done with the knowledge that
> the image is or is not present. In other words, no matter what
> techniques your program uses, you might have to sharpen the
> actual image enough so that a human can convince himself that
> your identification is correct.

It's a little more complicated than just "yes or no." The images
contain one of several different patterns. So the objective is
to determine 1) if a pattern is present, and 2) which pattern it is.
Each pattern is well defined and can be mathematically described.

You raise an interesting point, perhaps with respect to "logical
correctness". However, in this case, we can assume that
we know exactly which pattern appears in each image.
And that's why I'm looking for some way to estimate
the PSF by comparing the sample images with the model
signals.

Chris Bore
08-11-2008, 09:35 AM
On Aug 10, 5:07 am, [email protected] wrote:
> [...]

Try ImageJ (public domain software), and look for the deconvolution
plug-in - this does pretty much what you are trying to do. It uses an
adaptive iterative approach. Personally I find that these kinds of
things don't work very well except on the example images used to
demonstrate them, but ImageJ's is one of the better ones. It doesn't
require you to have a PSF first.

> Can the PSF be estimated empirically

To measure the point spread function (PSF) of a camera, I proceed as
follows:

1) In the dark, get someone to hold a laser pointer shining a dot onto
a card at a distance, far enough that the size of the dot, subtended
onto your camera's pixel array, is less than the size of one pixel. For
instance, suppose you have a 3.1 Mpixel camera whose field of view is
45 degrees. Then 1 pixel at the center subtends an angle of roughly 1'
of arc. At 10 m, a dot of about 4 mm subtends roughly 1' of arc, so a
laser dot smaller than that will be smaller than one pixel. (You use a
laser so you get enough light to be seen at this distance.) Take a
photo of the dot. This is a direct measure of the PSF.

You can verify this by calculation - I have done this for many cameras
and it usually works out about as expected.
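For example, a few lines of arithmetic reproduce the figures above (a
sketch only; the field of view and pixel count are illustrative
assumptions, not properties of any particular camera):

import math

fov_deg = 45.0          # horizontal field of view
pixels_across = 2048    # pixels spanning that field of view
distance_m = 10.0       # distance from camera to the card

pixel_angle_rad = math.radians(fov_deg) / pixels_across   # about 1.3 arc minutes
max_dot_m = distance_m * math.tan(pixel_angle_rad)        # largest sub-pixel dot

print("one pixel subtends %.2f arcmin" % (math.degrees(pixel_angle_rad) * 60))
print("largest sub-pixel dot at %.0f m: %.1f mm" % (distance_m, max_dot_m * 1000))

which comes out at a few millimetres, in line with the figure above.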

Another check is to have a photo with a sharp edge (black to white is
best). The shape of the edge should match the shape of one half of
your PSF.

You can reverse this by measuring the Modulation Transfer Function
(MTF - essentially, the spatial frequency spectrum of the PSF).
Generate an image made of vertical bars whose profile is sinusoidal and
whose spatial frequency increases up to, and a bit beyond, the Nyquist
frequency. Photograph this. The reduction in contrast, plotted against
spatial frequency, gives the MTF, which is the magnitude of the Fourier
transform of the PSF.
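
As a cross-check, something like this gives the MTF directly from a
measured PSF (an untested sketch; psf is assumed to be a small 2-D
float array cropped around the laser-dot photo):

import numpy as np

def mtf_from_psf(psf):
    psf = psf / psf.sum()                    # normalise so the DC response is 1
    otf = np.fft.fftshift(np.fft.fft2(psf))  # optical transfer function, DC at centre
    return np.abs(otf)                       # modulation transfer function

The contrast measured from the sinusoidal-bar target should follow a
radial slice through this array, up to noise and any in-camera
sharpening.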

If you already have a set of 'ideal' and real images for each camera,
then your PSF can be determined by de-convolving one with the other.
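
For what it's worth, a minimal sketch of that step (untested; observed
and model are assumed to be same-sized, registered float arrays, and
the regularisation constant is an arbitrary choice):

import numpy as np

def estimate_psf(observed, model, reg=1e-3):
    O = np.fft.fft2(observed)
    M = np.fft.fft2(model)
    H = O * np.conj(M) / (np.abs(M) ** 2 + reg)  # Wiener-style estimate of the OTF
    psf = np.real(np.fft.ifft2(H))
    psf = np.fft.fftshift(psf)                   # centre the kernel
    return psf / psf.sum()                       # normalise to unit gain

The regularisation keeps the division from blowing up at frequencies
where the model image has little energy.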

Personally, I find that measuring the PSF with the laser dot is as
good as anything, satisfyingly close to practical reality, and kind of
fun too.

> Does this sort of recognition-by-reverse-deconvolution approach sound sane?

Your recognition by reverse de-convolution sounds sane, but why not
transpose the problem into the frequency domain and do the recognition
there? The features may be more recognizable, and you will be freed
from the constraints of superimposing the image patterns.
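
One simple way to do that (a sketch, assuming the candidate patterns
have already been blurred with the estimated PSF and are the same size
as the image) is plain cross-correlation via the FFT, picking the
pattern with the strongest peak:

import numpy as np

def best_match(image, templates):
    img0 = image - image.mean()
    F_img = np.fft.fft2(img0)
    scores = []
    for t in templates:
        t0 = t - t.mean()
        corr = np.real(np.fft.ifft2(F_img * np.conj(np.fft.fft2(t0))))  # circular cross-correlation
        scores.append(corr.max() / (np.linalg.norm(img0) * np.linalg.norm(t0) + 1e-12))
    return int(np.argmax(scores)), scores

This also sidesteps the alignment problem, since the correlation peak
can sit anywhere in the frame.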

You can estimate the PSF quite easily, because most cameras just have a
blobby blur two or three pixels wide. Make sure to disable any
auto-sharpening if you can (although strictly, that sort of thing also
counts towards the PSF anyway).

> Is there a particular book or body of research that you think would be most helpful to me?

Well, at the risk of shameless plugging, I teach all about this in my
short training class Image Processing for Consumer Electronics. And we
do this PSF measurement and deconvolution as a practical exercise.

http://www.bores.com/courses/ipce_ipce.htm

We usually do these classes on demand for consumer electronics and
mobile phone companies, but I can always ask if you could 'piggy back'
onto one of those if you wanted to. email me - [email protected] - and
I'll see what I can do.

Chris
=========================
Chris Bore
BORES Signal Processing
www.bores.com

Rune Allnor
08-11-2008, 10:07 AM
On 11 Aug, 10:35, Chris Bore <[email protected]> wrote:
> On Aug 10, 5:07 am, [email protected] wrote:

> > Can the PSF be estimated empirically
>
> To measure the point spread function (PSF) of a camera, I proceed as
> follows:
>
> 1) In the dark, get someone to hold a laser pointer shining a dot onto
> card at a distance,

A very nifty trick!

Just be aware that you need a reasonably good quality laser pointer
to do this, or you measure the PSF of the pointer.

I have two pointers: one red, intended as an aid during
presentations, and one green, for pointing at astronomical objects in
the night sky.

The red pointer gives a rather large light spot because it needs
to be seen from the back of a large auditorium. The green pointer
is not at all 'pointed' but has a very bright spot surrounded by
a fainter 'halo'.

As long as one is aware of these things and doesn't try to
invert for components that are caused by imperfections in the
laser pointer, this approach ought to be very useful.

Come to think of it, that's why you said 'point at a card'
and not 'point at a wall,' right? The main lobe hits and reflects
from the card, and there is nothing to reflect the sidelobes?

Rune

Chris Bore
08-11-2008, 10:24 AM
On Aug 11, 10:07 am, Rune Allnor <[email protected]> wrote:
> [...]
>
> Come to think of it, that's why you said 'point at a card'
> and not 'point at a wall,' right? The main lobe hits and reflects
> from the card, and there is nothing to reflect the sidelobes?
>
> Rune

Yeah. The other point to note is that the laser pointer is held close
to the card so the spot is small. People get confused and try to shine
it from the camera, which gives a big dot. So you need two people to do
the experiment (it is more fun if you are of opposite ***, and very
friendly, because it is best done in total darkness).

Martin Brown
08-11-2008, 12:40 PM
aruzinsky wrote:
> On Aug 9, 10:07 pm, [email protected] wrote:
>> [...]
>>
>>
>> Does this sort of recognition-by-reverse-deconvolution approach sound
>> sane?

Yes. It is one of the ways that regularised deconvolutions are done.

Essentially, take a model of the true outside world, convolve it with
the idealised point spread function of your imaging system, and compare
it with the actual observed data. The trick is in knowing the PSF and
choosing a suitable regularising function.
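
In rough numpy terms, the comparison looks something like this (a
sketch only; the quadratic smoothness penalty and the names are
illustrative choices, not a prescription):

import numpy as np
from scipy.signal import fftconvolve

def objective(model, psf, observed, lam=0.01):
    predicted = fftconvolve(model, psf, mode="same")  # forward model: blur the candidate scene
    data_term = np.sum((predicted - observed) ** 2)   # misfit against the measured image
    gy, gx = np.gradient(model)
    smoothness = np.sum(gy ** 2 + gx ** 2)            # the regularising function
    return data_term + lam * smoothness

Regularised deconvolution then amounts to adjusting the model (and, in
blind variants, the PSF) to drive this objective down.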
>>
>> What sort of parameters do I need to know about my camera and the
>> images to compute the PSF?

To compute it accurately, you need just about everything, including the
geometry of the optics, the aperture used, and the properties of the
glass. But you might get away with a crude disk.
>>
>> Can the PSF be estimated empirically, perhaps through some sort of
>> regression analysis?

If there is a point specular reflection in the blurred image, you can
guesstimate the PSF from that.
>>
>> Is there a particular book or body of research that you think would be
>> most helpful to me?

Try looking for "blind deconvolution"; that might do what you want.
>
> Your PSF will have the same shape as the iris aperture of your
> camera.

Provided that the image is not diffraction limited.
>
> The PSF can be estimated from an image of a point source of light,
> such as a star. Maybe a laser pointer will work, but I have never tried it.

Any specular reflection in the original image will give a rough idea of
the PSF at that particular depth. Unless everything is at the same
distance, the PSF varies with subject distance too.

Astronomers have it easy, since their subjects are effectively at infinity.

Regards,
Martin Brown

jim
08-11-2008, 12:49 PM
[email protected] wrote:
>
> On Aug 10, 6:09 am, "John E. Hadstate" <[email protected]> wrote:
>
> > [...]
>
> It's a little more complicated than just "yes or no." The images
> contain one of several different patterns. So the objective is
> to determine 1) if a pattern is present, and 2) which pattern it is.
> Each pattern is well defined and can be mathematically described.
>
> You raise an interesting point, perhaps with respect to "logical
> correctness". However, in this case, we can assume that
> we know exactly which pattern appears in each image.
> And that's why I'm looking for some way to estimate
> the PSF by comparing the sample images with the model
> signals.

There are several things that are not so clear in your problem statement:

Is the problem to identify the image content or to identify the PSF
from any given image? That is, is the PSF always going to be the same,
and is your intent to identify it once in advance, or are you trying to
figure out how to extract it from the image alone?
Are the patterns in the image always in the same position and
orientation? The "mathematically described" statement suggests they
might be.
And are your "extremely blurry images" blurry because the camera is
out of focus, or because the object is beyond the limits of the
camera's resolution?

-jim



Martin Brown
08-11-2008, 12:58 PM
[email protected] wrote:
> Hello-
>
> Disclaimer: I have no background in optics or image signal
> processing, so I apologize in advance for abusing any terminology. I
> just discovered this list via Google, and I hope that some of you
> might be able to help point me in the right direction. Here goes...

You might find the following introductory chapter helpful.

http://www.ph.ed.ac.uk/~wjh/teaching/dia/documents/reconstruction.pdf

You really need to describe what you are trying to do.

Regards,
Martin Brown

Andrew_M
08-13-2008, 11:46 PM
[email protected]:
> On Aug 10, 6:09 am, "John E. Hadstate" <[email protected]> wrote:

I think that computing the PSF is possible only for the manufacturer of
your lens, who knows all the necessary characteristics such as the
geometry, the sort of glass, and so on. You can only measure the PSF,
as others have explained, but you must take a set of photos with point
light sources placed at different angles from the optical axis of your
lens. After that you have two options: you may try to reconstruct the
source image and compare it with your patterns, or you may use the PSF
to compute the images your lens would produce if your patterns were the
sources, and then compare these computed images with your real photos.
It is hard to say which way is better. There is probably one more
problem: diffraction. If you have images distorted by diffraction,
reconstructing them is more complicated, even if you know the PSF; it
is a non-linear problem. There was a similar discussion here, and I
posted two images, a diffracted one and the result of a reconstruction
using a known PSF. Look at them; they may be interesting to you if you
have something like them. (
http://www.smartfills.com/Html/Images/textdiffr.jpg and
http://www.smartfills.com/Html/Images/textrestr.jpg )

JimAtQuarktet
08-20-2008, 08:10 PM
On Aug 10, 10:05 pm, [email protected] wrote:
> On Aug 10, 6:09 am, "John E. Hadstate" <[email protected]> wrote:

> You raise an interesting point, perhaps with respect to "logical
> correctness". However, in this case, we can assume that
> we know exactly which pattern appears in each image.
> And that's why I'm looking for some way to estimate
> the PSF by comparing the sample images with the model
> signals.

One word... actually it's an acronym: SeDDaRA. SeDDaRA is a blind
deconvolution technique that requires no knowledge of the imaging
system. Instead, the method compares a blurred image with a known
reference image to extract the PSF. Since the operator has some idea
about what he/she is viewing, this is easier than trying to model the
PSF or construct it iteratively. The reference image does not even need
to resemble the target image, as long as it contains the desired
spatial frequencies. You can see examples of the deconvolutions at
http://www.quarktet.com/Gallery1.html. Our program Tria (free-to-try)
runs the SeDDaRA method, producing both the PSF and the deconvolved
image.

A word of caution, though: in order for any deconvolution method to
work, the PSF information must be preserved in the image. This
information can be lost due to high noise, image compression, or
digital truncation (saving a 16-bit image as an 8-bit image).

Best Regards,
Jim C