Image spatial frequency resolution


Chris Bore
10-04-2005, 02:37 PM
Image 'spatial frequency resolution' seems to be used to mean:
- the highest spatial frequency that can be resolved.
Experimentally, an image of sinusoidal intensity is displayed and the
contrast varied until the viewer can see the bar pattern.

This is using 'spatial frequency resolution' to measure our ability to
resolve adjacent light and dark features.

I am interested in spatial frequency resolution as in the ability to
resolve adjacent spatial frequencies (I think this is closer to the
normal definition of 'frequency resolution').

For example, if we have an image with two textures, each representing a
spatial frequency, what separation is required between those spatial
frequencies so that we can resolve them (in the frequency spectrum)?

I think this will be related to the 'size' of the image relative to the
detail within it - similar to 1D DSP where the 'total sampling time'
determines the frequency resolution.
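
To illustrate the 1D analogy, a throwaway numpy sketch (all the numbers
are made up): two tones 2 Hz apart are resolvable only if the record is
long enough, since frequency resolution goes as 1/T.

import numpy as np

fs = 1000.0
f1, f2 = 100.0, 102.0
for T in (0.25, 2.0):              # bin spacing 4 Hz vs 0.5 Hz
    t = np.arange(0.0, T, 1.0 / fs)
    x = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
    X = np.abs(np.fft.rfft(x))
    # count local maxima above a quarter of the spectral peak
    is_peak = ((X > np.roll(X, 1)) & (X > np.roll(X, -1))
               & (X > 0.25 * X.max()))
    print(f"T = {T:4.2f} s, bin spacing = {1 / T:4.2f} Hz, "
          f"distinct spectral peaks: {int(is_peak.sum())}")

The short record shows a single merged peak; the long one shows two.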

But I can't find any references to this topic, so I'd be grateful for
any comments or pointers.

Thanks,

Chris
====================
Chris Bore
BORES Signal Processing
www.bores.com

Tim Wescott
10-04-2005, 03:15 PM
You may be inventing a metric. You can probably calculate what you want
knowing a minimum-resolvable contrast (MRC) curve (frequency vs. contrast),
but you'd have to take a whole bunch of variables into account, such as
the size of the targets you want to distinguish, their relative
contrast, the level to which their position and phase are known, the
major angle of the variation, etc., etc., etc.

Since you can calculate it all if you know the MRC curve, why use
some other non-standard metric for rating your camera?

Chris Bore wrote:

> Image 'spatial frequency resolution' seems to be used to mean:
> - the highest spatial frequency that can be resolved.
> [...]


--

Tim Wescott
Wescott Design Services
http://www.wescottdesign.com

Jerry Avins
10-04-2005, 05:03 PM
Chris Bore wrote:

> Image 'spatial frequency resolution' seems to be used to mean:
> - the highest spatial frequency that can be resolved.
> [...]
> But I can't find any references to this topic, so I'd be grateful for
> any comments or pointers.

Chris,

Look into "modulation transfer function" (MTF). It is commonly applied to
lenses and the images they produce, and was developed by Otto Schade (I
knew him) to rationalize TV analysis. He published mostly in the SMPTE
journal, but references abound today. An illustrated tutorial in
photographic terms: http://www.normankoren.com/Tutorials/MTF.html.

I suspect that it isn't right on the mark for you, but some knowledge of
it will deepen your understanding of what you really do want and allow
the right questions to be well put.

Oh, yes: welcome back!

Jerry

P.S. I haven't figured out what "Pattern contrast drios ub half at 42
cycles/mm." was intended to be. I asked: we'll see.
--
Engineering is the art of making what you want from things you can get.
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯

Chris Bore
10-04-2005, 06:51 PM
Thanks, Jerry.

I'm looking into this in connection with digital camera design -
sampling sensors, and associated ways to enhance or 'correct' the
image. MTF may be the key, but I can't quite wrap my mind around it yet
to be sure. :-)


MTF seems aimed at describing the frequency response of the imaging
system, but perhaps focussing :-) on the optical/CCD/sensor area
aspects of this? I'm interested to understand the effect of selecting
only a region of the scene to image.

MTF is the frequency response of the imaging system - if you like, the
Fourier transform of its impulse (point-spread) response? So in a sampled
system (e.g. a CCD array) the MTF will be periodic, with the spectrum of
the original image tiling all of frequency space - leading to aliasing,
as in the 1D case? The MTF then explains aliasing, as well as serving
its intended aim of describing attenuation of (usually higher) spatial
frequencies?

Now, though, as well as sampling let's introduce the fact that we
measure only over a finite region of space. Think of this as sampling
the image over all space, then multiplying it by a 'box' with 1's
inside and 0's outside. The MTF then becomes convolved with the 'MTF of
the box' - spread, because of the truncation of the image outside the
box.
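
Here is a little numpy sketch of the spreading (illustrative numbers
only): the spectral line of a single off-bin texture has a width of
roughly 1/N cycles per pixel for an N-pixel patch, so two textures
closer than that in spatial frequency must merge.

import numpy as np

fx = 0.1037                       # cycles/pixel, deliberately off-bin
for N in (64, 256, 1024):         # patch widths
    row = np.cos(2 * np.pi * fx * np.arange(N))
    S = np.abs(np.fft.rfft(row))
    above = np.flatnonzero(S > 0.5 * S.max())   # -6 dB width of the line
    width = (above.max() - above.min() + 1) / N
    print(f"N = {N:4d}: spectral line width ~ {width:.4f} cycles/pixel")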

It may be this is obvious to everyone except me, but...

...the spreading of the MTF will mean that we are unable to resolve two
spatial frequencies that are close. This is not the same as the
'resolve two adjacent lines' thing. I think it means that some
textures that are already close in spatial frequency will become
indistinguishable. It may be unimportant, but think of Al Bovik's
example of a fingerprint image where aliasing makes the print
completely different...

...also, if we follow a crude model of signal truncation causing Gibbs'
phenomenon, then the spreading of the MTF will depend on the amplitude
of the image at the edges and so will vary with the scene - the
artefacts then being scene-dependent, which could be more annoying (or
more difficult to detect) than scene-independent artefacts. This could
lead us to implement windowing (as in windowed filter design) to reduce
the variations.
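
For instance (a sketch, not a proposal): tapering the patch with a Hann
window before the FFT makes the leakage a property of the window rather
than of whatever scene detail lands at the patch edge, at the cost of a
slightly wider mainlobe. A 1D slice shows the effect:

import numpy as np

N = 256
n = np.arange(N)
sig = np.cos(2 * np.pi * 0.1037 * n)        # off-bin 'texture', worst case
for name, w in (("rectangular", np.ones(N)), ("Hann", np.hanning(N))):
    S = np.abs(np.fft.rfft(sig * w))
    S /= S.max()
    k = int(np.argmax(S))
    far = np.concatenate((S[:max(k - 8, 0)], S[k + 9:]))  # outside mainlobe
    print(f"{name:11s}: worst far-off leakage = "
          f"{20 * np.log10(far.max()):6.1f} dB")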

It may seem like worrying over some tiny detail, but since much image
correction will be done in the frequency domain (and even if not, the
math will have been worked out that way), I want to understand these
issues properly before I discard them as unimportant.

Oh, and thanks for welcoming me back. A mad schedule of travel, and
focus on CPU architectures and video programming, over the past few
years has meant I had little time to devote to fundamental topics. It
is nice to make contact again.

Chris
========================
Chris Bore
BORES Signal Processing
www.bores.com

Chris Bore
10-04-2005, 06:58 PM
> P.S. I haven't figured out what "Pattern contrast drios ub half at 42 cycles/mm." was intended to be. I asked: we'll see.

I think he meant to say: "Pattern contrast drops by half at
42 cycles/mm".

This would fit with the image shown, and with his idea that a 50%
contrast drop is the psychological point at which we stop bothering
to resolve detail. 'drops' becomes 'drios' if you let your fingers
drift left one key for the two middle letters. 'by' becomes 'ub' if you
type the letters in reverse order and drift right one key on the
'y'. :-)

Chris
=================
Chris Bore
BORES Signal Processing
www.bores.com

Jerry Avins
10-04-2005, 07:56 PM
Chris Bore wrote:
> Thanks, Jerry.
>
> I'm looking into this in connection with digital camera design -
> sampling sensors, and associated ways to enhance or 'correct' the
> image. MTF may be the key, but I can't quite wrap my mind around it yet
> to be sure. :-)
>
>
> MTF seems aimed at describing the frequency response of the imaging
> system, but perhaps focussing :-) on the optical/CCD/sensor area
> aspects of this? I'm interested to understand the effect of selecting
> only a region of the scene to image.
>
> MTF is the frequency response of the imaging system - if you like, the
> Fourier transform of its impulse (point-spread) response? So in a sampled
> system (e.g. a CCD array) the MTF will be periodic, with the spectrum of
> the original image tiling all of frequency space - leading to aliasing,
> as in the 1D case? The MTF then explains aliasing, as well as serving
> its intended aim of describing attenuation of (usually higher) spatial
> frequencies?

Yes. Aliasing isn't normally a problem, though. There are in any event
ways to deal with it. As you might expect, one trades away resolution.

> Now, though, as well as sampling let's introduce the fact that we
> measure only over a finite region of space. Think of this as sampling
> the image over all space, then multiplying it by a 'box' with 1's
> inside and 0's outside. The MTF then becomes convolved with the 'MTF of
> the box' - spread, because of the truncation of the image outside the
> box.

Good eyes distinguish .003" at an "inspection" viewing distance, but
.01" is a more usual measure. The MTF of a box much larger than that --
say 20 times -- has little effect.

> It may be this is obvious to everyone except me, but...
>
> ...the spreading of the MTF will mean that we are unable to resolve two
> spatial frequencies that are close. This is not the same as the
> 'resolve two adjacent lines' thing. I think it means that some
> textures that are already close in spatial frequency will become
> indistinguishable. It may be unimportant, but think of Al Bovik's
> example of a fingerprint image where aliasing makes the print
> completely different...

Undersampling is bad in polling and fingerprinting. We know how to do
both right.

> ...also, if we follow a crude model of signal truncation causing Gibbs'
> phenomenon, then the spreading of the MTF will depend on the amplitude
> of the image at the edges and so will vary with the scene - the
> artefacts then being scene-dependent, which could be more annoying (or
> more difficult to detect) than scene-independent artefacts. This could
> lead us to implement windowing (as in windowed filter design) to reduce
> the variations.

All the popular explanations aside, there is a very simple definition of
MTF: the Fourier transform of the line-spread function. Not only is that
a practical way to measure it; it provides the answer to many questions.
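
For a feel of it, a toy numpy calculation (the Gaussian line-spread
function and its width are stand-ins, not any real lens):

import numpy as np

dx = 0.001                          # mm per sample
x = np.arange(-2.0, 2.0, dx)        # mm
sigma = 0.004                       # 4-micron LSF width, made up
lsf = np.exp(-0.5 * (x / sigma) ** 2)
mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]                       # normalize so MTF(0) = 1
f = np.fft.rfftfreq(len(x), dx)     # cycles/mm
f50 = f[np.argmax(mtf < 0.5)]       # first frequency below 50% contrast
print(f"contrast drops to half at about {f50:.0f} cycles/mm")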

> It may seem like worrying over some tiny detail, but since much image
> correction will be done in the frequency domain (and even if not, the
> math will have been worked out that way), I want to understand these
> issues properly before I discard them as unimportant.
>
> Oh, and thanks for welcoming me back. A mad schedule of travel, and
> focus on CPU architectures and video programming, over the past few
> years has meant I had little time to devote to fundamental topics. It
> is nice to make contact again.

In a digital camera, diffraction (small lens openings) and diffusion (a
plate just in front of the sensor array) are practical anti-alias
filters. Cameras with high MTFs at the Nyquist limit can show aliasing,
but care must be taken to demonstrate it. (Sometimes, bad luck replaces
care.) Koren's tutorial is well worth reading despite its emphasis on
film. Everything I've written here and more has its parallel there.

Jerry
--
Engineering is the art of making what you want from things you can get.
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯

Chris Bore
10-04-2005, 10:41 PM
I was wondering about the function of the lens and diffuser as
anti-alias filters.

For instance, if a lens were used whose focused spot is smaller than one
CCD sensor element, then aliasing would be more of a problem - which I
think is demonstrated in several examples I have seen.

Yes, the tutorial is very good, even though it doesn't deal much with
sampling.

Another very good reference is Al Bovik's book 'Image and Video
Processing', which also has some really useful examples.

Chris

Fred Marshall
10-05-2005, 03:47 AM
"Chris Bore" <[email protected]> wrote in message
news:[email protected] oups.com...
>
>> P.S. I haven't figured out what "Pattern contrast drios ub half at 42
>> cycles/mm." was intended to be. I asked: we'll see.
>
> I think he meant to say: "Pattern contrast drops by half at
> 42 cycles/mm".
>
> This would fit with the image shown, and with his idea that 50%
> contrast drop is the psychologically point at which we stop bothering
> to resolve detail. 'drops' becomes 'drios' if you let your fingers
> drift left one key for the two middle letters. 'by' becomes 'ub' if you
> type the letters in the reverse order and drift right one key on the
> 'y'. :-)
>
> Chris
> =================
> Chris Bore
> BORES Signal Processing
> www.bores.com

Chris,

Have you pondered the uses of a "zone plate"?

http://www.worldserver.com/turk/opensource/

http://www.optexint.com/pdf/tc66.pdf

http://www.imatest.com/docs/tour_testcharts.html

http://www.cinedrome.ch/hometheater/testpatterns/zoneplate.html
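
A zone plate is also easy to synthesize - a minimal numpy sketch, with
arbitrary parameters:

import numpy as np

# I(r) = (1 + cos(a r^2)) / 2: the local radial frequency 2 a r grows
# linearly with radius and reaches Nyquist (pi rad/pixel) at the edge
# for a = pi/N, so resampling it produces the telltale aliased rings.
N = 512
y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
zone = 0.5 * (1.0 + np.cos((np.pi / N) * (x * x + y * y)))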

Relative to the comment about resolving lines in 1-D, why isn't it the same
thing?
Why isn't the 2-D equivalent of a sinc still the resolution-limiting
function?

Fred

Clay S. Turner
10-05-2005, 04:09 AM
"Chris Bore" <[email protected]> wrote in message
news:[email protected] oups.com...
> Image 'spatial frequency resolution' seems to be used to mean:
> - the highest spatial frequency that can be resolved.
> Experimentally, an image of sinusoidal intensity is displayed and the
> contrast varied until the viewer can see the bar pattern.
>
> This is using 'spatial frequency resolution' to measure our ability to
> resolve adjacent light and dark features.
>
> I am interested in spatial frequency resolution as in, ability to
> resolve adjacent spatial frequncies (I think this is closer to a normal
> definition of 'frequency resolution').
>
> For example, if we have an image with two textures, each representing a
> spatial frequency, then what separation is required between those
> spatial frequncies so that we can resolve them (in the frequency
> spectrum)?
>
> I think this will be related to the 'size' of the image relative to the
> detail within it - similar to 1D DSP where the 'total sampling time'
> determines the frequency resolution.
>
> But I can't find any references to this topic. So will be grateful for
> any comments or pointers.
>

Hello Chris,

I haven't seen you post here in a while - welcome back.

It seems like what you are asking about actually spans several photographic
topics.

The first deals with raw resolution. With 35mm camera systems it is not
unusual to have lenses resolve anywhere in the range of 75 to 120 line
pairs per mm. Leica had one that actually resolved over 200 lp/mm! The
film's area (35mm format) is 24 by 36 mm. However, extensive sampling of
slides taken by professional photographers shows their resolution tops
out at 73 lp/mm, and most were under 60 lp/mm - and these are the
quality images. The practical limitation mostly stems from camera
handling - mirror slap and shutter shake - and a little from dye bleed
(diffusion) in the film.

Many of today's digital SLRs use an APS-sized sensor - about 16 by 24
mm - and there are a few models having "full frame" sensors with 24 by
36 mm areas. Some of the lower-ranked models with, say, 6MP basically
hit the Nyquist limit around 40 or so lp/mm. The densest sensor is
currently in the Nikon D2X, which crams 12.4MP into a 16 by 24 mm
sensor. This tops out at over 60 lp/mm. Some amateurs have been
disappointed with the D2X, since its high resolution reveals any
weaknesses in lenses and camera handling technique.
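
The geometry alone gives an upper bound - pixel counts below are
approximate, and the Bayer array and anti-alias filter pull the usable
figure well under it, which is why the practical numbers above run
lower:

for name, px, width_mm in (("6MP APS-C", 3008, 23.7),
                           ("D2X", 4288, 23.7)):
    pitch_mm = width_mm / px
    nyquist = 1.0 / (2.0 * pitch_mm)     # one line pair = one cycle
    print(f"{name}: pitch {pitch_mm * 1e3:.1f} um, "
          f"geometric Nyquist ~ {nyquist:.0f} lp/mm")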

The anti-alias filter is sometimes described as a blurry piece of glass.
Actually these are made of exotic materials, and on the newer DSLRs they
not only perform the anti-alias function but also provide IR and UV
rejection. My D100 sees IR from my studio strobes, so when they are used
in combination with sunlight I end up with mixed color lighting. With a
D2X, however, the strobes' color matches the sun's.

The sensor, as you are probably aware, uses a Bayer color filter array -
except for Foveon sensors. This has the property of sampling the green
channel at nearly twice the rate of the red and blue channels. So in the
case where a camera manufacturer has used a weak anti-alias filter, the
problem of color moire arises. Kodak's DCS-14N is a 14MP full-frame
camera with no anti-alias filter. Its replacements - the Pro SLR 14/n
and Pro SLR 14/c - needless to say, did have one. Also, unlike film,
digital sensors really prefer that the light strike them at a nearly
normal angle, so today's sensors have micro lenses. When light strikes
the sensor at strongly oblique angles (wide-angle lenses, especially on
full-frame cameras), the image suffers from light falloff and
exaggerated chromatic aberration.

Currently I don't know of a single standard by which to measure today's
optical systems. And yes, photography is moving towards the system
concept and away from the idea of the sensor and lens working
independently. Lateral chromatic aberration and vignetting are easily
handled in post processing now. Plus, both Canon and Nikon offer lenses
with reduced image coverage for cameras with APS-sized sensors. Also,
these newer wide-angle lenses are being designed with nearly telecentric
optics to ward off such problems. With film, wide-angle lenses were
simple retrofocus (opposite of telephoto) designs.

When you talk about comparing the textures of two similar fabrics, one
of the things that comes to mind is the idea of local contrast, as
espoused in Ansel Adams' Zone System. Typically fabric detail is low
contrast, which means it probably spans 1/3 to 1/2 of an f-stop in
brightness. So your optical system will have to be spec'ed to a spatial
frequency at a given low contrast.
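
To put numbers on 'low contrast' (one stop being a factor of two in
luminance):

for stops in (1 / 3, 1 / 2, 1.0):
    ratio = 2.0 ** stops
    c = (ratio - 1.0) / (ratio + 1.0)    # Michelson contrast
    print(f"{stops:.2f} stop -> contrast {c:.2f}")

So fabric detail spanning 1/3 to 1/2 stop is only about 11-17% contrast.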

I hope I've given some food for thought.

Clay

Jerry Avins
10-05-2005, 05:05 AM
Chris Bore wrote:
> I was wondering about the function of the lens and diffuser as
> anti-alias filters.
>
> For instance, if a lens were used whose focused spot is smaller than one
> CCD sensor element, then aliasing would be more of a problem - which I
> think is demonstrated in several examples I have seen.
>
> Yes, the tutorial is very good, even though it doesn't deal much with
> sampling.

It deals with digital photography and scanning, and discusses aliasing
and moires in those contexts.

> Another very good reference is Al Bovik's book 'Image and Video
> Processing', which also has some really useful examples.

Thanks for the reference.

Jerry
--
Engineering is the art of making what you want from things you can get.
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯

Chris Bore
10-07-2005, 10:32 AM
Thank you, Clay.

That is really helpful.

How on earth do you comp.dsp people know so much?

Chris