Interpolation and decimation


seb
01-14-2004, 03:10 AM
Hello,

I am looking for decimation and interpolation techniques in order to,
given a sampling rate fs, obtain a new sampling rate like (a/b)*fs.

One way to do this is to decimate and then use linear interpolation...

Are there other ways (documents) to do this?
If so, have you got a book or URL?

Thanks

Jim Gort
01-14-2004, 03:52 AM
seb:

I don't know of any "new" ways to do it, but make sure that if you do it the
old way, and a<b, you LPF (or BPF if your frequency region of interest is
other than baseband) the original data prior to decimation so that folded
content is not present in the downsampled data.

Jim

"seb" <[email protected]> wrote in message
news:[email protected] om...
> Hello,
>
> i am looking for decimation and interpolation technique in order to,
> given a sampling rate fs, obtain a new sampling rate like (a/b)*fs.
>
> A way to to do is to decimate and then use linear interpolation...
>
> Is there some other ways (documents) to do this ?
> If so, have you got some book or url ?
>
> Thanks

Fred Marshall
01-14-2004, 06:49 AM
"seb" <[email protected]> wrote in message
news:[email protected] om...
> Hello,
>
> i am looking for decimation and interpolation technique in order to,
> given a sampling rate fs, obtain a new sampling rate like (a/b)*fs.
>
> A way to to do is to decimate and then use linear interpolation...
>
> Is there some other ways (documents) to do this ?
> If so, have you got some book or url ?
>
> Thanks

See www.dspguru.com on sample rate conversion.

Jerry Avins
01-14-2004, 01:37 PM
Jim Gort wrote:

> seb:
>
> I don't know of any "new" ways to do it, but make sure that if you do it the
> old way, and a<b, you LPF (or BPF if your frequency region of interest is
> other than baseband) the original data prior to decimation so that folded
> content is not present in downsampled data.
>
> Jim

By upsampling first, frequencies that the final sample rate will support
don't have to be removed.
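
As a minimal sketch in Python (assuming numpy/scipy; the tap count and
function name are arbitrary), the classic rational-rate converter does
exactly this -- upsample by a, filter once, then downsample by b:

import numpy as np
from scipy import signal

def resample_rational(x, a, b):          # new rate = (a/b) * fs; a, b > 1
    up = np.zeros(len(x) * a)
    up[::a] = x                          # zero-stuff: a-1 zeros per sample
    # one LPF does both jobs: anti-imaging for the upsample and
    # anti-aliasing for the downsample, so cut at the tighter Nyquist
    cutoff = min(1.0 / a, 1.0 / b)       # relative to the upsampled Nyquist
    taps = signal.firwin(16 * max(a, b) + 1, cutoff) * a  # gain a restores level
    return signal.lfilter(taps, 1.0, up)[::b]   # keep every b-th sample

scipy.signal.resample_poly(x, a, b) does the same job with a polyphase
structure that never multiplies by the stuffed zeros.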

Jerry
--
Engineering is the art of making what you want from things you can get.

Bhaskar Thiagarajan
01-14-2004, 05:49 PM
"seb" <[email protected]> wrote in message
news:[email protected] om...
> Hello,
>
> i am looking for decimation and interpolation technique in order to,
> given a sampling rate fs, obtain a new sampling rate like (a/b)*fs.
>
> A way to to do is to decimate and then use linear interpolation...
>
> Is there some other ways (documents) to do this ?
> If so, have you got some book or url ?
>
> Thanks

There have been many, many discussions on this topic in comp.dsp. Use a
Google search and look for multirate sampling.
A good reference book on this topic is
Multirate Systems and Filter Banks by P.P. Vaidyanathan.

Cheers
Bhaskar

Rick Lyons
01-15-2004, 12:17 PM
On Wed, 14 Jan 2004 09:49:54 -0800, "Bhaskar Thiagarajan"
<[email protected]> wrote:

(snipped)

Hi Bhaskar,
yep, below are a few other references for seb.
I'll bet he doesn't realize that a person could
make a career out of learning all the theory of
sample rate changing. (At least I think so.)

By the way, I heard that fred harris (of windows
fame) is writing a book on multirate signal
processing.

[-Rick-]

[8] Fliege, N. Multirate Digital Signal Processing: Multirate Systems,
Filter Banks, Wavelets, John Wiley & Sons, 1995.

[9] Crochiere, R. and Rabiner, L. Multirate Digital Signal Processing,
Prentice-Hall, Upper Saddle River, New Jersey, 1983.

[11] Crochiere, R. and Rabiner, L. "Decimation and Interpolation of
Digital Signals-A Tutorial Review," Proceedings of the IEEE, Vol. 69,
No. 3, March 1981.

[12] Crochiere, R. and Rabiner, L. "Further Considerations in the
Design of Decimators and Interpolators," IEEE Trans. on Acoust.
Speech, and Signal Proc., Vol. ASSP-24, No. 4, August 1976.

[13] Bellanger, M. et al, "Interpolation, Extrapolation, and
Reduction of Computational Speed in Digital Filters," IEEE Trans. on
Acoust. Speech, and Signal Proc., Vol. ASSP-22, No. 4, August 1974.

Bhaskar Thiagarajan
01-15-2004, 05:17 PM
"Rick Lyons" <[email protected]> wrote in message
news:[email protected]...
> On Wed, 14 Jan 2004 09:49:54 -0800, "Bhaskar Thiagarajan"
> <[email protected]> wrote:
>
> >
> (snipped)
> >>
> >> Is there some other ways (documents) to do this ?
> >> If so, have you got some book or url ?
> >>
> >> Thanks
> >
> >There have been many many discussion on this topic in comp.dsp. Use
google
> >search and look for multirate sampling.
> >A good book for reference on this topic is
> >Multirate Systems and Filterbanks by P.P. Vaidyanathan.
> >
> >Cheers
> >Bhaskar
>
> Hi Bhasker,
> yep, below are a few other references for seb.
> I'll bet he doesn't realize that a person could
> make a career out of learning all the theory of
> sample rate changing. (At least I thinkk so.)
>
> By the way, I heard that fred harris (of windows
> fame) is writing a book on multirate signal
> processing.

That's great. I can finally have some text to go with the multiple block
diagrams and pictures I have from a class he taught - it's really hard to
decipher some of his magic more than a few months after the class.

Cheers
Bhaskar

seb
01-15-2004, 06:31 PM
[email protected] (Rick Lyons) wrote:
(snipped)


Hello,

First, thanks for all this information.
I do not intend to learn all the theory of sample rate changing.
I just have a project running on a DSP with several different
processing algorithms, each needing a different sampling rate. So,
starting from the incoming signal, I have to do downsampling and
upsampling in order to give each algorithm the right data.

Thanks

Bhaskar Thiagarajan
01-15-2004, 07:01 PM
"seb" <[email protected]> wrote in message
news:[email protected] m...
> [email protected] (Rick Lyons) wrote in message
news:<[email protected]>...
> > On Wed, 14 Jan 2004 09:49:54 -0800, "Bhaskar Thiagarajan"
> > <[email protected]> wrote:
> >
> > >
> > (snipped)
> > >>
> > >> Is there some other ways (documents) to do this ?
> > >> If so, have you got some book or url ?
> > >>
> > >> Thanks
> > >
> > >There have been many many discussion on this topic in comp.dsp. Use
google
> > >search and look for multirate sampling.
> > >A good book for reference on this topic is
> > >Multirate Systems and Filterbanks by P.P. Vaidyanathan.
> > >
> > >Cheers
> > >Bhaskar
> >
> > Hi Bhasker,
> > yep, below are a few other references for seb.
> > I'll bet he doesn't realize that a person could
> > make a career out of learning all the theory of
> > sample rate changing. (At least I thinkk so.)
> >
> > By the way, I heard that fred harris (of windows
> > fame) is writing a book on multirate signal
> > processing.
> >
> > [-Rick-]
> >
> > [8] Fliege, N. Multirate Digital Signal Processing: Multirate Systems,
> > Filter Banks, Wavelets, John Wiley & Sons, 1995.
> >
> > [9] Crochiere, R. and Rabiner, L. Multirate Digital Signal Processing,
> > Prentice-Hall, Upper Saddle River, New Jersey, 1983.
> >
> > [11] Crochiere, R. and Rabiner, L. "Decimation and Interpolation of
> > Digital Signals-A Tutorial Review," Proceedings of the IEEE, Vol. 69,
> > No. 3, March 1981.
> >
> > [12] Crochiere, R. and Rabiner, L. "Further Considerations in the
> > Design of Decimators and Interpolators," IEEE Trans. on Acoust.
> > Speech, and Signal Proc., Vol. ASSP?24, No. 4, August 1976.
> >
> > [13] Ballanger, M. et al, "Interpolation, Extrapolation, and
> > Reduction of Computational Speed in Digital Filters," IEEE Trans. on
> > Acoust. Speech, and Signal Proc., Vol. ASSP?22, No. 4, August 1974.
>
>
> Hello,
>
> First, thinks for all this information
> I am not intend to learn all the the theory of sample rate changing.
> I just have a project running on a DSP on each running differents
> processing algorithme, each need a differents sampling rate. So, using
> the incoming signal, i have to do downsampling and upsampling in order
> to give all algorithme the rigth data.

If you are looking for a shortcut, you are pretty much out of luck.
Without learning the fundamentals of sample rate changing, you will find it
pretty hard to do anything quick and dirty.
There are some good tutorials on this topic, though, that will help you
quickly learn the basics, and some sample code as well that should speed up
your learning.
Look here:
http://dspguru.com/info/faqs/mrfaq.htm



Ronald H. Nicholson Jr.
01-15-2004, 10:04 PM
The fastest method of interpolation is to just use the nearest
neighbor, but this usually introduces lots of sampling jitter noise.
Slightly better, but slower by one or two multiply-adds, is 2-point
linear interpolation.

The multirate literature seems to describe lots of variations on high
quality, but much slower, N-tap windowed-sinc FIR filters, with one
or two multiply-adds per tap, depending on whether one uses a large
multi-phase table, or interpolates inside a smaller table of coefficients.

Are there methods of interpolation in between these two in performance?
e.g. if one has enough performance overhead to do more than linear
interpolation, but less than enough for a high quality 11-tap FIR filter
with a large cache-busting multi-phase coefficient table, what other
methods should one try?

Would a 3 or 4 point parabolic or cubic interpolation work? Or would a
3 or 4 tap FIR filter with, say, a cubic approximation to the windowed
sinc be better? Or would using 4 or 5 taps and the nearest phase
neighbor inside a small multi-phase coefficient table be sufficient?

Other options?

Thanks.

--
Ron Nicholson rhn AT nicholson DOT com http://www.nicholson.com/rhn/
#include <canonical.disclaimer> // only my own opinions, etc.

Jon Harris
01-16-2004, 12:22 AM
Good questions! Yes, it is possible to do something in between linear and a
much higher-order FIR filter. I've experimented with cubic interpolation in
audio applications myself, and it sounded better than linear without too
much extra computational effort.

As to the question about a cubic vs. short FIR, it seems to me that you
should be able to design a better interpolating filter than the Lagrange*
filters by taking advantage of a priori knowledge about your signal. For
example, if you have or can create some "sampling headroom", e.g. the
signal's frequency content does not extend all the way to Nyquist, you can
bring in the cut-off point of your low pass filter and hence obtain more
stop-band rejection for the same order. Rolling your own FIR also lets you
trade-off high frequency response vs. stop-band rejection or pass-band
ripple vs. stop-band rejection.

The filter can be designed with either the windowed-sinc or an optimal
design method such as Remez, aka Parks-McClellan. In my own experience, I
found the windowed-sinc method to be adequate and simple to implement.

As far as how many "phases" you need in your table, if your interpolation
resolution is limited/quantized (e.g. at most 7 points between any 2 real
samples) then you can simply limit the phases to that number. If not, use
the biggest table you can afford and pick the nearest phase neighbor.
Linear interpolation between phases can help too, but at considerable extra
effort. (In my application, it worked well because I had to apply the same
interpolation to multiple channels. Hence I could do the coefficient
interpolation just once and re-use the coefs for all channels.)

BTW, you can also implement Lagrange interpolation with a multi-phase
look-up table. For cubic, that might make sense, e.g. if processing power
is tight and memory isn't.

Best wishes!
-Jon

* general name for linear, parabolic, cubic, etc.
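
As a minimal sketch (Python/numpy; the names are illustrative), here is the
4-point cubic Lagrange interpolator written as a data-independent FIR; the
poly-phase table mentioned above is just the first function evaluated on a
grid of t values:

import numpy as np

def lagrange_cubic_coeffs(t):
    # 4 FIR taps for a point a fraction t (0 <= t < 1) past x[n], using
    # x[n-1]..x[n+2]; only t sets the weights, never the data
    return np.array([
        -t * (t - 1) * (t - 2) / 6,
        (t + 1) * (t - 1) * (t - 2) / 2,
        -(t + 1) * t * (t - 2) / 2,
        (t + 1) * t * (t - 1) / 6,
    ])

def interp_cubic(x, n, t):
    # weighted sum of the 4 neighbors (valid for 1 <= n <= len(x) - 3)
    return np.dot(lagrange_cubic_coeffs(t), x[n - 1:n + 3])

At t = 0.5 the weights come out [-1/16, 9/16, 9/16, -1/16], already
recognizable as a crude windowed sinc.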

"Ronald H. Nicholson Jr." <[email protected]> wrote in message
news:[email protected]...
> The fastest method of interpolation is to just use the nearest
> neighbor, but this usually introduces lots of sampling jitter noise.
> Slightly better, but slower by one or two multiply-adds, is 2-point
> linear interpolation.
>
> The multirate literature seems to describe lots of variations on high
> quality, but much slower, N-tap windowed-sinc FIR filters, with one
> or two multiply-adds per tap, depending on whether one uses a large
> multi-phase table, or interpolates inside a smaller table of coefficients.
>
> Are there methods of interpolation in between these two in performance?
> e.g. if one has enough performance overhead to do more than linear
> interpolation, but less than enough for a high quality 11-tap FIR filter
> with a large cache-busting multi-phase coefficient table, what other
> methods should one try?
>
> Would a 3 or 4 point parabolic or cubic interpolation work? Or would a
> 3 or 4 tap FIR filter with, say, a cubic approximation to the windowed
> sinc be better? Or would using 4 or 5 taps and the nearest phase
> neighbor inside a small multi-phase coefficient table be sufficient?
>
> Other options?
>
> Thanks.
>
> --
> Ron Nicholson rhn AT nicholson DOT com http://www.nicholson.com/rhn/
> #include <canonical.disclaimer> // only my own opinions, etc.

Fred Marshall
01-16-2004, 01:03 AM
"Ronald H. Nicholson Jr." <[email protected]> wrote in message
news:[email protected]...
> The fastest method of interpolation is to just use the nearest
> neighbor, but this usually introduces lots of sampling jitter noise.
> Slightly better, but slower by one or two multiply-adds, is 2-point
> linear interpolation.
>
> The multirate literature seems to describe lots of variations on high
> quality, but much slower, N-tap windowed-sinc FIR filters, with one
> or two multiply-adds per tap, depending on whether one uses a large
> multi-phase table, or interpolates inside a smaller table of coefficients.
>
> Are there methods of interpolation in between these two in performance?
> e.g. if one has enough performance overhead to do more than linear
> interpolation, but less than enough for a high quality 11-tap FIR filter
> with a large cache-busting multi-phase coefficient table, what other
> methods should one try?
>
> Would a 3 or 4 point parabolic or cubic interpolation work? Or would a
> 3 or 4 tap FIR filter with, say, a cubic approximation to the windowed
> sinc be better? Or would using 4 or 5 taps and the nearest phase
> neighbor inside a small multi-phase coefficient table be sufficient?
>
> Other options?

Ron,

The first thing you need to decide is if you're wanting to increase the
sample rate by a rational factor or if you want to do arbitrary-point
interpolation. If you're talking about FIR filters, etc. then it seems like
you're talking about regularly sampled data?

Also, are you wanting to generate a fully interpolated sequence which may
even go so far as to be almost "continuous" or, more simply, to generate an
occasional interim data point from a set of samples?

[1/2 1 1/2] is a typical filter to interpolate between samples and is the
same as straight line averaging at a midpoint. The filter sample rate is 2x
the input series. It reproduces the input samples exactly. If you like to
think of polyphase implementation, then it's a [1/2 1/2] filter on the data
every other output sample and it's a [1] filter on the data every other
output sample. But, polyphase is simply a way of looking at things and a
way suggesting how to handle the data and multiplies, etc.

The mid-point between interpolating by a factor of 2 or 3 or 4 .... and
arbitrary-point interpolation is to conceptually insert *lots* of zeros
between the input samples so as to increase the output sample rate by a
bunch. In order to generate individual output samples then requires a set
of weights that appear to come from a long FIR filter but which only have
to be selected as in a polyphase output. For example, the [1/2 1 1/2]
filter for midpoint straight line interpolation can be generalized to
something like: [0 .1 .2 .3 .4 .5 .6 .7 .8 .9 1.0 .9 .8 .7 .6 .5 .4 .3 .2
.1 0] and used for a 10x interpolation ratio by applying [0 1 0] or [1] on
top of a data point, [.9 .1] at 1/10th of a sample interval, etc. Again,
it's just straight line interpolation with a different discrete
interpolation factor. The alternative for arbitrary points is to just use
a straight line formula, but that requires that you compute the
coefficients for each arbitrary point: like [0.777 0.223].
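
In numpy terms (a sketch; the factor of 10 is just the example above), the
triangle filter and its polyphase view look like:

import numpy as np

L = 10                                    # interpolation factor
h = 1.0 - np.abs(np.arange(-L, L + 1)) / L   # [0 .1 ... .9 1.0 .9 ... .1 0]
# phase p holds the taps actually applied when the output point lands
# p/L of the way between two input samples
phases = [h[p::L] for p in range(L)]
# phases[0] -> [0., 1., 0.]  (the [1] filter on top of a data point)
# phases[1] -> [0.1, 0.9]    (straight-line weights 1/10th into the interval)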

If you use something like a truncated sinc sort of interpolating filter then
the results will be much better than straight line interpolation and the
filter coefficients will be fixed at least. Otherwise, methods of
polynomial interpolation require that the function's coefficients be
generated from the data samples first. So, you'd have to weigh the compute
load to do that sort of thing. While the "filter" might be shorter,
figuring out the coefficients for each output point might be prohibitive.
And, then you have what amounts to a time-varying filter - which will likely
introduce new frequencies.

I don't believe that a "goodness" measure for interpolation has been dealt
with all that much - but maybe so. Somewhere I have a paper that shows
signal to noise ratio as a function of frequency for different methods.
Here's a couple of thoughts:

1) Does the interpolation method reproduce the original sample values?
Many do but some don't. I should think that keeping them unchanged would be
a good thing.

2) Does the interpolation method result in introducing new frequencies?
That would amount to a type of harmonic distortion and is generally
undesirable in an engineering context. It seems a good measure. This is
the signal to noise measure mentioned above.

3) Perhaps related to (2), does the interpolation method result in
introducing content that is temporally far removed if a single unit sample
is interpolated? This is equivalent to measuring the unit sample / impulse
response of the interpolator.

So, one needs to apply some kind of measures like this if "sufficiency" is
to be assessed.

Fred

Jim Gort
01-16-2004, 03:02 AM
Jerry:

I'm not sure how to read your response, but I'm sure you didn't mean to
imply that filtration is not needed. After upsampling, you have created many
replicas of the spectrum of interest, and these must be LPF/BPF'd to isolate
one replica prior to decimation, otherwise they will all be folded on top of
each other. However, reading your response verbatim, "by upsampling first,
frequencies that the final sample rate will support don't have to be
removed", I guess I would agree--you need to filter out everything but the
band of interest prior to decimation.... Which further emphasizes my point
that filtration must be applied when resampling.

Perhaps you read my post as my assuming decimation would be applied first? I
only said that filtration was needed prior to decimation. Interpolation
filters could be applied, but their usefulness is very application
dependent, and I don't know the OP's application.

Jim

"Jerry Avins" <[email protected]> wrote in message
news:[email protected]...
> Jim Gort wrote:
>
> > seb:
> >
> > I don't know of any "new" ways to do it, but make sure that if you do it
the
> > old way, and a<b, you LPF (or BPF is your frequency region of interest
is
> > other than baseband) the original data prior to decimation so that
folded
> > content is not present in downsampled data.
> >
> > Jim
> >
> > "seb" <[email protected]> wrote in message
> > news:[email protected] om...
> >
> >>Hello,
> >>
> >>i am looking for decimation and interpolation technique in order to,
> >>given a sampling rate fs, obtain a new sampling rate like (a/b)*fs.
> >>
> >>A way to to do is to decimate and then use linear interpolation...
> >>
> >>Is there some other ways (documents) to do this ?
> >>If so, have you got some book or url ?
> >>
> >>Thanks
>
> By upsampling first, frequencies that the final sample rate will support
> don't have to be removed.
>
> Jerry
> --
> Engineering is the art of making what you want from things you can get.
> ŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻ ŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻ
>

Jerry Avins
01-16-2004, 03:08 AM
Jim Gort wrote:

(snipped)

Consider upsampling by 3:2. That requires decimating by two and
interpolating by three. Half the bandwidth has to be discarded if the
decimation comes first. By tripling first, all the original information
can be retained. Of course filtering is needed, but the frequency cutoff
needn't reduce the original bandwidth. When changing to a lower sample
rate, the bandwidth needs to be reduced to what that rate will support, but
not lower.
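
A quick numerical check of this (Python/scipy; the 400 Hz tone and the
rates are arbitrary):

import numpy as np
from scipy import signal

fs = 1000.0
t = np.arange(4096) / fs
x = np.sin(2 * np.pi * 400.0 * t)        # content the 1500 Hz result can carry

# triple first, then halve: 400 Hz survives the trip to 1500 Hz
up_first = signal.resample_poly(signal.resample_poly(x, 3, 1), 1, 2)
# halve first: the anti-alias filter cuts at the interim 250 Hz Nyquist
down_first = signal.resample_poly(signal.resample_poly(x, 1, 2), 3, 1)
print(np.abs(up_first[1000:-1000]).max(),
      np.abs(down_first[1000:-1000]).max())   # ~1.0 vs ~0.0 (edges trimmed)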

Jerry
--
Engineering is the art of making what you want from things you can get.

Jim Gort
01-16-2004, 03:45 AM
Jerry:

Now I see your misunderstanding--I said be careful about decimation
filtration if a<b. Your example of 3:2 is a=3 and b=2.

Jim

"Jerry Avins" <[email protected]> wrote in message
news:[email protected]...
> Jim Gort wrote:
>
> > Jerry:
> >
> > I'm not sure how to read your response, but I'm sure you didn't mean to
> > imply that filtration is not needed. After upsampling, you have created
many
> > replicas of the spectrum of interest, and this must be LPF/BPF to
isolate
> > one replica prior to decimation, otherwise they will all be folded on
top of
> > each other. However, reading your response verbatim, "by upsampling
first,
> > frequencies that the final sample rate will support don't have to be
> > removed", I guess I would agree--you need to filter out everything but
the
> > band of interest prior to decimation.... Which further emphasizes my
point
> > that filtration must be applied when resampling.
> >
> > Perhaps you read my post as my assuming decimation would be applied
first? I
> > only said that filtration was needed prior to decimation. Interpolation
> > filters could be applied, but their usefulness is very application
> > dependent, and I don't know the OP's application.
> >
> > Jim
>
> Consider upsampling by 3:2. That requires decimating by two and
> interpolating by three. Half the bandwidth has to be discarded if the
> decimation comes first. By trippling first, all the original ingormation
> can be retained. If course filtering is needed, but the frequency cutoff
> needn't reduce the original bandwidth. When changing to a lower sample
> rate, the bandwidth needs to be reduced to what that will support, but
> not lower.
>
> Jerry
> --
> Engineering is the art of making what you want from things you can get.
> ŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻ ŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻŻ
>

Jerry Avins
01-16-2004, 03:17 PM
Jim Gort wrote:

> Jerry:
>
> Now I see your misunderstanding--I said be careful about decimation
> filtration if a<b. Your example of 3:2 is a=3 and b=2.
>
> Jim

...

I thought the OP needed to bring two data streams to a common rate so
they could be processed together. The way to do that without losing
bandwidth matches the lower to the higher.

Jerry
--
Engineering is the art of making what you want from things you can get.

Jon Harris
01-16-2004, 07:49 PM
Fred (and others),

There are a few things I didn't quite agree with in your post. I'll comment
on some specifics below. I think you are already aware of this, but just
for the record, let me state the following "big idea":

Interpolating with polynomials and poly-phase FIR filters are not separate,
disjoint methods. Both calculate new samples with weighted sums of
existing samples. Both can be treated by digital filtering theory. Both
are linear operations. Granted, they are typically implemented differently,
and for most people they seem conceptually different, but they really
are accomplishing the same thing and can (and should) be analyzed using the
same methods.

For example, consider the Lagrange polynomial interpolators (linear,
parabolic, cubic, etc.). You can easily implement these using a poly-phase
coefficient table and FIR routine. Often, one computes the coefficients "on
the fly" because they are fairly simple (especially for linear), but this
need not be the case. Take linear interpolation, for example: the
coefficients look like an upside-down V. As you move to higher order Lagrange
interpolation, the shapes start to resemble a windowed sinc function!

See more comments in-line.

"Fred Marshall" <fmarshallx@remove_the_x.acm.org> wrote in message
news:[email protected]...
>
>
> The first thing you need to decide is if you're wanting to increase the
> sample rate by a rational factor or if you want to do arbitrary-point
> interpolation. If you're talking about FIR filters, etc. then it seems
like
> you're talking about regularly sampled data?

I'm assuming regularly sampled data as well. With DSPs, is there really
anything other than rational interpolation? I mean, if you have a certain
number of input samples and generate a certain number of output samples, you
have a ratio. It may be really nasty like 1343873/1343895, but it's still
rational. Maybe there are some real-time operations that use irrational
ratios, though.

> Also, are you wanting to generate a fully interpolated sequence which may
> even go so far as to be almost "continuous" or, more simply, to generate an
> occasional interim data point from a set of samples?

Good question. This could affect the number of required "phases" in your
poly-phase filter. And if this number was larger than was practical, it may
dictate calculating coefficients on-the-fly, which could in turn dictate the
interpolation method. However, it is still possible to do nearly continuous
interpolation with poly-phase FIRs. You can store as many coefficient
phases as practical in a table and compute the rest through interpolation
(usually linear is good enough). Analog Devices uses this "double
interpolation" method (interpolate to get the coefficients, then use them to
interpolate the data) in their audio sample rate converter products.

> [1/2 1 1/2] is a typical filter to interpolate between samples and is the
> same as straight line averaging at a midpoint. The filter sample rate is 2x
> the input series. It reproduces the input samples exactly. If you like to
> think of polyphase implementation, then it's a [1/2 1/2] filter on the data
> every other output sample and it's a [1] filter on the data every other
> output sample. But, polyphase is simply a way of looking at things and a
> way suggesting how to handle the data and multiplies, etc.

Right. Poly-phase is an implementation method that can be applied to either
FIR or Lagrange interpolation.

> The mid-point between interpolating by a factor of 2 or 3 or 4 .... and
> arbitrary-point interpolation is to conceptually insert *lots* of zeros
> between the input samples so as to increase the output sample rate by a
> bunch. In order to generate individual output samples then requires a set
> of weights that appear to come from a long FIR filter but which only have
> to be selected as in a polyphase output. For example, the [1/2 1 1/2]
> filter for midpoint straight line interpolation can be generalized to
> something like: [0 .1 .2 .3 .4 .5 .6 .7 .8 .9 1.0 .9 .8 .7 .6 .5 .4 .3
> .2 .1 0] and used for a 10x interpolation ratio by applying [0 1 0] or [1]
> on top of a data point, [.9 .1] at 1/10th of a sample interval, etc.
> Again, it's just straight line interpolation with a different discrete
> interpolation factor. The alternative for arbitrary points is to just use
> a straight line formula, but that requires that you compute the
> coefficients for each arbitrary point: like [0.777 0.223].

Right on man!

> If you use something like a truncated sinc sort of interpolating filter then
> the results will be much better than straight line interpolation and the
> filter coefficients will be fixed at least. Otherwise, methods of
> polynomial interpolation require that the function's coefficients be
> generated from the data samples first. So, you'd have to weigh the compute
> load to do that sort of thing. While the "filter" might be shorter,
> figuring out the coefficients for each output point might be prohibitive.
> And, then you have what amounts to a time-varying filter - which will likely
> introduce new frequencies.

A few points of disagreement here. As mentioned above, you can pre-compute
the filter for e.g. cubic interpolation and put it in a table if you want,
in the same manner as you described for linear interpolation. Then there is
no additional computational load at run time. Also, this doesn't end up
generating a time-varying filter (except in the sense that every poly-phase
filter is a time-varying filter). No new frequencies are generated except
those that result from the aliasing of signals not perfectly suppressed by
the interpolating filter.

> I don't believe that a "goodness" measure for interpolation has been dealt
> with all that much - but maybe so. Somewhere I have a paper that shows
> signal to noise ratio as a function of frequency for different methods.
> Here's a couple of thoughts:

The best "goodness" measure I've seen is the frequency response of the
interpolation filter. If you treat the Lagrange polynomials as filters as
I've been advocating, you can find their frequency responses and evaluate
their pass-band ripple, stop-band attenuation, side lobes, etc. just as you
can with FIR filters.
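
Concretely (a sketch, assuming numpy/scipy), treat the linear interpolator's
triangular kernel as a filter and read off its response:

import numpy as np
from scipy import signal

R = 8                                     # phases per input sample
tri = 1.0 - np.abs(np.arange(-R, R + 1)) / R  # linear-interp (triangle) kernel
w, H = signal.freqz(tri / R, worN=4096)   # the 1/R scale gives unity DC gain
# |H| traces sinc^2; its stop-band lobes are the image attenuation, and any
# candidate interpolator's kernel can be scored the same way.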

> 1) Does the interpolation method reproduce the original sample values?
> Many do but some don't. I should think that keeping them unchanged would be
> a good thing.

Usually keeping the original samples is only relevant when interpolating by
small rational amounts. Keep in mind that if you do want to keep the
original samples, that significantly limits your choice of interpolation
filters, which may prevent you from optimizing some other figure of merit
such as frequency response. In the audio world, the effort is almost always
made to optimize the frequency response rather than keep the original
samples.

> 2) Does the interpolation method result in introducing new frequencies?
> That would amount to a type of harmonic distortion and is generally
> undesirable in an engineering context. It seems a good measure. This is
> the signal to noise measure mentioned above.

Considering the frequency domain again, the filtering operation in sample
rate conversion needs to suppress the higher-frequency images of the
original signal. Then, when you change the sample rate, anything not
perfectly suppressed aliases to a frequency within the new Nyquist range.
This generates new frequencies. Hence, the frequency response of the
interpolation filter gives you all the information about the amount of
aliasing (new frequencies generated).

The sample rate conversion ratio tells you _where_ the new frequencies land.
The filter's stop-band rejection tells you _how much_ of the new frequencies
there will be. The SNR of the whole process depends on the input signal and
how well or poorly it aligns with the frequency response of the
interpolation filter.

> 3) Perhaps related to (2), does the interpolation method result in
> introducing content that is temporally far removed if a single unit sample
> is interpolated? This is equivalent to measuring the unit sample / impulse
> response of the interpolator.

I'm not sure I follow this. I guess you are talking about the "length" of
the filter's impulse response? Actually, the theoretically ideal
interpolating filter has an infinite length impulse response. But
controlling this length is sometimes important, usually because you may need
to minimize the group delay of the filter for a particular application.

> So, one needs to apply some kind of measures like this if "sufficiency" is
> to be assessed.
>
> Fred

Agreed.

-Jon

Fred Marshall
01-17-2004, 12:03 AM
"Jon Harris" <[email protected]> wrote in message
news:[email protected]...
> Fred (and others),
>

Jon,

Great post! I guess I haven't done enough of it to figure this one thing
out:

If you're doing Lagrange interpolation, then aren't the coefficients
dependent on the data? It sure seems so from the expressions I've been
reading. If not, then there must be all sorts of tables (FIR filters)
already generated, no? I've not seen them.

Ditto for polynomial interpolation.... So, I must be missing something.
Can you illuminate please?

Polyphase is just an implementation detail for a known filter - so I choose
to leave that out as much as possible.

Fred

Jon Harris
01-17-2004, 12:39 AM
"Fred Marshall" <fmarshallx@remove_the_x.acm.org> wrote in message
news:[email protected]...
>
> "Jon Harris" <[email protected]> wrote in message
> news:[email protected]...
> > Fred (and others),
>
> Jon,
>
> Great post! I guess I haven't done enough of it to figure this one thing
> out:
>
> If you're doing Lagrange interpolation then aren't the coefficients
> dependent on the data? it sure seems so from the expressions I've been
> reading. If not, then there must be all sorts of tables (FIR filters)
> already generated, no? I've not seen them.

Of course the output is dependent on the data, but the coefficients aren't.
I know it seems that way because of the way the formulas are written.
(Another thing that may confuse this is distinguishing between _polynomial_
coefficients, which would be dependent on the input data, and _filter_
coefficients, which would not be.)

Take your simple linear interpolator, which is the simplest of the Lagrange
family. Its filter coefficients vary linearly from 0 to 1 depending on
where you are between the samples, independent of the input data.
Your polynomial coefficients in the form y = ax + b would be dependent on
the input data, of course. The same holds true for higher order polynomials
as well.

The tables that you are looking for are just the impulse responses of the
interpolators. Put in a unit impulse and calculate the outputs at whatever
fractional-precision you want! Assuming the interpolation meets the
criteria of being time-invariant and linear, the superposition principle
tells you that you can calculate the output for any input sequence based on
the impulse response (convolve input with impulse response = FIR!).
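
That recipe in Python (a sketch; numpy assumed, and linear interpolation
used only because its table is easy to eyeball):

import numpy as np

def lerp(x, n, t):
    # plain straight-line interpolation between x[n] and x[n+1]
    return (1 - t) * x[n] + t * x[n + 1]

x = np.zeros(5)
x[2] = 1.0                                # unit impulse in
# read the output at each fractional phase: the rows are the FIR table
table = np.array([[lerp(x, n, p / 4) for n in (1, 2)] for p in range(4)])
# rows: [0, 1], [.25, .75], [.5, .5], [.75, .25] -- the phases of the
# triangular ("upside-down V") kernel, with no input data in sight.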

> Ditto for polynomial interpolation.... So, I must be missing something.
> Can you illuminate please?

To be honest, I've only studied the Lagrange. There may well indeed be some
other interpolation scheme that doesn't lend itself to an FIR-style
implementation. But it would seem that any *time-invariant linear*
interpolation could be pre-computed, "table-ized", and implemented with a
poly-phase FIR. The reasons it is not commonly implemented this way are
probably practical ones rather than due to any limitation in the theory.

> Polyphase is just an implementation detail for a known filter - so I choose
> to leave that out as much as possible.

Right.

> Fred

Jon

Jon Harris
01-17-2004, 12:49 AM
By the way, it just occurred to me that I've been making the (unstated)
assumption that both the input and output sample rates are constant/uniform.
Some of what I've written may not apply to the non-uniform data that you may
encounter in some interpolation problems. (I tend to think in terms of
audio most of the time since that's what I've worked on.)

"Jon Harris" <[email protected]> wrote in message
news:[email protected]...
> "Fred Marshall" <fmarshallx@remove_the_x.acm.org> wrote in message
> news:[email protected]...
> >
> > "Jon Harris" <[email protected]> wrote in message
> > news:[email protected]...
> > > Fred (and others),
> >
> > Jon,
> >
> > Great post! I guess I haven't done enough of it to figure this one
thing
> > out:
> >
> > If you're doing Lagrange interpolation then aren't the coefficients
> > dependent on the data? it sure seems so from the expressions I've been
> > reading. If not, then there must be all sorts of tables (FIR filters)
> > already generated, no? I've not seen them.
>
> Of course the output is dependent on the data, but the coefficients
aren't.
> I know it seems that way because of the way the formulas are written.
> (Another thing that may confuse this is distinguishing between
_polynomial_
> coefficients, which would be dependent on the input data and _filter_
> coeffieicents which would not be.)
>
> Take your simple linear interpolator, which is the simplest of the
Lagrange
> family. It's filter coefficeints vary linearly from 0 to 1 depending on
> where you are in the between the samples, indpendent of the input data.
> Your polynomial coffieneints in the form y = ax + b would be dependent on
> the input data of course. The same holds true for higher order
polynomials
> as well.
>
> The tables that you are looking for are just the impulse responses of the
> interpolators. Put in a unit impulse and calculate the outputs at
whatever
> fractional-precision you want! Assuming the interpolation meets the
> criteria of being time-invariant and linear, the superposition principle
> tells you that you can calculate the output for any input sequence based
on
> the impulse response (convolve input with impulse response = FIR!).
>
> > Ditto for polynomial interpolation.... So, I must be missing something.
> > Can you illuminate please?
>
> To be honest, I've only studied the Lagrange. There may well indeed be
some
> other interpolation scheme that doesn't lend itself to an FIR-style
> implementation. But it would seem that any *time-invariant linear*
> interpolation could be pre-computed, "table-ized", and implemented with a
> poly-phase FIR. The reasons it is not commonly implemented this way are
> probably practical ones rather than due to any limitation in the theory.
>
> > Polyphase is just an implementation detail for a known filter - so I
> choose
> > to leave that out as much as possible.
>
> Right.
>
> > Fred
>
> Jon
>
>

Clay S. Turner
01-17-2004, 04:39 AM
"Jon Harris" <[email protected]> wrote in message
news:[email protected]...
>
> For example, consider the Lagrange polynomial interpolators (linear,
> parabolic, cubic, etc.). You can easily implement these using a
poly-phase
> coefficient table and FIR routine. Often, one computes the coefficients
"on
> the fly" because they are fairly simple (especially for linear), but this
> need not be the case. Take linear interpolation, for example, the
> coefficients look like an upside V. As you move to higher order Lagrange
> interpolation, the shapes start to resemble a windowed sinc function!
>

Hello Jon and others,
This is what Whittaker noticed. As the number of evenly spaced points in
Lagrangian interpolation increases towards infinity, the interpolation
kernel becomes the sinc function, and this function is bandlimited! This was
the 1st discovery needed in WKS (Whittaker-Kotelnikov-Shannon) interpolation
theory.

See
http://imagescience.bigr.nl/meijering/publications/download/pieee2002.pdf
for a great chronology of interpolation.

Clay

Fred Marshall
01-17-2004, 05:16 PM
"Jon Harris" <[email protected]> wrote in message
news:[email protected]...
> By the way, it jus occurred to me that I've been making the (unstated)
> assumption that both the input and output sample rates are
constant/uniform.
> Some of what I've written may not apply to the non-uniform data that you
may
> encounter in some interpolation problems. (I tend to think in terms of
> audio most of the time since that's what I've worked on.)

Jon,

That's OK for this discussion.

Yes, I understand all you said about FIRs and time invariance. I just
haven't figured out how to get there from polynomial interpolation yet.

So, let's see: y=ax+b is the equation for a straight line. So, how are we
going to twist that into a set of coefficients for linear interpolation? (I
know what the result should be).
Ah!
1) Choose the number of interim sample points to be generated
2) Compute the sample values at those points and find that data points are
weighted by constants for each point - with different coefficients for each
interim sample point -> which is where the polyphase idea could come from.
3) If we align the sets of coefficients up so their pairs overlap the
original samples, then we get a FIR filter definition - and, of course, it's
the familiar triangle.
So the weights aren't data dependent. OK. I just hadn't done the
derivation before, so I hadn't made a hard connection - as if the two things
were in different universes! It's funny how AHA!s creep in now and then,
isn't it?

Presumably this is also applicable for higher orders of interpolation which
really only means that there are more sample data points being used doesn't
it?

I guess this means that the Parks-McClellan program or any other based on
the Remez exchange algorithm could be run using a *huge* lookup table for
the barycentric Lagrange interpolation step? In concept that is. Working
backwards, does it mean that a partial table might be generated for the
final points of the extrema - because they are usually nearly equispaced and
there are guarantees about extrema at the end points in many situations. Of
course I can't imagine that it would ever be practical because the number of
combinations of point sets is just too huge. But, it's an interesting
thought.

Thanks

Fred

Fred Marshall
01-17-2004, 05:24 PM
"Fred Marshall" <fmarshallx@remove_the_x.acm.org> wrote in message
news:[email protected]...
>
> Otherwise, methods of
> polynomial interpolation require that the function's coefficients be
> generated from the data samples first. So, you'd have to weigh the compute
> load to do that sort of thing. While the "filter" might be shorter,
> figuring out the coefficients for each output point might be prohibitive.
> And, then you have what amounts to a time-varying filter - which will likely
> introduce new frequencies.

Thanks to Jon Harris for correcting both of these assertions. It seems that
such interpolation methods aren't data dependent and, therefore, not time
varying. So, the compute load seems to be independent of the type of
interpolation method chosen and only dependent on the length of the
interpolating filter.

Fred

Fred Marshall
01-17-2004, 05:36 PM
"Fred Marshall" <fmarshallx@remove_the_x.acm.org> wrote in message
news:[email protected]...
>

Thanks to Jon Harris for correcting me. It seems that such interpolation
methods aren't data dependent and, therefore, not time varying. So, the
compute load seems to be independent of the type of interpolation method
chosen and only dependent on the length of the interpolating filter.

This makes one want to consider using SPT (sums of powers of two)
coefficients. That's something I've done in FPGAs in the past. No
multiplies. I know that multiplies or MACs are now so fast on some DSPs
(single cycle) that there's no multiply penalty. How about recent FPGAs?

So, since I'm onto something new (for me) I have a couple more questions:

1) What is the relationship between the order of the interpolation - and by
this I mean the number of incoming data points that are involved in the
calculation:

- and the interpolation method? i.e. polynomial, Lagrange, etc. Do
different coefficients result? I would think so.

- and the basis set for the interpolation? cosines, sincs, powers of the
ordinate (polynomial), etc.

I guess these questions overlap.

Fred

robert bristow-johnson
01-17-2004, 10:00 PM
In article [email protected], Fred Marshall at
fmarshallx@remove_the_x.acm.org wrote on 01/17/2004 12:36:
....
> So, since I'm onto something new (for me) I have a couple more questions:
>
> 1) What is the relationship between the order of the interpolation - and by
> this I mean the number of incoming data points that are involved in the
> calculation:
>
> - and the interpolation method? ie. polynomial, Lagrange, etc. Do
> different coefficients result? I would think so.

and you would be correct. (BTW, "polynomial" can mean more than just
Lagrange, e.g. Hermite, B-spline.)

> - and the basis set for the interpolation? cosines, sincs, powers of the
> ordinate (polynomial), etc.

hmmm. i dunno exactly what you mean. a sinc is a sinc, a fourier series
has cosines (and sines) as a basis, polynomials are power series.

> I guess these questions overlap.

i'm joining in on this late, but i thought i might expand a little on Jon
Harris's responses.

In article [email protected], Jon Harris at
[email protected] wrote on 01/15/2004 19:22:

> As to the question about a cubic vs. short FIR, it seems to me that you
> should be able to design a better interpolating filter than the Lagrange*
> filters by taking advantage of a priori knowledge about your signal. For
> example, if you have or can create some "sampling headroom", e.g. the
> signal's frequency content does not extend all the way to Nyquist, you can
> bring in the cut-off point of your low pass filter and hence obtain more
> stop-band rejection for the same order.

moreover, in this case, there are better polynomials than Lagrange, such as
B-spline which puts some killer notches at the multiples of Fs, so if your
signal is mostly low-pass, the images get killed even better.

> Rolling your own FIR also lets you
> trade-off high frequency response vs. stop-band rejection or pass-band
> ripple vs. stop-band rejection.
>
> The filter can be designed with either the windowed-sinc or an optimal
> design method such as Remez, aka Parks-McClellan. In my own experience, I
> found the windowed-sinc method to be adequate and simple to implement.

a windowed-sinc has the advantage that when the interpolated sample instance
lands exactly on an original sample instance, the interpolated sample is
precisely the original. but, using MATLAB, you can usually see that filters
designed with Parks-McC (remez) or Least Squares (firls) turn out better
than windowed sincs for the same length.
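
a quick way to see this in scipy rather than MATLAB (a sketch; the length,
cutoff, and band edges here are arbitrary):

import numpy as np
from scipy import signal

N = 63                                    # same length for both designs
h_win = signal.firwin(N, 0.25, window=("kaiser", 8.0))       # windowed sinc
h_pm = signal.remez(N, [0, 0.1, 0.15, 0.5], [1, 0], fs=1.0)  # equiripple
for h in (h_win, h_pm):
    w, H = signal.freqz(h, worN=8192)
    stop = w / np.pi > 0.3                # past the 0.15 cyc/sample edge
    print(20 * np.log10(np.abs(H[stop]).max()))   # worst stop-band level, dB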

> As far as how many "phases" you need in your table, if your interpolation
> resolution is limited/quantized (e.g. at most 7 points between any 2 real
> samples) then you can simply limit the phases to that number. If not, use
> the biggest table you can afford and pick the nearest phase neighbor.
> Linear interpolation between phases can help too, but at considerable extra
> effort.

well, it doubles the FIR computation (you need to compute it for two
different phases) and then you have to do the linear interpolation. it's
extra effort that i would normally recommend highly.

> BTW, you can also implement Lagrange interpolation with a multi-phase
> look-up table. For cubic, that might make sense, e.g. if processing power
> is tight and memory isn't.

but, IMO, the purpose of any polynomial interpolation (of which linear,
Lagrange, Hermite, B-spline are examples) is so you can implement *any*
phase without having to have a table with infinite entries to look up.

there is an AES paper that Duane Wise and i did a while back and Olli N
picked up on that attempts to define and generalize how to compare different
polynomial interpolations (of a given order) to each other and also to the
traditional P-McC or LS FIR methods. i can send anyone a PDF copy of it, if
you want. you can find Olli's paper at:

http://www.biochem.oulu.fi/~oniemita/dsp/deip.pdf

the idea is that for all interpolation methods, you can model them as the
sampled data (a string of weighted impulses, with full sized images at every
multiple of Fs) being convolved with a LPF impulse response that can be
determined from the method of interpolation. e.g. linear interpolation is
modeled by passing your sampled data through a hypothetical filter with a
triangular pulse as the impulse response (resulting in a sinc^2 as the
frequency response and a sinc^4 as the power spectrum). then, given some
simple assumptions of the nature of the input signal (like Jon mentions)
such as it has a mostly empty spectrum in the top octave (that would be
oversampled by a factor of two), you can compute how big the images are
after the filtering and, in the worst case, assume that all of those images
will get folded back into the baseband when the hypothetical continuous-time
output is resampled at the new sampling instances. that's what Duane and i
did in our paper.

anyway, what i figgered out was that, to get a 120 dB S/N using linear
interpolation between the polyphases, you need to upsample by a factor of
about 512, and if using *no* interpolation (sometimes called "drop-sample
interpolation"), to get the same S/N, you need to upsample by a factor of
512K, requiring a huge lookup table.
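
a back-of-envelope check of the 512 figure, assuming the sinc^2 model above
with worst-case baseband content right at Fs/2, so the first image sits at
(R - 1/2)*Fs:

import numpy as np

R = 512                                   # upsample factor before linear interp
f = R - 0.5                               # first image location, in units of Fs
atten = np.sinc(f / R) ** 2               # triangle kernel's sinc^2 response
print(20 * np.log10(atten))               # about -120 dB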

another look at the resampling (or interpolation) issue can be found in an
earlier post of mine:

http://groups.google.com/groups?selm=B8D78660.3B54%25robert%40wavemechanics.com

r b-j

Jon Harris
01-18-2004, 12:33 AM
Clay S. Turner <[email protected]> wrote in message
(snipped)
> This is what Whittaker noticed. As the number of evenly spaced points in
> Lagrangian interpolation increases towards infinity, the interpolation
> kernel becomes the sinc function, and this function is bandlimited!

Wow, to think I discovered the same thing, just 90 years too late to be
famous! :-)

Jon Harris
01-18-2004, 12:41 AM
Fred Marshall <fmarshallx@remove_the_x.acm.org> wrote in message
(snipped)
> So the weights aren't data dependent. OK. I just hadn't done the
> derivation before, so I hadn't made a hard connection - as if the two things
> were in different universes! It's funny how AHA!s creep in now and then,
> isn't it?

I had that same AHA! a few years back when I was studying it. The 2 methods
do seem very different at first because of the way they are derived and
presented. Glad you got it now too!
You've spelled out in more detail what I was trying to communicate when I
wrote about finding the impulse response of an interpolator: "Put in a unit
impulse and calculate the outputs at whatever fractional-precision you
want".

> Presumably this is also applicable for higher orders of interpolation which
> really only means that there are more sample data points being used doesn't
> it?

Works for any order and any type of interpolation scheme that is linear
time-invariant. It's just a bit harder to do the math for the more complex
ones (but that's what computers are for, right? :-)

-Jon

Jon Harris
01-18-2004, 01:11 AM
robert bristow-johnson <[email protected]> wrote in message
news:BC2F1ABA.7CB2%[email protected]...
> In article [email protected], Fred Marshall at
> fmarshallx@remove_the_x.acm.org wrote on 01/17/2004 12:36:
> ...
> i'm joining in on this late, but i thought i might expand a little on Jon
> Harris's responses.
>
> In article [email protected], Jon Harris at
> [email protected] wrote on 01/15/2004 19:22:
>
> > As to the question about a cubic vs. short FIR, it seems to me that you
> > should be able to design a better interpolating filter than the Lagrange*
> > filters by taking advantage of a priori knowledge about your signal. For
> > example, if you have or can create some "sampling headroom", e.g. the
> > signal's frequency content does not extend all the way to Nyquist, you can
> > bring in the cut-off point of your low pass filter and hence obtain more
> > stop-band rejection for the same order.
>
> moreover, in this case, there are better polynomials than Lagrange, such as
> B-spline, which puts some killer notches at the multiples of Fs, so if your
> signal is mostly low-pass, the images get killed even better.

That's good to know. When I first looked at the frequency response of a
linear interpolator I was surprised by how bad it looked. The sidelobe
behavior is pretty poor for audio stuff.
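You can put numbers on "pretty poor" with one line of numpy (my arithmetic,
using the normalized sinc):

import numpy as np

# Linear interpolation has a sinc^2 magnitude response; its first image
# lobe peaks near 1.5*Fs:
print(20 * np.log10(np.sinc(1.5) ** 2))   # about -27 dB
# A cubic B-spline kernel is sinc^4 there:
print(20 * np.log10(np.sinc(1.5) ** 4))   # about -54 dB

-27 dB of image rejection is nowhere near enough for 16-bit audio.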

> > Rolling your own FIR also lets you
> > trade-off high frequency response vs. stop-band rejection or pass-band
> > ripple vs. stop-band rejection.
> >
> > The filter can be designed with either the windowed-sinc or an optimal
> > design method such as Remez, aka Parks-McClellan. In my own experience, I
> > found the windowed-sinc method to be adequate and simple to implement.
>
> a windowed-sinc has the advantage that when the interpolated sample instance
> lands exactly on an original sample instance, the interpolated sample is
> precisely the original.

Well, at least it can have that advantage. We had this discussion a while
back:
http://groups.google.com/groups?selm=B95B6860.5E87%25robert%40wavemechanics.com&oe=UTF-8&output=gplain

> but, using MATLAB, you can usually see that filters
> designed with Parks-McC (remez) or Least Squares (firls) turn out better
> than windowed sincs for the same length.

True, though after playing with Kaiser windows, you can usually get
something pretty close. And it's a lot simpler than trying to run remez on
a _huge_ filter that's sometimes required when you need to generate lots of
phases.
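For what it's worth, the whole windowed-sinc polyphase table is only a few
lines of numpy (a sketch; the 32 phases, 16 taps per phase, and Kaiser beta
of 9 are arbitrary choices, not recommendations):

import numpy as np

L, taps = 32, 16                            # phases, taps per phase
N = L * taps
t = (np.arange(N) - (N - 1) / 2.0) / L      # time in input-sample periods
h = np.sinc(t) * np.kaiser(N, 9.0)          # windowed sinc, cutoff at Nyquist
phases = h.reshape(taps, L).T               # row p ~ taps for offset p/L
phases /= phases.sum(axis=1, keepdims=True) # unity DC gain per phase

Dotting row p with 16 input samples gives the output at fractional offset
p/L; picking the nearest row (or interpolating between two rows) handles
everything in between.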

> > As far as how many "phases" you need in your table, if your interpolation
> > resolution is limited/quantized (e.g. at most 7 points between any 2 real
> > samples) then you can simply limit the phases to that number. If not, use
> > the biggest table you can afford and pick the nearest phase neighbor.
> > Linear interpolation between phases can help too, but at considerable
> > extra effort.
>
> well, it doubles the FIR computation (you need to compute it for two
> different phases) and then you have to do the linear interpolation. it's
> extra effort that i would normally recommend highly.

Sure. In one application, I had to do the same interpolation on lots of
channels. In that case the overhead was small because I could do the
coefficient linear interpolation just once and then use the results for all
of the channels.

> > BTW, you can also implement Lagrange interpolation with a multi-phase
> > look-up table. For cubic, that might make sense, e.g. if processing power
> > is tight and memory isn't.
>
> but, IMO, the purpose of any polynomial interpolation (of which linear,
> Lagrange, Hermite, B-spline are examples) is so you can implement *any*
> phase without having to have a table with infinite entries to look up.

That's true. I was just correcting the assertion that polynomial
interpolation _can't_ be done with a look-up table.

> there is an AES paper that Duane Wise and i did a while back and Olli N
> picked up on that attempts to define and generalize how to compare
> different polynomial interpolations (of a given order) to each other and
> also to the traditional P-McC or LS FIR methods. i can send anyone a PDF
> copy of it, if you want. you can find Olli's paper at:
>
> http://www.biochem.oulu.fi/~oniemita/dsp/deip.pdf

(requested a copy in a direct e-mail to r b-j)

> the idea is that for all interpolation methods, you can model them as the
> sampled data (a string of weighted impulses, with full sized images at
> every multiple of Fs) being convolved with a LPF impulse response that can
> be determined from the method of interpolation. e.g. linear interpolation
> is modeled by passing your sampled data through a hypothetical filter with
> a triangular pulse as the impulse response (resulting in a sinc^2 as the
> frequency response and a sinc^4 as the power spectrum). then, given some
> simple assumptions about the nature of the input signal (like Jon mentions)
> such as it has a mostly empty spectrum in the top octave (that would be
> oversampled by a factor of two), you can compute how big the images are
> after the filtering and, in the worst case, assume that all of those images
> will get folded back into the baseband when the hypothetical
> continuous-time output is resampled at the new sampling instances. that's
> what Duane and i did in our paper.

Cool. That goes along with my idea that you can learn pretty much
everything you need to know about the quality of an interpolator by looking
at its frequency response.
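For instance, plugging in the "top octave empty" assumption (a
back-of-the-envelope check of my own, not a number from the paper):

import numpy as np

# Input band-limited to 0.25*Fs (2x oversampled, top octave empty).
# The nearest image component then sits at Fs - 0.25*Fs = 0.75*Fs,
# where the linear interpolator's sinc^2 response is:
print(20 * np.log10(np.sinc(0.75) ** 2))   # about -21 dB, before any folding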

> anyway, what i figgered out was that, to get a 120 dB S/N using linear
> interpolation between the polyphases, you needed to upsample by a factor
> of about 512, and if using *no* interpolation (sometimes called
> "drop-sample interpolation"), to get the same S/N, you need to upsample
> by a factor of 512K, requiring a huge lookup table.

Sometimes you get lucky. For example, in one application I worked on, the
interpolation was for varispeed playback. The varispeed rate was quantized
such that it limited the number of possible sub-sample intervals to an
amount that allowed fitting all the phases I could possibly need in the
available memory. I got perfect accuracy with no coefficient interpolation
required. Life was good!

--
Jon Harris

Fred Marshall
01-18-2004, 05:42 AM
"robert bristow-johnson" <[email protected]> wrote in message
news:BC2F1ABA.7CB2%[email protected]...
> In article [email protected], Fred Marshall at
> fmarshallx@remove_the_x.acm.org wrote on 01/17/2004 12:36:
> ...
>
> there is an AES paper that Duane Wise and i did a while back and Olli N
> picked up on that attempts to define and generalize how to compare
> different polynomial interpolations (of a given order) to each other and
> also to the traditional P-McC or LS FIR methods. i can send anyone a PDF
> copy of it, if you want.

r b-j,

Yes, would you please send the PDF to me?

Fred

Fred Marshall
01-18-2004, 08:32 AM
>
>
> http://groups.google.com/groups?selm=B8D78660.3B54%25robert%40wavemechanics.com
>
> r b-j

link didn't work.....

Fred

>

Erik de Castro Lopo
01-18-2004, 08:38 AM
seb wrote:
>
> Hello,
>
> i am looking for decimation and interpolation technique in order to,
> given a sampling rate fs, obtain a new sampling rate like (a/b)*fs.
>
> A way to to do is to decimate and then use linear interpolation...
>
> Is there some other ways (documents) to do this ?
> If so, have you got some book or url ?

Try Secret Rabbit Code:

http://www.mega-nerd.com/SRC/
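(If you want a one-off conversion without pulling in a library, scipy's
polyphase resampler does the rational-ratio case directly - a sketch, and
nothing to do with SRC's internals:)

import numpy as np
from scipy.signal import resample_poly

fs = 48000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 440 * t)         # one second of a 440 Hz tone

# new rate = (a/b)*fs: 48 kHz -> 44.1 kHz is a ratio of 147/160
y = resample_poly(x, up=147, down=160)  # anti-imaging/anti-alias built in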


Erik
--
+-----------------------------------------------------------+
Erik de Castro Lopo [email protected] (Yes it's valid)
+-----------------------------------------------------------+
With 22,100,000 legitimate businesses in the US alone,
allowing each to send only one UCE per *year* gets every
mailbox 60,547 emails per day. There will either be email
without UCE or there will be no email.

Olli Niemitalo
01-18-2004, 10:07 AM
On Sun, 18 Jan 2004, Fred Marshall wrote:

> >
> >
> > http://groups.google.com/groups?selm=B8D78660.3B54%25robert%40wavemechanics.com
> >
> > r b-j
>
> link didn't work.....
>
> Fred

It works here, with the "com" included at the end.

-olli

Jerry Avins
01-18-2004, 03:55 PM
Fred Marshall wrote:

>>
>> http://groups.google.com/groups?selm=B8D78660.3B54%25robert%40wavemechanics.com
>>
>>r b-j
>
>
> link didn't work.....
>
> Fred

The "com" wrapped. Did you fix that?

Jerry
--
Engineering is the art of making what you want from things you can get.
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯

robert bristow-johnson
01-18-2004, 07:07 PM
In article [email protected], Jerry Avins at
[email protected] wrote on 01/18/2004 10:55:

> Fred Marshall wrote:
>
>>>
>> http://groups.google.com/groups?selm=B8D78660.3B54%25robert%40wavemechanics.com
>>>
>>> r b-j
>>
>>
>> link didn't work.....
>>
>> Fred
>
> The "com" wrapped. Did you fix that?

ya know, the stupid-ass thing is that i can't turn it (the $#@&*#@###!! word
wrapping) off with Outlook Express or adjust the line length. on top of
that, it shows the entire wrapped link underlined and *does* work for me.
i'll just have to remember to remind everyone to unwrap it. stupid-ass
micro$hit software.

r b-j

Fred Marshall
01-18-2004, 10:03 PM
"robert bristow-johnson" <[email protected]> wrote in message
news:BC304390.7D23%[email protected]...
> In article [email protected], Jerry Avins at
> [email protected] wrote on 01/18/2004 10:55:
>
> > Fred Marshall wrote:
> >
> >>>
> >>
> >> http://groups.google.com/groups?selm=B8D78660.3B54%25robert%40wavemechanics.com
> >>>
> >>> r b-j
> >>
> >>
> >> link didn't work.....
> >>
> >> Fred
> >
> > The "com" wrapped. Did you fix that?
>

Arrrgh. Didn't even see it. Fixed. Done. Thanks.

Fred

Randy Yates
01-18-2004, 11:44 PM
robert bristow-johnson <[email protected]> writes:

> In article [email protected], Jerry Avins at
> [email protected] wrote on 01/18/2004 10:55:
>
>> Fred Marshall wrote:
>>
>>>>
>>> http://groups.google.com/groups?selm=B8D78660.3B54%25robert%40wavemechanics.com
>>>>
>>>> r b-j
>>>
>>>
>>> link didn't work.....
>>>
>>> Fred
>>
>> The "com" wrapped. Did you fix that?
>
> ya know, the stupid-ass thing is that i can't turn it (the $#@&*#@###!! word
> wrapping) off with Outlook Express or adjust the line length. on top of
> that, it shows the entire wrapped link underlined and *does* work for me.
> i'll just have to remember to remind everyone to unwrap it. stupid-ass
> micro$hit software.

XEmacs, man, XEmacs. With gnus, of course. (Yes, even for windoze:
www.xemacs.org, get the native win32 version. Then allow about a year
for learning about new key sequences, Lisp, and key maps...)
--
% Randy Yates % "Though you ride on the wheels of tomorrow,
%% Fuquay-Varina, NC % you still wander the fields of your
%%% 919-577-9882 % sorrow."
%%%% <[email protected]> % '21st Century Man', *Time*, ELO

Jerry Avins
01-19-2004, 01:48 AM
robert bristow-johnson wrote:

> In article [email protected], Jerry Avins at
> [email protected] wrote on 01/18/2004 10:55:
>
>
>>Fred Marshall wrote:
>>
>>
>>> http://groups.google.com/groups?selm=B8D78660.3B54%25robert%40wavemechanics.com
>>>>
>>>>r b-j
>>>
>>>
>>>link didn't work.....
>>>
>>>Fred
>>
>>The "com" wrapped. Did you fix that?
>
>
> ya know, the stupid-ass thing is that i can't turn it (the $#@&*#@###!! word
> wrapping) off with Outlook Express or adjust the line length. on top of
> that, it shows the entire wrapped link underlined and *does* work for me.
> i'll just have to remember to remind everyone to unwrap it. stupid-ass
> micro$hit software.
>
> r b-j

Use Netscape. Other stupid bugs, but not that one and none with no
workarounds.

Jerry
--
Engineering is the art of making what you want from things you can get.
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯

Ronald H. Nicholson Jr.
01-19-2004, 01:55 AM
In article <[email protected]>,
Jon Harris <[email protected]> wrote:
>"Fred Marshall" <fmarshallx@remove_the_x.acm.org> wrote in message
>news:[email protected]...
>... But it would seem that any *time-invariant linear*
>interpolation could be pre-computed, "table-ized", and implemented with a
>poly-phase FIR. The reasons it is not commonly implemented this way are
>probably practical ones rather than due to any limitation in the theory.
>
>> Polyphase is just an implementation detail for a known filter - so I
>>choose to leave that out as much as possible.

I sense recursion here.

The problem is not having continuous data. So we use equally spaced
sampled data, and we select, interpolate, or multi-rate FIR filter to
get (estimated) values in between the ones sampled.

But since no CPU has a one-cycle sinc() generating instruction (etc.), we
don't have continuous FIR filter coefficients. So we use equally spaced
"phases" of the FIR coefficients, and we select, interpolate, or...

Or maybe use the interpolation FIR filter on its own coefficients!

Hmmm... I wonder how many levels of this it takes before diminishing
returns set in? Everybody seems to assume 1.5 levels is enough:
1 or 2 taps (select or linear interpolation inside the phase table
for the FIR coefficients) plus N taps (for the data).

Given a fixed number of MACs per sample and a fixed number of table
entries (to fit in x% of the dcache for instance), it looks like there
are several possibilities. 0 MACs for the coeffs and N MACs for the data.
N/2 MACs for 2 taps of coeff generation (and with a table now with twice
as many phases in the same number of bytes) and N/2 taps for the data.
Or maybe even more MACs for coefficient generation and an even shorter
FIR filter. An interesting optimization and error bounding problem.
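To pin down what those "1.5 levels" look like, here is a numpy sketch,
where phases is an (L, taps) array with row p holding the taps for offset
p/L (my conventions, not anyone's API):

import numpy as np

def frac_output(x, n, mu, phases):
    """One output at position n + mu (0 <= mu < 1): roughly 2 'taps' of
    coefficient interpolation plus `taps` MACs on the data."""
    L, taps = phases.shape
    p = mu * L
    p0 = int(p)
    f = p - p0
    if p0 + 1 < L:
        coefs = (1.0 - f) * phases[p0] + f * phases[p0 + 1]
        return np.dot(coefs, x[n : n + taps])
    # Top edge: the "next phase" is row 0 advanced by one whole sample,
    # so lerp the two outputs instead (the same thing, by linearity).
    y0 = np.dot(phases[p0], x[n : n + taps])
    y1 = np.dot(phases[0], x[n + 1 : n + 1 + taps])
    return (1.0 - f) * y0 + f * y1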


IMHO. YMMV.
--
Ron Nicholson rhn AT nicholson DOT com http://www.nicholson.com/rhn/
#include <canonical.disclaimer> // only my own opinions, etc.

Jon Harris
01-19-2004, 04:28 AM
robert bristow-johnson <[email protected]> wrote in message
news:BC304390.7D23%[email protected]...
> In article [email protected], Jerry Avins at
> [email protected] wrote on 01/18/2004 10:55:
>
> > Fred Marshall wrote:
> >
> >>>
>
> http://groups.google.com/groups?selm=B8D78660.3B54%25robert%40wavemechanics.com
> >>>
> >>> r b-j
> >>
> >> link didn't work.....
> >>
> >> Fred
> >
> > The "com" wrapped. Did you fix that?
>
> ya know, the stupid-ass thing is that i can't turn it (the $#@&*#@###!!
> word wrapping) off with Outlook Express or adjust the line length. on top
> of that, it shows the entire wrapped link underlined and *does* work for me.

I don't know what version of OE you are using, but in mine (Windows, V5.0,
circa 1999), you can go to Tools->Options, Send tab, click Plain Text
Settings and adjust the wrap length.

Another good option is http://tinyurl.com/.

Jon Harris
01-19-2004, 04:45 AM
Ronald H. Nicholson Jr. <[email protected]> wrote in message
news:[email protected]...
> In article <[email protected]>,
> Jon Harris <[email protected]> wrote:
> >"Fred Marshall" <fmarshallx@remove_the_x.acm.org> wrote in message
> >news:[email protected]...
> >... But it would seem that any *time-invariant linear*
> >interpolation could be pre-computed, "table-ized", and implemented with a
> >poly-phase FIR. The reasons it is not commonly implemented this way are
> >probably practical ones rather than due to any limitation in the theory.
> >
> >> Polyphase is just an implementation detail for a known filter - so I
> >>choose to leave that out as much as possible.
>
> I sense recursion here.
>
> The problem is not having continuous data. So we use equally spaced
> sampled data, and we select, interpolate, or multi-rate FIR filter to
> get (estimated) values in between the ones sampled.
>
> But since no CPU has a one-cycle sinc() generating instruction (etc.), we
> don't have continuous FIR filter coefficients. So we use equally spaced
> "phases" of the FIR coefficients, and we select, interpolate, or...
>
> Or maybe use the interpolation FIR filter on its own coefficients!

This is usually overkill. Unless your available table memory is so small that
you can only store a handful of phases, the filter coefficients are essentially
highly oversampled data (or low-pass data as r b-j called it). Hence you can
get away with a pretty simple interpolation scheme, since there is very little
high-frequency content.

> Hmmm... I wonder how many levels of this it takes before diminishing
> returns set in? Everybody seems to assume 1.5 levels is enough:
> 1 or 2 taps (select or linear interpolation inside the phase table
> for the FIR coefficients) plus N taps (for the data).

This is usually adequate for the reasons stated above (and see below too).

> Given a fixed number of MACs per sample and a fixed number of table
> entries (to fit in x% of the dcache for instance), it looks like there
> are several possibilities. 0 MACs for the coeffs and N MACs for the data.
> N/2 MACs for 2 taps of coeff generation (and with a table now with twice
> as many phases in the same number of bytes) and N/2 taps for the data.
> Or maybe even more MACs for coefficient generation and an even shorter
> FIR filter. An interesting optimization and error bounding problem.

I think it would usually be pretty easy to find the optimal answer. For
example, if you decide to do linear coefficient interpolation, you immediately
have to use a shorter filter, which means you can store more phases in the same
memory, which means there is a less critical need for the linear coef
interpolation. You can see why higher order interpolation of the coefs quickly
becomes a losing battle. The only exception might be if memory was *extremely*
tight and you had tons of MACs available.

> IMHO. YMMV.
> --
> Ron Nicholson rhn AT nicholson DOT com http://www.nicholson.com/rhn/
> #include <canonical.disclaimer> // only my own opinions, etc.

robert bristow-johnson
01-19-2004, 09:00 PM
In article [email protected], Jon Harris at
[email protected] wrote on 01/18/2004 23:45:

> Ronald H. Nicholson Jr. <[email protected]> wrote in message
> news:[email protected]...
....
>>
>> Or maybe use the interpolation FIR filter on its own coefficients!

one place where that is useful is in designing the FIR coefs to start with
if the upsample ratio is so high that MATLAB's remez() chokes on it. so you
do it for fewer phases, and then use those very coefs to interpolate a
denser table with more phases.
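for the modest case, scipy's remez (a cousin of the MATLAB function) is
happy - a sketch, with L = 8 phases and 16 taps per phase picked out of
thin air:

import numpy as np
from scipy.signal import remez

L, taps = 8, 16
# low-pass at the input Nyquist (1/(2L) of the table rate), with a small
# transition band; the gain of L makes up for the zero-stuffing:
h = L * remez(L * taps, [0, 0.45 / L, 0.55 / L, 0.5], [1, 0], fs=1.0)
phases = h.reshape(taps, L).T   # one row per fractional offset

for a much larger L you design something like this once, then use the
resulting table to interpolate itself up to the phase count you actually
want.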

> This is usually overkill. Unless your available table memory is so small that
> you can only store a handful of phases, the filter coefficients are
> essentially highly oversampled data (or low pass data as r b-j called it).

it's not what i meant regarding "low pass data", Jon. hmmmm. maybe it *is*
what i meant, but from a different POV.

what i meant is if the audio data is highly oversampled (which lots of
phases in the FIR table would do to it), then it appears, from the POV of
the high sampling rate, to be very "low pass data". now if that's the case,
the *images* are narrow little blips at every integer multiple of the high
Fs, and a sinc^2 frequency response (which is what you get when you
convolve with the triangular pulse, which is what happens when you do
linear interpolation between neighboring samples, which is essentially the
same thing you are doing if you are interpolating between coefficients of
neighboring phases) will kill those little blips quite nicely. higher order
B-splines (which have a frequency response of sinc^(N+1) for an Nth-order
B-spline) will kill those blips even better.

> Hence you can get away with a pretty simple interpolation scheme since
> there is very little high-frequency content.
>
>> Hmmm... I wonder how many levels of this it takes before diminishing
>> returns set in? Everybody seems to assume 1.5 levels is enough:
>> 1 or 2 taps (select or linear interpolation inside the phase table
>> for the FIR coefficients) plus N taps (for the data).
>
> This is usually adequate for the reasons stated above (and see below too).

if you have 512 different phases of your polyphase FIR filter (the same as
oversampling by a factor of 512), then linear interpolation will be adequate
for an S/N of 120 dB, if i recall correctly. if you do a cubic B-spline,
you will need only 16 or 32 different phases (or oversampling by 16 or 32)
to get the same S/N. if you're doing *no* interpolation (sometimes called
"drop-sample") between coefficients (or oversampled samples), then you need
512K phases (or oversample by 512K).
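those numbers check out with a one-liner: the worst-case image lands just
inside the first null of the interpolation kernel, at (L - 1/2)/L of the
oversampled rate (my arithmetic here, not figures from the paper):

import numpy as np

L = 512                 # phases, with linear coef interpolation: sinc^2
print(20 * np.log10(np.sinc(1 - 0.5 / L) ** 2))   # about -120 dB
L = 512 * 1024          # phases, drop-sample: plain sinc
print(20 * np.log10(np.sinc(1 - 0.5 / L)))        # about -120 dB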

>> Given a fixed number of MACs per sample and a fixed number of table
>> entries (to fit in x% of the dcache for instance), it looks like there
>> are several possibilities. 0 MACs for the coeffs and N MACs for the data.
>> N/2 MACs for 2 taps of coeff generation (and with a table now with twice
>> as many phases in the same number of bytes) and N/2 taps for the data.
>> Or maybe even more MACs for coefficient generation and an even shorter
>> FIR filter. An interesting optimization and error bounding problem.
>
> I think it would usually be pretty easy to find the optimial answer. For
> example, if you decide to do linear coeficient interpolation, you immediately
> have to use a shorter filter which means you can store more phases in the same
> memory, which means there is a less critical need for the linear coef
> interpolation. You can see why higher order interpolation of the coefs
> quickly becomes a losing battle. The only excpeption might be if memory was
> *extremely* tight and you had tons of MACs available.

the tradeoff is strictly between memory usage and computational complexity.
if you have little memory and lotsa MIPs to burn, you will want to use a
higher order interpolation between the few coefficients you have. if you
have 16 megawords laying around, you can save MIPs and do *no* interpolation
of coefficients.

now, if you are *upsampling*, then you can do the linear (or whatever kind)
interpolation on the resulting output samples from the two (or more)
neighboring phases. that is mathematically equivalent to interpolating the
coefficients and usually much simpler to code. this doesn't work as well
for downsampling because you need to perform some low-pass filtering of the
signal anyway to avoid aliasing and one way to do that is to pick off your
sinc() table coefficients at a different spacing. that means more taps on
your FIR filter (per sample) but you are doing fewer output samples per
second anyway and so your computational load is not really increased.
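a sketch of that "different spacing" for the downsampling case (my reading
of it - the same windowed sinc, dilated by the ratio so it cuts off at the
new, lower Nyquist):

import numpy as np

def decim_taps(f, r, half=8):
    """Taps for one output at fractional position f when slowing the
    rate by r > 1: roughly r times more taps per output sample."""
    m = np.arange(-int(half * r), int(half * r) + 1)
    t = (m - f) / r                    # dilate the kernel by r
    return np.sinc(t) * np.kaiser(len(m), 9.0) / r   # 1/r keeps DC gain ~1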

r b-j

Ronald H. Nicholson Jr.
01-20-2004, 10:37 PM
In article <[email protected]>,
Jon Harris <[email protected]> wrote:
>Ronald H. Nicholson Jr. <[email protected]> wrote in message
>news:[email protected]...
>> >> Polyphase is just an implementation detail for a known filter - so I
>> >>choose to leave that out as much as possible.
>>
>> I sense recursion here.
>>
>> The problem is not having continuous data. So we use equally spaced
>> sampled data, and we select, interpolate, or multi-rate FIR filter to
>> get (estimated) values in between the ones sampled.
>>
>> But since no CPU has a one-cycle sinc() generating instruction (etc.), we
>> don't have continuous FIR filter coefficients. So we use equally spaced
>> "phases" of the FIR coefficients, and we select, interpolate, or...
>>
>> Or maybe use the interpolation FIR filter on its own coefficients!
>
>This is usually overkill. Unless your available table memory is so
>small that you can only store a handful of phases

Actually, that's exactly the case about which I'm thinking. Consider
a DSP/CPU with a very small data cache, and a latency for table memory
access which is many cycles long (e.g. even longer than doing "several"
multiply accumulates). If one is only interpolating a short signal
segment with a non-repeating phase step, a large phase table is almost
guaranteed to not be cache resident, so every table lookup incurs a
cache miss penalty.

How much can the phase table be compressed by using higher orders, or
even recursive, interpolation schemes? At the limit, one could even do
a series expansion of the sin and cosine terms contained in a windowed
sinc with zero table lookups. In the case of common CPUs with a 1 cycle
fp MAC but cache miss penalty of over 200 cycles (2+ GHz CPU with a 100
nS memory system), this almost makes sense. Some DSPs with much smaller
caches are getting near this point also.

So, is there any good way to interpolate or approximate a windowed sinc
using only a tiny number of "phases"/samples, e.g. such that the entire
table fits in only a few cache lines?

Would using a linear interpolation on a very sparse windowed sinc
table on a few more points of the sparse table itself work?


IMHO. YMMV.
--
Ron Nicholson rhn AT nicholson DOT com http://www.nicholson.com/rhn/
#include <canonical.disclaimer> // only my own opinions, etc.

Jon Harris
01-20-2004, 11:44 PM
"Ronald H. Nicholson Jr." <[email protected]> wrote in message
news:[email protected]...
> In article <[email protected]>,
> Jon Harris <[email protected]> wrote:
> >Ronald H. Nicholson Jr. <[email protected]> wrote in message
> >news:[email protected]...
> >> >> Polyphase is just an implementation detail for a known filter - so I
> >> >>choose to leave that out as much as possible.
> >>
> >> I sense recursion here.
> >>
> >> The problem is not having continuous data. So we use equally spaced
> >> sampled data, and we select, interpolate, or multi-rate FIR filter to
> >> get (estimated) values in between the ones sampled.
> >>
> >> But since no CPU has a one-cycle sinc() generating instruction (etc.),
> >> we don't have continuous FIR filter coefficients. So we use equally
> >> spaced "phases" of the FIR coefficients, and we select, interpolate,
> >> or...
> >>
> >> Or maybe use the interpolation FIR filter on its own coefficients!
> >
> >This is usually overkill. Unless your available table memory is so
> >small that you can only store a handful of phases
>
> Actually, that's exactly the case about which I'm thinking. Consider
> a DSP/CPU with a very small data cache, and a latency for table memory
> access which is many cycles long (e.g. even longer than doing "several"
> multiply accumulates). If one is only interpolating a short signal
> segment with a non-repeating phase step, a large phase table is almost
> guaranteed to not be cache resident, so every table lookup incurs a
> cache miss penalty.

OK, I see where you're going now. I am guilty of only looking at things in
my little world, where the memory is plentiful but the MACs are limited.

> How much can the phase table be compressed by using higher orders, or
> even recursive, interpolation schemes? At the limit, one could even do
> a series expansion of the sin and cosine terms contained in a windowed
> sinc with zero table lookups. In the case of common CPUs with a 1 cycle
> fp MAC but cache miss penalty of over 200 cycles (2+ GHz CPU with a 100
> ns memory system), this almost makes sense. Some DSPs with much smaller
> caches are getting near this point also.

Don't these CPUs tend to have pretty decent-sized and often multi-level
caches?
Also, does a 2 GHz CPU really have a single-cycle (32-bit) floating-point
MAC? I haven't studied the latest from Intel, etc. but that seems
unrealistic considering the best 32-bit floating point DSPs are an order of
magnitude slower. I was under the impression that a 2 GHz CPU couldn't really
do much of anything in a single clock cycle.

> So, is there any good way to interpolate or approximate a windowed sinc
> using only a tiny number of "phases"/samples, e.g. such that the entire
> table fits in only a few cache lines?

Well, you could generate the sine part of it from your favorite trig library
or an appropriate approximation, then just store the window and 1/x part
combined in a table. I would think you could get away with a pretty small
table in this case, since the 1/x*window part would be very smooth
(monotonically decreasing). Also, if your sinc-based low-pass filter is
designed to cut off exactly at the Nyquist frequency, the sine part of the
filter will only need to be calculated twice (I think) for all of the
coefficients. That might result in a very efficient implementation for some
architectures.
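Here's a numpy sketch of that factoring (my reading of the idea; the smooth
part is computed directly here, but it is exactly the piece you could store
in a small table):

import numpy as np

def taps_on_the_fly(f, half=8, beta=9.0):
    """Windowed-sinc taps for fractional offset f (0 < f < 1) using a
    single sine evaluation: sin(pi*(m - f)) = -(-1)^m * sin(pi*f)."""
    m = np.arange(-half + 1, half + 1)      # integer tap positions
    w = np.kaiser(2 * half, beta)           # window, aligned to the taps
    sign = np.where(m % 2 == 0, 1.0, -1.0)  # (-1)^m
    smooth = w / (np.pi * (m - f))          # the table-able part
    return -sign * np.sin(np.pi * f) * smooth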

Don't forget about the "free" 2:1 compression you can take advantage of by
storing only one half of the symmetric filter coefs. There is a small
penalty in implementation overhead for doing this, but it's usually not too
bad.
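Something like this (a trivial sketch):

import numpy as np

h = np.sinc(np.arange(16) - 7.5) * np.kaiser(16, 9.0)  # even-symmetric taps
h_half = h[:8]                                      # store only this half
h_rebuilt = np.concatenate([h_half, h_half[::-1]])  # mirror at run time
print(np.allclose(h, h_rebuilt))                    # True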

> Would using a linear interpolation on a very sparse windowed sinc
> table on a few more points of the sparse table itself work?

As r b-j has discussed, if you have the MACs available, you're probably
better off with something better than linear interpolation. In his paper,
he shows that not only are higher order interpolation schemes better than
linear overall, but they get "more better faster" than linear, i.e. as the
number of phases in the table increases, the improvement is greater.

>
> IMHO. YMMV.
> --
> Ron Nicholson rhn AT nicholson DOT com http://www.nicholson.com/rhn/
> #include <canonical.disclaimer> // only my own opinions, etc.

Ronald H. Nicholson Jr.
01-21-2004, 01:22 AM
In article <[email protected]>,
Jon Harris <[email protected]> wrote:
>> Actually, that's exactly the case about which I'm thinking. Consider
>> a DSP/CPU with a very small data cache, and a latency for table memory
>> access which is many cycles long (e.g. even longer than doing "several"
>> multiply accumulates). If one is only interpolating a short signal
>> segment with a non-repeating phase step, a large phase table is almost
>> guaranteed to not be cache resident, so every table lookup incurs a
>> cache miss penalty.
>
>OK, I see where you're going now. I am guilty of only looking at things in
>my little world, where the memory is plentiful but the MACs are limited.
>
>> How much can the phase table be compressed by using higher orders, or
>> even recursive, interpolation schemes? At the limit, one could even do
>> a series expansion of the sin and cosine terms contained in a windowed
>> sinc with zero table lookups. In the case of common CPUs with a 1 cycle
>> fp MAC but cache miss penalty of over 200 cycles (2+ GHz CPU with a 100
>> ns memory system), this almost makes sense. Some DSPs with much smaller
>> caches are getting near this point also.
>
>Don't these CPUs tend to have pretty decent-sized and often multi-level
>caches?
>Also, does a 2 GHz CPU really have a single-cycle (32-bit) floating-point
>MAC?

Some (G5/970) can issue and retire (start or finish) even more than
one fp MAC per clock cycle; however, the latency can be several cycles.
So for recursive IIR filters, where there are data dependencies on
the previous fp calculation, you won't get single-cycle performance.
But for FIR filters, where several taps can be computed independently
and thus in parallel, one can get near one MAC per clock cycle with
even fairly simple coding. Most of these CPUs can even do the indexing
and data loads in parallel with the fp MACs. I believe both the G5 and
Itanium have Fortran Linpack benchmark numbers of well over 1 flop per Hz.

The caches are huge on the server CPUs, but I doubt they will be as
large on the lower power GHz+ DSP chips in development. In any
case, the first level caches are usually not large compared to a
large multi-tap multi-phase table, and there is still a multi-cycle
penalty going to second level cache, more than enough time for
a few more fp MACs.


IMHO. YMMV.
--
Ron Nicholson rhn AT nicholson DOT com http://www.nicholson.com/rhn/
#include <canonical.disclaimer> // only my own opinions, etc.

Jon Harris
01-22-2004, 07:05 AM
robert bristow-johnson <[email protected]> wrote in message
news:BC31AF92.7DC0%[email protected]...
> In article [email protected], Jon Harris at
> [email protected] wrote on 01/18/2004 23:45:
>
> > Ronald H. Nicholson Jr. <[email protected]> wrote in message
> > news:[email protected]...
> ...
> one place where that is useful is in designing the FIR coefs to start with
> if the upsample ratio is so high that MATLAB's remez() chokes on it. so you
> do it for fewer phases, and then use those very coefs to interpolate a
> denser table with more phases.
>
> > This is usually overkill. Unless your available table memory is so small
> > that you can only store a handful of phases, the filter coefficients are
> > essentially highly oversampled data (or low-pass data as r b-j called it).
>
> it's not what i meant regarding "low pass data", Jon. hmmmm. maybe it *is*
> what i meant, but from a different POV.
>
> what i meant is if the audio data is highly oversampled (which lots of
> phases in the FIR table would do to it), then it appears, from the POV of
> the high sampling rate, to be very "low pass data". now if that's the case,
> the *images* are narrow little blips at every integer multiple of the high
> Fs, and a sinc^2 frequency response (which is what you get when you
> convolve with the triangular pulse, which is what happens when you do
> linear interpolation between neighboring samples, which is essentially the
> same thing you are doing if you are interpolating between coefficients of
> neighboring phases) will kill those little blips quite nicely. higher order
> B-splines (which have a frequency response of sinc^(N+1) for an Nth-order
> B-spline) will kill those blips even better.

A big AHA! for me a while back was that interpolating between the coefficients
and then using them to interpolate the audio is the same as calculating several
results using the "stright" coefficients and then interpolating between them to
get the final result. (I know I'm not saying that very clearly, but hopefully
you know what I'm trying to get at.) If you look at the math, it's pretty clear
that they are equivalent, but it wasn't intuitively obvious to me. Either way,
the data is "low pass", it's just that in one case the audio is low-pass while
in the other case the coefficients (phases) are low pass.
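The math really is a one-liner - dot products are linear in the
coefficients (a numpy check with made-up data):

import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(8)           # a block of input samples
c0 = rng.standard_normal(8)          # one phase's coefficients
c1 = rng.standard_normal(8)          # the adjacent phase
f = 0.3                              # position between the two phases

a = np.dot((1 - f) * c0 + f * c1, x)              # interpolate the coefs
b = (1 - f) * np.dot(c0, x) + f * np.dot(c1, x)   # interpolate the outputs
print(np.isclose(a, b))                           # True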

> > Hence you can get away with a pretty simple interpolation scheme since
> > there is very little high-frequency content.
> >
> >> Hmmm... I wonder how many levels of this it takes before diminishing
> >> returns set in? Everybody seems to assume 1.5 levels is enough:
> >> 1 or 2 taps (select or linear interpolation inside the phase table
> >> for the FIR coefficients) plus N taps (for the data).
> >
> > This is usually adequate for the reasons stated above (and see below too).
>
> if you have 512 different phases of your polyphase FIR filter (the same as
> oversampling by a factor of 512), then linear interpolation will be adequate
> for an S/N of 120 dB, if i recall correctly. if you do a cubic B-spline,
> you will need only 16 or 32 different phases (or oversampling by 16 or 32)
> to get the same S/N. if you're doing *no* interpolation (sometimes called
> "drop-sample") between coefficients (or oversampled samples), then you need
> 512K phases (or oversample by 512K).

Sounds reasonable. I know those early Analog Devices chips used linear coef.
interpolation and got 120 dB. I read in a paper once how many phases they kept,
but don't remember anymore.

> >> Given a fixed number of MACs per sample and a fixed number of table
> >> entries (to fit in x% of the dcache for instance), it looks like there
> >> are several possiblities. 0 MACs for the coeffs and N MACs for the data.
> >> N/2 MACs for 2 taps of coeff generation (and with a table now with twice
> >> as many phases in the same number of bytes) and N/2 taps for the data.
> >> Or maybe even more MACs for coefficient generation and an even shorter
> >> FIR filter. An interesting optimization and error bounding problem.
> >
> > I think it would usually be pretty easy to find the optimal answer. For
> > example, if you decide to do linear coefficient interpolation, you
> > immediately have to use a shorter filter, which means you can store more
> > phases in the same memory, which means there is a less critical need for
> > the linear coef interpolation. You can see why higher order interpolation
> > of the coefs quickly becomes a losing battle. The only exception might be
> > if memory was *extremely* tight and you had tons of MACs available.
>
> now, if you are *upsampling*, then you can do the linear (or whatever kind)
> interpolation on the resulting output samples from the two (or more)
> neighboring phases. that is mathematically equivalent to interpolating the
> coefficients and usually much simpler to code.

Regarding which method is best, if you need to do the same interpolation on
more than one data stream (e.g. stereo audio), then you're better off doing
the linear (or whatever kind of) interpolation on the coefs and then applying
the results to all the audio channels. Otherwise, like you said, it's
equivalent, so pick the simplest method.

> <snip>