Fred (and others),
There are a few things I didn't quite agree with in your post. I'll comment
on some specifics below. I think you are already aware of this, but just
for the record, let me state the following "big idea":
Interpolating with polynomials and poly-phase FIR filters are not separate,
disjoint methods. Both calculate new samples as weighted sums of
existing samples. Both can be treated by digital filtering theory. Both
are linear operations. Granted, they are typically implemented differently,
and to most people they seem conceptually different, but they really
are accomplishing the same thing and can (and should) be analyzed using the
same methods.
For example, consider the Lagrange polynomial interpolators (linear,
parabolic, cubic, etc.). You can easily implement these using a poly-phase
coefficient table and FIR routine. Often, one computes the coefficients "on
the fly" because they are fairly simple (especially for linear), but this
need not be the case. Take linear interpolation, for example: plotted
across the phases, the coefficients form an upside-down V. As you move to
higher-order Lagrange interpolation, the shapes start to resemble a
windowed sinc function!
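To make that concrete, here's a small sketch (Python; the function and
table names are my own, not from any particular product) that builds a
poly-phase coefficient table for cubic Lagrange interpolation. Each row
is one phase: the four FIR taps that produce an output sample at
fractional position d = p/P between input samples.

```python
def cubic_lagrange_phase(d):
    """FIR taps on x[n-1], x[n], x[n+1], x[n+2] for fractional position d."""
    return [
        -d * (d - 1) * (d - 2) / 6,       # tap on x[n-1]
        (d + 1) * (d - 1) * (d - 2) / 2,  # tap on x[n]
        -(d + 1) * d * (d - 2) / 2,       # tap on x[n+1]
        (d + 1) * d * (d - 1) / 6,        # tap on x[n+2]
    ]

P = 8  # number of stored phases
table = [cubic_lagrange_phase(p / P) for p in range(P)]

print(table[0])  # phase 0 is the pass-through phase: taps [0, 1, 0, 0]

# Every phase has unity DC gain (taps sum to 1), as an interpolator should.
for row in table:
    assert abs(sum(row) - 1.0) < 1e-12
```

At run time, branch p of the poly-phase structure just runs a 4-tap FIR
with table[p], exactly as it would with any other FIR coefficient set.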
See more comments in-line.
"Fred Marshall" <fmarshallx@remove_the_x.acm.org> wrote in message
news:[email protected]...
>
>
> The first thing you need to decide is if you're wanting to increase the
> sample rate by a rational factor or if you want to do arbitrary-point
> interpolation. If you're talking about FIR filters, etc. then it seems
> like you're talking about regularly sampled data?
I'm assuming regularly sampled data as well. With DSPs, is there really
anything other than rational interpolation? I mean, if you have a certain
number of input samples and generate a certain number of output samples, you
have a ratio. It may be really nasty like 1343873/1343895, but it's still
rational. Maybe there are some real-time operations that use irrational
ratios, though.
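As an aside, reducing the conversion ratio to lowest terms also tells you
how many coefficient phases a poly-phase implementation needs. A quick
sketch (Python; `conversion_ratio` is my own name for this):

```python
from math import gcd

def conversion_ratio(fs_in, fs_out):
    """Reduce fs_out/fs_in to lowest terms (L, M): upsample by L, then
    decimate by M.  L is also the number of coefficient phases needed."""
    g = gcd(fs_in, fs_out)
    return fs_out // g, fs_in // g

print(conversion_ratio(44100, 48000))  # (160, 147): the classic audio case
print(gcd(1343873, 1343895))           # 1 -- that "nasty" ratio won't reduce
```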
> Also, are you wanting to generate a fully interpolated sequence which may
> even go so far as to be almost "continuous" or, more simply, to generate
> an occasional interim data point from a set of samples?
Good question. This could affect the number of required "phases" in your
poly-phase filter. And if this number were larger than is practical, it may
dictate calculating coefficients on-the-fly, which could in turn dictate the
interpolation method. However, it is still possible to do nearly continuous
interpolation with poly-phase FIRs. You can store as many coefficient
phases as practical in a table and compute the rest through interpolation
(usually linear is good enough). Analog Devices uses this "double
interpolation" method (interpolate to get the coefficients, then use them to
interpolate the data) in their audio sample rate converter products.
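A sketch of that double-interpolation trick (Python; a toy 4-tap prototype
and names of my own invention, not Analog Devices' actual implementation):

```python
# Store a modest number of coefficient phases, then linearly interpolate
# between adjacent stored phases to get the taps for an arbitrary fraction.

def make_phase(d):
    # toy prototype: a 4-tap linear-interpolation kernel with zero outer taps
    return [0.0, 1.0 - d, d, 0.0]

P = 16
table = [make_phase(p / P) for p in range(P + 1)]  # P+1 rows so d=1 is stored

def coeffs_for(d):
    """Filter taps for arbitrary d in [0, 1), interpolated from the table."""
    x = d * P
    p = int(x)   # lower stored phase
    f = x - p    # fraction of the way from phase p to phase p+1
    lo, hi = table[p], table[p + 1]
    return [(1 - f) * a + f * b for a, b in zip(lo, hi)]

print(coeffs_for(0.3))  # taps for a point 0.3 of the way between samples
```

With a real (e.g. windowed-sinc) prototype the table interpolation is only
approximate, which is why "usually linear is good enough" is the operative
phrase: you size P so the residual error is below your noise floor.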
> [1/2 1 1/2] is a typical filter to interpolate between samples and is the
> same as straight line averaging at a midpoint. The filter sample rate is
> 2x the input series. It reproduces the input samples exactly. If you
> like to think of polyphase implementation, then it's a [1/2 1/2] filter
> on the data every other output sample and it's a [1] filter on the data
> every other output sample. But, polyphase is simply a way of looking at
> things and a way suggesting how to handle the data and multiplies, etc.
Right. Poly-phase is an implementation method that can be applied to either
FIR or Lagrange interpolation.
> The mid-point between interpolating by a factor of 2 or 3 or 4 .... in all
> this is to conceptually insert *lots* of zeros between the input samples
> so as to increase the output sample rate by a bunch. In order to generate
> individual output samples then requires a set of weights that appear to
> come from a long FIR filter but which only have to be selected as in a
> polyphase output. For example, the [1/2 1 1/2] filter for midpoint
> straight line interpolation can be generalized to something like:
> [0 .1 .2 .3 .4 .5 .6 .7 .8 .9 1.0 .9 .8 .7 .6 .5 .4 .3 .2 .1 0] and used
> for 10x interpolation ratio by applying [0 1 0] or [1] on top of a data
> point, [0.9 .1] at 1/10th a sample interval, etc. Again, it's just
> straight line interpolation with a different discrete interpolation
> factor. The alternative for arbitrary points is to just use a straight
> line formula but requires that you compute the coefficients for each
> arbitrary point: like [0.777 0.333].
Right on man!
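Just to spell that quoted example out in code (Python; a sketch, with the
21-tap triangle written at the 10x output rate):

```python
L = 10
# 21-tap triangle: [0, .1, .2, ..., .9, 1.0, .9, ..., .1, 0]
h = [(L - abs(n - L)) / L for n in range(2 * L + 1)]

# Polyphase decomposition: branch p takes every 10th tap, starting at p,
# and handles output points offset by p/10 of an input sample.
phases = [h[p::L] for p in range(L)]

print(phases[0])  # [0.0, 1.0, 0.0] -- on top of a data point: [0 1 0]
print(phases[3])  # [0.3, 0.7] -- the straight-line weights, 3/10ths along
```

Each branch is exactly the pair of straight-line weights from the quote;
the long triangle filter and the per-point linear formula are one and the
same thing viewed two ways.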
> If you use something like a truncated sinc sort of interpolating filter
> then the results will be much better than straight line interpolation and
> the filter coefficients will be fixed at least. Otherwise, methods of
> polynomial interpolation require that the function's coefficients be
> generated from the data samples first. So, you'd have to weigh the
> compute load to do that sort of thing. While the "filter" might be
> shorter, figuring out the coefficients for each output point might be
> prohibitive. And, then you have what amounts to a time-varying filter -
> which will likely introduce new frequencies.
A few points of disagreement here. As mentioned above, you can pre-compute
the filter for e.g. cubic interpolation and put it in a table if you want,
in the same manner as you described for linear interpolation. Then there is
no additional computational load at run time. Also, this doesn't end up
generating a time-varying filter (except in the sense that every poly-phase
filter is a time-varying filter). No new frequencies are generated except
those that result from the aliasing of signals not perfectly suppressed by
the interpolating filter.
> I don't believe that a "goodness" measure for interpolation has been dealt
> with all that much - but maybe so. Somewhere I have a paper that shows
> signal to noise ratio as a function of frequency for different methods.
> Here's a couple of thoughts:
The best "goodness" measure I've seen is the frequency response of the
interpolation filter. If you treat the Lagrange polynomials as filters as
I've been advocating, you can find their frequency responses and evaluate
their pass-band ripple, stop-band attenuation, side lobes, etc. just as you
can with FIR filters.
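For instance, here's the frequency response of the humble [1/2, 1, 1/2]
linear interpolator, evaluated directly from its taps (Python sketch;
frequencies in radians per output-rate sample, numbers mine):

```python
import cmath, math

h = [0.5, 1.0, 0.5]  # [1/2, 1, 1/2]: linear interpolation by 2

def H(w):
    """Frequency response at w radians per (output-rate) sample."""
    return sum(c * cmath.exp(-1j * w * n) for n, c in enumerate(h))

print(abs(H(0.0)))      # 2.0 -- DC gain equals the interpolation factor
print(abs(H(math.pi)))  # ~0 -- a null at the worst-case image frequency

# An input tone at 1/4 of the input Nyquist sits at w = pi/8 at the output
# rate; zero-stuffing puts its image at pi - pi/8.  Image gain over
# passband gain is the image suppression:
w = math.pi / 8
print(20 * math.log10(abs(H(math.pi - w)) / abs(H(w))))  # about -28 dB
```

Only about 28 dB of image rejection for a mid-band tone: a number that
makes plain why linear interpolation is rarely good enough for audio.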
> 1) Does the interpolation method reproduce the original sample values?
> Many do but some don't. I should think that keeping them unchanged would
> be a good thing.
Usually keeping the original samples is only relevant when interpolating by
small rational amounts. Keep in mind that if you do want to keep the
original samples, that significantly limits your choice of interpolation
filters, which may prevent you from optimizing some other figure of merit
such as frequency response. In the audio world, the effort is almost always
made to optimize the frequency response rather than keep the original
samples.
> 2) Does the interpolation method result in introducing new frequencies?
> That would amount to a type of harmonic distortion and is generally
> undesirable in an engineering context. It seems a good measure. This is
> the signal to noise measure mentioned above.
Considering the frequency domain again, the filtering operation in sample
rate conversion needs to suppress the higher-frequency images of the
original signal. Then, when you change the sample rate, anything not
perfectly suppressed aliases to a frequency within the new Nyquist range.
This generates new frequencies. Hence, the frequency response of the
interpolation filter gives you all the information about the amount of
aliasing (new frequencies generated).
The sample rate conversion ratio tells you _where_ the new frequencies land.
The filter's stop-band rejection tells you _how much_ of the new frequencies
there will be. The SNR of the whole process depends on the input signal and
how well or poorly it aligns with the frequency response of the
interpolation filter.
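To illustrate where they land, here's a small sketch (Python; made-up
rates and tone, and `fold` is my own helper, not a full SRC model):

```python
# After upsampling by L, images of a tone f0 sit at k*fs_in +/- f0 in the
# zero-stuffed stream.  After decimating by M, any image energy the filter
# failed to remove folds back into [0, fs_out/2].

def fold(f, fs):
    """Alias frequency of f when sampled at rate fs (fold into [0, fs/2])."""
    f = f % fs
    return fs - f if f > fs / 2 else f

fs_in, L, M = 48000.0, 3, 2       # 48 kHz -> 72 kHz (ratio 3/2)
fs_out = fs_in * L / M
f0 = 1000.0                       # input tone at 1 kHz

# Each image in the zero-stuffed stream, and where it lands after decimation:
for k in range(1, L):
    for image in (k * fs_in - f0, k * fs_in + f0):
        print(image, '->', fold(image, fs_out))
```

The filter's response at each `image` frequency then tells you the level of
the corresponding alias, which is the "how much" from above.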
> 3) Perhaps related to (2), does the interpolation method result in
> introducing content that is temporally far removed if a single unit sample
> is interpolated? This is equivalent to measuring the unit sample /
> impulse response of the interpolator.
I'm not sure I follow this. I guess you are talking about the "length" of
the filter's impulse response? Actually, the theoretically ideal
interpolating filter has an infinite length impulse response. But
controlling this length is sometimes important, usually because you may need
to minimize the group delay of the filter for a particular application.
> So, one needs to apply some kind of measures like this if "sufficiency" is
> to be assessed.
>
> Fred
Agreed.
-Jon