
View Full Version : resample method when L & M are very large


dingke1980
09-20-2005, 08:30 AM
Hi, everybody,

I am doing arbitrary sample rate conversion. It may be very easy to do
when the up factor L and down factor M are very small (e.g. L=3, M=4), but
when L and M are very large it seems difficult to me.
Say 8->44.1k, L=80, M=441. I have tried two methods.

1. Windowed-sinc method. The sinc table has twenty lobes in total, 10 on
the left and 10 on the right, and each lobe has 512 samples. I then
multiplied this table by a Kaiser window (beta = 9). Take an 8 kHz sweep
signal as a test stream, whose frequency runs from 1 Hz to 4000 Hz (the
Nyquist frequency). When converting this stream to 44.1 kHz, the resampled
stream is pretty good when the frequency is low. But for frequencies near
4000 Hz things are bad: there's a "tail" whose frequency is above 4000 Hz.

2. Polyphase method. The straightforward way is to upsample 80 times,
filter, then downsample 441 times. I know a polyphase structure can save a
lot of computation, since the upsampling is actually zero stuffing. But
how do you design the lowpass filter here, whose normalized passband edge
is 1/441*Pi? It's too narrow! Any idea how to design this filter?

thanks


This message was sent using the Comp.DSP web interface on
www.DSPRelated.com

Jim Thomas
09-20-2005, 02:32 PM
dingke1980 wrote:
> Hi,everybody,
>
> I am doing arbitrary sample rate conversion. It may be very easy to do
> when the up factor L and down factor M is very small(e.g. L=3, M=4). But
> when L and M is very large, it seems difficult to me.
> say 8->44.1k L=80, M=441, I have tried two method.
>

Break it into stages. For example:
80/441 = 4/7 * 4/7 * 5/9

The filters in all stages should roll off starting at 4KHz (or lower).
That way you can use more relaxed filters, and accomplish the rate
conversion with fewer calculations as compared to doing the whole thing
in one fell swoop.
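A quick sketch of this staged approach with SciPy (the test tone and the
signal length here are just illustrative): each stage is a small rational
conversion with its own short, relaxed filter, and the cascade lands at the
same 44.1 kHz rate as the one-shot conversion.

```python
import numpy as np
from scipy.signal import resample_poly

# Illustrative input: one second of a 1 kHz tone sampled at 8 kHz.
fs_in = 8000
t = np.arange(fs_in) / fs_in
x = np.sin(2 * np.pi * 1000 * t)

# One-shot conversion: 8 kHz -> 44.1 kHz is up 441 / down 80.
y_direct = resample_poly(x, 441, 80)

# Staged conversion: 441/80 = 7/4 * 7/4 * 9/5, so each stage needs only
# a short anti-imaging filter instead of one very long one.
y = x
for up, down in [(7, 4), (7, 4), (9, 5)]:
    y = resample_poly(y, up, down)

print(len(y_direct), len(y))  # both 44100 samples
```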

Rick Lyons covers this nicely in his book Understanding Digital Signal
Processing. You can also read the Multirate FAQ at dspguru.com.

--
Jim Thomas Principal Applications Engineer Bittware, Inc
[email protected] http://www.bittware.com (603) 226-0404 x536
A great mind thinks alike.

robert bristow-johnson
09-20-2005, 06:23 PM
in article [email protected], dingke1980 at
[email protected] wrote on 09/20/2005 03:30:

> I am doing arbitrary sample rate conversion. It may be very easy to do
> when the up factor L and down factor M is very small(e.g. L=3, M=4). But
> when L and M is very large, it seems difficult to me.
> say 8->44.1k L=80, M=441, I have tried two method.
>
> 1. windowed-sinc method. The sinc table totally have twenty lobes with 10
> in left and 10 in right, each lobe has 512 samples. Then multiplied this
> table with a Kaiser window (beta = 9). Take a 8KHz sweep signal as a test
> stream, whose freq comes from 1hz to 4000Hz (Nyquste frq). When converting
> this stream to 44.1KHz, the resampled stream is pretty good when the freq
> is low. But for freq near 4000Hz, thing is bad. there's a "tail" whose
> freq is more than 4000Hz.
>
> 2. Polyphase method. the straight-forward way is upsample 80 times,
> filtering then downsample 441 times. I know using polyphase structure can
> save a lot of computation since the upsample is actually zero stuffing.
> But anyway, how to design the lowpass filter here whose normalized pass
> band freq is 1/441*Pi? it's too narrow! Any idea to design this filter??

it's all "polyphase method". the difference between "windowed-sinc" and
something else (like designing your LPF reconstruction filter with remez()
or firls() in MATLAB) is only a difference of what the coefficients are.

is the resampling synchronous or asynchronous? if it is resampling a
soundfile or if it is real-time where the output clock is derived from the
input clock, then it is synchronous and your ratio will always be
44100/8000. but if it is real-time and the two clocks are independent, then
it is asynchronous and your ratio might vary a little around that value.

there is no need for zero-stuffing nor upsampling and throwing samples away.
and it isn't the efficient way to do it. i have posted a little primer on
how to think about this which has a copy at

http://groups.google.com/group/comp.dsp/msg/e9b6488aef1e2580?hl=en&fwc=1

(this time Google didn't mess it up so bad).

what you do is, first normalize the input sample rate to 1 (so that the unit
of time in between samples is 1). then increment your output sample time by
the reciprocal of the SRC ratio. so you increment time by 80/441. then
break that time into an integer part and a fractional part, the integer part
will determine which set of input samples you will combine in an FIR
computation (simple dot-product). the fractional part will tell you which
set of coefficients (there will be 441 sets of coefficients) to use.

this is IMO the most straightforward way to think about SRC with an
arbitrary sample rate ratio.
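A minimal sketch of that recipe in Python (the tap count, filter design,
and names are my own choices, not rbj's): the prototype lowpass is split
into 441 coefficient sets, the output time advances by 80/441, and the
integer/fractional parts of the time select the input samples and the
coefficient set for a simple dot product.

```python
import numpy as np
from scipy.signal import firwin

# Normalize input sample time to 1, step output time by 80/441, and split
# it into an integer part (which input samples to use) and a fractional
# part (which of the 441 coefficient sets to use).
L, M = 441, 80                     # 8 kHz -> 44.1 kHz
taps_per_phase = 16                # illustrative FIR length per phase

# Prototype lowpass at the conceptual upsampled rate: cutoff at the input
# Nyquist (1/L of the upsampled Nyquist), gain L to undo zero-stuffing loss.
proto = L * firwin(L * taps_per_phase, 1.0 / L)
phases = proto.reshape(taps_per_phase, L).T    # phases[p] == proto[p::L]

def resample(x):
    y = []
    t = float(taps_per_phase)      # start once a full FIR history exists
    while t < len(x):
        n = int(t)                 # integer part: newest input sample to use
        frac = t - n               # fractional part in [0, 1)
        p = int(frac * L)          # selects one of the L coefficient sets
        seg = x[n - taps_per_phase + 1 : n + 1][::-1]   # newest sample first
        y.append(np.dot(phases[p], seg))                # simple dot product
        t += M / L                 # reciprocal of the SRC ratio
    return np.array(y)
```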

--

r b-j [email protected]

"Imagination is more important than knowledge."

dbell
09-20-2005, 06:48 PM
Are you always assuming a rational conversion rate? Can it be dynamic?
What is the application?

dirk

09-20-2005, 07:38 PM
robert bristow-johnson wrote:
> > I am doing arbitrary sample rate conversion.
> it's all "polyphase method".

It's all just interpolation. Polyphase is just an optimization of
windowed-sinc reconstruction. This optimization precalculates the
coefficients of multiple and repeated sample interpolations. You
could also just cache coefficients or calculate them on the fly as
needed, depending on your performance needs and the memory available,
etc., or interpolate pre-calculated coefficients if your conversion
ratio is non-rational or time-varying. If you do precalculate, the
table size need not have anything to do with the conversion ratio,
but only with the table resolution or table interpolation error
desired.
If your performance needs are low, the window is simple, and you have
a fast sin/cos library, you might be able to calculate each coefficient
as needed and not use any table memory.
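As a tiny illustration of that no-table option (the Hann window and the
half-width here are my choices of "simple window", not the poster's): one
windowed-sinc tap computed on demand at an arbitrary continuous offset.

```python
import math

# One windowed-sinc coefficient computed as needed at continuous offset t
# (in input-sample periods), so no coefficient table is stored at all.
def coeff(t, half_width=10.0):
    if abs(t) >= half_width:
        return 0.0
    window = 0.5 * (1.0 + math.cos(math.pi * t / half_width))  # Hann window
    # Normalized sinc sin(pi t)/(pi t), with the t = 0 limit handled.
    sinc = 1.0 if t == 0.0 else math.sin(math.pi * t) / (math.pi * t)
    return window * sinc
```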

> http://groups.google.com/group/comp.dsp/msg/e9b6488aef1e2580?hl=en&fwc=1

Good source for a formulation of the windowed-sinc reconstruction
interpolation.


IMHO. YMMV.
--
rhn A.T nicholson d.O.t C-o-M

robert bristow-johnson
09-20-2005, 09:34 PM
in article [email protected]. com,
[email protected] at [email protected] wrote on 09/20/2005 14:38:

> robert bristow-johnson wrote:
>>> I am doing arbitrary sample rate conversion.
>> it's all "polyphase method".
>
> It's all just interpolation.

that's true.

> Polyphase is just an optimization of windowed-sinc reconstruction.

i would say it's just an optimization of *interpolation*. not all
interpolation is windowed-sinc. especially interpolation done in SRC. the
most optimum set of interpolation coefficients (by most definitions of the
word "optimum") are not a windowed-sinc.

> This optimization precalculates the
> coefficients of multiple and repeated sample interpolations.

ah! i see. so you're saying that if i fully calculate my coefficients on
the fly, it ain't "polyphase filtering"?

> You could also just cache coefficients or calculate them on the fly as
> needed, depending on your performance needs and the memory available,
> etc., or interpolate pre-calculated coefficients if your conversion
> ratio is non-rational or time-varying.

in the mind of the computer, i think that everything is rational. anyway,
what you're describing is what happens for ASRC.

--

r b-j [email protected]

"Imagination is more important than knowledge."

Jon Harris
09-21-2005, 06:10 AM
"dingke1980" <[email protected]> wrote in message
news:[email protected]...
>
> Hi,everybody,
>
> I am doing arbitrary sample rate conversion. It may be very easy to do
> when the up factor L and down factor M is very small(e.g. L=3, M=4). But
> when L and M is very large, it seems difficult to me.
> say 8->44.1k L=80, M=441, I have tried two method.
>
> 1. windowed-sinc method. The sinc table totally have twenty lobes with 10
> in left and 10 in right, each lobe has 512 samples. Then multiplied this
> table with a Kaiser window (beta = 9). Take a 8KHz sweep signal as a test
> stream, whose freq comes from 1hz to 4000Hz (Nyquste frq). When converting
> this stream to 44.1KHz, the resampled stream is pretty good when the freq
> is low. But for freq near 4000Hz, thing is bad. there's a "tail" whose
> freq is more than 4000Hz.

What is the cut-off frequency of the resulting filter? I'm guessing it is
exactly the Nyquist rate, 4 kHz. If so, you can get better performance by
moving the filter cut-off in. The trade-off is that you attenuate signals
near 4 kHz more, but often such a trade-off is necessary. Otherwise, you
can make your filter longer. Do you have any restrictions on processing
speed or memory? If not, crank up the filter length!
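A sketch of that cut-off trade-off (the tap count and the 3.6 kHz figure
are my illustrative choices): design the anti-imaging filter at the
conceptual 441x-upsampled rate with its cut-off exactly at the input
Nyquist versus pulled in, and compare the response just above 4 kHz,
where the sweep's spurious "tail" shows up.

```python
import numpy as np
from scipy.signal import firwin, freqz

fs_up = 8000 * 441        # conceptual upsampled rate
ntaps = 441 * 20 + 1      # illustrative length, ~20 taps per phase

h_nyq = firwin(ntaps, 4000.0, fs=fs_up)   # cut-off exactly at Nyquist
h_in = firwin(ntaps, 3600.0, fs=fs_up)    # cut-off moved in to 3.6 kHz

# Evaluate both responses at 4.2 kHz, just above the input Nyquist.
_, H_nyq = freqz(h_nyq, worN=[4200.0], fs=fs_up)
_, H_in = freqz(h_in, worN=[4200.0], fs=fs_up)
print(20 * np.log10(abs(H_nyq[0])), 20 * np.log10(abs(H_in[0])))
```

The pulled-in filter suppresses the out-of-band residue noticeably more,
at the cost of extra droop approaching 4 kHz.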

> 2. Polyphase method. the straight-forward way is upsample 80 times,
> filtering then downsample 441 times. I know using polyphase structure can
> save a lot of computation since the upsample is actually zero stuffing.
> But anyway, how to design the lowpass filter here whose normalized pass
> band freq is 1/441*Pi? it's too narrow! Any idea to design this filter??

One approach is to design the filter at an integer multiple of the required
cut-off, then interpolate the interpolation filter by that same factor.
Usually a simple interpolation suffices for this, because the filter
coefficients are very smooth (highly oversampled).
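A sketch of that idea (the 63 * 7 factoring and the tap count are my
choices): design the prototype for 63 phases, whose cut-off pi/63 is 7x
wider and therefore much easier, then interpolate its impulse response by
7 to reach the 441 phases actually needed.

```python
import numpy as np
from scipy.signal import firwin

taps_per_phase = 16
L_coarse, factor = 63, 7           # 63 * 7 = 441 phases after interpolation

# Easier design problem: cut-off 1/63 instead of 1/441.
h_coarse = firwin(L_coarse * taps_per_phase, 1.0 / L_coarse)

# The highly oversampled impulse response is very smooth, so plain linear
# interpolation between adjacent coarse taps is usually good enough.
n_full = np.arange(len(h_coarse) * factor) / factor
h_full = np.interp(n_full, np.arange(len(h_coarse)), h_coarse)
```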