<[email protected]> wrote in message news:[email protected]...
> Randy, first of all I want to thank you for spending some time on this
> problem.
> Yes, by random I mean random for t=0,...,N-1 then these values are
> repeated periodically for t=-\infty...+\infty.
> This is consistent with the fact that I use the FFT to do the sinc
> interpolation (I FFT the data, multiply by e^{-j*2*pi*k*dx/N}, then
> inverse FFT).
> The original problem that I have is the following:
>
> 1. I generate N random samples x_t \in [0,A], t=0,...,N-1
> 2. I compute the FFT X_k=FFT{x_t}
> 3. Now I apply a phase shift in the frequency domain:
> Y_k=X_k*e^{-j*2*pi*k*dx/N}
> 4. I inverse transform Y_k: y_t=IFFT{Y_k}
>
> Now I notice that some of the y_t are <0, some are > A.
> So the question is: what kind of filtering/transformation should I
> apply to the x_t between steps 1. and 2. to ensure that all the y_t \in
> [0,A]?
> Now that I think more about it, it seems that y_t should have *more*
> harmonics, so maybe filtering x_t does not solve the problem.
> Now you are wondering what this is all about, huh?
> I want to generate some synthetic images as ground truth to test an
> image registration algorithm. By doing this in the frequency domain I
> can enforce these images to be "perfectly" aligned, if the images are
> also bandlimited. This brings us back to the original question: like
> you pointed out, one cannot generate random samples and expect them to
> be samples of a bandlimited function.
> Is this more clear now?
>
> -Arrigo
Arrigo,
I'm not sure that I agree with some of the responses you've received. Or
maybe I don't disagree completely, just in a practical sense.
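For concreteness, your steps 1-4 can be sketched in numpy as below (N, A and
dx are arbitrary choices of mine; note that the phase ramp must carry the bin
index k):

```python
import numpy as np

rng = np.random.default_rng(0)
N, A, dx = 64, 1.0, 0.5             # N samples in [0, A], half-sample shift

x = rng.uniform(0.0, A, N)          # step 1: random samples
X = np.fft.fft(x)                   # step 2: FFT
k = np.fft.fftfreq(N, d=1.0 / N)    # signed bin indices 0..N/2-1, -N/2..-1
Y = X * np.exp(-2j * np.pi * k * dx / N)  # step 3: phase shift
y = np.fft.ifft(Y).real             # step 4: inverse FFT

print(y.min(), y.max())             # typically strays outside [0, A]
```

Running this shows the overshoot you describe: a half-sample sinc
interpolation of bandlimited-by-assumption random data rings past the
original range.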
Here are some questions:
Q: If you create a sequence of random numbers, what is the bandwidth?
Q: Better stated: if you create a sequence of random numbers and assume a
sample interval T, what is the bandwidth?
A: I think you will find that the bandwidth is 1/(2T) for all practical
purposes.
Q: Can a sequence of random numbers be "reconstructed"?
A: Most probably yes.
Q: If so, what does that mean?
A: I think it means:
1) Start with a sequence of random numbers on an assumed regular grid in
time or space
2) Interpolate them with a suitably large family of shifted sincs of
suitable duration.
3) Sample the result at the same points as before.
Do the samples match? If yes, the sequence was faithfully reconstructed.
If no, then it wasn't.
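A minimal sketch of that check (assuming a sample interval T = 1 and plain,
non-periodic sincs; sinc_interp is a hypothetical helper, not a library
routine):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 16
x = rng.uniform(0, 1, N)        # 1) random samples on a regular grid (T = 1)

def sinc_interp(x, t):
    """2) interpolate with the family of shifted sincs sinc(t - n)."""
    n = np.arange(len(x))
    return np.sum(x * np.sinc(t[:, None] - n[None, :]), axis=1)

t = np.arange(N, dtype=float)   # 3) resample at the original points
x_rec = sinc_interp(x, t)

assert np.allclose(x_rec, x)    # samples match: faithfully reconstructed
```

The samples match trivially here because sinc(0) = 1 and sinc is zero at all
other integers; the interesting cases are the ones below, where the
interpolation is done circularly with arrays that are too short.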
Q: How might one do this such that resampling the reconstruction gives a
different result?
A: If temporal aliasing is allowed to occur in the interpolation.
Q: How can temporal aliasing be caused by interpolating?
A: If the interpolation is done by multiplication in the frequency domain
and the frequency-domain array is too short, the resulting temporal circular
convolution wraps around. Or, equivalently, if the interpolation is done by
temporal convolution using circular convolution and the array sizes are too
short - causing overlap / aliasing.
A: Also if the interpolation is done in the frequency domain by appending
zeros around fs/2 and there is substantial energy at fs/2. In that case
there will be a sharp discontinuity at fs/2 when the zeros are added. This
will cause higher "quefrencies" to occur in the time domain - thus aliasing
in time.
Q: How can one guarantee the use of dynamic range in reconstruction of an
arbitrary set of samples?
A: You can't. So, you are perhaps stuck with doing some scaling.
Q1: What happens if one generates a sequence of random numbers at 100fs,
perfectly lowpass filters them to fs/2 and decimates the result to a
sequence sampled at fs - i.e. by a factor of 100?
Q2: Why is that different than simply generating the samples at fs in the
first place?
Q3: Why is that different than the result obtained from the lowpassed case
above?
Q4: Why is that different than selecting every 100th sample from the
sequence generated at 100fs?
Q5: What happens if you take any of the sequences from Q2, Q3 or Q4 and
reconstruct them using a sample rate of 100fs? Using reasonable assumptions
about how you go about this, do you get the sequence generated in Q1?
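Q1 and Q4 can be tried numerically. The sketch below uses a brick-wall FFT
lowpass as a stand-in for the "perfect" filter, and a factor of 10 rather
than 100 to keep it small:

```python
import numpy as np

rng = np.random.default_rng(2)
M, N = 10, 64                       # oversampling factor, output length
x_hi = rng.uniform(0, 1, M * N)     # random sequence generated at M*fs

# Q1: brick-wall lowpass to fs/2, then decimate by M
X = np.fft.fft(x_hi)
X[N // 2 + 1 : -(N // 2)] = 0       # zero all bins above fs/2 (keep |k| <= N/2)
x_lp = np.fft.ifft(X).real
q1 = x_lp[::M]

# Q4: just take every M-th raw sample
q4 = x_hi[::M]

print(np.max(np.abs(q1 - q4)))      # the two sequences differ
```

The decimated-after-filtering sequence is genuinely bandlimited to fs/2; the
raw every-M-th-sample sequence is not, which is the crux of the questions
above.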
Consider this:
You started with a random sequence that has a limited dynamic range. So,
it's not Gaussian. If it were, it would have some probability of having
very large values.
If you were to generate two independent sequences with the same limited
dynamic range and add them together, the new dynamic range limit would be 2x
the original. Three would be 3x, and so on.
If we examine the probability distribution of amplitudes of the output, we
see that the distribution becomes more and more like Gaussian. The
probability of hitting the theoretical maximum values gets lower and lower
as more sequences are added. See the Central Limit Theorem.
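A quick numerical illustration: summing k independent uniform sequences
gives a theoretical range of k times the original, but the observed extremes
fall further and further short of it:

```python
import numpy as np

rng = np.random.default_rng(3)
A, n = 1.0, 100_000
for k in (1, 2, 4, 8):
    # sum of k independent uniform [0, A] sequences
    s = rng.uniform(0, A, (k, n)).sum(axis=0)
    # theoretical max is k*A; the observed max lags it as k grows
    print(k, k * A, s.max())
```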
Similarly, an interpolating filter adds the values of multiple samples. The
only difference from above is that they are weighted by the filter
coefficients. Accordingly, the new dynamic range limit will be the original
multiplied by the sum of the magnitudes of the filter coefficients (assuming
it is a FIR filter). Take some examples:
- All of the inputs are at the maximum of the dynamic range.
...The output is the sum of the filter coefficients multiplied by the maximum
of the dynamic range.
- All of the inputs are at the maximum of the dynamic range and have the
same signs as the coefficients of the filter.
...The output is the sum of the magnitudes of the filter coefficients
multiplied by the maximum of the dynamic range - and is larger than in the
first example.
Obviously, selecting the relative magnitudes of the coefficients of the
filter will impact the result. This is just one method of scaling.
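That worst-case bound is easy to check numerically: for any FIR filter h,
the output magnitude never exceeds max|x| times the sum of |h|, and the
bound is attained when the input signs line up with the (time-reversed)
coefficients:

```python
import numpy as np

h = np.array([0.2, -0.5, 1.0, -0.5, 0.2])   # an arbitrary FIR filter
A = 1.0                                     # input dynamic range [-A, A]

bound = A * np.sum(np.abs(h))               # worst-case output magnitude

# convolution flips the input, so align signs with the reversed taps
x_worst = A * np.sign(h[::-1])
y = np.convolve(x_worst, h, mode='full')
assert np.isclose(np.max(np.abs(y)), bound) # the bound is attained
```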
You will notice that I stayed away from the theoretical treatments and tried
to build from practical situations. It bothers me when a practical solution
exists in the midst of a proof that says one might not exist - without some
mention of the probability of failure or the conditions necessary to fail,
etc.
Fred