[email protected] (Rune Allnor) wrote in message news:<
[email protected]>...
>
[email protected] (Peter J. Kootsookos) wrote in message news:<
[email protected]>...
> > Hi All,
> >
> > I was just flipping through the October issue of JASA and came across
> > the "Aggregate Beamformer" idea from David I. Havelock.
....
> At the outset I would approach this article with the ambition of finding
> out where the flaw is.
I've got a copy of the paper and have browsed very quickly through it.
It took some time before I understood what the idea is; the article
would be much more readable if figures 9 and 10 came directly after
figures 1 and 2.
Anyway, the case in question is that an N-sensor array is sampled using
an analog channel switch and one ADC. Thus, the output signal from the ADC
is effectively a time multiplexed mixture of the individual channels.
Havelock's argument is that if this mixture is systematic (fig. 9), one
has to demultiplex all the channels before processing them (which is what
regular beamformers do), but if the time mux scheme is "random", the mux'ed
time sequence itself resembles a random one (fig. 10) and can be processed
in its own right. So Havelock designs a "random" multiplexing scheme that
scrambles the multiplexed sequence, processes this scrambled sequence in
the computer, which effectively substitutes SW decimation filters for HW
antialias filters, and finds a discrete representation of the input signal.
So far so good.
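To illustrate the difference between the two mux schemes as I read them (a
minimal sketch of my own, not code from the paper; channel counts and the
scrambling method are made up for illustration):

```python
import random

random.seed(0)

n_ch = 8        # number of array channels (illustrative)
n_cycles = 4    # number of full array scans ("snapshots")

# Sequential mux (fig. 9 style): the single ADC visits the channels
# in the fixed order 0..N-1 on every scan.
seq_order = list(range(n_ch)) * n_cycles

# "Random" mux (fig. 10 style): the channel order is scrambled within
# each scan, so the one-ADC output stream itself resembles a random
# sampling of the wavefield and can be processed in its own right.
rand_order = []
for _ in range(n_cycles):
    perm = list(range(n_ch))
    random.shuffle(perm)
    rand_order.extend(perm)
```

In both cases every channel is visited exactly once per scan; only the
ordering of the ADC slots differs.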
The processing scheme relies extensively on intimate knowledge of
non-uniform sampling theory and multirate signal processing. I don't think
I have enough knowledge to comment on these things. What I would like to
see, though, is a more detailed description of the sample scrambling scheme.
The C code segment at the end of section II in the paper mentions an index
buffer S that keeps information about the channel sampling sequence. Is this
sequence truly "random", i.e. updated and randomized between each array
sampling cycle, or is it merely "arbitrary", in that the channel sampling
sequence is non-sequential but fixed once determined? Apparently, the paper
does not answer this question.
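The two readings of the index buffer S could be sketched like this (the
function names and details are my own guesses about what the paper might
mean, not code from it):

```python
import random

def arbitrary_schedule(n_ch, n_cycles, seed=0):
    """One permutation drawn once, then reused for every array scan
    ("arbitrary": non-sequential but fixed once determined)."""
    rng = random.Random(seed)
    s = list(range(n_ch))
    rng.shuffle(s)
    return s * n_cycles

def random_schedule(n_ch, n_cycles, seed=0):
    """A fresh permutation drawn for every array scan
    ("truly random": re-randomized between sampling cycles)."""
    rng = random.Random(seed)
    out = []
    for _ in range(n_cycles):
        s = list(range(n_ch))
        rng.shuffle(s)
        out.extend(s)
    return out
```

If the scheme is the first one, any periodicity in the fixed permutation
would presumably show up in the processed sequence; in the second it would
not. That is why I think the distinction matters.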
The second unclear question is how rigid the requirements on the timing
of the channel sampling are. The simulation results in the figures indicate
that the "snapshot sampling rate" SSR and "ADC sampling rate" ASR are
related as
ASR = N*SSR
where N is the number of channels. I could imagine a system being specified
from the required SSR and the *maximum* number of channels, such that an
installation with fewer channels actually shows an "ADC Duty Cycle" < 100%.
I don't know how important such questions are, but could a short ADC duty
cycle counter the effects of scrambling the mux sequence? Somehow I think
it could.
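To make the duty-cycle scenario concrete (the numbers here are mine, not
from the paper):

```python
# A system dimensioned for n_max channels runs its ADC at ASR = n_max * SSR.
# With only n channels actually installed, part of the ADC slots are idle.
n_max = 16       # channels the hardware is dimensioned for (assumed)
n = 12           # channels actually installed (assumed)
ssr = 1000.0     # snapshot sampling rate, snapshots/s (assumed)

asr = n_max * ssr        # ADC sampling rate the hardware runs at
duty_cycle = n / n_max   # fraction of ADC slots that carry channel data
```

Here the ADC runs at 16 kHz but only 75% of its slots carry data; the
question is whether those idle slots reintroduce structure that the
scrambling was meant to remove.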
Last, the examples of the beamformer performance only show one signal
present, so I can see no indication of how well the beamformer
suppresses interference from directions other than the steering direction.
In summary, this could be a very interesting approach to beamforming. It
appears that Havelock chooses an approach to the signal processing that
seems to be somewhat related to dithering that has been discussed in another
thread. The main idea is to reduce the cost of the system hardware related
to signal sampling, possibly at the expense of increased demands on
processing management SW and HW.
Rune