Hello all,

I need some guidance on how to go about modeling an algorithm before I design it into an FPGA.

I'd like to use Matlab to run the simulation. Beyond a simple sanity check, I also need to optimize the size (# of bits) of several parameters. It'll be a balancing act between # of bits (and hence FPGA performance) and the precision of the algorithm's output.
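
For the bit-width sweep, what I had in mind (before committing to Simulink) is a plain Matlab script that quantizes everything to a candidate word length and compares the output against a double-precision reference. Something along these lines, where quantize_fx, my_algorithm, and my_algorithm_fx are just placeholders for my real quantizer and model:

function y = quantize_fx(x, wl, fl)
% Quantize to signed fixed point: wl total bits, fl of them fractional,
% round to nearest with saturation.
  scale = 2^fl;
  q = round(x * scale);                        % round to nearest LSB
  q = min(max(q, -2^(wl-1)), 2^(wl-1) - 1);    % saturate to wl bits
  y = q / scale;
end

% Sweep candidate word lengths against the double-precision reference.
wls = [32 40 48 56 64];
ref = my_algorithm(stimulus);                  % full-precision "golden" run
worst_err = zeros(size(wls));
for k = 1:numel(wls)
  out = my_algorithm_fx(stimulus, wls(k));     % same model, quantized internally
  worst_err(k) = max(abs(out - ref));          % worst-case output error per width
end

One thing I already realize: above roughly 53 bits a double can't represent every fixed-point code exactly, so for the larger candidates I'd probably have to switch to int64/uint64, or to a proper fixed-point data type if my toolbox provides one, rather than scaled doubles.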

The FPGA needs to run at 150+ MHz, and the parameter sizes may end up being somewhat odd (48? 56?), perhaps up to 64 bits. The algorithm is made up mostly of adders and delay elements.
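
Related to that, since the data path is mostly adders, one thing I want the model to capture is whether each adder saturates or is allowed to wrap on overflow, because the hardware will do whichever I build. The quantizer above saturates; a wrap version would be something like this (again just a sketch, and only exact in plain doubles up to roughly 53-bit words):

function y = wrap_fx(q, wl)
% Two's-complement wrap of an integer-valued signal to wl bits,
% i.e. what an FPGA adder does if I let it overflow.
  m = 2^wl;
  y = mod(q + 2^(wl-1), m) - 2^(wl-1);
end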

So far, it looks like I need to use Simulink with the Fixed-Point Blockset. Is that sufficient, or is there a better way to do this? I've been reading previous posts in different places, and it sounds like modeling delay isn't as easy as it should be. Is that true? I was able to use a 'Triggered Subsystem' to model a FF without much problem, but that was for a very simple single-bit data path.
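
What I'm tempted to do instead, at least for the algorithm-exploration phase, is skip the Triggered Subsystem and model each delay as a plain state variable in a Matlab loop: read the old value, then overwrite it, once per clock. A rough sketch of one adder-plus-delay stage, reusing the quantize_fx helper from above (the structure here is made up, not my real algorithm):

function y = adder_delay_chain(x, wl, fl)
% Bit-true model of a small adder/delay chain: each z^-1 is just a
% variable whose old value is read before the new value is stored.
  d1 = 0;  d2 = 0;                         % two delay registers, reset to zero
  y = zeros(size(x));
  for n = 1:numel(x)
    s    = quantize_fx(x(n) + d1, wl, fl); % first adder output
    y(n) = quantize_fx(s + d2, wl, fl);    % second adder output
    d2   = d1;                             % register updates, i.e. the clock edge
    d1   = s;
  end
end

Is that a reasonable way to handle multi-bit delays, or am I missing something that makes Simulink (Unit Delay blocks and the like) the better route once the data path gets wide?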

I'd appreciate any input, feedback, insight, or suggestions. I'm sending this to the VHDL/Verilog newsgroups for a wider audience.

Please replace 'hard' with 'easy' in my email addr, if replying via email.

-moe