This article describes FPGA methodologies and is intended for FPGA designers as well as managers. For it to be useful and relevant we need to calibrate our scope and definition of the topic since it can mean different things to different people.
As a former Sr. Xilinx FAE for 11 years I experienced a wide variety of FPGA projects in terms of size, complexity, duration, and success. At one extreme was a large company using many high-end FPGAs in a 4G base-station system, with three design teams each in a different country. At the other extreme might be a small R&D group with a part-time FPGA designer using a low-end FPGA for relatively simple functions.
One thing these extremes have in common is the use of FPGA technology, which in turn implies a common, or at least similar, subset of tools. What’s not in common, or shouldn’t be, is the FPGA development methodology; one size does not fit all. The best methodology for a particular program will probably need to be tailored, to a degree, based on several factors. So how do you tailor the best methodology?
One of the best ways is to gain insight into what has worked before and, just as important, what didn’t work and why. This insight can come from your own experience or from others’. To make this article useful I’ll share a ‘real-world’ experience by walking through several methodology scenarios that evolved during the course of a large, multi-team project.
First, here are some basic guidelines to follow:
For large, multi-team projects be careful of developing a methodology by committee. What looks good on paper may not be practical. And once you have a reasonable methodology instill discipline so it’s not abandoned at the slightest schedule slip.
Customers should first assess their core competencies for a good impedance match with their main goals. With smaller, less challenging FPGA designs it’s reasonable to expect a single designer to be responsible for both the design and FPGA implementation.
However, with larger, higher-performance designs typically requiring team-based flows, it’s not reasonable to expect a single person to acquire and maintain multiple core competencies. In this case some customers designate an in-house FPGA ‘specialist’ to keep up with the latest tools and handle all FPGA implementation for the design team. Equipped with the fastest computers known to man and an ability to log in remotely 24/7, these guys are ready to close timing at a moment’s notice.
Initially, this flow looks good on paper and can work, but the FPGA expert (with his ‘cloud’ computers) usually becomes a bottleneck due to non-optimal code or insufficient timing constraints generated by the design team. See Figure 1. In general, Place and Route (PAR) is that bottleneck. After the designer finds a bug and modifies his code, it can take more than a day to close timing and get a bitfile back into the lab for testing. This can bring a large development to its knees.
As the team’s FPGA learning curve progresses midway into the project, they may attempt to offload the FPGA expert by doing module-level implementation (i.e., PAR). This can work but usually comes a bit too late to keep the program on schedule. See Figure 2.
Later in the project, or because teams are expected to do more with less, a dedicated in-house FPGA expert becomes a faded luxury and is pulled into other critical paths. Though the concept of designating an in-house FPGA ‘specialist’ makes sense, it can break down in practice. See Figure 3.
In hindsight, the best methodology to start with was not the one on paper but the one that evolved at mid-project (Figure 2). Understanding the tools and flow is one thing, and can almost be learned from online training. However, understanding the practical issues magnified in large projects is just as important, and is usually only learned on the job.
In this team-based scenario the FPGA specialist should be stitching together multiple sub-blocks, resolving netlist dependencies, integrating sub-level constraints, generating top-level constraints, and running top-level builds. Perhaps a little more or less, but his responsibility should correspond to his knowledge of the design and his control over it. For example, he does not know the lower-level design well enough to recognize multi-cycle paths, so if the designer didn’t provide those exceptions the result is an over-constrained design, where even the most powerful computers known to man can spend hours or even days trying to close timing, unnecessarily.
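To make the multi-cycle-path point concrete, here is a sketch of what such a timing exception looks like in Xilinx’s XDC constraint format. This is illustrative only: the project described may have used an earlier tool generation and format, and the register names (`acc_reg`, `result_reg`) are hypothetical. Only the sub-block’s designer knows that this path has multiple clocks to settle; without the exception, the tools try to close it in one cycle.

```tcl
# Hypothetical example: a result register that is only sampled every
# 4th clock. Relax the setup requirement to 4 cycles; the matching
# hold exception is conventionally one cycle less than the setup value.
set_multicycle_path 4 -setup \
    -from [get_pins proc_core/acc_reg*/C] \
    -to   [get_pins proc_core/result_reg*/D]

set_multicycle_path 3 -hold \
    -from [get_pins proc_core/acc_reg*/C] \
    -to   [get_pins proc_core/result_reg*/D]
```

The point is that this two-line exception, which only the sub-block designer can safely write, is what keeps the top-level build from burning hours chasing a false single-cycle requirement.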
Before the project begins, each designer should be given adequate training on implementation and timing analysis for the targeted FPGA. Besides HDL coding and simulation, their responsibility should include implementing their sub-design and analyzing its timing. Implementing only one sub-block may not challenge PAR and will produce overly optimistic timing reports, which can seem like a waste of time. However, costly downstream log jams can be avoided by detecting their cause at this stage, where it’s quickest and easiest to fix. Keep in mind that PAR will run fast at this level, and the designer, with intimate knowledge of his design, can modify and re-simulate quickly.
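As one sketch of what such a designer-level build might look like, the following Vivado non-project Tcl script implements a single sub-block out-of-context so its timing can be checked early. The module name `rx_datapath`, the file paths, and the part number are all assumptions for illustration; the actual project may have used a different tool generation or flow.

```tcl
# Hypothetical module-level build: synthesize and place/route one
# sub-block in isolation so the designer sees real timing quickly.
read_vhdl [glob ./src/rx_datapath/*.vhd]
read_xdc  ./constr/rx_datapath_ooc.xdc

# Out-of-context mode keeps the tools from inserting top-level I/O
# buffers on the sub-block's ports.
synth_design -top rx_datapath -part xc7k325tffg900-2 -mode out_of_context
opt_design
place_design
route_design

report_timing_summary -file rx_datapath_timing.rpt
```

A run like this finishes far faster than a full-chip build, which is what lets the designer iterate on code and constraints before anything is handed to the integrator.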
Each designer should work through a checklist, one of the first and easiest items being timing analysis of unconstrained paths: verify and document every legitimately unconstrained path. Other items include reviewing worst-case paths, in terms of both margin and levels of logic, and looking for paths that cross clock domains. Recognizing potential issues is where training and experience come in.
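As one possible starting point for such a checklist, the following Vivado Tcl commands, run on a routed sub-block, generate a report for each item above. The command set and options are Vivado’s; the report file names are illustrative, and an older flow would use the equivalent ISE/TRACE reports instead.

```tcl
# Unconstrained endpoints: every path listed here must be either
# constrained or documented as legitimately unconstrained.
check_timing -file check_timing.rpt

# Worst-case setup/hold margin across the sub-block.
report_timing_summary -delay_type min_max -file timing_summary.rpt

# Worst paths in detail: review margin AND levels of logic per path.
report_timing -sort_by group -max_paths 10 -file worst_paths.rpt

# Paths crossing clock domains, flagged by crossing topology.
report_cdc -file cdc.rpt
```

Each report maps to one checklist line, which makes the sign-off before top-level hand-off a mechanical review rather than a judgment call made under schedule pressure.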
The bottom line of this methodology is that any issues with these or other key indicators get addressed BEFORE the design is passed on for top-level implementation.