FPGAs: Field-Programmable Gate Arrays for Configurable Computing

This brief guide introduces configurable computing in the form of Field-Programmable Gate Arrays, or FPGAs. The goal of this page is to provide an entry-level overview of what FPGAs are, their primary uses (with a couple of real-world examples), and a description of the internal workings of an FPGA.

A Brief History, and a Definition

The concept of configurable computing was proposed in the 1960s; the first commercially available FPGA appeared in 1985. An FPGA is a logic chip comprising an array of logic blocks connected by software-configurable interconnections; it is essentially a blank slate on which a designer can build a circuit on the fly, from simple combinational logic to a complete finite state machine. Early FPGAs contained only small numbers of logic cells and took entire seconds to reconfigure; today, an FPGA can contain hundreds of thousands of cells and can be reconfigured in microseconds.

A useful comparison can be made between FPGAs and ASICs, Application-Specific Integrated Circuits. From the Webopedia: "a chip designed for a particular application. ASICs are built by connecting existing circuit building blocks in new ways. Since the building blocks already exist in a library, it is much easier to produce a new ASIC than to design a new chip from scratch." An integrated circuit designer can use an FPGA to design a chip dynamically, test it, reconfigure it, and settle on a design that can then be used to manufacture an ASIC. Since an ASIC's design is fixed at manufacture, this approach reduces the turnaround time for a chip design without the extra cost of producing physical prototypes. There is also the added advantage that the design can be modified in an FPGA as the changing needs for a chip drive the design's evolution; in fact, a system built with FPGAs can be upgraded by the system manufacturer as chip designs evolve.

FPGAs have several disadvantages compared to fixed chips, however: the programmability comes with the overhead of extra hardware to perform the programming, slower speed resulting from the extra resistance introduced by the programming hardware, and additional cost and size because of that extra hardware. An argument can also be made that FPGAs are too easy to fiddle with, leading to design creep. The tradeoff boils down to versatility (advantage: FPGA) versus speed (advantage: ASIC). Cost issues are more difficult to compare, since the cost of designing and producing an ASIC is amortized over the production run, while the cost of an FPGA is the initial chip cost plus ongoing design costs.

A Peek Inside

The basic components of an FPGA are Configurable Logic Blocks (CLBs) containing Lookup Tables (LUTs) to implement combinational logic (with n inputs into the table, there are 2^n lookup values), Programmable Interconnect Points (PIPs) to create circuit connections within and between logic cells, and multiplexers to control input and output. The chip is built as an array of CLBs surrounded by configurable I/O Blocks for chip input and output. Each PIP controls the connection of wiring segments with a programmable pass transistor that conducts or not depending on the value stored in the PIP's memory cell.
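To make the LUT idea concrete, here is a minimal software sketch (Python is used purely for illustration; real LUTs are configured in silicon, and the function names here are invented for this example). An n-input LUT is just a table of 2^n single-bit entries indexed by the input bits, so it can implement any Boolean function of n inputs simply by filling in its truth table:

```python
def make_lut(truth_table):
    """Model an n-input LUT: truth_table holds 2^n single-bit entries,
    indexed by the input bits packed into an integer."""
    def lut(*inputs):
        # Pack the input bits into a table index, first input as the
        # most significant bit.
        index = 0
        for bit in inputs:
            index = (index << 1) | bit
        return truth_table[index]
    return lut

# A 2-input LUT configured as an XOR gate: entries for inputs 00, 01, 10, 11.
xor_gate = make_lut([0, 1, 1, 0])
print(xor_gate(0, 1))  # -> 1
print(xor_gate(1, 1))  # -> 0

# The same mechanism implements any function, e.g. 3-input majority vote.
majority = make_lut([0, 0, 0, 1, 0, 1, 1, 1])
```

Reprogramming the chip amounts to loading different truth tables into the LUTs and different bits into the PIP memory cells.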

The chip can then be connected to an external microprocessor and used for specific tasks, generally computationally intensive tasks with little input and output, since the I/O on the chip and the bus connecting the chip to the processor form a bottleneck. This setup allows the FPGA to be reconfigured on the fly to address the specific computing needs of the processor; since the reconfiguration speed has increased greatly, on-the-fly reconfiguration is becoming more viable, with the goal of creating a computer that can reorganize its own hardware to adapt to its computing tasks.

The logic circuit is designed by the programmer in software (a common hardware description language for FPGA design is VHDL) and then transferred onto the FPGA chip. FPGA manufacturers provide software suites for configuring the chips; Xilinx and Altera are two prominent FPGA manufacturers.

Uses in Computing

Two major computational areas have found a natural home in FPGAs: cryptography and bioinformatics. Since FPGAs are best suited to computationally intensive tasks with little I/O, computer science fields based on mathematical algorithms (like cryptography) or dynamic programming algorithms (like bioinformatics) are intrinsically suited to FPGA use.

For example, an FPGA can be used in an encryption scheme to perform the encryption using whatever encryption algorithm is programmed into it, and the same chip can be used for multiple rounds of encryption that combine different encryption algorithms. Since the chip needs only a key and data to encrypt as input, produces only an encrypted data stream as output, and performs extensive computation in between, it fits the FPGA model well.
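The multi-round idea can be sketched in software as a pipeline of round functions, much as an FPGA could be reconfigured with a different algorithm for each round. (This is a toy illustration, not a real cipher; the round functions below are invented for this example.)

```python
def xor_round(data, key):
    """Toy round: XOR each byte with a repeating key."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def rotate_round(data, key):
    """Toy round: rotate each byte left by an amount derived from the key."""
    shift = key[0] % 8
    return bytes(((b << shift) | (b >> (8 - shift))) & 0xFF for b in data)

def encrypt(data, key, rounds):
    # Apply each round's algorithm in turn, as a reconfigured FPGA might.
    for round_fn in rounds:
        data = round_fn(data, key)
    return data

ciphertext = encrypt(b"attack at dawn", b"\x5A\x3C", [xor_round, rotate_round])
```

In hardware, each round function would be a configuration loaded into the chip; the data streams through while the computation stays on-chip.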

Similarly, dynamic programming algorithms can be programmed into an FPGA. A dynamic programming algorithm solves an optimization problem by solving, remembering, and recombining subproblems to reach a conclusion. A classic example is the Longest Common Subsequence problem: given two strings, find the longest sequence of letters shared by both strings, in order (e.g., "amortization" and "sympathetic" would yield "mtti"). The dynamic programming algorithm for LCS builds a table of subproblem solutions, the LCS lengths for every pair of string prefixes; the final entry in the table gives the length of a longest common subsequence (of which there may be several), and the subsequence itself is recovered by tracing back through the table.
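The table-filling and traceback steps can be sketched as follows (a plain software version; an FPGA implementation would compute many table cells in parallel):

```python
def longest_common_subsequence(a, b):
    """Classic dynamic programming solution to the LCS problem.
    table[i][j] holds the LCS length of the prefixes a[:i] and b[:j]."""
    m, n = len(a), len(b)
    table = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                table[i][j] = table[i - 1][j - 1] + 1
            else:
                table[i][j] = max(table[i - 1][j], table[i][j - 1])
    # Trace back through the table to recover one LCS.
    result, i, j = [], m, n
    while i > 0 and j > 0:
        if a[i - 1] == b[j - 1]:
            result.append(a[i - 1])
            i -= 1
            j -= 1
        elif table[i - 1][j] >= table[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(result))

print(longest_common_subsequence("amortization", "sympathetic"))
# prints one longest common subsequence (length 4 for this pair)
```

Each table cell depends only on its left, upper, and upper-left neighbors, which is exactly the kind of regular, local computation that maps well onto an FPGA's grid of logic cells.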

In bioinformatics, and specifically in the Human Genome Project, one common area of research is comparing databases of protein sequences in order to find potential matches between different proteins. Since a protein sequence is a string of letters (each representing an amino acid), good pattern-matching algorithms are vital to producing valid relationships between proteins. FPGAs are used to implement the dynamic programming at the heart of pattern-matching algorithms such as Smith-Waterman and BLAST to produce protein sequence alignments.
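As a sketch of the dynamic programming behind local sequence alignment, here is a simplified Smith-Waterman score computation with a flat match/mismatch/gap scheme (real implementations use substitution matrices such as BLOSUM and affine gap penalties; the parameter values here are illustrative assumptions):

```python
def smith_waterman_score(seq1, seq2, match=2, mismatch=-1, gap=-2):
    """Compute the best local-alignment score between two sequences
    using the Smith-Waterman dynamic programming recurrence."""
    rows, cols = len(seq1) + 1, len(seq2) + 1
    score = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = score[i - 1][j - 1] + (match if seq1[i - 1] == seq2[j - 1] else mismatch)
            # A local alignment may start anywhere, so scores never go below 0.
            score[i][j] = max(0, diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
            best = max(best, score[i][j])
    return best
```

As with LCS, each cell depends only on three neighbors, so an FPGA can compute an entire anti-diagonal of the table in one step, which is why these algorithms map so well onto configurable hardware.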

Conclusion

The evolution of FPGAs has followed the familiar Moore's Law pattern of general computing: the number of logic cells embeddable in a chip has doubled roughly every 18 months. Soon, FPGAs should contain millions of logic cells, making them incredibly powerful and inexpensive for large-scale computational tasks. Rather than relying on multiprocessing systems, we can develop FPGA-based systems that can be reconfigured on the fly to perform extremely complicated algorithms, offloading the intensive computation from the processors.

FPGAs also provide integrated circuit emulation, which speeds up the process of producing ASICs; even students can use FPGAs in classes like CMSC 411 to produce their own processors. FPGAs should soon contain even more processor-like capabilities and dedicated memory, further increasing their value in high-throughput computer systems by freeing the processors from unwieldy computing tasks with hardware that can adapt itself to the input data as it processes.


Sources, Links and References

Configurable Computing, by John Villasenor and William H. Mangione-Smith, Scientific American, June 1997: http://www.sciam.com/0697issue/0697villasenor.html

Field-Programmable Gate Array Technology, edited by Stephen M. Trimberger. Kluwer Academic Publishers, Boston, MA, 1994.

Field Programmable Gate Arrays (FPGAs), by Richard Larry Ukeiley. PTR Prentice Hall, Englewood Cliffs, NJ, 1993.

Compugen Bioccelerator: http://shag.embl-heidelberg.de:8000/Bic/Bicbioc-faq.html

VHDL examples: http://www.erc.msstate.edu/~reese/vhdl_synthesis/icas_sys.html#acsa

FPGA Glossary: http://www.fpgacpu.org/glossary.html

Xilinx: http://www.xilinx.com/index.shtml

IBM Emulation FAQ: http://www.chips.ibm.com/micronews/vol4_no2/emulation.html

TimeLogic Bioinformatics systems: http://www.timelogic.com/BIODATA.HTM#OPG


written August, 2001 by D. Gaasterland for CMSC 411, Computer Systems Architecture, University of Maryland