The Wonderful World of Buses

This page is designed as a clarification of chapter 6.3 in the QA book. I hope that you will be able to understand the importance of the bus to computer design and machine speed, and how improvements and ideas have led to the bus designs of today.

The first thing we need to understand is exactly what a bus is. To answer that question we need to separate the different types of buses that may exist in a computer: the processor bus and/or memory bus (some processor and memory buses are one and the same), and the I/O bus, which is probably the most important bus since all data between the CPU and the I/O cards and devices flows on it.


The Processor Bus

This is the communication pathway between the CPU and the system bus, and possibly an external cache. The idea of this bus is to transfer data to and from the cache or system bus as fast as possible, so it runs at a speed equal to that of the CPU. This gives us the idea of a synchronous bus: one that runs with a clock, and the clock determines how data moves through the bus. On the CPU bus, data will typically reach its destination in one or two clock cycles (this all depends on the clock speed of the CPU). The disadvantages of a synchronous bus are that it can't be very long due to timing problems, and everything on it must travel at the same speed.

We now know what a processor bus is, but we have no way of comparing processor buses. To do this we can use an equation to calculate the transfer rate of the bus. First we need to know what bandwidth is: the amount of data that can travel along the bus in one second. When we multiply the clock speed of the bus by the data width of the bus and divide by 8 (there are 8 bits in a byte, so this converts bits per second into bytes per second), we get the bandwidth or transfer rate of the bus. This transfer rate represents the maximum value. As with any other formula in Comp Sci, this means that you will probably never see a machine actually run at that transfer rate on its processor bus (or any other bus!). The real speed is limited by signal propagation and by how fast data gets on the bus.
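
If it helps, the formula can be sketched in a few lines of Python (the 66 MHz / 64-bit numbers in the example are just illustrative, not figures from the book):

```python
def transfer_rate_mb_per_s(clock_mhz, data_width_bits):
    """Peak bandwidth of a synchronous bus.

    Multiply the clock speed by the data width to get megabits
    per second, then divide by 8 (bits per byte) to get
    megabytes per second.
    """
    return clock_mhz * data_width_bits / 8

# Example: a hypothetical 66 MHz processor bus that is 64 bits wide.
print(transfer_rate_mb_per_s(66, 64))  # 528.0 MB/s peak
```

Remember that this is a theoretical maximum; a real bus never sustains it.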


The Memory Bus

This bus is used to transfer data between the CPU and main memory. As stated earlier, it is either the processor bus itself or a separate, stand-alone bus. Here comes a problem: since we have just seen that the processor bus runs at high clock speeds, and we know (hopefully) that RAM runs at much slower speeds, we may have all sorts of timing problems. This problem was cleared up by using more hardware to maintain the interface between the bus and main memory. It is much harder to judge the clock rate of the memory bus (if it is separate from the processor bus): the chips that help control the flow of data, and the type of RAM modules (SIMM, DIMM, SIPP, etc.), have their own speeds. We can clearly state that the memory bus is definitely not as fast as the processor bus; it is limited by the slowness of main memory.


The I/O Bus

The I/O bus allows the computer to communicate with storage devices, modems, printers, and other peripheral devices. Here is where we get to the focus of my page, since this is where many of the improvements in bus design have been made. Three areas have become the focus for increased bus performance: faster CPUs, increasing software demands, and greater video requirements.

All three require a fast I/O bus. We have had several, and they are listed here in order of their introduction to the world:

ISA -> Industry Standard Architecture

This was the first bus, on the first PC in 1981. While it doesn't seem to fit the rapid technology turnover of every other computer component, this bus is still around and is in fact still the basis of most Pentium-type machines.

The ISA bus comes in two flavors, 8 and 16 bits. In most of today's machines you will not see any 8-bit bus connections on any board, since the 16-bit slots can handle 8-bit data. The ISA bus is set to run at a rate of 8 MHz. This yields a maximum theoretical speed of 8 MHz x 16 bits = 128 megabits/second.

The 128 must be divided by 2, the minimum number of clock cycles it takes data to travel on the bus, and again by 8 (bits per byte) to give us 8 megabytes/second.
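
The same arithmetic, worked out in Python with the two-clocks-per-transfer factor included (all figures are the ones from the text above):

```python
def isa_rate_mb_per_s(clock_mhz=8, width_bits=16, clocks_per_transfer=2):
    """Practical ISA throughput in megabytes per second.

    8 MHz x 16 bits = 128 megabits/s raw; dividing by the two
    clock cycles each transfer needs, and then by 8 bits per
    byte, leaves 8 megabytes/s.
    """
    raw_megabits = clock_mhz * width_bits
    return raw_megabits / clocks_per_transfer / 8

print(isa_rate_mb_per_s())  # 8.0
```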

This is very slow compared to the potential of the processor/memory bus, which can run more than 40 times as fast as the ISA's maximum. This is why faster buses needed to be developed: to help keep up with the speed of the rest of the machine.

Micro Channel Architecture (MCA)

As CPU speeds and data widths increased, buses needed to be developed to handle them. The MCA was designed to replace the slower ISA bus. The differences in the MCA are that the data width is 32 bits, the bus runs asynchronously to the CPU, and it provides bus mastering. An asynchronous bus means that the speed of the bus is independent of the CPU rate, which lowers the timing problems but increases the amount of hardware needed to support communication with the CPU. Bus mastering is a way to speed up the bus by letting an adapter act like a processor and take control of the bus to transfer data itself. The MCA is faster than the ISA, but since the MCA has no fixed clock speed, we cannot use the formula we have been using to get its throughput. The only way to get any measure is to use (arrrrrr!!!!) benchmarks.

The biggest drawback of the MCA bus is that it can only use devices that are designed for MCA buses. It does not support ISA devices (which is why no one will ever buy an MCA-bus machine again).

EISA -> Extended Industry Standard Architecture

The development of the MCA by IBM led to the development of the EISA bus. The MCA introduced a faster, more advanced bus than the ISA, but lacked any compatibility outside itself. The EISA was the answer to the MCA: it has all the features of the MCA and can also support ISA devices. The EISA does run at a clock speed, though. Its max transfer rate is (8 MHz x 32 bits)/8 = 32 megabytes/second. But because the EISA bus can support ISA devices, its full abilities are often not utilized: the max transfer rate applies only when using a 32-bit device, and most machines were not using them at that time. Both MCA and EISA did bring something to the table that gave rise to a popular technology of today - plug and play. Both allowed jumperless devices to be inserted and the computer would know the device existed, though both require software support to recognize it.

VESA Local Bus (VLB)

The idea of the VLB was to use a bus that was separate from the regular I/O bus. The goal was to have part of the bus run on the processor bus and communicate with I/O devices. The benefit of a local bus is that it allows data transfers to occur at the same speed as the CPU, giving faster communication with storage devices. Video and storage-device controller cards are greatly enhanced in VLB form since they can transfer data much faster. The VLB has the drawback that it is limited to the 486 processor. Also, while theoretically it can run as fast as the CPU, circuitry problems limit it to a recommended speed of 33 MHz. The design of the VLB also has a major problem dealing with more than one device on the local bus: timing problems occur, and the speed limitation doesn't help either. So while the VLB introduced an innovative way around the slow clock speeds of the ISA and EISA (and the MCA is not as fast as the VLB), it suffered from a bad design.

Peripheral Component Interconnect (PCI)

The VLB idea led to a more successful local bus, the PCI bus. The PCI brought a new bus off the processor bus, bridged by control hardware to the I/O (or device connection) side. The PCI bus could run at a high clock speed without having to deal with the timing problems of the VLB, because the PCI bus is not on the processor bus itself. The "max" for both 32-bit PCI and VLB transfer rates was (33 MHz x 32 bits)/8 = 132 megabytes/second. The PCI rate is now double that since the introduction of 64-bit Pentium systems. The PCI bus has become the choice of today's high-end systems (until something else comes along). As stated before, this is a local bus, which means machines will also have another I/O bus, typically the ISA bus. So while bus technology has greatly improved its performance, it is still using an original idea as part of its main feature.
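
A quick Python check of the PCI numbers, including the 64-bit doubling (the figures are the ones quoted above):

```python
def pci_rate_mb_per_s(clock_mhz=33, width_bits=32):
    """Peak PCI transfer rate: clock x width, over 8 bits per byte."""
    return clock_mhz * width_bits / 8

print(pci_rate_mb_per_s(33, 32))  # 132.0 MB/s, 32-bit PCI (same as VLB)
print(pci_rate_mb_per_s(33, 64))  # 264.0 MB/s, 64-bit PCI
```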

What's next?

It is hard to say what the next step will be. I have not seen any reports, but I assume the next step would be an entire I/O bus that can run at the same transfer rate as the processor bus.

What did we see?

Hopefully you can see that each type of bus brought some innovation to the field that was later used to develop more advanced ideas. In each of the three areas from the start of the page, we can see that as CPUs got faster it became necessary to increase bus speed. The VLB helped with video, as does the PCI bus, and virtually all the improvements help with increasing software demands. All this was done with fairly simple ideas: increase the data width, and bring the bus speed closer to that of the CPU. We also saw that there is a formula we can use to calculate the transfer rate (except for the MCA) to make some modest comparisons.
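
To close the loop, here is a small Python sketch that applies the formula to every clocked bus on this page. These are peak figures only; remember that ISA's practical rate is half its peak because each transfer takes at least two clocks, and the MCA is left out since it has no clock speed to plug in:

```python
# (clock in MHz, data width in bits) for each clocked bus
buses = {
    "ISA (16-bit)": (8, 16),
    "EISA":         (8, 32),
    "VLB":          (33, 32),
    "PCI (32-bit)": (33, 32),
    "PCI (64-bit)": (33, 64),
}

# clock (MHz) * width (bits) / 8 bits-per-byte -> peak megabytes/second
rates = {name: clock * width / 8 for name, (clock, width) in buses.items()}

for name, rate in rates.items():
    print(f"{name:13s} {rate:6.1f} MB/s peak")
```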

LINKS and Sources

This page was created by Kyle Chapman