The Berkeley RISC project started in 1980 under the direction of David Patterson and Carlo H. Séquin. Berkeley RISC was based on gaining performance through the use of pipelining and an aggressive use of register windowing. The project delivered the RISC-I processor in 1982. Consisting of only 44,420 transistors (compared with averages of about 100,000 in newer CISC (complex instruction set computing) designs of the era), RISC-I had only 32 instructions, yet it outperformed any other single-chip design. It was followed in 1983 by the RISC-II, which had 40,760 transistors and 39 instructions and ran over three times as fast as RISC-I.
The MIPS architecture (Microprocessor without Interlocked Pipeline Stages) grew out of a graduate course taught by John L. Hennessy at Stanford University in 1981, resulted in a functioning system in 1983, and could run simple programs by 1984. The MIPS design was followed by the MIPS-X, and in 1984 Hennessy and his colleagues formed MIPS Computer Systems (the company was later purchased by Imagination Technologies). The commercial venture produced the R2000 microprocessor in 1985, followed by the R3000 in 1988. In the early 1980s, significant uncertainties surrounded the RISC concept, and it was unclear whether it could have a commercial future, but by the mid-1980s the concepts had matured enough to be seen as commercially viable.
The US government Committee on Innovations in Computing and Communications (CICC) credits the recognition of the capability of the RISC concept to the success of the SPARC (Scalable Processor ARChitecture) system. The success of SPARC renewed interest within IBM, which released new RISC systems by 1990, and by 1995 RISC processors were the foundation of a $15 billion server industry. Since 2011, a new research ISA (instruction set architecture), RISC-V, has been under development at the University of California, Berkeley; it emphasizes features such as support for multiprocessing and dense instruction encoding.
Pipelining is the overlapping of the stages of instruction processing and of arithmetic operations. The instruction pipeline represents the stages through which an instruction moves within the processor: it is fetched, perhaps buffered, and then executed. The arithmetic pipeline represents the parts of an arithmetic operation that can be broken down and overlapped as they are performed.
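The throughput benefit of overlapping stages can be illustrated with a minimal sketch. Assuming a hypothetical hazard-free pipeline in which every stage takes one cycle (the stage count and instruction count below are illustrative, not from the text), the cycle counts for sequential versus pipelined execution compare as follows:

```python
# Minimal sketch: cycle counts for n instructions on a k-stage processor,
# assuming an idealized pipeline with no stalls or hazards.

def sequential_cycles(n: int, stages: int) -> int:
    # Without pipelining, each instruction passes through every stage
    # before the next instruction begins.
    return n * stages

def pipelined_cycles(n: int, stages: int) -> int:
    # With pipelining, the first instruction fills the pipeline
    # (one cycle per stage); each later instruction completes
    # one cycle after its predecessor.
    return stages + (n - 1) if n > 0 else 0

if __name__ == "__main__":
    n, k = 100, 5  # e.g. 100 instructions on a classic 5-stage pipeline
    print(sequential_cycles(n, k))  # 500 cycles without pipelining
    print(pipelined_cycles(n, k))   # 104 cycles with pipelining
```

As the instruction count grows, the pipelined cycle count approaches one instruction per cycle, which is the idealized speedup that motivates the technique; real pipelines fall short of this because of the hazards and stalls discussed later.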
Pipelines and pipelining also apply to computer memory controllers and to moving data through various memory staging areas.
The purpose of this document is to provide an overview of pipelining in computer processors. The topic will be covered in general, with a focus on some special topics of interest. One area of focus is some of the early criticisms of pipelining and why, in retrospect, they