The 8 Great Ideas in Computer Architecture

20 May 2020

When I studied computer architecture, I was introduced to the 8 great ideas that have resulted in much of the incredibly fast growth in computing capabilities over the past 50-ish years. They are the following:

1. Design for Moore’s Law

One of Intel’s founders, Gordon Moore, predicted that integrated circuit resources would double every 18-24 months, a prediction that became known as Moore’s Law. Computer architects must anticipate where the technology (and their competition) will be in 3-5 years, when their new computer actually reaches the market; otherwise the product will already be lacking in power when it is released.
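A back-of-the-envelope sketch of that compounding growth (my own illustration, with made-up numbers) shows why architects design for the resources they expect at release rather than the resources available today:

```python
# Rough projection of on-chip resources under a fixed doubling period.
def projected_resources(current, years, doubling_period_years=2.0):
    """Resources available after `years`, assuming doubling every `doubling_period_years`."""
    return current * 2 ** (years / doubling_period_years)

# Example: 1 billion transistors today, product ships in 4 years.
print(projected_resources(1e9, years=4))  # ~4e9 with a 2-year doubling period
```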

2. Use Abstraction to Simplify Design

Abstraction lets a design be split into multiple levels, so the low-level details do not need to be a concern when working at a higher level. For example, the instruction set of a processor hides the details of the activities involved in executing each instruction.
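A small illustration of layering (mine, not from any textbook): the one-line function below is all the programmer sees, while Python's `dis` module exposes the lower-level bytecode the interpreter actually executes, which in turn runs as machine instructions that hide the hardware's internal activity.

```python
import dis

def add(a, b):
    return a + b

# Print the lower-level instructions hidden behind "a + b".
dis.dis(add)
```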

3. Make the Common Case Fast

Improvements to the common case (the work the computer spends most of its time doing) have produced the most significant gains in computer performance. Amdahl’s Law is used to calculate the overall speedup obtained from improving one part of the execution.
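A quick sketch of Amdahl’s Law (the standard formula, not specific to any one machine): the overall speedup when a fraction `f` of execution time is made `s` times faster.

```python
def amdahl_speedup(f, s):
    """Overall speedup when fraction f of execution time is improved by factor s."""
    return 1.0 / ((1.0 - f) + f / s)

# Example: the common case is 80% of execution time and is made 4x faster.
print(amdahl_speedup(0.8, 4))   # ~2.5x overall
# Even an infinite speedup of that 80% caps out at 1 / 0.2 = 5x overall.
```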

4. Performance via Parallelism

Performing tasks in parallel takes less time than performing them sequentially. A computer becomes faster if it can carry out independent pieces of work at the same time.
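A minimal sketch (my own example) of task-level parallelism: the same CPU-bound work is split into chunks that run across processes at the same time instead of one after another.

```python
from concurrent.futures import ProcessPoolExecutor

def count_primes_in(bounds):
    """Deliberately simple CPU-bound work: count primes in [start, end)."""
    start, end = bounds
    count = 0
    for n in range(max(start, 2), end):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    chunks = [(0, 25_000), (25_000, 50_000), (50_000, 75_000), (75_000, 100_000)]
    with ProcessPoolExecutor() as pool:
        totals = pool.map(count_primes_in, chunks)   # chunks run in parallel
    # Same answer as a sequential loop, in a fraction of the time on a multi-core machine.
    print(sum(totals))
```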

5. Performance via Pipelining

This is a form of parallelism in which instruction execution is broken up into stages, so multiple instructions can be in flight at the same time, each in a different stage. The result is more instructions completed per unit time than if each instruction were executed from start to finish before the next one began.
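A back-of-the-envelope calculation (with made-up stage counts and timings) shows the throughput gain: once the pipeline is full, one instruction completes every stage time instead of every five.

```python
def sequential_time(n_instructions, stages=5, stage_time_ns=1.0):
    """Each instruction runs start to finish before the next begins."""
    return n_instructions * stages * stage_time_ns

def pipelined_time(n_instructions, stages=5, stage_time_ns=1.0):
    """Fill the pipeline once, then complete one instruction per stage time."""
    return (stages + n_instructions - 1) * stage_time_ns

n = 1000
print(sequential_time(n))   # 5000 ns
print(pipelined_time(n))    # 1004 ns -- close to a 5x throughput gain
```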

6. Performance via Prediction

Computer programs almost always contain conditional branches. Branches interfere with the smooth operation of a pipeline: if the program jumps to a different section of code, the pipeline may need to be flushed and restarted. Performance improves if the computer can guess whether a branch will be taken, as long as the guesses are usually right and the penalty for a wrong guess is not severe (just a pipeline flush).
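A toy sketch (not from the article) of a classic 2-bit saturating-counter branch predictor: the counter leans toward a branch's recent history, so a loop branch that is almost always taken is predicted correctly almost every time.

```python
def prediction_accuracy(history):
    """Fraction of correct predictions over a sequence of branch outcomes."""
    counter = 2                 # states 0-3: predict "taken" when counter >= 2
    correct = 0
    for taken in history:
        prediction = counter >= 2
        if prediction == taken:
            correct += 1
        # Nudge the counter toward the actual outcome, saturating at 0 and 3.
        counter = min(counter + 1, 3) if taken else max(counter - 1, 0)
    return correct / len(history)

# A loop branch: taken 9 times, then not taken once, repeated.
outcomes = ([True] * 9 + [False]) * 100
print(prediction_accuracy(outcomes))   # ~0.9 accuracy for this simple predictor
```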

7. Hierarchy of Memories

The principle of locality says that memory that has been accessed recently is likely to be accessed again in the near future. To make the common case of re-accessing recently used data faster, a cache is used. Fetching data from the cache is much quicker than going all the way to main memory, let alone to storage on disk. Modern processors use multiple levels of caching.
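The standard average-memory-access-time (AMAT) calculation captures why this hierarchy pays off; the latencies and miss rates below are illustrative numbers, not measurements from any particular processor.

```python
def amat(l1_hit_ns, l1_miss_rate, l2_hit_ns, l2_miss_rate, memory_ns):
    """Average time per access: each lower level's penalty is paid only on a miss."""
    l2_penalty = l2_hit_ns + l2_miss_rate * memory_ns
    return l1_hit_ns + l1_miss_rate * l2_penalty

# Example: fast L1 hit, slower L2, much slower main memory.
print(amat(l1_hit_ns=1, l1_miss_rate=0.05,
           l2_hit_ns=10, l2_miss_rate=0.2,
           memory_ns=100))   # 1 + 0.05 * (10 + 0.2 * 100) = 2.5 ns on average
```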

8. Dependability via Redundancy

Redundancy is especially important in data storage. The most widely used example is the Redundant Array of Inexpensive Disks (RAID): redundancy protects data from being lost when one disk fails, because the data can be recovered from the remaining disks.
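A minimal sketch (mine, not real RAID software) of the parity idea behind RAID: a parity block is the XOR of the data blocks, so any single lost block can be rebuilt by XOR-ing the parity with the surviving blocks.

```python
def xor_blocks(blocks):
    """Byte-wise XOR of equal-length blocks."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            result[i] ^= b
    return bytes(result)

data = [b"disk0___", b"disk1___", b"disk2___"]
parity = xor_blocks(data)                      # stored on a separate disk

# Suppose disk 1 fails: rebuild its contents from the parity and the remaining disks.
rebuilt = xor_blocks([parity, data[0], data[2]])
print(rebuilt == data[1])                      # True
```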
