FEDERICO FAGGIN

THE FUTURE OF THE MICROPROCESSOR

Originally published in Forbes Magazine, 1996
http://www.forbes.com/asap/120296/html/federico_faggin.htm

Since the invention of the integrated circuit in 1961, the number of transistors contained in a single chip has increased one millionfold. This incredible feat was accomplished by reducing the area occupied by a transistor by a factor of ten thousand and by increasing the chip area by a factor of one hundred. During the same period of time, both the power dissipation per gate and operating speed have substantially improved. An electronic function that in 1961 required a hundred state-of-the-art chips could be crammed into a single chip by 1971. Ten years later, close to another factor of a hundred was achieved, and so on exponentially to the present day. It was this generic and unrelenting progress that made possible the creation of the first microprocessor and its rapid evolution, and also created much of contemporary microelectronics.

This exponential growth is still going on, although it has slowed down a bit. Today we can produce microprocessors with approximately ten million transistors; in twelve years we will be able to make them with one billion transistors.
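
Those two figures imply a steady compound growth rate, which a quick calculation makes explicit (a rough sketch in Python, using only the numbers just quoted):

    import math

    # Figures quoted above: roughly 10 million transistors per chip today,
    # and roughly 1 billion expected twelve years from now.
    start, end, years = 10e6, 1e9, 12

    growth = end / start                                    # 100x over the period
    annual_factor = growth ** (1 / years)                   # about 1.47x per year
    doubling_time = years * math.log(2) / math.log(growth)  # about 1.8 years

    print(f"{annual_factor:.2f}x per year, doubling every {doubling_time:.1f} years")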

How far can we keep on going like this? Certainly we cannot continue indefinitely: Sooner or later the atomic nature of matter will put a limit on scaling. To the best of our knowledge, it will be hard to go to critical dimensions much below ten billionths of a meter (10 nanometers). This dimension is equal to the diameter of a large protein molecule and is thirty-five times smaller than today's critical dimensions in volume manufacturing (350 nanometers).

This size reduction means that the transistor area can be reduced by another factor of a thousand before reaching fundamental physical limitations. An additional factor of a thousand can be achieved by increasing the chip area and by layering multiple chips one on top of another--up to a few hundred chips are possible--to create a "cubelet" of silicon.

Therefore we can increase the circuit complexity by at least another factor of one million before the silicon-based semiconductor technology, as we know it today, runs out of steam. This means that the practical limit of complexity will be reached about fifty years from now by a chip, actually a cubelet, integrating up to 10 trillion (10,000 billion) transistors.
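
The headroom estimate follows directly from the numbers above. A short sketch of that arithmetic, again using only the figures stated in the text:

    # Headroom arithmetic from the figures above.
    transistors_today = 10e6                 # about 10 million per chip today

    # A linear shrink from 350 nm to 10 nm is 35x, so transistor area shrinks
    # by roughly 35 squared, about 1,200x -- the "factor of a thousand" above.
    area_shrink = (350 / 10) ** 2

    # A further factor of a thousand comes from larger chips plus stacking a
    # few hundred of them into a silicon "cubelet".
    area_and_stacking = 1_000

    practical_limit = transistors_today * 1_000 * area_and_stacking
    print(f"area shrink about {area_shrink:,.0f}x; "
          f"practical limit about {practical_limit:,.0f} transistors")
    # prints roughly 10,000,000,000,000 -- the 10 trillion figure quoted above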

That's the straightforward technical analysis. But there is a philosophical dimension as well. What are we going to do with chips containing billions of transistors, never mind trillions?

There are two distinct directions that can use this awesome technological capability. The first one is evolutionary: incremental, predictable improvements of the technology we have today, such as integrating more of the entire system electronics onto a single chip, providing more memory, faster processors, more powerful instructions, and so on.

That will be impressive enough. But the second direction is even more compelling: toward revolutionary and highly unpredictable new technologies. This second path is the one that may dramatically transform the future of electronics, commerce, and society in the years to come.

Two features of this revolutionary future that I see appearing in microprocessors are higher parallelism and reconfigurability. Parallelism means that the workload is divided among more than one processor, achieving a speed increase proportional to the number of processors available, as long as all of the processors can be kept busy.
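
The proportionality claim is easy to see in practice. The sketch below (my own illustration, in Python, with an arbitrary compute-bound task standing in for real work) splits a workload across one worker per processor and compares the elapsed time against a single processor doing everything:

    from multiprocessing import Pool, cpu_count
    import time

    def work(chunk):
        # Stand-in for any compute-bound task that keeps a processor busy.
        return sum(i * i for i in chunk)

    if __name__ == "__main__":
        n = cpu_count()
        total = 10_000_000
        # Divide the workload into one chunk per processor.
        chunks = [range(i, total, n) for i in range(n)]

        t0 = time.perf_counter()
        serial = [work(c) for c in chunks]          # one processor does it all
        t1 = time.perf_counter()
        with Pool(n) as pool:
            parallel = pool.map(work, chunks)       # n processors share the load
        t2 = time.perf_counter()

        assert serial == parallel                   # same results either way
        # As long as every worker stays busy, this approaches n.
        print(f"speedup with {n} processors: {(t1 - t0) / (t2 - t1):.1f}x")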

Reconfigurability is much more profound. Imagine a chip containing millions of gates that can be electrically interconnected in a matter of milliseconds. Such chips would make possible hardware that could specialize and reconfigure itself at the time and place of use to perform all the diverse functions required by a user.

It was hardware reusability that made the computer such a useful and revolutionary tool. Ironically, today we cannot change the basic architecture of any given microprocessor; elements such as its constituent building blocks, the number of registers, and the instruction set are fixed at design time by the chip designers. With hardware reconfigurability, however, we will have the ability to change the way the logic gates are interconnected--on a gate-by-gate basis--enabling the programmer to create his own microprocessor or other specialized hardware with functional characteristics optimized for the task to be performed. Software will then be made up of two parts: the first part will specify the optimal machine for the given task and how to assemble it, while the second part will run on the just-created machine to perform the desired function. This new capability will change the basic design of future computers, ultimately giving us higher flexibility and higher performance at lower cost.
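
A toy model makes the two-part structure concrete. In the sketch below (purely illustrative; the lookup-table "fabric" is an invention for this example, not any real device or tool), the first part of the software specifies and assembles the machine, here a one-bit full adder wired out of configurable gates, and the second part simply runs on it:

    # Toy model of run-time reconfigurable logic: each "gate" is a two-input
    # lookup table (LUT), and a configuration is just a set of truth tables
    # plus the wiring between gates. Purely illustrative, not a real FPGA API.

    class LUT2:
        def __init__(self, truth_table):
            self.tt = truth_table            # e.g. [0, 0, 0, 1] behaves as AND

        def __call__(self, a, b):
            return self.tt[(a << 1) | b]

    # Part one of the software: specify the optimal machine for the task --
    # here, a one-bit full adder assembled from reconfigurable gates.
    def configure_full_adder():
        xor = LUT2([0, 1, 1, 0])
        and_ = LUT2([0, 0, 0, 1])
        or_ = LUT2([0, 1, 1, 1])

        def full_adder(a, b, carry_in):
            partial = xor(a, b)
            total = xor(partial, carry_in)
            carry_out = or_(and_(a, b), and_(partial, carry_in))
            return total, carry_out

        return full_adder

    # Part two: run on the just-created machine.
    adder = configure_full_adder()
    print(adder(1, 1, 0))   # (0, 1)
    print(adder(1, 1, 1))   # (1, 1)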

I call this capability run-time reconfigurability, a natural consequence of existing programmable chip technology and our ever increasing need for flexibility and speed. Current field programmable gate array (FPGA) chips are good for only 10,000 to 100,000 gates, but in twelve years they will be capable of handling 1 million to 10 million gates, opening up the doors to fine-grained reconfigurability at the level of a system-on-a-chip. Run-time reconfigurability is like fashioning things out of Tinkertoys; it is like having plastic hardware shaped electrically by software.

This technology initially will be used to create microprocessors specialized for a variety of tasks such as image and sound processing, pattern recognition, communications, and control. Since each of these tasks typically requires a unique and different hardware architecture, it would be costly to dedicate a specialized coprocessor or input-output chip to each separate task. However, since not all tasks are done simultaneously, a cost-effective solution will be to use a reconfigurable chip to optimally perform each required function as needed. Further in the future, general-purpose microprocessors may have a portion of the chip done with reconfigurable gates to add flexibility. Eventually, much or all of the random logic of microprocessors may be reconfigurable.
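
One way to picture the time-sharing argument: a single reconfigurable block holds only one configuration at a time and reloads it whenever the task changes, a pattern sketched below (again purely illustrative, with single-gate "configurations" standing in for whole coprocessors):

    # Purely illustrative: one reconfigurable block time-shared among tasks,
    # reloading its configuration only when the requested function changes.
    CONFIGS = {
        "and": [0, 0, 0, 1],     # truth table for a two-input gate
        "or":  [0, 1, 1, 1],
        "xor": [0, 1, 1, 0],
    }

    class ReconfigurableBlock:
        def __init__(self):
            self.loaded = None
            self.table = None

        def run(self, task, a, b):
            if self.loaded != task:          # reconfigure on demand
                self.table, self.loaded = CONFIGS[task], task
            return self.table[(a << 1) | b]

    block = ReconfigurableBlock()
    print(block.run("xor", 1, 1), block.run("and", 1, 1), block.run("xor", 1, 0))
    # prints: 0 1 1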

There are many other revolutionary applications for run-time reconfigurable hardware. Perhaps the most intriguing possibility is to enable the design of fault-tolerant, adaptive, and learning systems that will eventually make possible the creation of autonomous intelligent machines.... That will raise philosophical questions that we, as yet, are not ready to answer.

In the first twenty-five years of its existence, the microprocessor has significantly changed our lives. I expect the next twenty-five years to bring even deeper social changes as the level of intelligence of our machines continues to grow exponentially, in step with the capabilities of our semiconductor technology. The impact of something that grows exponentially is difficult to predict because the short-term consequences are generally less than we planned while the long-term consequences are much more than we expected. If we measure the importance of an invention by how much change it has generated, the microprocessor scores very well. If we consider that there are fifty more years of exponential improvement in store for us, then the long-term consequences of this technology are quite impossible to imagine.

Federico Faggin, currently CEO of Synaptics, led the design and development of the world's first microprocessor - the Intel 4004. He then conceived and supervised the design of the landmark 8080, the first modern microprocessor.