
Hadi Esmaeilzadeh on Dark Silicon

Luke Muehlhauser, Machine Intelligence Research Institute, here. What if execution pipeline bubbles counted as dark silicon?

Hadi Esmaeilzadeh: I would like to answer your question with a question. What is the difference between the computing industry and commodity industries like the paper towel industry?

The main difference is that the computing industry is an industry of new possibilities, while the paper towel industry is an industry of replacement. You buy paper towels because you run out of them, but you buy new computing products because they get better.

And it is not just the computers themselves that are improving; the services and experiences they offer consistently improve as well. Can you even imagine running out of Microsoft Windows?

One of the primary drivers of this economic model is the exponential reduction in the cost of performing general-purpose computing. In 1971, at the dawn of the microprocessor, the price of 1 MIPS (million instructions per second) was roughly $5,000; today it is about 4¢. This is an exponential reduction in the cost of the raw material of computing, and it has formed the basis of the computing industry's economy over the past four decades.
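As a back-of-the-envelope check on those figures, here is a minimal sketch in Python; the 40-year span is an assumption, taking 1971 to roughly 2011 as the endpoints:

```python
# Implied average annual decline in the cost of 1 MIPS,
# using the figures quoted above: $5,000 in 1971 down to about $0.04 today.
start_cost = 5000.00  # dollars per MIPS in 1971
end_cost = 0.04       # dollars per MIPS today (~4 cents)
years = 40            # assumed span: 1971 to roughly 2011

annual_factor = (end_cost / start_cost) ** (1 / years)
print(f"Total cost reduction: {start_cost / end_cost:,.0f}x")
print(f"Average annual cost decline: {1 - annual_factor:.1%}")
# Total cost reduction: 125,000x
# Average annual cost decline: 25.4%
```

In other words, the quoted numbers imply the cost of raw compute fell by roughly a quarter every year, compounded over four decades.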

Two primary enabling factors of this economic model are:

  1. Moore’s Law: the consistent, exponential improvement in transistor fabrication technology, with a new generation arriving roughly every 18 months.
  2. Continuous improvements to general-purpose processor architecture that leverage those transistor-level gains.

Moore’s Law has been a fundamental driver of computing for more than four decades. Over the past 40 years, every 18 months, semiconductor manufacturers have been able to develop a new technology generation that doubles the number of transistors on a single monolithic chip. However, doubling the number of transistors does not provide any benefit by itself. The computer architecture industry harvests these transistors and designs general-purpose processors that make these tiny switches available to the rest of the computing community. By building general-purpose processors, the computer architecture community provides the mechanisms and abstractions that make these devices accessible to compilers, programming languages, system designers, and application developers. In this way, general-purpose processors enable the computing industry to commoditize computing and make it pervasive.
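To make that compounding concrete, here is a small sketch of what doubling every 18 months implies over the 40-year span mentioned above (the ~2,300-transistor starting point is the 1971 Intel 4004, used here purely for illustration):

```python
# Moore's Law as stated above: transistor counts double every 18 months.
months_per_doubling = 18
years = 40

doublings = years * 12 / months_per_doubling  # ~26.7 doublings
growth = 2 ** doublings

print(f"Doublings over {years} years: {doublings:.1f}")
print(f"Transistor count multiplier: {growth:,.0f}x")
# Doublings over 40 years: 26.7
# Transistor count multiplier: ~106,500,000x
# e.g. the Intel 4004's ~2,300 transistors scaling into the hundreds of billions.
```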

Hadi Esmaeilzadeh et al., Power Challenges May End the Multicore Era, here. Interesting, but I think the “Really Heavy Shoes Programmers” are already accustomed to getting less than 14% performance improvement per year.

Abstract

Starting in 2004, the microprocessor industry has shifted to multicore scaling—increasing the number of cores per die each generation—as its principal strategy for continuing performance growth. Many in the research community believe that this exponential core scaling will continue into the hundreds or thousands of cores per chip, auguring a parallelism revolution in hardware or software. However, while transistor count increases continue at traditional Moore’s Law rates, the per-transistor speed and energy efficiency improvements have slowed dramatically. Under these conditions, more cores are only possible if the cores are slower, simpler, or less utilized with each additional technology generation. This paper brings together transistor technology, processor core, and application models to understand whether multicore scaling can sustain the historical exponential performance growth in this energy-limited era. As the number of cores increases, power constraints may prevent powering of all cores at their full speed, requiring a fraction of the cores to be powered off at all times. According to our models, the fraction of these chips that is “dark” may be as much as 50% within three process generations. The low utility of this “dark silicon” may prevent both scaling to higher core counts and ultimately the economic viability of continued silicon scaling. Our results show that core count scaling provides much less performance gain than conventional wisdom suggests. Under (highly) optimistic scaling assumptions—for parallel workloads—multicore scaling provides a total performance gain of 7.9× (23% per year) over ten years. Under more conservative (realistic) assumptions, multicore scaling provides a total performance gain of 3.7× (14% per year) over ten years, and obviously less when sufficiently parallel workloads are unavailable. Without a breakthrough in process technology or microarchitecture, other directions are needed to continue the historical rate of performance improvement.
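The annualized figures in the abstract follow from standard compounding; here is a minimal sketch of the conversion, with the speedup values taken directly from the abstract:

```python
# Convert a total speedup over N years into the equivalent compound annual rate.
def annual_rate(total_speedup: float, years: int = 10) -> float:
    return total_speedup ** (1 / years) - 1

for label, speedup in [("optimistic", 7.9), ("conservative", 3.7)]:
    print(f"{label}: {speedup}x over 10 years = {annual_rate(speedup):.0%} per year")
# optimistic: 7.9x over 10 years = 23% per year
# conservative: 3.7x over 10 years = 14% per year
```

Both rates fall well short of the historical rate of performance improvement the abstract refers to.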
