Moore's Law 101: The Math and Innovation Economics Behind It

Summary: Moore’s Law has morphed into many things over its life. But what is it, really? Here it is explained in a short, simple summary. — G. Dan Hutcheson


 

Gordon E. Moore first published the observations that would become known as Moore’s Law in 1965. Since then, it has morphed into many things. But what is Moore’s Law, really? And how does it work, given the cost and growth constraints of the semiconductor industry? Here it is explained in a short, simple summary. — G. Dan Hutcheson

 

 

Gordon E. Moore first published the observations that would become known as Moore’s Law in 1965. Later, he mused that “The definition of ‘Moore’s Law’ has come to refer to almost anything related to the semiconductor industry that, when plotted on semi-log paper, approximates a straight line.” Indeed, this abuse of the term has led to a great deal of confusion about what exactly Moore’s Law is.

 

Simply put, Moore’s Law postulates that the level of chip complexity that can be manufactured for minimal cost is an exponential function that doubles over a fixed period of time. So for any given period, the optimal component density would be:

 

(1)        Ct = 2*Ct-1

 

Where:

            Ct = Component count in period t

            Ct-1 = Component count in the prior period

 

This first part would have been of little economic import had Moore not also observed that the minimal cost of manufacturing a transistor was decreasing at a rate that was nearly inversely proportional to the increase in the number of components.  Thus, the other critical part of Moore’s Law is that the cost of making any given integrated circuit at optimal transistor density levels is essentially constant in time.  So the cost-per-component, or transistor, is cut roughly in half for each tick of Moore’s clock:

 

(2)        Mt = Mt-1/2

 

Where:

            Mt = Manufacturing cost per component in period t

            Mt-1 = Manufacturing cost per component in the prior period
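As a quick sketch, equations (1) and (2) can be iterated together in Python. The starting values below are illustrative, not historical data; the point is the invariant they imply, that the total chip cost at optimal density (Ct*Mt) stays constant from tick to tick:

```python
# Illustrative sketch of equations (1) and (2); starting values are
# assumed, not historical data.

def moore_step(components, cost_per_component):
    """One tick of Moore's clock: components double (eq. 1),
    cost per component halves (eq. 2)."""
    return components * 2, cost_per_component / 2

components, cost = 1_000, 1.0  # assumed starting point
for t in range(1, 4):
    components, cost = moore_step(components, cost)
    # Total chip cost at optimal density (Ct * Mt) stays constant:
    print(t, components, cost, components * cost)
```

Running the loop shows the component count doubling while the chip-level cost product never moves, which is the economic heart of the law.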

 

These two functions have proven remarkably resilient over the years. The periodicity, or Moore’s clock cycle, was originally set forth as a doubling every year. In 1975, Moore gave a second paper on the subject. While the data showed the doubling each year had been met, he predicted that integration growth for MOS logic was slowing to a doubling every two years. He never updated this latter prediction, and since then the average rate has run close to that pace.

 

How Moore’s Law governs cost growth

 

Another poorly understood fact about Moore’s Law is that it governs the real limit to how fast costs can grow.  Starting with the basic equations from above, the optimal component density for any given period is: 

 

Ct = 2*Ct-1

 

Where:

      Ct = Component count in period t

      Ct-1 = Component count in the prior period

          (Note that the “-1” here and below is a subscript denoting the prior period, not a mathematical operation)

 

According to the original paper given in 1965, the minimal cost of manufacturing a chip should decrease at a rate that is nearly inversely proportional to the increase in the number of components.  So the cost per component, or transistor, should be cut roughly in half for each tick of Moore’s clock:

 

Mt = Mt-1/2 = 0.5*(Mt-1)

 

Where:

      Mt = Manufacturing cost per component in period t

      Mt-1 = Manufacturing cost per component in the prior period

 

What about die cost and wafer cost? Die cost is equal to wafer cost divided by the number of good die. If wafer cost rises, then more good die-per-wafer must be netted to keep cost-per-die the same. Moore said at the first NTRS (National Technology Roadmap for Semiconductors) that he believed industry growth would not be affected if the cost per function dropped by at least 30% for every doubling of transistors. This can be modeled in the following fashion:

 

Mt = 0.7*(Mt-1)

 

Since,

 

Mt = Tdct/Ct

 

 

And,

 

Mt-1 = Tdct-1/Ct-1

 

Where:

      Tdct = Total die cost in period t

      Tdct-1 = Total die cost in the prior period

 

Thus,

 

Tdct/Ct = 0.7*(Tdct-1/Ct-1)

 

Substituting Ct = 2*Ct-1 from equation (1):

Tdct/(2*Ct-1) = 0.7*(Tdct-1/Ct-1)

 

Tdct = 2*Ct-1*0.7*(Tdct-1/Ct-1)

Simplified, it reduces to:

Tdct = 2*0.7*Tdct-1

Tdct = 1.4*Tdct-1

 

If the cost-per-function reduction ratio is different from 0.7, then:

 

Tdct = 2*Cpfr* Tdct-1

 

Where:

      Cpfr = Cost-per-function reduction ratio for every node, as required by the market
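The derivation above can be checked numerically. In this minimal Python sketch (the prior-period die cost and component count are assumed, illustrative values), the die cost for period t is rebuilt from the per-component relations Mt = Cpfr*Mt-1 and Ct = 2*Ct-1, and with Cpfr = 0.7 it lands at 1.4 times the prior-period die cost:

```python
# Illustrative check that a cost-per-function ratio Cpfr per node
# implies total die cost growing by 2*Cpfr per node.

def next_die_cost(die_cost_prev, components_prev, cpfr):
    """Derive Tdct from Mt = Cpfr*Mt-1 and Ct = 2*Ct-1."""
    m_prev = die_cost_prev / components_prev   # Mt-1 = Tdct-1/Ct-1
    m_next = cpfr * m_prev                     # Mt = Cpfr*Mt-1
    c_next = 2 * components_prev               # Ct = 2*Ct-1
    return m_next * c_next                     # Tdct = Mt*Ct

tdc_prev = 10.0      # assumed total die cost in the prior period
c_prev = 1_000_000   # assumed component count in the prior period
print(round(next_die_cost(tdc_prev, c_prev, 0.7), 6))  # 14.0 = 1.4*Tdct-1
```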

 

So in general, the manufacturing cost per unit area of silicon can rise by 40% per node of Moore's law (or by twice the cost-per-function reduction ratio requirement). This includes everything from fab cost to materials and labor. However, it does not take yield or wafer size into account. Adding these two:

 

Twct = 2*Cpfr* Twct-1

 

So,

 

Tdct = Twct/(Dpwt*Yt) = 2*Cpfr*Twct-1/(W*Dpwt-1*Yr*Yt-1)

 

 

Where:

      Twct = Total wafer cost requirement in period t

      Twct-1 = Total wafer cost in the prior period

      Dpwt = Die-per-wafer in period t

      Yt = Die yield in period t

      W = Ratio of die added with a wafer size change

      Dpwt-1 = Die-per-wafer in the prior period

      Yr = Yield improvement ratio with time

      Yt-1 = Die yield in the prior period
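Putting the pieces together: reading the relation above with Dpwt = W*Dpwt-1 and Yt = Yr*Yt-1, the allowable wafer cost collapses to Twct = 2*Cpfr*W*Yr*Twct-1. A small sketch (all parameter values are assumed for illustration, not industry data):

```python
# Illustrative sketch: allowable wafer-cost growth per node once
# yield and wafer-size effects are included.

def allowable_wafer_cost(twc_prev, cpfr, w_ratio, yield_gain):
    """Twct = 2*Cpfr*W*Yr*Twct-1: wafer cost may grow by the die-cost
    factor (2*Cpfr), scaled by extra die from a wafer-size change (W)
    and yield improvement over time (Yr)."""
    return 2 * cpfr * w_ratio * yield_gain * twc_prev

# Assumed numbers: $5,000 prior wafer cost, Cpfr = 0.7,
# no wafer-size change (W = 1), 5% yield gain.
print(round(allowable_wafer_cost(5000.0, 0.7, 1.0, 1.05), 2))  # 7350.0
```

With no wafer-size change and flat yield, the sketch reduces to the 40%-per-node wafer-cost ceiling stated above.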


Copyright © 2017 VLSI Research Inc. All rights reserved.