Does the # of transistors have a direct correlation to computing power? (Moore's Law)

alfa147x

Lifer
Jul 14, 2005
29,307
105
106
Moore's Law is that the "number of transistors on integrated circuits doubles approximately every two years."

But does the number of transistors directly correlate with the amount of computing power?

My two ideas are:
  • There is some unforeseen bottleneck that has yet to be accounted for
  • The Law of Diminishing Returns will kick in


I just read the Wikipedia page on this. That's exactly what I was looking for. I'll go ahead and post this because it might generate a decent discussion.
 

Mtt

Member
Apr 22, 2010
64
2
71
I think so. Increasing the number of cores increases the transistor count but does not increase single-threaded performance. And there are things that can't be run in parallel.
 

TuxDave

Lifer
Oct 8, 2002
10,571
3
71
I'm confident there's a positive correlation between computing power and number of transistors. There are exceptions when transistors are being added/removed not for performance but for power. For the most part, to improve performance, you gotta add/redesign stuff. To do that, you need to throw more transistors at it.

Maybe you can find a similar correlation between the # of popsicle sticks and the strength of a popsicle stick bridge? :p
 

Idontcare

Elite Member
Oct 10, 1999
21,110
59
91
Moore's Law is that the "number of transistors on integrated circuits doubles approximately every two years."

But does the number of transistors directly correlate with the amount of computing power?


My two ideas are:
  • There is some unforeseen bottleneck that has yet to be accounted for
  • The Law of Diminishing Returns will kick in

I just read the Wikipedia page on this. That's exactly what I was looking for. I'll go ahead and post this because it might generate a decent discussion.

Actually, I hate to burst the bubble that is the popular perception of what Moore's Law is about, but Moore's Law is NOT "number of transistors on integrated circuits doubles every..."

Rather, Moore's Law is/was/has always been that the "minimum in the manufacturing cost curve per transistor on an integrated circuit declines by 50% every <insert number of months or years here>..."

[Graph1.png: relative manufacturing cost per component vs. number of components per IC — the cost-minimum curve from Moore's 1965 paper]


What the semiconductor manufacturers chose to do with the cost savings was not Moore's Law.

[Graph3.png: per-component cost breakdown, with the fixed design/development cost curve shown in the upper-right inset]


Yes, some chose to double the number of transistors per IC such that the IC itself remained at essentially the same production cost.

[Figure5.png: transistors per IC over time at roughly constant chip production cost]


And others took advantage of the cost savings that are Moore's Law and just shrank their chips so they cost less to manufacture.

[kaigai-02.jpg and kaigai-03.jpg: die-shrink illustrations — the same design on a newer node yields a smaller, cheaper chip]


So you can see what Moore's Law really is about, straight from Moore's paper, versus how it sorta became twisted and misguidedly interpreted as being something about performance doubling or transistors doubling and so on.

Moore's Law is simply about the economies of scale of the underlying process technology and cost structure, nothing more.

Going beyond Moore's Law is where you get into all the extra discussion material, which is a good discussion to have but has little to do with Moore's Law itself per se.
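
To make the shape of that cost curve concrete, here's a toy model (every number in it is invented purely for illustration; it's only meant to reproduce the U-shape, not Moore's actual data):

```python
import math

def cost_per_component(n, fixed_per_chip=2.0, silicon_per_component=0.02,
                       defect_factor=0.02):
    """Toy model of Moore's 1965 cost curve; every parameter is made up.

    Left of the minimum: fixed per-chip costs (design, package, test)
    amortize over more components as integration rises.
    Right of the minimum: yield falls as the die grows, so the silicon
    cost per good chip climbs again.
    """
    yield_fraction = math.exp(-defect_factor * n)   # yield drops with complexity
    chip_cost = (fixed_per_chip + silicon_per_component * n) / yield_fraction
    return chip_cost / n

for n in (5, 10, 50, 100, 200):
    print(f"{n:4d} components/IC -> relative cost/component: {cost_per_component(n):.3f}")
```

Run it and you get a U-shaped curve with a minimum around 50-60 components per IC. The point of Moore's paper is that each new process generation pushes that minimum down and to the right.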
 

Special K

Diamond Member
Jun 18, 2000
7,098
0
76
Why does the relative manufacturing cost per component increase as you move left along the x-axis, to the left of the minimum? Why would a small number of components per IC ever cost more per component than a larger number?
 

Idontcare

Elite Member
Oct 10, 1999
21,110
59
91
Why does the relative manufacturing cost per component increase as you move left along the x-axis, to the left of the minimum? Why would a small number of components per IC ever cost more per component than a larger number?

It is a bit of an eye-chart, and I apologize for that, but if you expand the following embedded pic (it is also included in my post above) and interrogate the embedded graph located in the upper-right hand corner you will get an idea of why costs actually increase on a per-component basis for smaller and smaller component-count ICs.

[Graph3.png: per-component cost breakdown, with the fixed design/development cost curve shown in the upper-right inset]


The purple curve is the culprit.

You have a fixed IC design/development cost based on the size of the team and the length of the development timeline that just doesn't scale down all that well.

You can't, for example, hire a part-time silicon validation engineer to be on hand for just the 6 months you need them. You need them to be on your books the entire time so a new-hire coming up to speed doesn't jeopardize the product timeline itself.

Now what is obviously missing, and intentionally so, is the "volume" aspect.

To keep the analysis simple and digestible, Moore chose to normalize the expected shipping volumes of the ICs in question. So if the price/component curve represents, say, 1M units shipped per year, then that is assumed for both the 10mm^2 chips and the 500mm^2 chips.

So now you get to why companies like Qualcomm and Apple want/need to produce tens of millions, if not hundreds of millions, of little tiny chips at any given node. They amortize the rather expensive (on a per-component basis) development cost for the small IC across those tens of millions of ICs.

And if the volume demand for the chips fails to materialize, à la Nvidia's Tegra situation, then the cost structure explodes (unfavorably so), raising the cost per IC to heights that are simply non-viable within a year or two.
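
The amortization itself is simple arithmetic. A sketch with invented figures (not real Qualcomm/Apple/Nvidia numbers):

```python
# NRE (fixed design/validation cost) amortized over shipped volume.
# All dollar figures below are made up for illustration.
nre = 50_000_000         # fixed development cost for the chip
marginal_cost = 5.00     # manufacturing cost per chip once designed

for volume in (1_000_000, 10_000_000, 100_000_000):
    per_chip = marginal_cost + nre / volume
    print(f"{volume:>11,} units -> ${per_chip:,.2f} per chip")
```

Ship 100M units and the development cost nearly disappears into the margin; ship 1M and it dominates the per-chip cost, which is the Tegra-style cost explosion described above.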
 

JimmiG

Platinum Member
Feb 24, 2005
2,024
112
106
GPUs seem to scale better with increased transistor counts than CPUs do. Having more than 4-6 CPU cores doesn't really improve performance in most applications, but doubling the number of CUDA cores seems to result in nearly double the pure GPU performance in both games and compute applications.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
GPUs seem to scale better with increased transistor counts than CPUs do. Having more than 4-6 CPU cores doesn't really improve performance in most applications, but doubling the number of CUDA cores seems to result in nearly double the pure GPU performance in both games and compute applications.

Because most GPU loads are essentially 100% parallel. It's Amdahl's Law all over again :)
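
For anyone who wants to see the math, a minimal sketch of Amdahl's Law (the parallel fractions below are assumptions, not measurements):

```python
def amdahl_speedup(p, n):
    """Amdahl's Law: speedup on n cores when fraction p of the work is parallel."""
    return 1.0 / ((1.0 - p) + p / n)

# A typical desktop app vs. an almost embarrassingly parallel GPU workload
for label, p in (("CPU app, 75% parallel   ", 0.75),
                 ("GPU load, 99.9% parallel", 0.999)):
    speedups = [f"{amdahl_speedup(p, n):.1f}x" for n in (2, 4, 8, 1024)]
    print(label, speedups)
```

Even with 1024 cores, the 75%-parallel app tops out below 4x, while the 99.9%-parallel load keeps scaling — which is why doubling the CUDA cores nearly doubles GPU performance.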
 

PrincessFrosty

Platinum Member
Feb 13, 2008
2,300
68
91
www.frostyhacks.blogspot.com
Transistor count does tend to scale with raw computing power, but what we do with that power, how accessible it is to applications, and what the application overhead is for using it are all subject to many other factors.

Parallelism is where we're headed at the moment — not more speed per core or more work per core, but more cores. That means going through a transition period where developers learn to write multi-threaded code in order to make use of the available power.

We also have CPUs spending some of that transistor count on integrated graphics, so it's easy to see how performance in real world apps doesn't scale linearly with transistor count.
 

zephyrprime

Diamond Member
Feb 18, 2001
7,512
2
81
Moore's Law is, "number of transistors on integrated circuits doubles approximately every two years."

But does the number of transistors directly correlate with the amount of computing power?

My two ideas are:
  • There is some unforeseen bottleneck that has yet to be accounted for
  • Law of Diminishing Return will kick into action
There's no mystery. The bottleneck is not unforeseen but well known and intractable: there is a limited amount of parallelism in code. Adding more transistors is like adding more workers to a factory. However, unlike a factory, the work in a processor is very dependent on sequence. I think there is a branch every 6-7 instructions or so, and dependencies are even more frequent.
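
A toy illustration of that sequence dependence (hypothetical code, just to show the contrast):

```python
def dependent_chain(x, steps=1_000_000):
    # Each iteration needs the previous result: no amount of extra
    # hardware can run these steps side by side.
    for _ in range(steps):
        x = (x * 3 + 1) % 97
    return x

def independent_work(values):
    # Every element is independent: this maps cleanly onto as many
    # execution units (or cores) as you can throw at it.
    return [(v * 3 + 1) % 97 for v in values]

print(dependent_chain(1, steps=10))      # serial by construction
print(independent_work([1, 2, 3, 4]))    # trivially parallelizable
```

Extra transistors help the second function almost for free; the first only speeds up if each individual step gets faster.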
 

Smartazz

Diamond Member
Dec 29, 2005
6,128
0
76
GPUs seem to correlate really well, but I can't imagine it's sustainable due to power consumption/heat.
 

Smartazz

Diamond Member
Dec 29, 2005
6,128
0
76
IDC, while people are often mistaken about what Moore's Law states, it's hard to escape the reality that we saw exponential gains before we started using integrated circuits.
 

BrightCandle

Diamond Member
Mar 15, 2007
4,762
0
76
It certainly did up to a certain point. We went through a period of massive compute performance gains, where many of the gains came from being able to lengthen the pipeline and increase clock speed. Once that avenue was taken away by physics, the relationship between CPU performance and transistor count seems to have stopped scaling the way it used to.

People look to cores, but you already know most software does not benefit. Amdahl's Law and terrible tools for parallel programming will continue to keep extra cores from being really effective in a general-purpose CPU. I was more optimistic about the prospects of ever-increasing performance a few years ago; now I think it's mostly going to come in highly specific areas only, those which can utilise GPU-like compute performance.
 

Abwx

Lifer
Apr 2, 2011
11,769
4,684
136
Actually, I hate to burst the bubble that is the popular perception of what Moore's Law is about, but Moore's Law is NOT "number of transistors on integrated circuits doubles every..."

Rather, Moore's Law is/was/has always been that the "minimum in the manufacturing cost curve per transistor on an integrated circuit declines by 50% every <insert number of months or years here>..."




[Figure5.png: transistors per IC over time at roughly constant chip production cost]



So you can see what Moore's Law really is about, straight from Moore's paper, versus how it sorta became twisted and misguidedly interpreted as being something about performance doubling or transistors doubling and so on.


Nevertheless, this graph answers the OP's question.

Starting from the Pentium, transistor count has increased by a factor of about 1000, while performance has increased similarly thanks to better IPC and core aggregation, but also thanks to frequency, which increased by a factor of about 30.

If we isolate the transistor-count contribution, we notice that at equal frequency we got about 30x more performance with 1000x more transistors. Hence it's roughly a square-root law: performance increases as the square root of the transistor ratio, i.e. doubling the transistor count increased performance by about 40% at equal clocks — over the long term, of course.
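
A quick sanity check of that arithmetic, using the ratios above:

```python
import math

xtor_ratio = 1000    # transistor-count increase since the Pentium (from the post)
freq_ratio = 30      # clock-frequency increase over the same span (from the post)

observed = xtor_ratio / freq_ratio    # ~33x perf at equal clocks, per the post
predicted = math.sqrt(xtor_ratio)     # ~31.6x if perf ~ sqrt(transistors)
per_doubling = math.sqrt(2) - 1       # ~0.41, i.e. ~40% per transistor doubling

print(f"observed equal-clock gain:  {observed:.1f}x")
print(f"square-root law predicts:   {predicted:.1f}x")
print(f"each transistor doubling -> +{per_doubling:.0%} performance")
```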
 

Idontcare

Elite Member
Oct 10, 1999
21,110
59
91
IDC, while people are often mistaken about what Moore's Law states, it's hard to escape the reality that we saw exponential gains before we started using integrated circuits.

Don't get me wrong, I'm not saying Moore's Law isn't responsible for the xtor-doubling reality we saw for a few decades... all I'm trying to say is that that doubling is just one consequence of the law (i.e. a particular special case), but the law itself is actually more generalized and more broadly applicable (i.e. it can also result in other special cases depending on how it is wielded by business execs and engineers alike).

The special cases (xtors double every X yrs, performance doubles every X yrs, etc.) are of course not generalized enough to adequately capture reality, but that is what makes them special cases of the generalized law itself.

So why not take advantage of the general case and interrogate it to see where things are going, versus getting hung-up on the specialized case which of course will only be applicable over a narrow range of conditions?

Nevertheless, this graph answers the OP's question.

Starting from the Pentium, transistor count has increased by a factor of about 1000, while performance has increased similarly thanks to better IPC and core aggregation, but also thanks to frequency, which increased by a factor of about 30.

If we isolate the transistor-count contribution, we notice that at equal frequency we got about 30x more performance with 1000x more transistors. Hence it's roughly a square-root law: performance increases as the square root of the transistor ratio, i.e. doubling the transistor count increased performance by about 40% at equal clocks — over the long term, of course.

Yep, and that is exactly what Pollack observed, and it is why we refer to Pollack's Rule when speaking of the trade-off between increasing xtors (complexity) and performance.

Pollack's Rule states that microprocessor "performance increase due to microarchitecture advances is roughly proportional to [the] square root of [the] increase in complexity". This contrasts with power consumption increase, which is roughly linearly proportional to the increase in complexity. Complexity in this context means processor logic, i.e. its area.

[05.jpg: Pollack's Rule — performance scaling as the square root of area/complexity]
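
A minimal sketch of the rule as quoted, with the proportionality constants set to 1:

```python
import math

def pollack(area_ratio):
    """Pollack's Rule: perf ~ sqrt(complexity); power ~ complexity (area)."""
    perf = math.sqrt(area_ratio)
    power = area_ratio
    return perf, perf / power      # speedup and perf-per-watt vs. the baseline core

for area in (1, 2, 4, 8):
    perf, efficiency = pollack(area)
    print(f"{area}x area -> {perf:.2f}x perf, {efficiency:.2f}x perf/W")
```

The falling perf/W column is the efficiency argument for many smaller cores over one giant core — provided, per Amdahl, the workload can actually use them.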