Transistor for transistor, how fast are today's processors?

yhelothar

Lifer
Dec 11, 2002
18,409
39
91
I'm wondering how much processor performance has expanded due to chip design rather than the increase in transistor count from fabrication technology. I suppose this metric wouldn't be too difficult to determine, with all of the small, low-powered chips out now with transistor counts similar to those of 8-10 years ago.

But I'm having difficulty finding benchmarks that compare them.

When the Atom came out in 2008 on a 45nm process, it had roughly the same transistor count as a Barton Athlon XP. But to my knowledge, it wasn't nearly as fast?

If so, why not?
 

Fx1

Golden Member
Aug 22, 2012
1,215
5
81
Because power consumption is directly related to performance.
 

yhelothar

Lifer
Dec 11, 2002
18,409
39
91
Yes, and the Atom would already have massive power consumption benefits from the smaller scale processing.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
Yes, and the Atom would already have massive power consumption benefits from the smaller scale processing.

130nm to 45nm is not enough for that alone. Even if we overestimate at a 50% power cut per node, the Barton would still use 10W, and most likely 20W in real life.

Not sure if the Barton XP would be faster today. But you are comparing 70-80W CPUs with a 4W Atom 230 (for example), which has 64-bit support plus other new instructions and Hyper-Threading.

The Barton is also 54.3 million transistors versus the Atom's 47 million.

Not to mention the lowest Atom on 45nm could hit a 0.65W TDP.
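The scaling arithmetic above can be sketched in a few lines. The 80W starting point, the 50% cut per node, and the three node transitions (130 → 90 → 65 → 45nm) are the post's own assumptions, not measured data:

```python
# Back-of-envelope version of the scaling argument above.
# Assumption: an (optimistic) 50% power cut per full node shrink.
barton_power_130nm = 80.0  # W, rough TDP of a high-end 130nm Barton
cut_per_node = 0.5         # optimistic 50% reduction per node
node_shrinks = 3           # 130nm -> 90nm -> 65nm -> 45nm

ideal_45nm_power = barton_power_130nm * cut_per_node ** node_shrinks
print(f"Idealized Barton at 45nm: {ideal_45nm_power:.0f}W")  # -> 10W
```

Even under that optimistic assumption the result is ~10W, still well above the Atom 230's ~4W, which is the post's point.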
 
Last edited:

Fx1

Golden Member
Aug 22, 2012
1,215
5
81
Also, a CPU isn't really just a CPU any more. So much more has been put onto the die that you probably can't compare transistor counts any more.
 

frowertr

Golden Member
Apr 17, 2010
1,372
41
91
And I would be surprised if you could find in-depth research/performance specs on Intel and AMD transistors. I would think those are heavily guarded proprietary secrets.
 

Abwx

Lifer
Apr 2, 2011
12,000
4,954
136
At 130nm, transistors had an fT (transition frequency) of about 150GHz; current processes are in the vicinity of 300GHz, so we can confidently assume that at 10nm it won't be more than 500GHz. Since about 2% of fT is the ratio achievable with a whole circuit, we will barely have more than 8-10GHz CPUs in the next two decades unless processes are revolutionized.
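The fT argument above reduces to simple arithmetic. Both the 2% circuit-to-fT ratio and the fT figures are the post's own estimates, not datasheet values:

```python
# Crude ceiling on achievable circuit clock, per the post's reasoning:
# a whole circuit can run at roughly 2% of the transistor's fT.
def max_clock_ghz(ft_ghz, circuit_ratio=0.02):
    """Estimated circuit clock ceiling given transistor transition frequency."""
    return ft_ghz * circuit_ratio

for node, ft in [("130nm", 150), ("current", 300), ("10nm (assumed)", 500)]:
    print(f"{node}: fT ~{ft}GHz -> ~{max_clock_ghz(ft):.0f}GHz clock ceiling")
```

At the assumed 500GHz fT, 2% gives 10GHz, matching the post's "barely more than 8-10GHz" conclusion.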
 

Piroko

Senior member
Jan 10, 2013
905
79
91
At 130nm, transistors had an fT (transition frequency) of about 150GHz; current processes are in the vicinity of 300GHz, so we can confidently assume that at 10nm it won't be more than 500GHz. Since about 2% of fT is the ratio achievable with a whole circuit, we will barely have more than 8-10GHz CPUs in the next two decades unless processes are revolutionized.
Am I right in thinking that Germanium could increase Ft considerably, or are they already working with Si-Ge alloys?
 

Fox5

Diamond Member
Jan 31, 2005
5,957
7
81
Bobcat is about equal to Barton per MHz, and I think it has an even smaller transistor count than Atom. It also sometimes gets beaten by Atom, and has higher power consumption.

Although I think back in the day, the mobile Bartons could be downclocked to 300MHz and consumed about 4.5W at idle, and around 15W at load.

http://www.tomshardware.com/reviews/athlonxp-underclocking-a-low,892-18.html

The better question is why Atom over the Pentium M? The Pentium M was doing sub 10W at 90nm.
 

Charles Kozierok

Elite Member
May 14, 2012
6,762
1
0
You have to realize that all of the low-hanging fruit in terms of processor efficiency was picked a decade ago. There just isn't any more blood to squeeze from these rocks. That's why extra transistors are being used on multiprocessing, integrated components and power-efficiency features -- it's because there really isn't anything else to do with them.
 
Last edited:

Abwx

Lifer
Apr 2, 2011
12,000
4,954
136
Am I right in thinking that Germanium could increase Ft considerably, or are they already working with Si-Ge alloys?

Transistors are not much limited frequency-wise by their theoretical principle, but any physical implementation inevitably creates parasitic capacitances, which are the real brake on switching speed, and they will be present whatever the process. Only size reduction will lower those capacitances, but often at the expense of other desirable parameters: a shrunk transistor will have less parasitic capacitance, but if gate length is reduced it will also have reduced transconductance (read: lower conductivity when switched on), so it will charge/discharge those capacitors more slowly, negating much of the speed increase.
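The trade-off described above can be sketched with the first-order gate-delay relation t ≈ C·V/I_on: a shrink cuts the parasitic capacitance C, but if the drive current I_on falls with it, the delay gain partly cancels. All numbers below are invented for illustration:

```python
# First-order gate delay t = C*V/I_on (illustrative, invented numbers).
def gate_delay_ps(c_fF, v_volts, i_on_uA):
    """Switching delay in picoseconds for t = C*V/I."""
    return (c_fF * 1e-15) * v_volts / (i_on_uA * 1e-6) * 1e12

older = gate_delay_ps(2.0, 1.2, 100)  # bigger device: more C, more drive
shrunk = gate_delay_ps(1.0, 1.0, 60)  # half the C, but less drive current
print(f"older: {older:.1f}ps, shrunk: {shrunk:.1f}ps")  # 24.0ps vs ~16.7ps
```

Halving the capacitance only cuts the delay by about 30% here, because the reduced transconductance claws most of the gain back.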
 

Homeles

Platinum Member
Dec 9, 2011
2,580
0
0
You have to realize that all of the low-hanging fruit in terms of processor efficiency was picked a decade ago. There just isn't any more blood to squeeze from these rocks. That's why extra transistors are being used on multiprocessing, integrated components and power-efficiency features -- it's because there really isn't anything else to do with them.

If only the AMD apologists on this board would realize this...

Back to vacation for you. This kind of trolling will not be tolerated.
-ViRGE
 
Last edited by a moderator:

Homeles

Platinum Member
Dec 9, 2011
2,580
0
0
I don't get how my comment applies specifically to AMD either.
Not specifically, but there is a common sentiment that Intel's small performance gains over the past few generations have been a direct result of AMD not being competitive -- that Intel is purposefully holding back performance increases because they have no incentive to increase performance when they're already ahead.
 

Sleepingforest

Platinum Member
Nov 18, 2012
2,375
0
76
I think it's a little of both: diminishing returns means it takes a great deal of time, money, and personnel just to get a tiny bit of efficiency out--time, money, and personnel that could be better put to other purposes. On the other hand, if AMD were more competitive, I'm sure Intel would be forced to push out more innovations.

An interesting study from Columbia University in 2011 concludes that Intel would actually innovate at a slightly higher rate without AMD due to the relative "durability" of CPUs and AMD's relative weakness--however, the consumer surplus would dramatically decrease (essentially, the price rises).

I'd say a slightly slower rate of innovation is worth keeping the price down.
 

Piroko

Senior member
Jan 10, 2013
905
79
91
If only the AMD apologists on this board would realize this...
...the Hell?
Transistors are not much limited frequency-wise by their theoretical principle, but any physical implementation inevitably creates parasitic capacitances, which are the real brake on switching speed, and they will be present whatever the process. Only size reduction will lower those capacitances, but often at the expense of other desirable parameters: a shrunk transistor will have less parasitic capacitance, but if gate length is reduced it will also have reduced transconductance (read: lower conductivity when switched on), so it will charge/discharge those capacitors more slowly, negating much of the speed increase.
Hm true. Although other semiconductor alloys do have some intriguing characteristics compared to Silicon.
 

Homeles

Platinum Member
Dec 9, 2011
2,580
0
0
I think it's a little of both: diminishing returns means it takes a great deal of time, money, and personnel just to get a tiny bit of efficiency out--time, money, and personnel that could be better put to other purposes. On the other hand, if AMD were more competitive, I'm sure Intel would be forced to push out more innovations.

An interesting study from Columbia University in 2011 concludes that Intel would actually innovate at a slightly higher rate without AMD due to the relative "durability" of CPUs and AMD's relative weakness--however, the consumer surplus would dramatically decrease (essentially, the price rises).

I'd say a slightly slower rate of innovation is worth keeping the price down.
I see the slowdown as being almost entirely a result of the low-hanging fruit being completely picked over for Intel. I'm sure Intel would love to cut AMD out of the equation by crushing them with insurmountable performance; however, that is financially unfeasible.

Intel ignored power consumption for so long that there is a relative abundance of low-hanging fruit for them to pick on that side of the fence. There are massive financial incentives for doing so -- the burgeoning smartphone and tablet markets are prime for filling Intel's coffers. Desktop market? Not so much. Performance is still very important to Intel, however. If it wasn't, we likely wouldn't see any gains worth mentioning. Intel's number one advantage over ARM is performance, and they want to keep that gap as wide as possible.
...the Hell?
You've never heard the argument that Intel's holding back performance because AMD is/was so far behind? I take it that you're new to hardware forums.

Anyways, I dragged us off topic. As far as the OP goes, there are far too many variables to make a valid comparison.
 
Last edited:

fixbsod

Senior member
Jan 25, 2012
415
0
0
A nudge back to the OP's question, though I'm unfortunately not bringing the numbers myself --

While it is true that modern CPUs have integrated portions/systems that vintage CPUs did not, comparing perf/transistor is still valid because it will shed light on whether the integrations led to efficiencies or bloat.
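A perf/transistor comparison like the one suggested here could look as follows. The benchmark scores are hypothetical placeholders, not real measurements; the transistor counts are the figures quoted earlier in the thread:

```python
# Performance per transistor as a comparison metric (sketch).
# "score" values are hypothetical placeholders, NOT real benchmark results;
# transistor counts (millions) are the figures quoted earlier in the thread.
chips = {
    "Barton Athlon XP": {"mtransistors": 54.3, "score": 100.0},
    "Atom 230":         {"mtransistors": 47.0, "score": 90.0},
}

for name, c in chips.items():
    ratio = c["score"] / c["mtransistors"]
    print(f"{name}: {ratio:.2f} points per million transistors")
```

With real, directly comparable benchmark scores plugged in, the ratio would show whether the extra integration bought efficiency or just bloat.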
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
If only the AMD apologists on this board would realize this...

I'm not going to miss a perfect opportunity to hammer in a very important point.

The point could just as easily be "hammered in" without stooping to the level of casting it as some denigrating stereotype by way of "AMD apologists".

Your choice of wording is creating walls and making people defensive, preventing digestion and contemplation of the message you intend to deliver.

If you really just want to insult people then your approach is working, if you really aim to increase understanding and perspective then your approach is failing.

How you change up your approach going forward, in light of this feedback, will be very telling of which ultimate aim your posts have in this thread.

Take the high road and lead by example, you might be surprised how reasonable people are when they are not left feeling attacked and insulted at every turn. Calling them "AMD apologists" is not conducive to productive dialogue.
 

Homeles

Platinum Member
Dec 9, 2011
2,580
0
0
The point could just as easily be "hammered in" without stooping to the level of casting it as some denigrating stereotype by way of "AMD apologists".

Your choice of wording is creating walls and making people defensive, preventing digestion and contemplation of the message you intend to deliver.

If you really just want to insult people then your approach is working, if you really aim to increase understanding and perspective then your approach is failing.

How you change up your approach going forward, in light of this feedback, will be very telling of which ultimate aim your posts have in this thread.

Take the high road and lead by example, you might be surprised how reasonable people are when they are not left feeling attacked and insulted at every turn. Calling them "AMD apologists" is not conducive to productive dialogue.
You sure like giving people the Come to Jesus talk. I think that's three times this week.

Yes, my goal was to insult people. The truth of the matter, however, is they are AMD apologists by definition. But I suppose the truth hurts.

The "Pro-AMD/Anti-Intel" crowd here stands for the opposite of nearly everything I value. Their intellectual dishonesty is rampant. They cannot see both sides of the coin. They cry persecution, even when they are not being persecuted and when (I'd argue, at least) they make up the majority of those in hardware circles (mostly referring to communities outside of AnandTech).

It's the go-to position for ignorant newcomers. They see only the up front performance per dollar advantage that AMD has/had, and get indoctrinated into believing that Intel is out to get them.

If this were politics in the United States, the AMD folk would be the religious right. I imagine that they are waiting for the second coming of the K8 to banish Intel from the desktop industry. So many of the arguments are faith based and tied to "feelings," rather than being based on facts, logic, and reality.

Like the U.S. political realm, there certainly are those who root for AMD that can be objective and participate in rational discussion, just like there are those that are pro-Intel, from a bystander's point of view, that absolutely cannot. However, these people are the exception, not the norm.

It's really difficult not to look down on these people. Reality has a well-documented Intel bias, yet the AMD crowd is blind to this.

I've recently read Dale Carnegie's How to Win Friends and Influence People. It's a great book, and it backs up what you're saying about insults and criticisms (even when they're objectively true): they don't do a very good job of winning people over to your way of thinking.

It's counterintuitive to pity the fool, but I'm going to give it a shot.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
I'm wondering how much processor performance have expanded due to chip design rather than the increase of transistor count from fabrication technology. I suppose this metric wouldn't be too difficult to determine with all of the small, low powered chips out now with similar transistor counts as 8-10 years ago.
Ah, but it would be.

Those small, low-power chips are designed to be low-power, not high performance.

Current high-performance chips can't use too much power, or not enough people will buy them.

12-15 years ago, the problem, for chips being released at the time anyway, was still fitting enough functionality into an affordable die. By maybe 8 years ago, the problem was being able to turn all of it on without using too much power.

They're not made to work the same way.

Then, there's the big wrinkle in the sheet: integrating more xtors and wires per mm^2 is what has allowed much of the chip design to work.

Let's take a popular example, and one that was innovative by any definition: the P6. At first, it needed around 20% more xtors than the Pentium MMX, without MMX support, and then needed added cache to boot to make its design enhancements work well. It took around 3 years for the manufacturing technology of commodity CPUs to truly catch up to that CPU design.

Within the CPU cores and caches, at least, much of the added transistor count is necessary complexity for the implementation of some feature. They might be able to do good re-implementations smaller with hindsight, or maybe just more time (ARM and MIPS have done exactly that), but the first implementation needed the extra xtors and wires, given their knowledge and ability at the time.
 

jhu

Lifer
Oct 10, 1999
11,918
9
81
Size for size might be more relevant (if at all). More relevant still is performance/¥.