How would Bulldozer/Piledriver perform if built in an Intel fab?

Yuriman

Diamond Member
Jun 25, 2004
5,530
141
106
As the title states, how would Bulldozer/Piledriver perform if built in an Intel fab? Suspend reality for a moment and let's assume AMD could migrate their design over to Intel's process, and Intel was willing to rent out their fabs to AMD. How would these chips look on Intel's 32nm process? How about their 22nm? That is to say, how much of Sandy and Ivy Bridge's advantage is in the fab?

Wikipedia said:
Intel reports that their tri-gate transistors reduce leakage and consume far less power than current transistors. This allows up to 37% higher speed, or a power consumption at under 50% of the previous type of transistors used by Intel.

^ It's generally true that Intel's 32nm process is superior in most ways to the other fabs, so imagine Bulldozer built on a process that allows it to clock 37% higher.
 

NostaSeronx

Diamond Member
Sep 18, 2011
3,815
1,294
136
You would see more capacity, but Bulldozer's issue is architectural. GlobalFoundries says that there is a two-year gap between them and Intel. Noting the gap, GlobalFoundries nodes are superior to Intel nodes. You have more density and more performance with 32-nm GloFo SHP than 32-nm Intel. 32-nm SHP is carrying Bulldozer, not the other way around.
 
Last edited:

TuxDave

Lifer
Oct 8, 2002
10,571
3
71
As the title states, how would Bulldozer/Piledriver perform if built in an Intel fab? Suspend reality for a moment and let's assume AMD could migrate their design over to Intel's process, and Intel was willing to rent out their fabs to AMD. How would these chips look on Intel's 32nm process? How about their 22nm? That is to say, how much of Sandy and Ivy Bridge's advantage is in the fab?

^ It's generally true that Intel's 32nm process is superior in most ways to the other fabs, so imagine Bulldozer built on a process that allows it to clock 37% higher.

Just a note. It says 37% faster transistor speed, but unless wire delay scales the same you won't get a 37% clock speed increase.
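A quick back-of-the-envelope sketch of why (the 60/40 split between transistor delay and wire delay on the critical path below is just a made-up illustration, not data from any real design):

Code:
# Toy model: cycle time = transistor (gate) delay + wire delay.
# A 37% faster transistor shrinks gate delay by a factor of 1/1.37, but if
# wire delay stays the same, the clock gains far less than 37%.

gate_frac, wire_frac = 0.6, 0.4   # assumed split of the critical path (illustrative)
xtor_speedup = 1.37               # the "up to 37%" transistor speed claim

old_cycle = gate_frac + wire_frac                  # normalized to 1.0
new_cycle = gate_frac / xtor_speedup + wire_frac   # wires don't improve
clock_gain = old_cycle / new_cycle - 1.0

print(f"clock speed gain: {clock_gain:.1%}")       # roughly 19%, not 37%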
 

guskline

Diamond Member
Apr 17, 2006
5,338
476
126
All things being equal, not much different. I own an 8150 (see below) along with two i5-2500Ks, so I'm hardly an Intel fanboy or an AMD fanboy. When you sift through all of the rhetoric, it appears the Bulldozer design was flawed to begin with and GF didn't have time to correct the design flaws.

CRAP? I love that term. It's in the eye of the beholder. I guess in the eyes of a 3570k owner my 2500ks are CRAP?
 

Tuna-Fish

Golden Member
Mar 4, 2011
1,699
2,623
136
Noting the gap, GlobalFoundries nodes are superior to Intel nodes. You have more density and more performance with 32-nm GloFo SHP than 32-nm Intel.

That's not true at all. By the latest published IEDM data, the 32nm GF process is more dense, but it has around 7% slower pFETs. 45nm was much worse -- the (IBM, Toshiba, Sony, AMD) process was closer to Intel's 65nm than to Intel's 45nm in speed.

The last time AMD had a process that was actually better than Intel's was its 90nm. Since then, they have lagged badly, both in time of introduction and in performance once released.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
It probably wouldn't clock all that much higher but it would probably use less power.

Regarding the "density" aspect - density is a bit of a red herring in this situation because density is a direct tradeoff with clockspeed. Transistors have both length and width, and at the end of the day you need drive current.

If your process delivers low drive current per micron, then your designs will require wider transistors to boost total drive current if you want the circuit to function at a certain target clockspeed (or you have to boost the operating voltage, at the cost of leakage power and lifetime reliability).

This is why GPUs can be so much higher density than CPUs, but at the same time they end up operating at 1/3 - 1/4 the clockspeed.

That is also why the industry in general compares nodes on the basis of normalized results - drive current per micron at 1V and so forth - because that is the correct way to reduce the electrical parametrics of CMOS for comparison purposes.
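To put rough numbers on that normalization (every figure below is invented purely for illustration, not real process data):

Code:
# Foundries quote saturation drive current normalized per micron of gate
# width (uA/um) at a fixed voltage, e.g. 1 V. A circuit needs a certain
# total drive current to hit its clock target, so a process with weaker
# per-micron drive forces wider transistors -> lower density.

def width_needed_um(target_current_ua, idsat_ua_per_um):
    """Gate width (in um) needed to supply a target drive current."""
    return target_current_ua / idsat_ua_per_um

target_ua = 1500.0       # drive current a critical gate needs (made-up number)
strong_process = 1200.0  # uA/um at 1 V (made-up number)
weak_process = 900.0     # uA/um at 1 V (made-up number)

print(width_needed_um(target_ua, strong_process))  # 1.25 um of gate width
print(width_needed_um(target_ua, weak_process))    # ~1.67 um: bigger device, less density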

This doesn't happen when marketing gets hold of the data, though, because in general (1) marketing people don't have electrical engineering degrees, and (2) the people they are marketing to don't have EE degrees either, so they are all the easier to bamboozle into believing whatever the marketing folks are trying to "sell" in their PR campaign.
 

happysmiles

Senior member
May 1, 2012
340
0
0
"UP TO" are the key words.

I think if Intel made Bulldozer it would be a lot more efficient but mostly because Intel would have some say in the design.
 

Smoblikat

Diamond Member
Nov 19, 2011
5,184
107
106
Just a note. It says 37% faster transistor speed, but unless wire delay scales the same you won't get a 37% clock speed increase.

That would be a ludicrous clock speed.

I believe it isn't a problem with fabs not working, it's an architecture issue. AMD banked on modular half-cores and the software doesn't work that way right now; add in horrific IPC and you have a faildozer, soon to be steamfailer.
 

nonameo

Diamond Member
Mar 13, 2006
5,902
2
76
If you took bulldozer as it is now and merely stuck it in an intel cookie factory, as others have said I don't think you would see much extra in terms of anything except power consumption.

However I believe that these companies design their CPUs around available or expected available resources. If AMD had access to intel fabs, I'm sure they would have made some different design decisions that may have affected performance in other ways.

You know, like 10 cores instead of 8!(lol, just playing :p)
 

Ancalagon44

Diamond Member
Feb 17, 2010
3,274
202
106
If you took bulldozer as it is now and merely stuck it in an intel cookie factory, as others have said I don't think you would see much extra in terms of anything except power consumption.

However I believe that these companies design their CPUs around available or expected available resources. If AMD had access to intel fabs, I'm sure they would have made some different design decisions that may have affected performance in other ways.

You know, like 10 cores instead of 8!(lol, just playing :p)

They would have heard that you like processors, so they would put a processor in your processor so you can process while you process!
 

nonameo

Diamond Member
Mar 13, 2006
5,902
2
76
They would have heard that you like processors, so they would put a processor in your processor so you can process while you process!

Hahaha, but who knows... adding extra cores seems like a simple way to give a bump where it can be used; perhaps more would have been beneficial in the server space? Given how strapped AMD is for cash (they are, right?), fabs may not be the business bottleneck at all. I wouldn't know, I haven't looked into it that deeply.

I really don't think Bulldozer/Piledriver is all that bad where it's at now... in laptops. When I first heard about Bulldozer I was a little excited and a little worried at the same time. I was thinking the 4-core Bulldozer would be attractive because they'd be able to give you a quad at the dual-core price... but it didn't really work out that way. Instead what we got was a quad core at the i3 price that was comparable in multithreaded tasks and much worse in single-threaded tasks.

Then, I was excited about Llano and the performance was pretty good considering, but it wasn't in the form factor I wanted to see it in. I kept thinking how nice it would be to have Llano in a small laptop, you know... something with a screen size of 13.3" or less. But we didn't really see much of that at launch and even 14.1" laptops were scarce; all of the better Llanos went into 15.6" laptops. It looks like that's happening again, I can't seem to find good 13.3" Trinity laptops anywhere. You'd think at 25W vendors could swing something, but nnooooo :-/

And small is clearly popular, otherwise netbooks wouldn't have taken off like they did, and Intel wouldn't be pushing the ultrabook initiative, which fixes a lot of issues with netbooks (read: performance) that Trinity could tie up quite tidily if vendors would just put them there!

Okay, I digressed. Sorry.
 

Obsoleet

Platinum Member
Oct 2, 2007
2,181
1
0
All things being equal, not much different. I own an 8150 (see below) along with two i5-2500Ks, so I'm hardly an Intel fanboy or an AMD fanboy. When you sift through all of the rhetoric, it appears the Bulldozer design was flawed to begin with and GF didn't have time to correct the design flaws.

CRAP? I love that term. It's in the eye of the beholder. I guess in the eyes of a 3570k owner my 2500ks are CRAP?

Compared to Bulldozer, my Q9450 is crap.

That's fine with me. The Q9450 is more CPU power than I, or most people, need.
 

Mitch101

Senior member
Feb 5, 2007
767
0
0
www.InteriorLiving.com
I think Intel has a step up on how to wire their architecture for less current leakage and current jumping, is that it? I'm not referring to Intel redesigning the processor, but changing the lines of communication for less crosstalk. I certainly believe Intel could get AMD CPUs to overclock higher than AMD can under their own design. Even then it would probably be like 200MHz.

Add that to Intel's 22nm manufacturing ability and it would be nice, but not enough.

The problem is that AMD's current-generation IPC and branch prediction are worse than previous generations'. To compensate, Intel would need to be able to make the AMD chip overclock more than Intel's chips to make them compete.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
I think Intel has a step up on how to wire their architecture for less current leakage and current jumping, is that it? I'm not referring to Intel redesigning the processor, but changing the lines of communication for less crosstalk. I certainly believe Intel could get AMD CPUs to overclock higher than AMD can under their own design. Even then it would probably be like 200MHz.

Add that to Intel's 22nm manufacturing ability and it would be nice, but not enough.

I think the main take-away message here is that money makes things happen.

Add $6B to GloFo and you probably will get a 22nm node that has fancy 3D xtors.

Add $6B to AMD and you probably will get a much more highly optimized layout and design of bulldozer that could/would clock another 10-15% higher.

Money is magical that way.

[attached table: IC Insights chip R&D spending]
 

Xpage

Senior member
Jun 22, 2005
459
15
81
www.riseofkingdoms.com
I think the main take-away message here is that money makes things happen.

Add $6B to GloFo and you probably will get a 22nm node that has fancy 3D xtors.

Add $6B to AMD and you probably will get a much more highly optimized layout and design of bulldozer that could/would clock another 10-15% higher.

Money is magical that way.

[attached table: IC Insights chip R&D spending]


I agree completely. Although I went i5 this generation, I was very pleased with AMD's performance back in the day. I think the slow strangulation of AMD is due to budget constraints; they do great work for their budget.

I think the architecture would hit its speed limit before hitting the limits of 22nm. It'd probably top out around 5.1-5.2 GHz on 22nm without a redesign for more frequency.
 

NostaSeronx

Diamond Member
Sep 18, 2011
3,815
1,294
136
I think the main take-away message here is that money makes things happen.
GlobalFoundries is partnered with IBM and Samsung in Research and Development on future nodes.

I'm not a foundry expert ->
Intel 22-nm:
[image: lower metals & transistors]

20-nm IBM, GlobalFoundries, Samsung:
[image: VLSI IBM cross-section]


IBM & GloFo & Samsung have hefty R&D budgets for nodes.
AMD only needs to wait for the processes from the Common Platform and TSMC to finalize.

Intel spends $8.35B on R&D, but the cost of R&D for future nodes on the foundry side is about $6B of that, so for everything else it is really only $2.35B.
AMD has fewer product lines than Intel, but has $1.4B of R&D.

Laxed...
 
Last edited:

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
GlobalFoundries is partnered with IBM and Samsung in Research and Development on future nodes.

I'm not a foundry expert ->
Intel 22-nm:

20-nm IBM, GlobalFoundries, Samsung:


IBM & GloFo & Samsung have hefty R&D budgets for nodes.
AMD only needs to wait for the processes from the Common Platform and TSMC to finalize.

Intel spends $8.35B on R&D, but the cost of R&D for future nodes on the foundry side is about $6B of that, so for everything else it is really only $2.35B.
AMD has fewer product lines than Intel, but has $1.4B of R&D.

Laxed...

I bring up the table again because it might not be all that obvious at first glance what it is showing us:

[attached table: IC Insights chip R&D spending]


Firstly, take notice of how much money it costs for a pure-play foundry to develop leading-edge process nodes.

TSMC's R&D expense for 2011 was $1.2B. They don't design ICs; no R&D expense goes towards IC design, it is all for process node development.

Now look at the R&D expense for the truly fabless - AMD, Qualcomm, Broadcom, Nvidia - they spend billions per year on IC design alone (no fab-related R&D costs whatsoever).

AMD alone spent more in 2011 designing ICs than TSMC spent that year developing their 20nm and 14nm nodes.

And that is the reality. Process node development gets the mainstream story because it is easily packaged and sold to the layperson as the costly death knell of Moore's Law, but the real economic death knell of the semiconductor industry is the cost of designing/verifying/validating the ever-more-complex IC itself.

The reality is that GloFo and IBM and Samsung could team up and outspend Intel to push out 14nm before Intel does...but they would literally have to pay their customers (AMD) in order for their customers to have enough resources to design the ICs that would be manufactured on that advanced 14nm node.

Look at Intel's budget. It doesn't cost $8B to develop 22nm and 14nm; the 2011 portion of that budget for CMOS R&D was probably only $2B (2x TSMC's)...the other $6B is what it cost Intel to design ICs for those nodes.
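To make that split concrete, the same arithmetic with the 2011 figures cited in this thread (the $2B process portion is just my estimate above, not a reported line item):

Code:
# Approximate 2011 R&D spending in USD billions, as cited in this thread
intel_total_rd   = 8.0   # Intel, total R&D
intel_process_rd = 2.0   # rough estimate of Intel's CMOS/process-node portion
tsmc_rd          = 1.2   # TSMC (pure-play foundry): essentially all process R&D
amd_rd           = 1.4   # AMD (fabless): essentially all IC design R&D

intel_design_rd = intel_total_rd - intel_process_rd
print(intel_design_rd)   # ~6.0: IC design dwarfs node development even at Intel
print(amd_rd > tsmc_rd)  # True: AMD's design R&D alone tops TSMC's node R&D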

Give AMD access to advanced nodes and it does them no good unless you also give them the billions of dollars needed to develop the ICs on those advanced nodes. (Look at Nvidia's R&D expense, and all they design are GPUs and a Tegra IC or two.)

Making process nodes is the cheapest input to producing a chip on an advanced node (that's not the story you get told though). Getting the chip designed, functioning correctly, and validated is very expensive.