Rumour: Bulldozer 50% Faster than Core i7 and Phenom II.


hamunaptra

Senior member
May 24, 2005
929
0
71
People that no longer work for AMD, lol. If you really want me to, I'll find the source saying that Bulldozer was not designed for top performance and that AMD was not pursuing the performance crown. If I had to guess, I would say it was in a 2010 earnings call.

But let me ask this: have you ever stated something similar to the above? Something about how being the fastest doesn't matter?


It's right to a point: they don't want to be fastest clock for clock, and they've stopped trying to attain that, probably because they always lag behind Intel there.

There are other ways to achieve higher throughput, though: higher clocks and a slimmer core design, all of which equate to higher performance.

I.e., if AMD designs BD to run at 4 GHz+ but it falls 5-10% short of Intel in per-core, per-clock performance, you can't call it a failure just because it's not as fast as Intel at the same clock.

You can't underclock the damn thing and call it a failure or a not-so-great performer. Sure, you can do that for shits and giggles, but by no means call it a defunct uarch design, because in the end it's DESIGNED to run those higher clocks.

Therefore it would be 4 GHz stock vs. whatever ifail CPU.
 
Last edited:

hamunaptra

Senior member
May 24, 2005
929
0
71
No they aren't. Power consumption scales with clock speed, unless AMD has found a way around physical laws.

Well, I mean from a uarch design standpoint. If you can nail an extremely efficient power design, then you have more frequency-scaling headroom... given that you also designed it for high frequency.
 

Mopetar

Diamond Member
Jan 31, 2011
8,533
7,799
136
They also say Llano will be over 3 GHz... because of the expected higher clocks of the Bulldozer cores, it's natural to assume that's down to design... they're simply designed to run at higher clock rates.

From what I've heard, Llano won't use BD cores/modules; Llano is supposed to be using K10 cores. Trinity (slated for an H1 2012 release) is the first Fusion part expected to use BD cores. Either way, K10 has already been clocked over 3 GHz, so a 3 GHz+ Llano wouldn't be anything new.
 

Mopetar

Diamond Member
Jan 31, 2011
8,533
7,799
136
No they aren't. Power consumption scales with clock speed, unless AMD has found a way around physical laws.

While that's true, it is possible to design an architecture capable of running at 4 GHz on the same amount of power that it takes to run another architecture at 2 GHz. Both will require additional power to further increase the clock rate, but they don't start in the same place.
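To put rough numbers on that (purely made-up figures, not actual Sandy Bridge or Bulldozer data), here's a quick sketch using the usual dynamic-power approximation P ≈ C·Vdd²·f, showing how a leaner core can double the clock without moving the power budget:

```python
# Hypothetical illustration only: dynamic power ~ C * Vdd^2 * f,
# with the activity factor folded into the effective capacitance C.
def dynamic_power(c_eff, vdd, freq_hz):
    return c_eff * vdd ** 2 * freq_hz

# "Fat" architecture A: more logic switching per cycle, running at 2 GHz.
arch_a = dynamic_power(c_eff=4e-8, vdd=1.25, freq_hz=2.0e9)
# Leaner architecture B: half the switched capacitance per cycle, running at 4 GHz.
arch_b = dynamic_power(c_eff=2e-8, vdd=1.25, freq_hz=4.0e9)

print(f"Arch A: {arch_a:.0f} W at 2 GHz, Arch B: {arch_b:.0f} W at 4 GHz")
# Both land at ~125 W: halving the capacitance switched per cycle offsets the
# doubled clock, so the two designs start from very different places.
```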
 

podspi

Golden Member
Jan 11, 2011
1,982
102
106
People that no longer work for AMD, lol. If you really want me to, I'll find the source saying that Bulldozer was not designed for top performance and that AMD was not pursuing the performance crown. If I had to guess, I would say it was in a 2010 earnings call.

But let me ask this: have you ever stated something similar to the above? Something about how being the fastest doesn't matter?

I'm pretty sure this is referring to IPC. Honestly, I doubt AMD is going to be able to capture the single-threaded performance crown, and I'm not sure it's worth it for them to try. Intel has more resources than AMD and focuses on IPC, and it pairs that strength with its world-leading manufacturing process.



AMD has done the smartest thing: instead of trying to out-brute the brute, it has (hopefully, for them) designed around Intel. Client-side, single-threaded performance is "fast enough" for most of the things people are going to be doing, and for multi-threaded apps Bulldozer should be a beast. (If AMD can sell 8-core CPUs for the same price as Intel sells 4- and 6-core CPUs, more power to them, and that is what the Bulldozer design appears to me to allow.)
 

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
While that's true, it is possible to design an architecture capable of running at 4 GHz on the same amount of power that it takes to run another architecture at 2 GHz. Both will require additional power to further increase the clock rate, but they don't start in the same place.

Sure, but we are talking high-performance x86 here. Nobody has a starting point that's at half the power consumption of the other guy. The only way to achieve a 50% power advantage would be fundamental changes in transistor design, and AMD isn't going to be the one to do that; they don't have the cash, or the development fab.
 

Arkadrel

Diamond Member
Oct 19, 2010
3,681
2
0
Nobody has a starting point that's at half the power consumption of the other guy

You haven't seen the 175 MHz supercomputers IBM made, have you? You don't need 4 GHz CPUs... not if you have very high IPC; even a 175 MHz CPU can be MUCH MUCH more powerful than a 4 GHz Sandy Bridge CPU. Those CPUs use a ton of power, even though they're not running so high in the MHz department.

Yes, you can design CPUs with different IPC, running at different clock speeds and different power consumption levels.

Sure, but we are talking high-performance x86 here. Nobody has a starting point that's at half the power consumption of the other guy. The only way to achieve a 50% power advantage would be fundamental changes in transistor design, and AMD isn't going to be the one to do that; they don't have the cash, or the development fab.

How do ARM processors use so little power even though they run up to 2 GHz?
Why is that, Phynaz? Is it because their "transistor design" is different? No, it's about the CPU design. Also, you going off to talk about "transistor designs" is wacky :p
 

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
You haven't seen the 175 MHz supercomputers IBM made, have you?

No I haven't, can you point me to them?

You don't need 4 GHz CPUs... not if you have very high IPC; even a 175 MHz CPU can be MUCH MUCH more powerful than a 4 GHz Sandy Bridge CPU.

I'd love to see a CPU of any architecture that has >23x the IPC of SB, which is roughly what a 175 MHz chip would need just to match a 4 GHz one.

How do ARM processors use so little power even though they run up to 2 GHz?
Why is that, Phynaz?

Notice where I said high-performance x86? ARM is neither high performance nor x86.

Is it because their "transistor design" is different? No, it's about the CPU design. Also, you going off to talk about "transistor designs" is wacky :p

Really? You do know that when they design a chip, the type of transistor used in each circuit is part of the design. Some transistors are fast, some are slow. Some have high leakage, some have low leakage. Some have high drive current, some have low drive current. There are many other variables, and there are benefits and trade-offs to each choice. One of the primary choices is speed vs. power consumption.

This is also one reason why ARM designs are low power: they are built using low-power, low-switching-speed transistors. If they weren't, why couldn't we just crank the clocks up on them until they hit 150 W and all have 100 GHz CPUs? You really don't think ARM has some special sauce that no other chip designer in the world has, do you?

Edit:
Just to clarify the above, here is ARM's A9 page: http://www.arm.com/products/processors/cortex-a/cortex-a9.php. Scroll down a bit and you can see where you get to pick a low-power design or a "high performance" design. You don't get both at the same time.
 
Last edited:

Mopetar

Diamond Member
Jan 31, 2011
8,533
7,799
136
Sure, but we are talking high-performance x86 here. Nobody has a starting point that's at half the power consumption of the other guy. The only way to achieve a 50% power advantage would be fundamental changes in transistor design, and AMD isn't going to be the one to do that; they don't have the cash, or the development fab.

The numbers were more for illustrative purposes. I don't expect AMD to come out with a 6.6 GHz x86 chip, but something around 4 GHz isn't out of the question.

Think back to the days of the P4. Intel had chips clocked at 3.4 GHz back then on their 130 nm process. AMD's chips at that time were clocked around 2.2 GHz, but both had the same TDP.
 

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
The numbers were more for illustrative purposes. I don't expect AMD to come out with a 6.6 GHz x86 chip, but something around 4 GHz isn't out of the question.

Think back to the days of the P4. Intel had chips clocked at 3.4 GHz back then on their 130 nm process. AMD's chips at that time were clocked around 2.2 GHz, but both had the same TDP.

We are talking 3.4 GHz Prescotts vs. 2.2 GHz K7 or K8, right? They weren't anywhere near each other in power; it's why Intel abandoned NetBurst, they hit a thermal wall.

Edit:
Prescott 3.2 GHz - 103 W
Athlon 64 3500+ (2.2 GHz) - 67 W
 
Last edited:

Mopetar

Diamond Member
Jan 31, 2011
8,533
7,799
136
We are talking 3.4 GHz Prescotts vs. 2.2 GHz K7 or K8, right? They weren't anywhere near each other in power; it's why Intel abandoned NetBurst, they hit a thermal wall.

Edit:
Prescott 3.2 GHz - 103 W
Athlon 64 3500+ (2.2 GHz) - 67 W

Prescott was the 90 nm part.

Northwood had a 3.4 GHz part on 130 nm with an 89 W TDP, released in early 2004. AMD's 130 nm (Clawhammer) Athlon 64 released about that time was the 3400+, running at 2.2 GHz with an 89 W TDP.

When AMD moved to their 90 nm process, they used the die shrink to make roughly the same chips at a lower TDP. Intel crammed in more cache, higher clock speeds, and their own x86-64 implementation.

Just because Intel didn't do well with NetBurst doesn't mean AMD can't design an architecture that runs at higher clock speeds. Unlike Intel, they can look at the reasons NetBurst failed and figure out a better design. They also don't have to chase such an insanely large difference in clock rate. If the first BD chips launch at 4 GHz with the same TDPs as SB chips, the percentage difference in clock rate per amount of power wouldn't even be close to what it was between the P4 and Athlon chips of that era.
 

drizek

Golden Member
Jul 7, 2005
1,410
0
71
The difference is that AMD can't strongarm garbage processors into the market the way Intel could in the NetBurst days. AMD was offering better performance with lower power consumption at half the price, and they still couldn't make much of a dent in the market.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
CPU power draw is affected by a lot of things: microarchitecture design, transistor design, manufacturing process, and more.

As of now, we know that Bulldozer will power-gate each module to save energy, and on its 32 nm manufacturing process AMD will implement both SOI (Silicon On Insulator) and HKMG (High-K Metal Gate). Both of these technologies reduce transistor leakage at 32 nm, making for a more efficient CPU.
 

chucky2

Lifer
Dec 9, 1999
10,018
37
91
The difference is that AMD can't strongarm garbage processors into the market the way Intel could in the NetBurst days. AMD was offering better performance with lower power consumption at half the price, and they still couldn't make much of a dent in the market.

That's because the market was rigged against them, courtesy of Intel; and having to depend on VIA's sh1tty chipsets didn't help either...

Chuck
 

Voo

Golden Member
Feb 27, 2009
1,684
0
76
Northwood had a 3.4 GHz part on 130 nm with an 89 W TDP, released in early 2004. AMD's 130 nm (Clawhammer) Athlon 64 released about that time was the 3400+, running at 2.2 GHz with an 89 W TDP.
Yeah, and if both competitors defined TDP the same way, that'd be awesome, wouldn't it? Talk about apples to apples...

Anyway, power draw is usually approximated as P = 1/2 * C * Vdd^2 * f + Idd * Vdd (IDC had an interesting post showing that the actual scaling with Vdd is even worse than quadratic), so factors like die size, transistor design, and the actual manufacturing process also play a role even if you keep Vdd and frequency identical for both chips.
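As a quick sanity check on that formula (all values made up purely for illustration, not measured from any real chip), here's the same hypothetical chip evaluated at two operating points; the higher clock also needs more Vdd, so power climbs much faster than the clock does:

```python
# Power ~ 1/2 * C * Vdd^2 * f + Idd * Vdd, with hypothetical values.
def power_watts(c_eff, vdd, freq_hz, i_leak):
    dynamic = 0.5 * c_eff * vdd ** 2 * freq_hz  # switching power
    static = i_leak * vdd                       # leakage, held constant here for simplicity
    return dynamic + static

low  = power_watts(c_eff=5e-8, vdd=1.00, freq_hz=2.0e9, i_leak=10.0)  # ~2 GHz @ 1.00 V
high = power_watts(c_eff=5e-8, vdd=1.25, freq_hz=3.0e9, i_leak=10.0)  # ~3 GHz @ 1.25 V

print(f"2 GHz point: {low:.0f} W, 3 GHz point: {high:.0f} W")
# 1.5x the clock plus the extra voltage roughly doubles the power (~60 W -> ~130 W),
# and since real leakage also grows with Vdd, the true penalty is even worse.
```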
 

maddie

Diamond Member
Jul 18, 2010
5,205
5,618
136
No they aren't. Power consumption scales with clock speed, unless AMD has found a way around physical laws.
Yes, BUT.

Surely you're not saying that all processors at similar speeds (GHz) consume identical power?
Each design has a different speed for a given energy use.

I find your arguments rather myopic.
 

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
That's because the market was rigged against them, courtesy of Intel; and having to depend on VIA's sh1tty chipsets didn't help either...

Chuck

And yet it was during this time that AMD made the highest profits in its history, and it was actually supply constrained; they couldn't make CPUs fast enough.

Sounds to me like when they had an exceptional product, they did exceptionally well.
 
Last edited:

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
Yes, BUT.

Surely you're not saying that all processors at similar speeds (GHz) consume identical power?
Each design has a different speed for a given energy use.

I find your arguments rather myopic.

Jeez, people can't read.

I'll say it again.

Within the context of high-performance x86 CPUs.

Also, I believe you took my quote out of its original context, which was that regardless of design, increasing the clock will increase power consumption proportionately.
 
Last edited:

hamunaptra

Senior member
May 24, 2005
929
0
71
I just hope that AMD isn't making a "weak" process. What I mean is, Intel's transistors are known to be extremely sensitive to overvolting, easily killing the chip,
while AMD's current 45 nm process can take a ton of voltage and hardly sneeze at it.
I hope that by going to HKMG and 32 nm, AMD's transistors will still maintain a strong tolerance for higher voltages... all in the name of OC'ing, of course!


And yes, I'm a fanboi, because I root for the underdog; I always have. But I also take a realistic approach to both sides, hence why I'm running an i7 920 right now. I switch back and forth to get a feel for both offerings.

My next upgrade will most likely be some iteration of BD.
 

Mopetar

Diamond Member
Jan 31, 2011
8,533
7,799
136
Yeah and now if both competitors defined TDP the same way that'd be awesome wouldn't it? Talk about apples to apples..

They can't play too terribly loose with their definitions. Any chip that regularly exceeds its rated TDP is going to burn itself out, or shorten its lifespan, if it's paired with a cooling system that can't disperse the heat. TDP is usually given as the most heat the chip will ever produce at stock settings.

It's possible that some chips commonly operate closer to their rated TDP than others, but now that both companies have Turbo Boost/Core, the chip can use up the remaining headroom when it's running below its maximum TDP and take advantage of a higher clock rate.

One of JFAMD's blog posts mentions that AMD rates TDP with special testing software that stresses every part of the chip. I imagine that Intel does similar tests. The only possible difference between AMD and Intel is how close their architectures operate to their defined TDP on average. Since both chips can boost, they can both hit that ceiling as well.
 

jvroig

Platinum Member
Nov 4, 2009
2,394
1
81
What I mean is, Intel's transistors are known to be extremely sensitive to overvolting, easily killing the chip.
The flaw in this argument is that overclocking is not about "how much more voltage can I give this chip" but about "how much more GHz can I squeeze out of this chip." And with that laid out plain and simple, there's no doubt that Intel's offerings since the i7 came out have all been better overclockers, even if AMD chips can take more voltage.

So wishing for more volts is insane (it's taking an indirect route to your goal) when you can just wish directly for more OC headroom.
 

Mopetar

Diamond Member
Jan 31, 2011
8,533
7,799
136
What I mean is, Intel's transistors are known to be extremely sensitive to overvolting, easily killing the chip.

You're aware that recent Intel chips overclock fairly well, right? Here's a link to a Tom's Hardware contest where two people got an i5 to over 7 GHz. You need insane amounts of voltage to reach those speeds.
 