AMD wattage

PansitPalabok

Member
Feb 13, 2006
59
0
0
I'm looking at this CPU HERE.

It says the thermal design power is 140W. Does this mean it draws 140W in normal operation? Does it generate a lot of heat? Would it be better to opt for a CPU with a lower wattage?
 

Modular

Diamond Member
Jul 1, 2005
5,027
67
91
TDP is really just the top end of the power spectrum that this CPU could pull. In real-life situations it will definitely draw less power than that. There is also a feature (called Cool'n'Quiet) that decreases the clock speed and voltage of the processor to throttle the power draw when it's not being used to its full potential.
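The clock-and-voltage scaling that Cool'n'Quiet performs can be illustrated with the usual CMOS dynamic-power relation; a rough sketch, where the voltage and frequency values below are invented example states, not AMD specifications:

```python
# Illustrative sketch: CMOS dynamic power scales roughly with C * V^2 * f,
# so dropping both voltage and clock (as Cool'n'Quiet does) compounds the
# savings. All numbers here are made-up example values.

def dynamic_power(voltage, freq_ghz, k=1.0):
    """Relative dynamic power, P ~ k * V^2 * f (arbitrary units)."""
    return k * voltage**2 * freq_ghz

full = dynamic_power(1.40, 3.4)   # hypothetical full-speed state
idle = dynamic_power(1.00, 0.8)   # hypothetical throttled-down state
ratio = idle / full               # throttled state draws ~12% of full power
```

Because voltage enters squared, even a modest voltage drop cuts power more than the clock reduction alone would.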

 

alyarb

Platinum Member
Jan 25, 2009
2,425
0
76
it's not as efficient as intel or the lower-TDP AMD quads, but still pretty tame @ idle/normal usage. under full synthetic load such as linpack, it will approach 150 watts in consumption, and that is kind of a big deal.


read this article and learn about manual Cool n Quiet configs, you can really help the efficiency of the processor


http://www.anandtech.com/mb/showdoc.aspx?i=3621
 

Aluvus

Platinum Member
Apr 27, 2006
2,913
1
0
AMD uses TDP to represent the maximum power draw at full utilization. Actual day-to-day power draw will be considerably less.

Confusingly, Intel uses it as an indication of typical power draw.
 

alyarb

Platinum Member
Jan 25, 2009
2,425
0
76
lost circuits represents the typical "100%" load power draw by using a real-world application.

xbit uses synthetic loads like linpack and they saw 147 watts under load. so it's all a matter of real vs. synthetic.

both data points are important for your consideration because you get an "average" number and a "worst case scenario" number to think about.
 

cusideabelincoln

Diamond Member
Aug 3, 2008
3,275
46
91
And synthetic doesn't really mean much. A real-world approach is far more practical and relevant. Which is why it's just as important to know the method a reviewer uses as it is to know what their results are.
 

bradley

Diamond Member
Jan 9, 2000
3,671
2
81
If I'm not mistaken, the most tangible difference between the Xbit and Lost Circuits results would be the points of measurement. I know that Lost Circuits uses the DC power draw at the Aux12V connector. This method isolates the CPU from any system power inefficiencies with a very high degree of accuracy. So a 140W TDP likely assumes the sub-par VRM and power supply efficiencies (converting AC to DC) of the average system.

I can't find information on how Xbit measures, but I will assume it's total power draw on the AC side based on the PII 965 cpu power consumption numbers at load: 147.6W vs. 92W .... unless someone knows otherwise. Kill-a-watt measurements can also be very instructive. Of course, you can also dramatically reduce power draw by undervolting.
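The measurement-point argument above can be put in numbers; a minimal sketch, where the VRM and PSU efficiency values are illustrative assumptions, not measured data:

```python
# Sketch of why the measurement point matters: a reading taken upstream of
# a converter (VRM or PSU) includes that converter's losses, so AC-side
# numbers overstate what the CPU itself dissipates. Efficiencies below are
# assumed example values.

def ac_side_draw(cpu_dc_watts, vrm_eff=0.85, psu_eff=0.80):
    """Wall-socket power attributable to a given DC-side CPU load."""
    return cpu_dc_watts / (vrm_eff * psu_eff)

wall = ac_side_draw(92.0)  # 92W is the Lost Circuits DC figure quoted above
```

Under these assumed efficiencies, a 92W DC-side load already shows up as roughly 135W at the wall, so measurement point alone can account for a large slice of a gap between two reviews.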


 

alyarb

Platinum Member
Jan 25, 2009
2,425
0
76
xbit measures AC and DC. they synthetically load the system with linpack to show the highest possible consumption.

http://www.xbitlabs.com/articl...ii-x4-965_4.html#sect0

that's 147.6 watts to just the CPU.


the 60+ watt difference in measurement is due to the different benchmarks run when the measurements were taken. lost circuits uses a multithreaded 3D modeling benchmark, and as a result there are many data dependencies involved: because the benchmark is processing useful information, parts of the processor are periodically waiting on a cache miss, a result, or data from memory.

linpack doesn't deal with any of that and the processor is constantly and truly fully loaded, which is why it's called synthetic performance. it's processing data in a way that isn't the least bit useful in order to show us the maximum amount of energy the processor could consume for any reason.
 

bradley

Diamond Member
Jan 9, 2000
3,671
2
81
Didn't see where Linpack was mentioned. I'm also not an avid reader of xbit. There are very few sites I trust as much as Lost Circuits ATM

As a stress test, Linpack is probably overkill. You'd probably see even higher consumption from LU and FT. I don't see many sites running Linpack for anything other than server products, just as FLOPS or SPECpower don't represent much real-world info to the average consumer.

I could see a site running server benches on consumer products merely for the wow factor. The results are interesting, though, since I don't have personal hands-on experience with Linpack.
 

alyarb

Platinum Member
Jan 25, 2009
2,425
0
76
Originally posted by: bradley
Didn't see where Linpack was mentioned. I'm also not an avid reader of xbit. There are very few sites I trust as much as Lost Circuits ATM

As a stress test, Linpack is probably overkill. You'd probably see even higher consumption from LU and FT. I don't see many sites running Linpack for anything other than server products, just as FLOPS or SPECpower don't represent much real-world info to the average consumer.

I could see a site running server benches on consumer products merely for the wow factor. The results are interesting, though, since I don't have personal hands-on experience with Linpack.


from the link i posted above:

During our tests we used 64-bit LinX 0.5.8 utility to load the systems to the utmost extent.

LinX

it will load a CPU as much as any other stress test you can come up with.

no one is trying to say that linpack represents a typical load or typical power consumption, and no one is trying to use linpack to represent that. they're just saying that it's the maximum, and they're verifying AMD's claim. they're saying "people, this is why the TDP is 140 W."

if you want a real-world stress test and real world power consumption, run an x264 encoder or cinebench.
 

bradley

Diamond Member
Jan 9, 2000
3,671
2
81
I wasn't questioning you at all, just the impracticality of review sites. These reviews have become an afterthought to pages of ads. Server benchmarks are fun as long as no one takes them too seriously. The average user doesn't realize that part life gets reduced under such intense stress and heat, especially with cheaper consumer boards. I would still rather see tests that stress the CPU and surrounding parts in a real-world way.

My point is that modern CPUs are so immensely powerful that it's impossible for the average user, much less a business user, to hit full load across every core. And a server hitting full load simultaneously 24/7 on every core is probably a poorly designed one.

I believe AMD defines TDP as the amount of power that must be handled by the VRM and dissipated by the CPU. OEMs would use something like Linpack or Prime95 for validation; I believe Intel uses their own MaxPower. They likely wouldn't use it for typical power draw, though, or anything above Tcase max.
 

richierich1212

Platinum Member
Jul 5, 2002
2,741
360
126
Posted by John Fruehe @ SemiAccurate:

http://www.semiaccurate.com/fo...d8&p=7949&postcount=86

" It's ok to call me a liar, but keep in mind that since I work in the industry, I know a lot more than the average person when it comes to these things. I don't make things up because I everything that I do can be tracked back directly to me. The internet is anonymous but I post as myself, so I need an extra level of scrutiny.

You are mistaken about TDP, because both companies deal with it differently.

Intel has 2 power levels: TDP and max power (and now a third, "sustained power").

Take a look at the X5570 to see:

http://www.cpu-world.com/CPUs/Xeon/I...602X5570).html

Maximum power dissipation (W) 197.97; 155.56 (sustained)
Thermal Design Power (W) 95

So the way Intel always measured it in the past, Max power is the maximum power that a CPU can draw. Every transistor firing in a worst case scenario.

TDP is a de-rated value (used to be 80%, but it has been creeping down which is bad). Intel would take the maximum power, assume that the processor would throttle the clock down and then take that measurement (of a throttled processor) as the "TDP".

Since that time they have added a maximum sustained, maybe you can ask them what that means. I am assuming that max power is a spike power and that sustained is something that lasts more than a few milliseconds.

Regardless, the maximum power that the processor could conceivably draw is 197W.

Our TDP is equivalent to their max power, it is the maximum power the processor could draw, every transistor firing in a worst case scenario.

Our ACP is average CPU power. We run standard workloads, run them at 100% utilization and measure power.

Intel is not real open about max power. They used to post it online, but when they started getting pressure from AMD about those max power ratings, they stopped showing up online.

I'd love to have someone from Intel come here to debate this topic, because at this point, the specs (which they try to keep private) are not in their favor.

In designing a system to max power (which you have to do), we are not 42W disadvantaged, we are actually 60W advantaged.

We do release max power. It is called TDP. http://en.wikipedia.org/wiki/Thermal_Design_Power

The reason the thermal design sheet lists TDP is because that is what you use to design systems. TDP is designed for system builders and OEMs. ACP is designed for customers in figuring out what they need for the data center.

ACP came into being a few years back because our TDP was 95W and it was rare that we ever had a part that even got above 50W. Customers were complaining that they were budgeting their racks for a certain amount of power, assuming 95W, and then ending up heavily under-utilized. We were getting a lot of complaints from customers that we were too conservative and that this was leading to too much inefficiency in their data centers. I was on the receiving end of a lot of these conversations, and they were not pleasant, as data center floor space was the most expensive real estate in the building.

If you want a simple rule of thumb, use the following.

Most power a system can draw:
Intel = Max power
AMD = TDP

Typical power draw for standard applications:
Intel = TDP
AMD = ACP

If you are asking "why doesn't AMD just use TDP like the rest of the world" then you are on to something. We actually do. If you bother to go back to the wikipedia link above, you'll see TDP defined as:

Quote:
The thermal design power (TDP), sometimes called thermal design point, represents the maximum amount of power the cooling system in a computer is required to dissipate
That sounds a lot like how AMD defines TDP, but it also sounds like how Intel defines max power. So, in reality, the "hokey" measurement is actually Intel's TDP, because it does not represent what the rest of the industry means when they say TDP.
__________________
While I work for AMD, my posts are my own opinions.

http://blogs.amd.com/work/author/jfruehe/ "
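The figures quoted in that post invite a quick sanity check. A rough sketch using only numbers from the post (the X5570's 197.97W max power and 95W TDP, and the 95W-budget/~50W-actual rack complaint); the 8 kW rack budget is an invented placeholder:

```python
# De-rating: what fraction of Intel's published max power the TDP represents.
# Both figures come from the quoted X5570 spec.
max_power = 197.97
tdp = 95.0
derate = tdp / max_power          # well below the historical ~80% mentioned

# Rack budgeting: provisioning at TDP when typical draw is ~50W strands
# capacity. The 8 kW rack budget is a hypothetical example value.
rack_budget_w = 8000
cpus_by_budget = rack_budget_w // tdp        # CPUs the budget nominally allows
actual_draw_w = cpus_by_budget * 50          # what those CPUs really pull
utilization = actual_draw_w / rack_budget_w  # fraction of budget actually used
```

On these numbers the X5570's TDP is under half of its max power, and a rack budgeted at TDP would sit at roughly half its provisioned draw, which is exactly the under-utilization complaint described in the post.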
 

lopri

Elite Member
Jul 27, 2002
13,327
708
126
Originally posted by: richierich1212
TDP is a de-rated value (used to be 80%, but it has been creeping down which is bad). Intel would take the maximum power, assume that the processor would throttle the clock down and then take that measurement (of a throttled processor) as the "TDP".
Wouldn't that be the crux of this issue? An Intel processor can and will throttle before it exceeds the given TDP, and that is probably the reason they can get away with their TDP. That has been my impression. AMD CPUs may not be as power-hungry as their TDPs may suggest, but under those rare, academic circumstances they may lack the sophisticated throttling mechanisms of Intel CPUs. Otherwise, why can't AMD get away with just ACP? According to your explanation, that's what Intel does anyway.

Again, it's a theory/impression of mine, and I don't have proof.
 

bradley

Diamond Member
Jan 9, 2000
3,671
2
81
Maybe a good way to test that theory is to examine how a Phenom II scales with clock speed, or performance per measured watt. I believe every CPU beyond the Athlon 64 supports processor throttling without a driver, although thermal throttling should also be part of ACPI management using the CPU core diode temp.

Obviously power and thermal throttling are two separate things though. I certainly don't know how AMD and Intel's thermal throttling differs. Throttling began to become a necessity by the P4. I also remember that now infamous Tom's Hardware video of an Athlon bursting into flames. I believe the Athlon 64 would crudely shut down under too much heat.

We've come a long way since then.
http://www.youtube.com/watch?v=BSGcnRanYMM
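The scaling test suggested above could be organized like this; a minimal sketch where the (clock, watts, score) tuples are invented placeholders, not real Phenom II measurements:

```python
# Compare efficiency across clock/voltage states: performance per measured
# watt at each operating point. All data points below are made up purely
# for illustration.
points = [
    # (clock_ghz, measured_watts, benchmark_score)
    (0.8,  35.0, 1200.0),
    (2.1,  70.0, 3000.0),
    (3.4, 125.0, 4600.0),
]
eff = {ghz: score / watts for ghz, watts, score in points}
best_clock = max(eff, key=eff.get)   # operating point with best perf/watt
```

With made-up data like this the mid-range state comes out most efficient, which is the kind of sweet-spot behavior a real clock-scaling sweep would be looking for.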

 

Accord99

Platinum Member
Jul 2, 2001
2,259
172
106
Originally posted by: richierich1212
Posted by John Fruehe @ SemiAccurate:

Take a look at the X5570 to see:

http://www.cpu-world.com/CPUs/Xeon/I...602X5570).html

Maximum power dissipation (W) 197.97; 155.56 (sustained)
Thermal Design Power (W) 95

So the way Intel always measured it in the past, Max power is the maximum power that a CPU can draw. Every transistor firing in a worst case scenario.

There aren't any power measurements that support this. For example:

http://images.anandtech.com/gr...070209104050/19457.png

248W system power consumption for a dual X5570, making it unlikely the CPUs even consume 95W.

If you want a simple rule of thumb, use the following.

Most power a system can draw:
Intel = Max power
AMD = TDP

Typical power draw for standard applications:
Intel = TDP
AMD = ACP

Power measurements suggest Intel Nehalem TDP roughly equals AMD TDP.
 

Soleron

Senior member
May 10, 2009
337
0
71
John Fruehe is an AMD employee, just in case it's not clear from the above.

So AMD's ACP and TDP represent numbers based in reality: under typical server workloads at 100% utilisation, it will use ACP at worst; under worst-case conditions, it will use TDP.

Intel's max power (which is crazy for Nehalem, 198W) is equivalent to AMD TDP. Intel's TDP isn't based in reality at all, varying between 80% of max power for P4 CPUs and getting lower over the years so it's now ~60% of max power.

But that is SERVERS, which is the division Fruehe works in; that's why he talks about it. Desktop power use will be very different, since sustained 100% utilisation is not typical. AMD tends to be more power-efficient than Intel with server workloads, as you can see from the recent Anandtech server article.
 

Accord99

Platinum Member
Jul 2, 2001
2,259
172
106
Originally posted by: Soleron
But that is SERVERS, which is the division Fruehe works in; that's why he talks about it. Desktop power use will be very different, since sustained 100% utilisation is not typical. AMD tends to be more power-efficient than Intel with server workloads, as you can see from the recent Anandtech server article.
The recent Anandtech article shows that Nehalem is better in performance/watt and with a 2S X5570 system consuming around 250W at full load, in no way comes close to the fanciful numbers used by John Fruehe. In fact, it would indicate that Nehalem TDP is roughly equivalent to AMD TDP.

Intel's max power (which is crazy for Nehalem, 198W) is equivalent to AMD TDP. Intel's TDP isn't based in reality at all, varying between 80% of max power for P4 CPUs and getting lower over the years so it's now ~60% of max power.
So where are the power measurements showing this? If Intel's TDP were so poor, why doesn't Linpack, which is the closest thing to a power virus you can get with publicly available software, push Nehalem past its TDP limit?

http://www.xbitlabs.com/articl...ay/core-i7-870_14.html
 

PansitPalabok

Member
Feb 13, 2006
59
0
0
Thanks for all the feedback. I didn't know it was going to garner this many responses. Basically it's just the amount of power it draws under full load.

Thinking about a build that will see 20% gaming, 45% video, 15% office stuff, 20% other junk, would this processor be overkill? I would rather know that I have lots of power in reserve even though I only use a little of it.
 

Soleron

Senior member
May 10, 2009
337
0
71
Since you're doing far more non-gaming work, the 965 BE is overpriced compared to the i5 750. Get that instead. But, you're right, it's probably overkill, so going with an X3 720 or Q8200 would be better value.

--

@Accord99

Agreed, Nehalem has better perf/watt at 0% and 100%, but Fruehe is claiming better perf/watt at typical utilisation, which is 15-20% for most of the day with brief spikes to 100%. Apparently Nehalem experiences a rapid rise in power at anything above idle, whereas Shanghai/Istanbul increase gradually.

The 'max power' figure is Intel's official figure, in technical documents. I'm not saying it ever gets there except under an internal Intel power virus, but server OEMs have to design systems around that figure. That 60% or 80% is official, not experimental data.