Godavari throttles to 1.6 GHz after 20 seconds


mrmt

Diamond Member
Aug 18, 2012
3,974
0
76
Nope, try again.

I'm pretty confident that the manufacturer of said CPU (AMD / Intel) knows more about its power delivery characteristics than a third party vendor (MSI). Especially a lousy one like MSI.

Very bold assumption, especially when AMD is hiding the thermal datasheet from the end users.
 
Aug 11, 2008
10,451
642
126
I am sure they *know* about the power delivery and usage, but that does not mean they don't use some "creativity" when making public claims (both AMD and Intel).
 

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
Nope, try again.

I'm pretty confident that the manufacturer of said CPU (AMD / Intel) knows more about its power delivery characteristics than a third party vendor (MSI). Especially a lousy one like MSI.

Considering my experience with MSI boards -- they've got terrible credibility IMO. I can't tell you how many MSI motherboards I've purchased that officially supported a particular CPU -- only to find they still didn't work correctly after the latest BIOS flash. Several MSI boards fired right up with a Trinity CPU -- but are dead to the world with a Richland installed... despite their website indicating full support on that specific motherboard for both CPUs. MSI is very sloppy at getting the details right.

And yet MSI is one of the largest, if not the largest, suppliers of motherboards to OEMs.

So, do you have any facts to counter MSI's statement that AMD has understated the power consumption of their CPUs? Please provide links as I have.

You have also conveniently ignored IDC's results.
 

DrMrLordX

Lifer
Apr 27, 2000
23,201
13,289
136
When has AT ever done this for a CPU review?

I'm too lazy to go sifting through old AT CPU reviews to answer your question, but they have done some articles specifically on LLC and its effects on overclocking. They also normally explore overclocking in general on chips (such as in their 7650K review), which is something that seems to be absent from AT's 7870K review. Maybe I didn't read the right page? Any reviewer who tries overclocking is going to notice the stock settings. Hell, even running CPU-Z would have revealed the stock vcore. Who doesn't run CPU-Z when reviewing a modern chip?

Other review sites seem to have figured out that a problem is there. AT just whiffed on it completely.

Manually messing with voltage is not something the average user does.

Buying a retail box CPU and installing it on a retail motherboard is not something the average user does.

I can see this playing an important role in an article discussing under- or overclocking, but the retail CPU should be reviewed as it stands. If there is a true BIOS bug or glitch, that's different. That said, adjusting from 1.48 to 1.45 is minimal, at best.

There is a UEFI "glitch". The CPU VID reports one value, and the motherboard sets something completely different while running at the base clock. This has been happening since the first Kaveri launch. Newer board UEFIs are ramping up default voltage numbers even on old chips that have been running just fine for some time on lower default values.

And yet MSI is one of the largest, if not the largest, suppliers of motherboards to OEMs.

Really? I thought Foxconn was the biggest supplier of OEM motherboards.

So, do you have any facts to counter MSI's statement that AMD has understated the power consumption of their CPUs?

Only the disparity between reported VID and default board voltage . . . if I had a 7870K on hand and a newer UEFI, I'd document the process and post screenshots, though that would only shed light on the phenomenon as manifested on an Asus motherboard. I could do the same for my 7700K and show the disparity between VID and stock vcore using an older UEFI rev, if that would help.
 

Abwx

Lifer
Apr 2, 2011
12,004
4,966
136
I'm too lazy to go sifting through old AT CPU reviews to answer your question, but they have done some articles specifically on LLC and its effects on overclocking.

Don't bother; indeed, this whole thread should be dedicated to the incompetence of reviewers, starting with CPUworld.

This thread should rightfully be renamed "Godavari exposes CPUworld's incompetence"; heck, they got the good info from one of their readers...

As for Ian Cutress's expertise in matters of TDP, it looks like he doesn't even know what kind of gear he's using, yet he stipulates that a 117W delta at the mains is proof of the 95W TDP being exceeded...

Let's see what his PSU is:

OCZ ZX Series 1250 W
http://www.techpowerup.com/reviews/OCZ/ZX_1250W/5.html

At the tested load the efficiency is barely 80%, so out of the 117W delta at the mains only 93.6W exit the PSU and enter the motherboard, where they pass through the VRMs and lose another 10%, yielding 84.24W at the CPU level. Subtract 3-4 watts of idle draw and we are still roughly within the 95W TDP, and that assumes the CPU was the cause of 100% of the delta...
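The wall-to-socket arithmetic in this post can be sketched in a few lines of Python. Note that the 80% PSU efficiency, 90% VRM efficiency and 3-4W idle overhead are the poster's assumed figures, not measured values:

```python
# Back-of-the-envelope: how much of a measured wall-power delta
# actually reaches the CPU, given the assumed efficiency losses.

def cpu_power_from_wall_delta(wall_delta_w, psu_eff=0.80, vrm_eff=0.90,
                              idle_overhead_w=3.5):
    """Estimate CPU package power from a wall-power delta."""
    after_psu = wall_delta_w * psu_eff   # power actually leaving the PSU
    after_vrm = after_psu * vrm_eff      # power after motherboard VRM losses
    return after_vrm - idle_overhead_w   # subtract non-CPU contribution

estimate = cpu_power_from_wall_delta(117)
print(f"Estimated CPU power: {estimate:.1f} W")  # ~80.7 W, under the 95 W TDP
```

The same function makes it easy to see how sensitive the conclusion is to the efficiency assumptions: with a 90% efficient PSU instead, the same 117W delta would imply roughly 91W at the CPU.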


Other than that, there was a review on minute one of the launch, but it seems that troll sites à la CPUworld are of bigger interest for some people, certainly not out of technical curiosity or interest...

Ironically, computerbase.de didn't resist the pleasure of pointing out CPUworld's incompetence; that's why they started their review with matters about, yes, you guessed it... BIOSes... :D

http://www.computerbase.de/2015-05/amd-a10-7870k-test-apu/
 
Dec 30, 2004
12,553
2
76
That's where the critical, investigative reporting I called for comes into play. If they did a proper look into AMD APU voltages, power draw and throttling, across different motherboards and APUs, that would be a quality read. What if it turned out that >80% of APUs could run at stock speeds at 0.2V lower than stock? That would have the makings of a scandal, and really warrant looking into who has defined these stock voltages, why, and how AMD could have done differently by, say, lowering voltages and binning slightly more aggressively. Although the scale of this research would be huge, it's not much more than I expect from a site like AT.

Why would they be doing this, though, shooting themselves in the foot like that? Trying to devalue the company to make it look like a good takeover target; then new management 'suddenly gets TDP under control with magic management sauce' and AMD finds a second breath?
 

Abwx

Lifer
Apr 2, 2011
12,004
4,966
136
That's all well and good, but I don't see any reference to the actual stock voltages for the tested chip. In light of this discussion, that would be very interesting - as has been said here, more than 1.4V at stock is quite ludicrous. I really, really want AMD to be successful with these chips, but some decent critical journalism seems to be warranted here. And, of course, any attempt at undervolting and re-running the same tests would be great to see. From the (wildly) inconsistent results of various APUs, I'm tempted to believe that there is something fishy with regards to stock voltages and the related power consumption/temperatures and the consequences of these. At 117W delta, the 7870K doesn't seem power limited, but how about temperature? How else would one explain the A8-7650K beating the A10-7870K in a few tests? Perhaps AT should start logging core speeds along with the tests, adding them to the graphs (or just a 'Throttling: Y/N' mark)?

The probability with the AT review is that the reviewer didn't bother to make an elementary investigation into the CPU and possible BIOS compatibilities; this is likely the cause of the random results displayed in the graphs.

As I explained above, the 117W delta amounts to an 84W delta at the CPU level at most, so the TDP is well within spec. As for temperature, it should be a trivial matter in this case, as the cooler is the FX one.

On the voltage issue: that's quite a critical matter when looking at things from an engineering point of view.

Contrary to what people think, perf and perf/watt are not the most critical parameters when designing any product. Of course those parameters must be kept under control, but they will always be traded away to ensure stability of the device, be it a CPU, an amplifier or a car; stability is more important than anything else in all engineering fields.

Now, a CPU's stability margin depends on its voltage margin, that is, the excess voltage above the minimal value that guarantees stability at 0% voltage variation.

The minimal margin generally applied in consumer products is a 10% voltage margin. That means the CPU must work reliably at 90% of its rated voltage, which implies that 21% extra power is spent to guarantee stability (power scales roughly with the square of voltage: 1.1^2 = 1.21).

For whatever reason, AMD has stuck to this rule. It certainly increases the TDP, sometimes drastically, as with the 7650K, which has a 22% voltage margin at stock, or around 13% for the early 7850K, but it guarantees good stability while respecting the specced TDP.

One can instead prefer an i7-5930K sold with barely 6% voltage margin. No wonder the motherboard manufacturers systematically overvolt those CPUs: they just don't want to be accused of making unstable boards when it's actually the CPU voltage that is under-specified. This helps reduce the minimum TDP requirement at the expense of stability. At a 6% voltage margin, no chip should ever pass quality control; it's extremely unprofessional in engineering terms.

http://www.hardware.fr/articles/924-6/overclocking.html
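The margin-to-power relationship described in this post follows from dynamic power scaling roughly with the square of voltage. A quick sketch (this quadratic model ignores frequency scaling and leakage, so it is an approximation):

```python
# Voltage margin -> extra power, assuming dynamic power ~ V^2.
# A 10% voltage margin means rated V = 1.10 x minimal stable V,
# so power at rated voltage is 1.10^2 = 1.21x the minimum, i.e. +21%.

def extra_power_pct(voltage_margin_pct):
    """Percent extra power implied by a given voltage margin."""
    v_ratio = 1 + voltage_margin_pct / 100
    return (v_ratio ** 2 - 1) * 100

# The margins quoted in the thread: i7-5930K, consumer baseline,
# early 7850K, and 7650K respectively.
for margin in (6, 10, 13, 22):
    print(f"{margin:2d}% voltage margin -> +{extra_power_pct(margin):.1f}% power")
```

This reproduces the 21% figure for a 10% margin, and shows why a 22% margin (as claimed for the 7650K) is so costly: it implies nearly 49% extra power over the minimum stable operating point.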
 

Valantar

Golden Member
Aug 26, 2014
1,792
508
136
Why would they be doing this, though, shooting themselves in the foot like that? Trying to devalue the company to make it look like a good takeover target; then new management 'suddenly gets TDP under control with magic management sauce' and AMD finds a second breath?

I'm not saying that this is intentional by any one party - that's too conspiratorial for me. But is it a possibility? Well, why the hell not, in a (basically) unregulated, global, hundreds-of-billions-of-dollars business? Industrial espionage is rampant in the tech industries, so why not sabotage as well? Again, I'm not saying this is what I think is happening, but is it a possibility? Sure.

My speculation is that this is done by AMD, though, based on economics: higher voltages allow more dies to be binned for higher-end chips, increasing margins. If 80% of A10-7850Ks could run at 0.2V less, then 20% couldn't - and would therefore have to be binned for lower-end (and thus lower-margin) chips if AMD lowered the stock voltages. These numbers are pulled out of thin air, as is this theory, but is it possible? Absolutely.

Also, it might be caused by inefficient testing, binning and labelling practices - if dies were better tested for quality and labelled accordingly, motherboards could have more extensive voltage lookup tables and supply voltages more accurately. If these processes are badly executed or otherwise lacking, then voltages default to the lowest common denominator, i.e. the lowest voltage at which all chips can run at their intended clocks.


The probability with the AT review is that the reviewer didn't bother to make an elementary investigation into the CPU and possible BIOS compatibilities; this is likely the cause of the random results displayed in the graphs.

As I explained above, the 117W delta amounts to an 84W delta at the CPU level at most, so the TDP is well within spec. As for temperature, it should be a trivial matter in this case, as the cooler is the FX one.

From the review you linked, using that PSU for testing CPU-only power deltas at the wall seems completely idiotic. 79% efficiency at 90 watts output? Heck, that's around the maximum power draw of an efficient desktop today. They should really get a low-wattage platinum unit for this kind of testing.

Still, though, if the 7870K undervolts as well as people say, then it seems likely that it's a far more efficient chip than AMD is usually credited with having.

Also, if the reviewer didn't look into BIOS issues, that's just tragically bad. Where's the journalistic integrity? I get that pre-computex reviews get rushed, but jeez, that's low. Looking at the 7650K review as well, the results are all over the place - but this is not discussed or looked into whatsoever. And given that the 7650K actually beats the 7870K in a few metrics, that's even more odd. Bios issues, voltage issues, throttling - something's definitely off here.

On the voltage issue: that's quite a critical matter when looking at things from an engineering point of view.

Contrary to what people think, perf and perf/watt are not the most critical parameters when designing any product. Of course those parameters must be kept under control, but they will always be traded away to ensure stability of the device, be it a CPU, an amplifier or a car; stability is more important than anything else in all engineering fields.

Now, a CPU's stability margin depends on its voltage margin, that is, the excess voltage above the minimal value that guarantees stability at 0% voltage variation.

The minimal margin generally applied in consumer products is a 10% voltage margin. That means the CPU must work reliably at 90% of its rated voltage, which implies that 21% extra power is spent to guarantee stability (power scales roughly with the square of voltage: 1.1^2 = 1.21).

For whatever reason, AMD has stuck to this rule. It certainly increases the TDP, sometimes drastically, as with the 7650K, which has a 22% voltage margin at stock, or around 13% for the early 7850K, but it guarantees good stability while respecting the specced TDP.

One can instead prefer an i7-5930K sold with barely 6% voltage margin. No wonder the motherboard manufacturers systematically overvolt those CPUs: they just don't want to be accused of making unstable boards when it's actually the CPU voltage that is under-specified. This helps reduce the minimum TDP requirement at the expense of stability. At a 6% voltage margin, no chip should ever pass quality control; it's extremely unprofessional in engineering terms.

http://www.hardware.fr/articles/924-6/overclocking.html

If what you're saying is correct as far as accepted industry standards, would that suggest that a good i5-4690K could be stable at stock speeds at ~0.9V? Given a 20% margin and stock voltages around 1.1V, that would be logical.

Also, isn't a 20% safety margin a bit much when the overall usable voltage range is as low as it is with CPUs? Given that a 20% margin on top of 1.2V is 1.44V, which is getting close to CPU-frying voltages, that seems pretty excessive.
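The arithmetic behind both of these questions checks out. A minimal sketch (the 1.1V stock figure for the i5-4690K is the poster's estimate, not a datasheet value):

```python
# Check 1: with a 20% voltage margin, a chip with ~1.1 V stock
# should be stable down to 1.1 / 1.2, matching the ~0.9 V guess.
min_stable = 1.1 / 1.2
print(f"Minimum stable voltage: {min_stable:.3f} V")    # 0.917 V

# Check 2: a 20% margin on top of a 1.2 V minimum gives
# 1.2 * 1.2 = 1.44 V, approaching voltages widely considered risky.
with_margin = 1.2 * 1.2
print(f"Voltage with 20% margin: {with_margin:.2f} V")  # 1.44 V
```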

A 22% voltage margin for the 7650K would explain its wildly inconsistent performance somewhat, though - adding that much voltage would surely lead it to run into some limit or other and introduce throttling, no?
 

Abwx

Lifer
Apr 2, 2011
12,004
4,966
136
From the review you linked, using that PSU for testing CPU-only power deltas at the wall seems completely idiotic. 79% efficiency at 90 watts output? Heck, that's around the maximum power draw of an efficient desktop today. They should really get a low-wattage platinum unit for this kind of testing.

That's quite a low standard in this department. The less clueless generally adopted a 400W PSU like the BeQuiet Straight Power 400, which has exceptional efficiency (for an ATX PSU) at low power. Others, like THG, improved further by mimicking Hardware.fr, which provides more accurate measurements and purely technical info via downvolting and underclocking.


Still, though, if the 7870K undervolts as well as people say, then it seems likely that it's a far more efficient chip than AMD is usually credited with having.

It depends on the chip. I'm curious to see what these will bring in this respect, but so far, for downclocking, the lower the SKU model the higher the margin, as they do not decrease the frequencies as far as would be possible while keeping the same margin.


Also, if the reviewer didn't look into BIOS issues, that's just tragically bad. Where's the journalistic integrity? I get that pre-computex reviews get rushed, but jeez, that's low. Looking at the 7650K review as well, the results are all over the place - but this is not discussed or looked into whatsoever. And given that the 7650K actually beats the 7870K in a few metrics, that's even more odd. Bios issues, voltage issues, throttling - something's definitely off here.

I wonder if the reviewers are aware of their own graphs weird results...


If what you're saying is correct as far as accepted industry standards, would that suggest that a good i5-4690K could be stable at stock speeds at ~0.9V? Given a 20% margin and stock voltages around 1.1V, that would be logical.

Don't know for the 4690K, but a 4670K can be kept at stock frequency at about 0.9V, albeit with only 2 or 3% voltage margin; that's still enough to run Prime95 on a motherboard of good quality.

Also, isn't a 20% safety margin a bit much when the overall usable voltage range is as low as it is with CPUs? Given that a 20% margin on top of 1.2V is 1.44V, which is getting close to CPU-frying voltages, that seems pretty excessive.

The Intel example I gave also holds for AMD: their higher-clocked SKUs have less margin than the lower-clocked parts, although they didn't go as low as their competitor. We'll know more once someone does a few tests.


A 22% voltage margin for the 7650K would explain its wildly inconsistent performance somewhat, though - adding that much voltage would surely lead it to run into some limit or other and introduce throttling, no?

It explains nothing, since at stock voltage (1.35V) the 7650K consumes only 46W in x264 encoding and about the same amount in games. The 95W official spec has nothing to do with its actual dissipated power; it's branded so because the chip is unlocked, and it would be ridiculous to assign an unlocked part a 65W TDP rating.
 

superstition

Platinum Member
Feb 2, 2008
2,219
221
101
My 8320E on a UD3P 2.0 defaults to 1.15 or something. The voltage was too low to pass Prime, as far as I recall. Then at one point the BIOS switched the default to 1.225 by itself.

I've had the same thing happen with my P55 UD4P Gigabyte board and an i5 750 Lynnfield. The BIOS would fluctuate between a low voltage and something like 1.225, depending upon when I booted it.

As for APUs... I really don't see the point in them. I am not a fan of having GPUs inside chips at all. It's better, in my view, to have the heat from GPUs kept separate for ease of cooling and the supplying/control of power. A guy from ExtremeTech said chip designers put in GPU cores to use otherwise empty space but it seems to me that there should be a way of designing a chip to not waste that space and put in CPU logic or cache instead. In my book the GPU cores are waste. It's not like a person can't get a decent gaming GPU for a low price that will kill an APU's GPU (especially Intel's) and non-gamers can get an even cheaper one (although they're likely to just buy an OEM computer in the first place).

What would be nice, though, is to have a socket for the GPU in close proximity to a CPU so you could use a single highly-efficient/effective cooler to cool both, like a big vapor chamber with heatpipes and big fans. The current ATX design is really obsolete. GPUs are positioned poorly and all sorts of hacky workarounds are being used, like case side fans, to try to deal with their poor effect on airflow and their restriction to the use of tiny fans.

Sometimes I wonder if the designers of enthusiast products purposefully make poor designs in order to incrementally "upgrade" them to less flawed designs (planned obsolescence via intentional inclusion of flaws), with things like having radiators mounted at the front of the case to blow hot air directly into the case. I guess things like that are designed to sell more expensive fans.
 

DrMrLordX

Lifer
Apr 27, 2000
23,201
13,289
136
My 8320E on a UD3P 2.0 defaults to 1.15 or something. The voltage was too low to pass Prime, as far as I recall. Then at one point the BIOS switched the default to 1.225 by itself.

I've had the same thing happen with my P55 UD4P Gigabyte board and an i5 750 Lynnfield. The BIOS would fluctuate between a low voltage and something like 1.225, depending upon when I booted it.

I'm not sure why your FX and i5-750 had problems running stock @ VID, but I'll hazard a guess by saying that vdroop could have been an issue. The boards likely compensated for it with higher "default" vcore settings than required by the VID.

As for APUs... I really don't see the point in them. I am not a fan of having GPUs inside chips at all. It's better, in my view, to have the heat from GPUs kept separate for ease of cooling and the supplying/control of power. A guy from ExtremeTech said chip designers put in GPU cores to use otherwise empty space but it seems to me that there should be a way of designing a chip to not waste that space and put in CPU logic or cache instead. In my book the GPU cores are waste. It's not like a person can't get a decent gaming GPU for a low price that will kill an APU's GPU (especially Intel's) and non-gamers can get an even cheaper one (although they're likely to just buy an OEM computer in the first place).

It's actually easier to cool the entire package with the GPU integrated into the die, in my opinion. Aftermarket dGPU cooling is comparatively a pain to deal with, especially since you are basically changing mount points nearly every time you change video card generations. Sometimes mount points for dGPU cooling vary between vendors.

In contrast, AMD has used the same mounting hardware for AM2, AM2+, AM3, AM3+, FM2, and FM2+. It might have done so on FM1 as well, but I don't know.

In any case, I find it very easy to cool the entire APU package with a cooler designed for bigger, hotter CPUs, which is a wonderful benefit of owning an APU.

Also, if you look at the evolution of the APU (particularly Carrizo and Broadwell-C), you will find that the iGPUs are becoming more and more powerful. The i7-5775C is an impressive chip, and next year's Skylake Iris Pro equivalent should be even better. The 800-pound gorilla in the room is that the i7-5775C, post-overclock, probably has more raw 32-bit fp capability than an overclocked i7-5960X! The problem, of course, is using all the iGPU power for something other than graphics, but thanks to OpenCL 2.0 and DirectX 12, it shouldn't be impossible for developers to do that, at least in a few limited cases.

What would be nice, though, is to have a socket for the GPU in close proximity to a CPU so you could use a single highly-efficient/effective cooler to cool both, like a big vapor chamber with heatpipes and big fans. The current ATX design is really obsolete. GPUs are positioned poorly and all sorts of hacky workarounds are being used, like case side fans, to try to deal with their poor effect on airflow and their restriction to the use of tiny fans.

Vapor chambers can be very expensive. I agree that dGPU cooling is . . . problematic. The system of using a board socket for the GPU is essentially what AMD originally envisioned with their Fusion initiative: they planned to offer full sockets connected via HT links to the other sockets that could handle CPUs or GPUs (or they were going to use HTX slots to accomplish the same thing). That would have instantly granted direct memory access to GPUs among other things. That never made it off the drawing board, and HTX slots were only used for stuff like Infiniband connections on server-class hardware.

Sometimes I wonder if the designers of enthusiast products purposefully make poor designs in order to incrementally "upgrade" them to less flawed designs (planned obsolescence via intentional inclusion of flaws), with things like having radiators mounted at the front of the case to blow hot air directly into the case. I guess things like that are designed to sell more expensive fans.

It seems to be more of a problem of standards than anything else. The PC market has been very, very bad about that lately. While the PC design is meant to be a collection of industry standards to guarantee interoperability of hardware from multiple vendors so that the end-user and/or OEMs may choose from whichever vendor they like, the reality is that the standards that are established do not necessarily represent the best technology that one or more vendors have to offer.

Consider Nvidia's NVLink. If it's so much better than PCIe, why not present it as an industry standard and allow other expansion card manufacturers to use it for their devices? They would rather keep it proprietary (apparently). On the other side of the coin, consider Intel's BTX form factor: to the best of my knowledge, it was not proprietary at all, though case vendors seemed reluctant to adopt it. It is ironic that the BTX form factor, which was largely introduced as a response to the high temperatures/high power consumption of Netburst chips, would be of great benefit to AMD today, at least for their FX processors.

Also, to revisit dGPUs, the main reason why they are still expansion cards is that we're still getting faster (for GPUs, anyway) memory on-card than we are in system memory slots on motherboards. If/when we switch to motherboards with fixed amounts of memory soldered onto the board, the need for on-card memory may diminish to the point that the main advantage of the dGPU finally vanishes.
 

MiddleOfTheRoad

Golden Member
Aug 6, 2014
1,123
5
0
So, do you have any facts to counter MSI's statement that AMD has understated the power consumption of their CPUs? Please provide links as I have.

Oh yeah, sure.... I'm sure there are a lot of magazine articles that can document "my (personal) experience" or an IMO. Reading comprehension is clearly a huge asset of yours......

My Kill A Watt meter has shown that my Intel, AMD and even VIA CPUs have sometimes exceeded their rated power figures under certain conditions. It really isn't anything to call home about -- and I consistently run these CPUs at 100% usage on the various projects on the World Community Grid. My i7-4790K by itself pulls around 130 watts under full load -- when its official TDP is 88 watts.

Considering MSI's terrible default motherboard BIOS settings (from personal experience) -- if AMD CPU's are drawing too much power on MSI boards then the fault lies with the lousy MSI engineers.
 