Observations with an FX-8350

inf64

Diamond Member
Mar 11, 2011
3,697
4,015
136
What is the recommended value for the voltage?
VIDs vary from chip to chip, but I think ~1.4V is the stock value in many of the cases I've seen so far. What it is under real stress with LLC compensation on, I have no clue.
 

Idontcare

Elite Member
Oct 10, 1999
21,118
58
91
VIDs vary from chip to chip, but I think ~1.4V is the stock value in many of the cases I've seen so far. What it is under real stress with LLC compensation on, I have no clue.

Hmmm...everything so far seems to be checking out then. The only thing left that could be amiss is if the mobo is supposed to be throttling the chip because of current draw (TDP governor) and my mobo has that disabled by default for some reason.
 

2is

Diamond Member
Apr 8, 2012
4,281
131
106
I'm not sure if this has been mentioned before, but I've heard that Active PFC PSUs make Kill-A-Watt readings inaccurate. Supposedly, connecting the PC to a UPS and then the UPS to the Kill-A-Watt is one workaround. If you do this, you'll first want to measure the UPS on its own to see its power draw, and subtract that from the total.
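If you go that route, the correction is just a subtraction; a minimal Python sketch, with made-up wattages:

```python
def corrected_reading(combined_watts, ups_alone_watts):
    """Subtract the UPS's own draw (measured with nothing attached)
    from the combined UPS+PC reading on the Kill-A-Watt."""
    return combined_watts - ups_alone_watts

# Hypothetical numbers: 310W for UPS+PC, 22W for the UPS by itself.
print(corrected_reading(310, 22))  # -> 288
```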
 

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
I tend to think that Kill-A-Watt readings aren't so accurate due to the nature of AC power calculations (although they provide rough approximations, I suppose). It would be much better and a lot more accurate (dealing with DC rather than AC) if we could measure the actual current on the 12V rail via a current transformer, or by cutting into the 12V rail with a current meter.
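For reference, the DC-side arithmetic would then be trivial; a rough Python sketch with hypothetical readings (and ignoring VRM losses downstream of the connector):

```python
# Power on the 12V EPS cable is just rail voltage times measured current.
# Both numbers below are made-up clamp-meter/DMM readings, not measurements.
rail_voltage = 12.1  # volts, measured at the connector
eps_current = 15.5   # amps, from a current transformer around the cable

print(f"~{rail_voltage * eps_current:.0f} W into the CPU VRM")  # ~188 W
```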

Subscribing to this thread, very interested in the outcome and the findings regarding power consumption!
 

Idontcare

Elite Member
Oct 10, 1999
21,118
58
91
I'm not sure if this has been mentioned before, but I've heard that Active PFC PSUs make Kill-A-Watt readings inaccurate. Supposedly, connecting the PC to a UPS and then the UPS to the Kill-A-Watt is one workaround. If you do this, you'll first want to measure the UPS on its own to see its power draw, and subtract that from the total.

I'm using the same PSU and the same Kill-A-Watt for measuring power for all three CPUs (2600K, 3770K, and FX-8350).

To whatever extent your concerns are valid, the error should be present (and nearly the same) in all cases.

That said, I'll do the test as you suggest just to put an absolute number to it.

I tend to think that Kill-A-Watt readings aren't so accurate due to the nature of AC power calculations (although they provide rough approximations, I suppose). It would be much better and a lot more accurate (dealing with DC rather than AC) if we could measure the actual current on the 12V rail via a current transformer, or by cutting into the 12V rail with a current meter.

Subscribing to this thread, very interested in the outcome and the findings regarding power consumption!

The Kill-A-Watt claims an accuracy of ±0.2%.

My PSU is the Corsair Professional Series Gold AX850 (CMPSU-850AX), which has ~89% efficiency across much of its output range and varies by only about 2% efficiency across the entire range:

ax850-efficiency.png

(^ Corsair's technical data)

As you can see, we are looking at loads on my PSU ranging from ~220W to ~380W, meaning PSU efficiency ranges from ~90% to ~91% across these tests.

That means conversion losses are at most ~10%: 288W (±0.2% = ±0.6W) at the wall means the system (sans PSU) is really drawing ~260W, and at idle the 87W at the wall means the system (sans PSU) is really drawing ~78W.
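Spelled out as a small Python helper (same numbers as above; the function name is mine, purely illustrative):

```python
def dc_load(wall_watts, efficiency=0.90, meter_tol=0.002):
    """Convert a wall (AC) reading into the DC load the PSU delivers,
    plus the Kill-A-Watt's claimed +/-0.2% reading tolerance."""
    return wall_watts * efficiency, wall_watts * meter_tol

loaded, tol = dc_load(288)  # (259.2, 0.576) -> ~260W DC, +/-0.6W at the wall
idle, _ = dc_load(87)       # (78.3, ...)    -> ~78W DC
print(f"loaded ~{loaded:.0f} W (+/-{tol:.1f} W), idle ~{idle:.0f} W")
```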

That still means we have a situation in which the loaded FX-8350 at stock conditions (stock HSF, stock Vcore, etc.) consumes 180W more power running LinX than when the system is sitting idle.

The same Kill-A-Watt, same PSU, same RAM, same video card, same OCZ V3, same LinX settings, etc., but using a different ASUS ROG mobo (the MIVE-Z) and either an i7-2600K or i7-3770K results in (1) higher GFlops (no surprise) and (2) substantially lower power consumption (>100W lower).

Now either the CPU is truly burning through 180W of power at stock settings while running LinX, or my Crosshair V Formula-Z motherboard is burning through 60W on its own (which seems unlikely).

This brings me back to the CPU.

Now, my FX-8350 is burning through power at a rate no one else seems able to replicate at stock clockspeeds, but it also seems to be turning in a GFlops number that no one else is getting either. My higher power usage brings higher GFlops, which stands to reason if performance/W is conserved.
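A quick sanity check on that reasoning, with placeholder numbers rather than anyone's measured results:

```python
# If perf/W is roughly conserved across samples, a chip drawing more power
# in LinX should post proportionally higher GFlops. All values are made up.
def gflops_per_watt(gflops, cpu_watts):
    return gflops / cpu_watts

unthrottled = gflops_per_watt(100, 180)  # hypothetical: a chip at full tilt
throttled = gflops_per_watt(70, 125)     # hypothetical: a chip held to its TDP

print(round(unthrottled, 2), round(throttled, 2))  # 0.56 0.56 -- same efficiency
```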

Which leaves us with two questions: why 200W at stock settings, and why don't other people's FX-8350s turn in the same performance? Do these FX-8350s really hit their current throttle running a program like LinX even at stock clocks and volts?

I suppose it is possible; the GPU makers had to implement hardware limits that triggered when OCCT GPU torture tests were activated. Maybe my Crosshair V motherboard is somehow circumventing a set of current limiters that everyone else's motherboards leave in place when LinX launches?
 

grimpr

Golden Member
Aug 21, 2007
1,095
7
81
IDC, I think it's HPC Mode in the BIOS of AM3+ boards; it's a VRM overload/heat protection mechanism.

http://www.xbitlabs.com/articles/mainboards/display/amd-fx-mainboards-roundup_3.html

The nominal operation mode with default settings also didn’t please us too much. By default the mainboard set the memory timings to extremely high values of 9-9-9-24-1T, all power-saving Cool’n’Quiet and C1E technologies were disabled. The nominal frequency of our AMD FX-8150 processor is 3.6 GHz, but even when all cores are utilized, the CPU can increase its clock rate up to 3.9 GHz, and under lower loads – up to 4.2 GHz. These were publicly known facts, but no one could explain why under heavy load the CPU frequency would drop down to 3.3 GHz not only during overclocking, but also in the nominal mode. During our experiments we discovered that enabling “HPC Mode” parameter in the “CPU Configuration” section prevents the frequency from dropping like that. Although, I have to admit that this parameter works in a very unique way. Even when it has been enabled, the processor frequency may still drop, so you will need to cut off all power to prevent this from happening. The opposite is also true. Once we have enabled the “HPC Mode”, we could successfully complete our tests in overclocked mode and there was no frequency drop of any kind.
 

2is

Diamond Member
Apr 8, 2012
4,281
131
106
I'm using the same PSU and the same Kill-A-Watt for measuring power for all three CPUs (2600K, 3770K, and FX-8350).

To whatever extent your concerns are valid, the error should be present (and nearly the same) in all cases.

Let's assume for a second that it's true... that a Kill-A-Watt isn't accurate when an APFC PSU is connected to it. We don't know what that inaccuracy curve looks like; it may not be linear at all. We all know the FX is going to consume more power than SB/IB, that's a given. It is entirely possible that the inaccuracy increases as load increases, thus compounding the inaccurate readings.

Either way, I'm interested in seeing more data as you harvest it.
 

Idontcare

Elite Member
Oct 10, 1999
21,118
58
91
IDC, I think it's HPC Mode in the BIOS of AM3+ boards; it's a VRM overload/heat protection mechanism.

http://www.xbitlabs.com/articles/mainboards/display/amd-fx-mainboards-roundup_3.html

I'm going to look for that in my BIOS and enable it (or disable it, as it were) when I find it.

Still, though, if the xbitlabs findings are correct, then basically they are saying that at stock, when the mobo is operating as AMD intended, these CPUs are expected to throttle and reduce clockspeed/performance in order to fit inside their TDP rating.

Isn't that just a little bit disingenuous? I expect to lose the turbo bins when the chip becomes fully loaded, but if the chip is sold as a 4GHz 8-core chip then I darn well expect to be able to use 8 cores at 4GHz without triggering some over-current protection on a motherboard that cost $30 more than the CPU itself :(

I don't know...I'm still not convinced I've got this set up right. If the power draw really were like this, I'd think there would be more noise made about it on the internet. And this thread shows the opposite: most folks are saying the numbers are bollocks for one reason or another. So which is it then?
 

grimpr

Golden Member
Aug 21, 2007
1,095
7
81
I think it's a way for cheap motherboards with 4+1 or plain sucky VRMs to cope with the chip's high power requirements without blowing up. MSI, for instance, has put out some really crappy VRM implementations on their AM3+ motherboards and has already taken the top spot for blown-up VRMs and defunct boards; take a look at the list.

http://www.overclock.net/a/database-of-motherboard-vrm-failure-incidents
http://www.overclock.net/t/946407/amd-motherboards-vrm-info-database
http://www.overclock.net/t/943109/about-vrms-mosfets-motherboard-safety-with-125w-tdp-processors

How to Enable "HPC-mode" to Achieve up to 6% Improvement in HPL Efficiency

http://en.community.dell.com/techce...ve-up-to-6-improvement-in-hpl-efficiency.aspx
 

SPBHM

Diamond Member
Sep 12, 2012
5,056
409
126
power usage/perf is probably on par with Nehalem+X58 (at full load, at least)?

I would expect lower (at least 20W lower) idle power usage, but perhaps your MB is a bit inefficient?
As for the power usage under load, I really doubt the VGA (probably using less than 30W, since it sits basically idle while running LinX) or the chipset (probably always less than 20W for NB+SB) is responsible, so it can only be the CPU; maybe the inefficient(?) MB is not helping...

As for throttling under full load, that wouldn't really be absurd; most software will never load the CPU anywhere near as much as LinX does, and the GPUs from NV/AMD have been doing the same for a few gens (with FurMark, OCCT GPU), no?

I would be curious to see some power numbers from Cinebench, and maybe even something like Super Pi...
 

AtenRa

Lifer
Feb 2, 2009
14,001
3,357
136
IDC, go to Digi+ VRM/Power Control.

Then for "CPU Load Line Calibration" choose Regular, and for "CPU PWM Mode" choose T-Probe (only for default CPU frequency).

Set the CPU voltage to Auto.

Also, set HPC Mode to Disabled.

a-bios1.png
 

inf64

Diamond Member
Mar 11, 2011
3,697
4,015
136
@ IDC

This whole ordeal with your sample brings me back to your original post, where you said you thought somebody might have tinkered with your CPU box. I know it sounds crazy, but what if someone already tortured this CPU with insane Vcore and clocks and then returned it, in its box, to Newegg?

BTW, I asked a user in the AMD section at XS to run his FX-8320 setup through that exact version of LinX with the same problem size. He has a lot of peripherals attached to his Kill-A-Watt, but we can still use his power draw as a reference point, since the 8320 is also a 125W part. His system pulls 228W at stock with the LinX settings you used. Only when he pushed the NB to 2.8GHz with a crazy-high NB VID of 1.5V did he hit 260W. His 8320 has a stock Vcore of 1.4V. If he touched only the multiplier and went to 4GHz, then his CPU socket power should increase roughly linearly with clock speed. That remains to be seen, though.
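A rough Python sketch of that linear-scaling expectation (fixed Vcore, multiplier-only change; the wattage is illustrative, not his measurement):

```python
# Dynamic CPU power goes roughly as C * V^2 * f, so with Vcore untouched a
# multiplier-only bump scales socket power by about the frequency ratio.
def scaled_power(base_watts, base_ghz, target_ghz):
    return base_watts * (target_ghz / base_ghz)

# e.g. a hypothetical 125W socket draw at the 8320's stock 3.5GHz, raised to 4GHz:
print(round(scaled_power(125, 3.5, 4.0), 1))  # -> 142.9
```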
 

sequoia464

Senior member
Feb 12, 2003
870
0
71
@ IDC

This whole ordeal with your sample brings me back to your original post, where you said you thought somebody might have tinkered with your CPU box. I know it sounds crazy, but what if someone already tortured this CPU with insane Vcore and clocks and then returned it, in its box, to Newegg?

Thinking the same thing. The green sticker on my box is not easily peeled back; it looks to me like IDC's was purposely peeled back. I have seen people at some sites admit to returning working units if they didn't clock as well as they had hoped.
 

Idontcare

Elite Member
Oct 10, 1999
21,118
58
91
@ IDC

This whole ordeal with your sample brings me back to your original post, where you said you thought somebody might have tinkered with your CPU box. I know it sounds crazy, but what if someone already tortured this CPU with insane Vcore and clocks and then returned it, in its box, to Newegg?

BTW, I asked a user in the AMD section at XS to run his FX-8320 setup through that exact version of LinX with the same problem size. He has a lot of peripherals attached to his Kill-A-Watt, but we can still use his power draw as a reference point, since the 8320 is also a 125W part. His system pulls 228W at stock with the LinX settings you used. Only when he pushed the NB to 2.8GHz with a crazy-high NB VID of 1.5V did he hit 260W. His 8320 has a stock Vcore of 1.4V. If he touched only the multiplier and went to 4GHz, then his CPU socket power should increase roughly linearly with clock speed. That remains to be seen, though.

Thinking the same thing. The green sticker on my box is not easily peeled back; it looks to me like IDC's was purposely peeled back. I have seen people at some sites admit to returning working units if they didn't clock as well as they had hoped.

I've got a few more tests to finish up and then I'll be hitting my BIOS to see what settings I can change.

I appreciate the effort to get more power-usage data, but power numbers without the accompanying GFlops numbers are going to be meaningless in this situation.

If the CPU isn't churning through the calculations as fast as mine is (for whatever reason), then of course the power usage will be lower.

I am mostly just interested in ensuring that my CPU is in proper working order. I couldn't care less whether it uses 200W or 100W when it is in proper working order. And I don't really want to turn my 200W CPU into a 100W CPU, reducing performance along the way, just so I can have a 100W CPU; those kinds of "solutions" are the uninteresting outcomes (albeit interesting to explore), because you can get there simply by undervolting or underclocking your CPU from the start.

At this time I just want to know how this CPU performs at stock, as intended and sold by AMD. Once I've got that nailed down, I can start taking the chip off-road, so to speak, and begin changing voltages, clockspeeds, cooling, etc.

The "HPC mode" situation is intriguing to me because it does imply that AMD knows their CPU's will readily exceed the TDP rating when running certain applications (even at stock) and as such the motherboards are designed to intentionally throttle back the CPU even though it is running at stock.

I could understand needing a BIOS option like "HPC mode" when OC'ing is involved; I have to disable the TDP and current restrictions on my Intel rigs too when I OC them. But I don't have to do that just to get them to perform as advertised at stock.

Regarding the sticker - I peeled that back; it didn't come like that. I should have been clearer: I was just saying I was surprised that the only thing sealing the retail box was one little circular sticker, because it doesn't look nearly as tamper-proof as the Intel retail boxes.

But I went and googled FX-8350 unboxings, and you can see video after video of the exact same sticker being peeled off by the owners during unboxing, just as I did with mine.

So I don't think my retail box was tampered with; there are no physical tell-tale signs of malfeasance anywhere on the package or its contents.

Instead, the only things that seem amiss are the power-consumption results and the performance. Both are higher in ways that nobody expected, unless this "HPC mode" really is a factor even when the chip is at stock. :confused:

I still don't trust my motherboard, to be honest. I had issues with my 2600K and the MIVE-Z when it first came out, with the MIVE-Z intentionally over-volting the RAM way past spec as its default "auto" value. It also over-volted my 3770K, by a LOT (some 0.2V too much).

So this Crosshair V mobo is still on my prime suspect list at the moment.

I'll be finding out today though, whatever the culprit may be, that is for sure.
 

AtenRa

Lifer
Feb 2, 2009
14,001
3,357
136
Regarding the sticker - I peeled that back; it didn't come like that. I should have been clearer: I was just saying I was surprised that the only thing sealing the retail box was one little circular sticker, because it doesn't look nearly as tamper-proof as the Intel retail boxes.

I haven't seen you mention this before.
The little green circular sticker IS NOT the only thing that seals the box. There must be a white sticker on the top of the box, with the specs of the CPU, that seals one side (the back) of the lid.

Was the white sticker on your box broken or not?

BOX Sealed
fxboxsealed1.jpg


BOX White Sticker Broken
fxboxsealedbroken1.jpg


BOX White Sticker Broken
fxboxsealedbroken2.jpg
 

Abwx

Lifer
Apr 2, 2011
10,937
3,440
136
@ IDC

This whole ordeal with your sample brings me back to your original post, where you said you thought somebody might have tinkered with your CPU box.

I made the remark in another thread, but it seems my thoughts were not understood.

In short, IDC's CPU ID is that of an OEM SKU, not a regular boxed CPU.

It might well be that his CPU was simply exchanged, since it's likely that official Black Editions are binned CPUs with slightly better overclocking headroom.
 

AtenRa

Lifer
Feb 2, 2009
14,001
3,357
136
OK guys,

FX8350 + ASUS M5A97 R2.0
Default Cooler
2x 4GB Kingston 2133MHz (running at 1866MHz 9-11-9-27)
ASUS HD7950 @ 1GHz core 1500MHz Memory
HDD : 1TB Seagate 7200rpm 64MB cache
PSU : be quiet! Dark Power Pro 1000W
Win 8 Pro 64bit

HPC disabled

Idle 69W (desktop)

LinX 6890MB 279W
fx8350linx11280.jpg


LinX 2048MB 274W
fx8350linx21280.jpg


Edit: note that CPU-Z shows 4100MHz - Turbo works fine ;)

I will say that IDC's FX8350 is fine. ;)

fx8350m5a97r21.jpg
 

inf64

Diamond Member
Mar 11, 2011
3,697
4,015
136
I'm sorry, but there is no chance that the FX-8350 draws 200+ watts from the socket under any workload. That total system power draw difference has to be the combined effect of CPU and motherboard power draw (and power supply efficiency).

They would have to be outright mad to spec the chip as a 125W TDP (maximum) part, design the cooling and socket specs around that, and then launch a product that draws 60% more power.
 

jvroig

Platinum Member
Nov 4, 2009
2,394
1
81
I'm sorry, but there is no chance that the FX-8350 draws 200+ watts from the socket under any workload. That total system power draw difference has to be the combined effect of CPU and motherboard power draw (and power supply efficiency).
We've already discounted those possibilities (PSU efficiency, the board). How can you remain defiant in the face of the facts? Idontcare and AtenRa have both shown the power consumption difference: ~200W of extra power consumption when running a CPU-intensive program.

They would have to be outright mad to spec the chip as a 125W TDP (maximum) part, design the cooling and socket specs around that, and then launch a product that draws 60% more power.
They dealt with this (rather shadily) via the "HPC mode" setting. It is probably "off" by default so that 125W-rated cooling and power delivery will be enough, because the chip will then throttle itself. There is no risk to AMD, since OEM cooling solutions would not need to handle 200W.

It seems Idontcare's ASUS ROG board, however, defaults to HPC mode on, probably due to the target audience (there is little value for overclockers in having HPC mode disabled, given that they will certainly want to surpass the stock TDP limit).
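As a toy Python sketch of the governor being described (my own reading of the thread's description, not AMD's actual algorithm; all thresholds are invented):

```python
# Toy TDP governor: with "HPC mode" off, pull the multiplier down whenever
# the estimated package power exceeds the TDP limit, and creep back toward
# stock once there is headroom again.
TDP_LIMIT_W = 125

def next_multiplier(current_mult, estimated_watts, floor=16.5, stock=20.0):
    if estimated_watts > TDP_LIMIT_W:
        return max(floor, current_mult - 0.5)  # throttle (4.0GHz -> 3.3GHz territory)
    return min(stock, current_mult + 0.5)      # recover toward stock clocks
```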
 

jvroig

Platinum Member
Nov 4, 2009
2,394
1
81
@AtenRa

I seem to have mistakenly read your posted results as having HPC on. I see now that you actually said HPC was off. That's rather surprising. What happens when you run those tests with HPC mode enabled?
 

sequoia464

Senior member
Feb 12, 2003
870
0
71
I'm sorry, but there is no chance that the FX-8350 draws 200+ watts from the socket under any workload.

Probably not related, but I have an 8320 pretty much at stock, and CPUID Hardware Monitor gives me an output for "Powers" under the CPU, something I had never seen before with a Deneb or Thuban. It reports a current, a minimum, and a maximum value in watts.

Any idea whether I can put any stock in those readings, accuracy-wise? This is on an Asus Sabertooth.

I'm not really stressing anything much right now, but running LinX the max wattage is around 55 watts.
 

jvroig

Platinum Member
Nov 4, 2009
2,394
1
81
Any idea whether I can put any stock in those readings, accuracy-wise? This is on an Asus Sabertooth.
I see that in both my Deneb and Thuban rigs. The readings are completely unreliable and useless (at least on my rigs), as they do not move at all unless I am at stock with CnQ on (and even when they do move at stock with CnQ, the movement is only between fixed steps - I only ever see three wattage values).

When overclocked, even at zero load, the Powers row simply displays a fixed "117.xx" watts no matter what I am doing. If I underclock and undervolt, it still remains static but displays a lower wattage (somewhere around 80W), and it stays at that value no matter what I'm doing - zero load, or loaded with a stress test.
 

sequoia464

Senior member
Feb 12, 2003
870
0
71
I see that in both my Deneb and Thuban rigs. The readings are completely unreliable and useless (at least on my rigs), as they do not move at all unless I am at stock with CnQ on (and even when they do move at stock with CnQ, the movement is only between fixed steps - I only ever see three wattage values).

I am only mildly clocked up - 3700MHz from 3500, Turbo at 4200 - and this is with CnQ enabled.

My values do fluctuate: the current wattage bounces between 15 and 40 watts at idle, not fixed at all (it can change roughly twice per second). That doesn't mean they're accurate, though - the 55-watt max running LinX seems quite low.

Appreciate the input.