(gamersnexus) AMD: "FX is Not EOL" & Why What We Need in a CPU is Changing


inf64

Diamond Member
Mar 11, 2011
3,884
4,692
136
Wow so it turns out web browsers apparently run slow on FX6300? D: Dat heavy heavy workload for poor FX :(.
That has to be the quote of the year on the AT forum.
 

NaroonGTX

Member
Nov 6, 2013
106
0
76
I routinely have many tabs open at once on my AMD processor, guess I got a golden chip or something since Firefox never bogs down?

Lol, what a ridiculous statement.
 

Homeles

Platinum Member
Dec 9, 2011
2,580
0
0
I don't think it's a ridiculous statement to claim that web browsing would run smoother on Intel vs. AMD. Whether you'd notice it or not is a separate issue.
 

NTMBK

Lifer
Nov 14, 2011
10,449
5,832
136
I don't think it's a ridiculous statement to claim that web browsing would run smoother on Intel vs. AMD. Whether you'd notice it or not is a separate issue.

Hell, 85% of my web browsing runs smooth as butter on my 1GHz Brazos netbook. Saying that you would notice the difference between an i3 and the FX-6300 when browsing the net is laughable.
 

NaroonGTX

Member
Nov 6, 2013
106
0
76
I don't think it's a ridiculous statement to claim that web browsing would run smoother on Intel vs. AMD. Whether you'd notice it or not is a separate issue.

You cannot be serious. I've done "serious" browsing on craptops with horrible CPUs and still didn't have issues. So to say it would be "smoother" is completely moot if it's totally imperceptible.
 

Homeles

Platinum Member
Dec 9, 2011
2,580
0
0
You cannot be serious. I've done "serious" browsing on craptops with horrible CPUs and still didn't have issues. So to say it would be "smoother" is completely moot if it's totally imperceptible.
Yes, I am serious. A computer that runs faster than another computer is still faster regardless of whether or not you can perceive it. Computer science is an objective science.
 

Spawne32

Senior member
Aug 16, 2004
230
0
0
Yes, I am serious. A computer that runs faster than another computer is still faster regardless of whether or not you can perceive it. Computer science is an objective science.

If you're talking about the time it takes from clicking on the browser icon to when it actually opens, then you're talking about hard drive performance, not CPUs. And if you're referring to how fast the performance is AFTER the browser is open, then you're talking about how fast your internet is. Regardless, it has no bearing on this discussion and is a ridiculous statement for arguing one CPU over another.
 

NTMBK

Lifer
Nov 14, 2011
10,449
5,832
136
Yes, I am serious. A computer that runs faster than another computer is still faster regardless of whether or not you can perceive it. Computer science is an objective science.

Scientifically measurable, sure. But if it's totally useless in a real world situation I wouldn't factor it into my buying decisions at all.

I don't mind whether my CPU renders Notepad at 1000fps, or 20,000fps. One could literally be an order of magnitude slower than the other at that particular task, and it wouldn't bother me. It's only when one is noticeably slower than the other that I care in the slightest.
 

NTMBK

Lifer
Nov 14, 2011
10,449
5,832
136
If you're talking about the time it takes from clicking on the browser icon to when it actually opens, then you're talking about hard drive performance, not CPUs. And if you're referring to how fast the performance is AFTER the browser is open, then you're talking about how fast your internet is. Regardless, it has no bearing on this discussion and is a ridiculous statement for arguing one CPU over another.

Actually, webpage rendering does need an at least semi-decent CPU. My parents still have a 1.6GHz Pentium 4 kicking around that they used as a daily driver until last year- and oh my goodness, that thing was a dog in internet browsing! As in completely unusably diabolically bad. They thought it was their internet connection being slow, but when I showed them that my laptop on a slow wifi connection loaded webpages far, far quicker than their PC on a 100Mbps Ethernet connection, I think they realised it was time to put the P4 out to pasture!
 

Spawne32

Senior member
Aug 16, 2004
230
0
0
Actually, webpage rendering does need an at least semi-decent CPU. My parents still have a 1.6GHz Pentium 4 kicking around that they used as a daily driver until last year- and oh my goodness, that thing was a dog in internet browsing! As in completely unusably diabolically bad. They thought it was their internet connection being slow, but when I showed them that my laptop on a slow wifi connection loaded webpages far, far quicker than their PC on a 100Mbps Ethernet connection, I think they realised it was time to put the P4 out to pasture!

I'm talking about modern technology, like say C2D and newer. No one actually benchmarks a modern CPU vs a CPU from 10 years ago and says "hey, my Pentium 3 struggles to load Notepad, so the Core i7 4770K must be that much better". No one turns around and says "well, look at the browser performance between the FX-8320 and the 4770K" either, lol. Because it's irrelevant.
 

TerryMathews

Lifer
Oct 9, 1999
11,464
2
0
The problem is, the i3-4130 trashes the 6300 in single-core performance, which means any non-game application like a web browser, etc, is going to run significantly smoother.

Just sayin', Google Chrome with 1 tab open currently has 6 processes running and a total of 130 threads.

Facts. Use them.
 

shady28

Platinum Member
Apr 11, 2004
2,520
397
126
Actually, webpage rendering does need an at least semi-decent CPU. My parents still have a 1.6GHz Pentium 4 kicking around that they used as a daily driver until last year- and oh my goodness, that thing was a dog in internet browsing! As in completely unusably diabolically bad. They thought it was their internet connection being slow, but when I showed them that my laptop on a slow wifi connection loaded webpages far, far quicker than their PC on a 100Mbps Ethernet connection, I think they realised it was time to put the P4 out to pasture!

This.

My 2010 MacBook Pro 13" with a 2.4GHz C2D is noticeably slower now on many web pages than my desktop FX-8320 (OC/locked at 4GHz).

On the OP, it's pretty clear to me that AMD, while not completely abandoning AM3+ / FX, is not doing serious development of it. Their roadmap reflects that - they are going to keep the FX Vishera series for almost 2 1/2 years.

Longer cycles start to make sense on the desktop given what we've been seeing the last 3 years though. I mean really, nothing exciting has happened with CPUs since Sandy Bridge.
 

inf64

Diamond Member
Mar 11, 2011
3,884
4,692
136
Hell, 85% of my web browsing runs smooth as butter on my 1GHz Brazos netbook. Saying that you would notice the difference between an i3 and the FX-6300 when browsing the net is laughable.
Frankly I cannot believe we are even discussing this D:.
If you ask me, it's pretty amazing how far the quality of discussion has fallen lately.
 

Sweepr

Diamond Member
May 12, 2006
5,148
1,143
136
My Core i7-3517U (Ivy Bridge ULV, up to 3GHz Turbo) ultrabook is noticeably faster than my older 2007 Core 2 Duo 2GHz "Merom" laptop. Both running fast SSDs (Vertex 4 and Samsung 840), same browser.
 

Homeles

Platinum Member
Dec 9, 2011
2,580
0
0
Scientifically measurable, sure. But if it's totally useless in a real world situation I wouldn't factor it into my buying decisions at all.

I don't mind whether my CPU renders Notepad at 1000fps, or 20,000fps. One could literally be an order of magnitude slower than the other at that particular task, and it wouldn't bother me. It's only when one is noticeably slower than the other that I care in the slightest.
You're relying on some pretty strong hyperbole to make your argument. Web browsing is far more intensive than Notepad.

It's becoming more intensive every year, too. There are some crazy Flash games these days... I can't even run some of them on my girlfriend's 4(?)-year-old laptop, because the framerates are unplayable.

If an i5 renders a modern webpage in half the time it took an FX, then it wouldn't be all that far-fetched to state that an i5 would be relevant for twice as long as the FX. Buying a video card that renders video games at 1200 FPS isn't a terrible idea, as it'd last you longer than one that renders at 60 FPS.
 

inf64

Diamond Member
Mar 11, 2011
3,884
4,692
136
My Core i7-3517U (Ivy Bridge ULV, up to 3GHz Turbo) ultrabook is noticeably faster than my older 2007 Core 2 Duo 2GHz ''Merom'' laptop. Both running fast SSDs (Vertex 4 and Samsung 840), same browser.
2GHz Merom is rather slow, you gotta admit it. The FX6300, on the other hand, even at stock, is a rather fast chip. It might not be as fast as 3GHz IB in this particular "task", but I doubt one could notice any difference in web browsing performance between the two. Between the FX6300 and Merom you would also notice a similar difference. The thing is, once you are past a certain point and have so much performance in the core, it doesn't matter anymore if one or the other is 20 or even 30% faster, because you cannot possibly feel that in real-life usage scenarios.
 

d3m

Junior Member
Jun 5, 2013
23
0
66
You're relying on some pretty strong hyperbole to make your argument. Web browsing is far more intensive than Notepad.

It's becoming more intensive every year, too. There are some crazy Flash games these days... I can't even run some of them on my girlfriend's 4(?)-year-old laptop, because the framerates are unplayable.

If an i5 renders a modern webpage in half the time it took an FX, then it wouldn't be all that far-fetched to state that an i5 would be relevant for twice as long as the FX. Buying a video card that renders video games at 1200 FPS isn't a terrible idea, as it'd last you longer than one that renders at 60 FPS.

You need a GPU for flash games.
 

NTMBK

Lifer
Nov 14, 2011
10,449
5,832
136
You're relying on some pretty strong hyperbole to make your argument. Web browsing is far more intensive than Notepad.

It's becoming more intensive every year, too. There are some crazy Flash games these days... I can't even run some of them on my girlfriend's 4(?)-year-old laptop, because the framerates are unplayable.

If an i5 renders a modern webpage in half the time it took an FX, then it wouldn't be all that far-fetched to state that an i5 would be relevant for twice as long as the FX. Buying a video card that renders video games at 1200 FPS isn't a terrible idea, as it'd last you longer than one that renders at 60 FPS.

Yeah, Flash stuff can certainly be pretty intensive. I need to run Adblock on my netbook in order to get a usable internet experience- some of the Flash ads are obnoxiously performance-killing, and make the whole browser chug. And HD Youtube is completely off limits for it (though it only has a crappy 1024x600 screen, so I'm not missing anything ;) ).

I was thinking about your typical text-with-some-images webpage - probably because a large proportion of my internet time is spent arguing with people on Anandtech. And most of the rest of it is spent reading text-heavy articles- I tend to prefer text to video, and hate the recent trend towards video reviews! But the rest of it comes under the 15% I mentioned in my post- those are the points where I think "wow, this processor sucks". But hey, it was stupidly cheap, it's portable, and the battery life is great.

But if I can get a perfectly fine experience most of the time on a 40nm Netbook processor, then I think we can safely say that either the 6300 or i3 will be completely fine.

As for measuring performance and comparing 1500fps vs 20000fps- the issue I have with that is that comparing workloads that both processors are completely overqualified for is not really useful to me, as it is rarely indicative of how things will perform under the real stress cases. F'rinstance the binary for Notepad (to go back to my silly example) will fit entirely into a modern CPU's cache- it's less than 200kB! So the memory subsystem performance is almost entirely excluded from the performance measurement- it turns into a test of how fast the L2 cache can be accessed, and how fast the integer ALUs can run. That tells us very little about overall performance characteristics! For instance if I wanted to run a heavyweight rendering app, I would be limited by FPU performance and overall memory bandwidth- the Notepad benchmark would tell me next to nothing.

An interesting example is the "Equalizer" article that Anand did a while back. (http://www.anandtech.com/show/6877/the-great-equalizer-part-3/2) This compared the performance of newer integrated GPUs, phone/tablet GPUs, and older desktop discrete GPUs. In certain synthetics the old GPUs came out on top with raw arithmetic crunching capability, and in some they were absolutely destroyed by even a Bobcat APU. The story was just wildly inconsistent, depending on which tiny corner of the GPU you stressed with each synthetic. I've been a lot more wary of unrepresentative benchmarks ever since then- I want to see benchmarks running realistic workloads. (And obviously my daft example of a Notepad benchmark is not a realistic workload!) And yeah, heavyweight Flash is definitely a realistic workload worth benchmarking- I'm amazed that nobody does a "4K YouTube playback" benchmark, measuring average framerate/frametime consistency on a set video. Though I guess that the issue with websites is that they change their code every week, making reproducibility an issue.
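The playback-benchmark idea above could be sketched roughly like this (a hypothetical illustration only - the metric choices, function name, and sample frame times are all invented, not from any real benchmark):

```python
# Hypothetical sketch of a "set video playback" benchmark metric: given
# per-frame render times in milliseconds captured during a fixed video,
# report average FPS plus a consistency figure (99th-percentile frametime).
# The sample numbers below are made up purely for illustration.

def summarize_frametimes(frametimes_ms):
    ordered = sorted(frametimes_ms)
    avg_fps = 1000.0 * len(ordered) / sum(ordered)  # frames per second overall
    p99_index = min(len(ordered) - 1, int(0.99 * len(ordered)))
    p99_ms = ordered[p99_index]  # a high p99 means visible hitching
    return avg_fps, p99_ms

# Mostly 60 FPS frames with a few hitches, as a Flash-heavy page might produce:
fps, p99 = summarize_frametimes([16.7] * 95 + [33.4] * 5)
print(f"{fps:.1f} FPS average, {p99:.1f} ms 99th-percentile frametime")
```

The point of tracking the percentile alongside the average is exactly the consistency argument made in the post: two machines can have similar average framerates while one of them stutters badly.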

Sorry for the ramblings, just thought I'd try to share my thoughts on this.
 

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
Don't you remember the testing that IDC did with the 8350 which showed that it sucked down close to 200W at stock speeds? Or the reviews elsewhere which showed FX CPUs using more power than expected.

This is a failure to understand what TDP means. It does not mean "how much power the system, as a whole, pulls from the wall socket".

Anandtech's review found that the FX-8350 system had a total power consumption of 195.2 watts. So does that mean AMD is lying? After all, it's way more than the specified 125W TDP. But wait a minute - Intel's i7-3770K had a total system power consumption of 119.8 watts in that same chart, and its TDP is only 77W.

It's not hard to figure out why. First of all, even though this Anandtech review does not say what power supply was used, you'd need a very good one to get more than 90% efficiency at that rating. (And a really big PSU is less likely to get its highest efficiency numbers at relatively low draws like this.) So from the 195.2 watts of AC power being drawn from the wall, we get roughly 175 watts of DC power coming out of the PSU. And remember, that's for the whole system. The first page says they used a Radeon HD 5870 video card, and that could easily take up 20W-30W just displaying the Aero desktop while Prime95 or Linpack or whatever runs (we're assuming the "total system load" implies loading down the CPU, not the GPU). So we're down to about 145W. Then we've got the secondary DC-to-DC power conversion done by the motherboard. VRMs are not 100% efficient (why do you think they get so hot?) If VRM conversion is 85%-90% efficient, then we wind up with roughly 125W being delivered to the CPU itself, in line with spec.
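The chain of subtractions above fits in a few lines of arithmetic (a rough sketch only - the 195.2 W wall reading is from the review, but the PSU efficiency, idle GPU draw, and VRM efficiency are the post's assumed values, not measurements):

```python
# Back-of-envelope estimate of CPU package power from a wall-socket reading.
# Defaults mirror the assumptions in the post: ~90% PSU efficiency, ~30 W
# of idle GPU draw, and ~86% VRM conversion efficiency.

def estimate_cpu_power(wall_watts, psu_eff=0.90, gpu_idle_watts=30.0, vrm_eff=0.86):
    """Peel conversion losses and non-CPU draw off a wall-power measurement."""
    dc_watts = wall_watts * psu_eff             # after AC-to-DC loss in the PSU
    cpu_rail_watts = dc_watts - gpu_idle_watts  # subtract the idle video card
    return cpu_rail_watts * vrm_eff             # after motherboard VRM losses

print(round(estimate_cpu_power(195.2)))  # lands near the 125 W TDP spec
```

Tweaking the assumed efficiencies shifts the answer by a few watts either way, which is the point: a 195 W wall reading is entirely consistent with a 125 W CPU.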
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
This is a failure to understand what TDP means. It does not mean "how much power the system, as a whole, pulls from the wall socket".

Anandtech's review found that the FX-8350 system had a total power consumption of 195.2 watts. So does that mean AMD is lying? After all, it's way more than the specified 125W TDP. But wait a minute - Intel's i7-3770K had a total system power consumption of 119.8 watts in that same chart, and its TDP is only 77W.

It's not hard to figure out why. First of all, even though this Anandtech review does not say what power supply was used, you'd need a very good one to get more than 90% efficiency at that rating. (And a really big PSU is less likely to get its highest efficiency numbers at relatively low draws like this.) So from the 195.2 watts of AC power being drawn from the wall, we get roughly 175 watts of DC power coming out of the PSU. And remember, that's for the whole system. The first page says they used a Radeon HD 5870 video card, and that could easily take up 20W-30W just displaying the Aero desktop while Prime95 or Linpack or whatever runs (we're assuming the "total system load" implies loading down the CPU, not the GPU). So we're down to about 145W. Then we've got the secondary DC-to-DC power conversion done by the motherboard. VRMs are not 100% efficient (why do you think they get so hot?) If VRM conversion is 85%-90% efficient, then we wind up with roughly 125W being delivered to the CPU itself, in line with spec.

You haven't even counted the motherboard power consumption, the memory modules' power consumption, or the HDD power consumption. ;)
 

inf64

Diamond Member
Mar 11, 2011
3,884
4,692
136
This is a failure to understand what TDP means. It does not mean "how much power the system, as a whole, pulls from the wall socket".

Anandtech's review found that the FX-8350 system had a total power consumption of 195.2 watts. So does that mean AMD is lying? After all, it's way more than the specified 125W TDP. But wait a minute - Intel's i7-3770K had a total system power consumption of 119.8 watts in that same chart, and its TDP is only 77W.

It's not hard to figure out why. First of all, even though this Anandtech review does not say what power supply was used, you'd need a very good one to get more than 90% efficiency at that rating. (And a really big PSU is less likely to get its highest efficiency numbers at relatively low draws like this.) So from the 195.2 watts of AC power being drawn from the wall, we get roughly 175 watts of DC power coming out of the PSU. And remember, that's for the whole system. The first page says they used a Radeon HD 5870 video card, and that could easily take up 20W-30W just displaying the Aero desktop while Prime95 or Linpack or whatever runs (we're assuming the "total system load" implies loading down the CPU, not the GPU). So we're down to about 145W. Then we've got the secondary DC-to-DC power conversion done by the motherboard. VRMs are not 100% efficient (why do you think they get so hot?) If VRM conversion is 85%-90% efficient, then we wind up with roughly 125W being delivered to the CPU itself, in line with spec.
Nicely said :thumbsup: