AnandTech Forums > Hardware and Technology > CPUs and Overclocking

Old 11-10-2012, 01:44 PM   #126
boxleitnerb
Platinum Member
 
Join Date: Oct 2011
Posts: 2,475

The point is, AMD is only somewhat competitive in one narrow area (multithreading), and only because of lower prices, not because of a better product. And even then, the follow-up costs could add up pretty quickly depending on the usage scenario.
boxleitnerb is offline   Reply With Quote
Old 11-10-2012, 02:20 PM   #127
JumpingJack
Member
 
Join Date: Mar 2006
Posts: 59

Quote:
Originally Posted by AtenRa View Post
The 2600K has 8MB of cache and the FX8350 has 16MB of cache, and as a consumer I don't care even if the die size is the size of the island of Crete.
Indeed, Intel's cache implementation far exceeds what is found in the Bulldozer/Piledriver core.

Consumers couldn't care less about die size, but AMD sure does. Being competitive has two faces: the consumer side (which you are arguing, ineffectually) and the cost side (which you are ignoring), and on costs AMD loses tremendously. If their die size (i.e. cost) doesn't mesh with where they sit in the competitive price stack, AMD loses money. Extended over time, that begins to affect future revisions, as less can be reinvested in R&D and so forth. Bottom line: the 8350 really isn't competitive with the 2600K overall.
JumpingJack is offline   Reply With Quote
Old 11-10-2012, 02:23 PM   #128
mrmt
Platinum Member
 
Join Date: Aug 2012
Posts: 2,270

Quote:
Originally Posted by AtenRa View Post
The 2600K has 8MB of cache and the FX8350 has 16MB of cache, and as a consumer I don't care even if the die size is the size of the island of Crete.
For you, a vanilla desktop consumer, yes, it doesn't matter.

For OEMs, who must save every penny they can, the smaller SNB TDP is a clear win: they spend less on cooling and can go to smaller form factors.

For AMD, which designs and sells the processors you seem to blindly adore, the bigger die area does matter, as their cost is directly tied to the size of the processor, and according to their financial statements they are losing money.
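The die-area-to-cost link can be sketched with some back-of-the-envelope arithmetic. All numbers below (wafer cost, die areas) are illustrative assumptions, not AMD's or Intel's actual figures; the gross-die formula is the usual rough approximation.

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Standard rough estimate of gross dies on a round wafer:
    wafer area / die area, minus an edge-loss correction term."""
    radius = wafer_diameter_mm / 2
    return int(math.pi * radius ** 2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

WAFER_COST = 5000  # assumed processing cost per 300mm wafer, USD (made up)

# Rough die areas: ~216 mm^2 (quad-core Sandy Bridge), ~315 mm^2 (Vishera)
for name, area in [("~216 mm^2 die", 216), ("~315 mm^2 die", 315)]:
    n = dies_per_wafer(area)
    print(f"{name}: {n} gross dies/wafer, ~${WAFER_COST / n:.2f} per die")
```

Yield makes the gap worse: defect losses grow with die area, so the larger die's cost per good die rises even faster than this gross-die arithmetic suggests.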

It's ironic that you mentioned Greece in a post about AMD. Both are imploding, but some diehards on both sides still think there is an easy salvation, or that someone will pull off a miracle for them.
mrmt is offline   Reply With Quote
Old 11-10-2012, 02:51 PM   #129
AtenRa
Diamond Member
 
AtenRa's Avatar
 
Join Date: Feb 2009
Location: Athens Greece
Posts: 5,779

Quote:
Originally Posted by Idontcare View Post
If AMD did that, it would mean they are continuing to do exactly what they said they don't want to be doing (i.e. the stuff on the left-hand side of the slide):

[AMD "From/To" strategy slide]

I doubt they went to all the trouble of assembling the slide above and making analysts aware of it, only to then turn their back on it and slog straight ahead into developing high-performance products like Steamroller that require bleeding-edge technology nodes.

That would be pretty dumb of them, to stand up and say "we know what the problem is here, but we are going to keep doing it for years and years to come!"
1: Focus on power-performance optimized cores.
This one comes from design and manufacturing. 20nm will allow them to raise the power-performance of steamroller cores.

2: Agile and Flexible SoC methodology
This is about SoCs and has nothing to do with Server and high end Desktop Chips.

14nm-XM will make their SoCs more agile and flexible (lower power consumption is the key for SoCs).

3: Realize heterogeneous computing through HSA value proposition.
Again, 20nm will make APU (HSA) products more competitive (smaller dies, higher performance per watt, etc.).

4: Focus on disruptive play in data center
20nm will help them create a disruptive technology (SteamRoller and beyond) for the Cloud and the Data centers.

Two recent and very interesting links on the subject of disruptive technologies in data centers and the cloud:

http://www.cisco.com/en/US/solutions...leMktPulse.pdf

http://www.cloudbook.net/resources/s...ive-technology

5: Adopt industry partnership approach
Split the R&D cost among partners and more.

There is nothing disrupting their new strategy if they start producing Steamroller at 20nm in 2014. On the contrary, they will need new and improved manufacturing processes like 20nm and 14nm-XM in the coming years in order to realize their new strategy.
__________________
Thief : Mantle CPU Scaling and Power evaluation
(10 CPUs at default and Overclock, including Power Consumption)
AtenRa is offline   Reply With Quote
Old 11-10-2012, 03:59 PM   #130
Haserath
Senior Member
 
Haserath's Avatar
 
Join Date: Sep 2010
Posts: 722

Quote:
Originally Posted by AtenRa View Post
There is nothing disrupting their new strategy if they start producing Steamroller at 20nm in 2014. On the contrary, they will need new and improved manufacturing processes like 20nm and 14nm-XM in the coming years in order to realize their new strategy.
Given AMD's track record, don't bet on it.

Funny how they released the From/To propaganda, and now they have an architectural update with a die shrink?
Quote:
The 2600K has 8MB of cache and the FX8350 has 16MB of cache, and as a consumer I don't care even if the die size is the size of the island of Crete.
I care... wasteful cache = higher power. The 2600K has lower latency and higher bandwidth.

Steamroller is looking good, but they need a lot of energy efficiency gains.
Haserath is offline   Reply With Quote
Old 11-10-2012, 04:13 PM   #131
AtenRa
Diamond Member
 
AtenRa's Avatar
 
Join Date: Feb 2009
Location: Athens Greece
Posts: 5,779

Quote:
Originally Posted by mrmt View Post
Ironic you mentioned Greece in a post related to AMD. Both are imploding but some die hards on both sides still think that there is easy salvation or that someone will pull a miracle for them.
I mentioned Crete, not Greece; it's the largest Greek island.

There is nothing easy about the Greek situation, but yes, a lot of us here do believe that WE (Greeks) can change things for the better. WE (Greeks) have done it before; we can surely do it again.
__________________
Thief : Mantle CPU Scaling and Power evaluation
(10 CPUs at default and Overclock, including Power Consumption)
AtenRa is offline   Reply With Quote
Old 11-10-2012, 05:15 PM   #132
mrmt
Platinum Member
 
Join Date: Aug 2012
Posts: 2,270

Quote:
Originally Posted by AtenRa View Post
I mentioned Crete, not Greece; it's the largest Greek island.
I know. I just found it too ironic, and your behavior regarding everything AMD reminds me of the behavior of some Greeks regarding the crisis, which is delusional, to say the least.

Whenever I listen to an AMD earnings call or read a transcript from any event, whatever my opinion on how badly they are managing the situation, I can sense they have a sense of urgency, and the message they invariably try to convey is something like "we know that our previous strategy didn't pan out; give us a bit of time and we'll work out a new one".

Every single decision this year was a move away from their current mainstream business. Delay x86 projects? Check. Burn cash acquiring other companies? Check. Announce future partnerships that don't include x86? Check. In essence, they are trying to overhaul the company, to disrupt a current business model that, in the words of the executive team, does not work.

But what's your interpretation of the facts?

Quote:
There is nothing disrupting their new strategy if they will start producing SteamRoller at 20nm in 2014. On the contrary, they will need new and improve manufacturing processes like 20nm and 14nm-XM in the next years in order to be able to realize their new strategy.
Sounds like George Papandreou campaigning in 2009, telling his voters that there was nothing wrong with the country, that the crisis was global, and that the solution was to open the state coffers to keep things as they were.

I don't know if you do this out of sympathy for AMD, because you think everyone here is trying to unjustly bash the company, or because you can't grasp market fundamentals, but either way I suggest you review your premises. Nobody is here to gratuitously bash AMD, and the market isn't pounding AMD stock just for fun. People bash AMD because the company hasn't executed well since the Athlon 64, and that was almost 10 years ago; the market is pounding AMD because nobody can really see a viable company in the near future.
mrmt is offline   Reply With Quote
Old 11-10-2012, 05:24 PM   #133
frozentundra123456
Diamond Member
 
Join Date: Aug 2008
Posts: 5,017

Quote:
Originally Posted by cytg111 View Post
I can't find fault in his reasoning?
I can. He continually uses GPU limited scenarios to evaluate CPU performance.

And even if you accept that flawed testing methodology, why would you use a more power hungry processor for a GPU limited scenario?
frozentundra123456 is offline   Reply With Quote
Old 11-11-2012, 01:20 PM   #134
AtenRa
Diamond Member
 
AtenRa's Avatar
 
Join Date: Feb 2009
Location: Athens Greece
Posts: 5,779

Quote:
Originally Posted by frozentundra123456 View Post
I can. He continually uses GPU limited scenarios to evaluate CPU performance.

And even if you accept that flawed testing methodology, why would you use a more power hungry processor for a GPU limited scenario?
Let me show you how wrong it is to draw conclusions about a gaming CPU from benchmarks run ONLY at low resolutions.

AMD Vishera FX-6300 & FX-4300 Review
http://www.hardwarecanucks.com/forum...00-review.html

[720p benchmark chart]

At 720p, the Core i3 3225 is faster than the FX8350. People will conclude that the FX8350 sucks for gaming. They will also say that, because the FX8350 cannot output more frames at lower resolutions, it will not be faster than the Core i3 at higher resolutions.

Let's see what happens at 1080p.

[1080p benchmark chart]

Well, look at that: the FX8350 is not only faster than the Core i3, it is faster than the Core i5 2400 and within 5 fps of the Core i5 3570K. Suddenly the Core i3 is not better at 1080p in this game.

Let's have another one:

[720p benchmark chart, second game]

The FX6300 is slower than even the Core i3 3225 at 720p. People would say that the FX6300 will not be faster than the Core i3 3225 at 1080p with a high-end GPU, because it's slower at low resolutions.

Let's see what happens at 1080p.

[1080p benchmark chart, second game]

Hmm, the FX6300 is faster than even the Core i5 at 1080p. According to the people screaming about low-resolution gaming benchmarks, the FX6300 should have been slower than the Core i3 even at 1080p with a high-end GPU.

I have been saying for over a year now that IT sites should also benchmark at 1080p in order to really get the whole picture of a CPU's gaming performance. The FX6300 may be slower than the Core i3 at 720p, but it is on par or better in 1080p gaming.

Different games exhibit different performance characteristics at different resolutions; drawing a conclusion only from low-resolution gaming benchmarks is wrong, plain and simple.
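The resolution-scaling argument above boils down to a simple bottleneck model: delivered fps is roughly the minimum of the CPU-limited and the GPU-limited frame rate. A toy sketch with made-up numbers (not taken from any review):

```python
# Toy bottleneck model: a CPU gap visible at 720p can vanish at 1080p
# once the GPU caps both CPUs. All fps figures below are hypothetical.
cpu_fps = {"Core i5": 95, "FX-6300": 80}   # assumed CPU-limited rates
gpu_fps = {"720p": 200, "1080p": 70}       # assumed GPU-limited rates

for res, gpu in gpu_fps.items():
    for cpu, rate in cpu_fps.items():
        print(f"{res} {cpu}: {min(rate, gpu)} fps")
# At 720p the i5 leads 95 vs 80; at 1080p both deliver 70 fps.
```

The model also shows why neither test alone is enough: 720p reveals the CPU ceiling you would hit with a future, faster GPU, while 1080p shows what you actually get today.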
__________________
Thief : Mantle CPU Scaling and Power evaluation
(10 CPUs at default and Overclock, including Power Consumption)

Last edited by AtenRa; 11-11-2012 at 02:21 PM. Reason: Whole not Hole ;)
AtenRa is offline   Reply With Quote
Old 11-11-2012, 01:32 PM   #135
Haserath
Senior Member
 
Haserath's Avatar
 
Join Date: Sep 2010
Posts: 722

^Couldn't the performance difference be attributed to low vs. high settings (a setting increasing CPU load and using extra threads) rather than to the resolution increase?

The most useful test would be to lower all GPU settings (including resolution) and raise the CPU settings, to see which CPU is better under high loads.

Even so, we know the 6300 and up are better than Intel's dual cores if they can be used to their fullest. It depends heavily on game threading and load.
Haserath is offline   Reply With Quote
Old 11-11-2012, 01:45 PM   #136
AtenRa
Diamond Member
 
AtenRa's Avatar
 
Join Date: Feb 2009
Location: Athens Greece
Posts: 5,779

When you have a high-end graphics card like the GTX 670, you are going to play at 1080p or above with high image quality settings. It is at those resolutions and settings that you need to know whether one CPU is faster than the other, or whether your CPU bottlenecks your GPU.

I really don't see how benchmarking at low image settings helps with your gaming.
__________________
Thief : Mantle CPU Scaling and Power evaluation
(10 CPUs at default and Overclock, including Power Consumption)
AtenRa is offline   Reply With Quote
Old 11-11-2012, 02:13 PM   #137
inf64
Platinum Member
 
inf64's Avatar
 
Join Date: Mar 2011
Posts: 2,028

It's funny how year after year reviewers try to "isolate" CPU performance in game tests by forcing low image quality settings and resolutions. The story is that tomorrow's GPUs will be as strong as yesterday's SLI setups, so they use SLI/CF at low IQ/resolution to simulate that. Yet new GPUs come out, and the good old Deneb/X6 runs a GTX670/7970 without any issues or "bottlenecks" (Skyrim excluded; it is "special").

What reviewers fail to notice is that platform SLI performance does not equal next-gen single-GPU performance. The newest games, at the best IQ and high resolution settings, run practically the same on any modern 3.3GHz+ quad-core x86 chip. Sure, there are some differences, but they are mostly specific to certain games and scenarios (BF3 multiplayer on a 64-slot server, anyone?).

Anyway, I'd say that even the "lowly" FX4000 at ~4.2GHz, which basically has 2 FP units (4 threads), can run almost any game on the highest-end card with good fps. Surely a 3960X or a 3770K@4.8GHz would output more fps on average, but when you push the high IQ modes in today's (and future) games, performance just gets capped by the GPU, and the CPU can only help so much.
__________________
ShintaiDK:"There will be no APU in PS4 and Xbox720."
ShintaiDK:"No quadchannel either.[in Kaveri]"
CHADBOGA:"Because he[OBR] is a great man."
inf64 is offline   Reply With Quote
Old 11-11-2012, 02:20 PM   #138
boxleitnerb
Platinum Member
 
Join Date: Oct 2011
Posts: 2,475

Low detail settings are stupid because they can affect the CPU too, and it would be too much work to check which settings affect which component. It is resolution and AA/AF that should be lowered (but still at 16:9, for maximum CPU load) for the most thorough and informative CPU gaming benchmarks. Benchmarking at 1080p is not reliable either, since too much depends on a proper selection of the benchmark scene, and on personal taste regarding fps counts. Better to play it safe and benchmark at 720p.

And of course savegames have to be used; timedemos and integrated benchmarks are mostly nonsense and usually produce much higher fps values than actual gameplay.

This has been explained countless times, of course.

Quote:
Originally Posted by inf64 View Post
It's funny how year after year reviewers try to "isolate" CPU performance in game tests by forcing low image quality settings and resolutions. The story is that tomorrow's GPUs will be as strong as yesterday's SLI setups, so they use SLI/CF at low IQ/resolution to simulate that. Yet new GPUs come out, and the good old Deneb/X6 runs a GTX670/7970 without any issues or "bottlenecks" (Skyrim excluded; it is "special").

What reviewers fail to notice is that platform SLI performance does not equal next-gen single-GPU performance. The newest games, at the best IQ and high resolution settings, run practically the same on any modern 3.3GHz+ quad-core x86 chip. Sure, there are some differences, but they are mostly specific to certain games and scenarios (BF3 multiplayer on a 64-slot server, anyone?).

Anyway, I'd say that even the "lowly" FX4000 at ~4.2GHz, which basically has 2 FP units (4 threads), can run almost any game on the highest-end card with good fps. Surely a 3960X or a 3770K@4.8GHz would output more fps on average, but when you push the high IQ modes in today's (and future) games, performance just gets capped by the GPU, and the CPU can only help so much.
No, you cannot say that. The CPU can make a quite healthy contribution even at 1080p or beyond; there are enough examples out there, just go look. And what "good" fps is, is very subjective: depending on the game engine and the player, it may range from 30 to 120 fps or even higher. I wouldn't want any reviewer telling me what "good" is; I will decide for myself, and so will everyone else.

Last edited by boxleitnerb; 11-11-2012 at 02:27 PM.
boxleitnerb is offline   Reply With Quote
Old 11-13-2012, 04:10 PM   #139
wenboy
Junior Member
 
Join Date: Oct 2012
Posts: 12

I was planning on getting a high-end Kaveri APU. If AMD doesn't ship it on time, they are dead to me.
wenboy is offline   Reply With Quote