I find it funny you use the term "destroy" when you're already above 150fps minimums. Who gives a flying turd at that point?
No one does, but we aren't discussing playable framerates; we're discussing games taking advantage of more cores.
Hmm, so optimizations from the consoles are being carried over. This is a good sign... now I'm waiting to be surprised by DX12 support. BTW, if you were wondering, the PS4 beta runs at 1080p and 50-60fps.
Nice showing for the AMD CPUs.
So? When you are above 150fps I don't care if it uses more cores. There is nothing there to justify the extra cores, in my view, and nothing is getting destroyed. It's just that your choice of wording doesn't make sense when you look at the performance of both. I don't deny that one is faster, but I think it's a little overzealous to point to this as an example of why cores matter. If you get what I'm saying.
Show me a game that goes from borderline unplayable to 60fps just based on the CPU, and you're on to something.
I'm not sure an 8-thread CPU at 4GHz being beaten/tied by a 4-thread CPU at 3.4GHz is a nice showing.
Or the 6-thread CPU at 3.5GHz being beaten by the 4-thread CPU at 3.5GHz.
If they performed like this in most games it would be pretty good for AMD, though in that case they would also probably cost more. For this game specifically, it's quite good for AMD. I think performance per dollar matters more here than performance per clock or per thread...
The particular numbers are irrelevant. In the next game it may be an even more ridiculous figure, say 240 vs. 310, or it may be very relevant to the user experience, say 58 vs. 86. We don't even know in this particular case whether this was the lightest map in the game or a worst-case scenario. The point is that games are increasingly likely to take advantage of more cores.
About the "destroyed" wording... Say AMD next week came out with Zen (or whatever) and it bested Intel by 20% in H.264/H.265 encoding; we'd mostly all agree it "destroyed" Intel in that comparison. Then someone enters the discussion and states, "I've viewed both output files and they are identical. Thus the user experience is the same!" We'd all do a collective facepalm. I was discussing the performance difference, despite the fact that the 5960X is at a significant clock speed disadvantage. The fact that you can't see it in this particular example doesn't mean the performance difference isn't there; you may very well be able to see it next time.
Wait, is this Crysis?
To me it seems like you're looking for any excuse to justify to everyone that more than 4 cores is worth the extra cost. No amount of reasonable argument will sway that view.
The fact remains that you said one CPU got destroyed by another when neither was below 150fps minimums. That is not "destroyed."
I personally don't give a crap if the lot of you are running Pentium 4s. I'm not even trying to justify a price difference; I couldn't care less if someone wants to save a buck. I do find it a tad odd, though, that the very same people who comment "but games," "but games" while championing their 4-core chips will discount every single instance of the 6/8-core chips outperforming them. Bad port, game sucks anyway, framerate too high to matter, etc., etc.
Umm... I think you might have directed that at the wrong person. I'm not the one saying the benchmark isn't relevant. It's a benchmark meant to show core scaling... you can't just say the framerate is too high so the bench isn't relevant. Not everyone has a 980 Ti.
A person with a single GPU will see some benefit and increased minimums compared to others with weaker CPUs.
Do we seriously not understand how to interpret this benchmark now?
It strikes me much the same way as all of the comments I saw to the effect of "Sandy Bridge to Skylake = meh, sidegrade" immediately after the 6700K reviews. Now, is it worth the money to go from SB to SL? That's up to the individual, but it most definitely is not a sidegrade.
Not really sure that's a good thing; 90% of console ports run like crap on PC. The BO3 beta on PC has mouse input lag, some textures look like crap, and it doesn't run well above 1080p (sluggish at 1440p).
Meanwhile this game runs ~120-150fps even on a moderate PC while barely hitting 60fps on a PS4 (there are a few laggy spots on some maps)...
I was talking more from a perf/$ point of view. It's gonna take you a while to make up that difference on your electric bill. I'm running an i5 4670K myself, but if the 8350's performance was this consistent in all games I probably would have gone AMD.
