BenchZowner
Senior member
- Dec 9, 2006
Intel's current plan for another 6-core processor on the LGA1366 platform in 2010 is the i7 970 Gulftown (MSRP should be around $550).
Well, you can't seriously expect us to believe that Intel will do such a thing, can you? Or is that wishful thinking, selling a piece of blue sky?
Take a look here:
http://www.anandtech.com/show/2901/13
http://www.anandtech.com/show/2901/12
According to Anandtech, the X4 965BE beats every Clarkdale in all but two gaming benchmarks (Dawn of War II, World of Warcraft), and that's at stock. Given how poorly the i3s scale past around 4-4.5GHz, I'd take the OCed 965 any day of the week for games.
You can't really extrapolate any performance numbers for a 920 from Clarkdale anyway... different memory controllers, different QPI implementation, etc.
Actually, a single Nehalem core with HT does have close to 50% more throughput than a single Thuban core at the same clock speed in well-threaded applications. Same situation with a 6-core AMD vs. a 4-core Intel: Intel is faster clock for clock, but not 50% faster clock for clock, which is what it would have to be for the quad-core Intel to beat the 6-core in a thread-intensive test.
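To put a rough number on that break-even point, here's a minimal Python sketch; the per-core throughput values are illustrative placeholders, not measurements. It only shows why a 4-core chip needs roughly 50% more per-core throughput to tie a 6-core chip in a perfectly threaded workload.

```python
# Rough break-even arithmetic for a perfectly threaded workload.
# Per-core throughput numbers are illustrative placeholders, not benchmarks.

amd_cores, intel_cores = 6, 4
amd_per_core = 1.00                      # normalize Thuban per-core throughput to 1.0

# Per-core advantage Intel would need just to tie the 6-core chip:
break_even = amd_cores / intel_cores     # 6 / 4 = 1.5 -> 50% faster per core

for intel_per_core in (1.20, 1.35, break_even):
    amd_total = amd_cores * amd_per_core
    intel_total = intel_cores * intel_per_core
    print(f"Intel {intel_per_core:.2f}x per core: "
          f"{intel_total:.2f} vs AMD {amd_total:.2f} aggregate")
```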
It's in Chinese, but I think it's the first (p)review that compares a Thuban with other AMD and Intel processors in a variety of apps:
I really want to see gaming benchmarks with this processor compared to an i7 930: clock-for-clock gaming benchmarks at 4.0GHz for the 1055T and the 1090T. I also would like to see a general CPU performance comparison between the two.
Another question I pose: why is the 1090T listed as
AMD Phenom II X6 1090T 3.2GHz/3.6GHz Turbo Core 9MB Cache - Six Core Processor - Socket AM3 (45nm)?
I don't get the 3.6GHz.
The Turbo speed when 3 or more cores are idle?
3.6GHz is the Turbo Core clock speed when three or fewer cores are active (i.e. when at least three of the six cores are idle).
Ah, gotcha, thanks.
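For anyone else puzzled by the listing, here is a minimal sketch of the Turbo Core behavior as described above; the clocks are the 1090T's advertised 3.2GHz base / 3.6GHz turbo, and the three-core threshold is taken from the posts above.

```python
# Minimal sketch of Phenom II X6 1090T Turbo Core behavior as described above:
# base clock 3.2GHz, 3.6GHz turbo when three or more of the six cores are idle
# (i.e. three or fewer cores are active).

BASE_GHZ, TURBO_GHZ, TOTAL_CORES = 3.2, 3.6, 6

def effective_clock(active_cores: int) -> float:
    """Return the advertised clock for a given number of active cores."""
    if not 1 <= active_cores <= TOTAL_CORES:
        raise ValueError("active_cores must be between 1 and 6")
    idle = TOTAL_CORES - active_cores
    return TURBO_GHZ if idle >= 3 else BASE_GHZ

for n in range(1, TOTAL_CORES + 1):
    print(f"{n} active core(s): {effective_clock(n)} GHz")
```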
The one noticeable improvement that stands out in the numbers for the Phenom X6 1055T/1090T is the 9MB cache, along with the 125W TDP as opposed to Intel's 130W TDP. These new X6s should perform faster clock for clock than a Phenom X4 for sure.
9MB is not actually an improvement; that's L2 + L3 = 9MB.
The L3 cache is the same at 6MB.
L2 is 6 x 512KB = 3MB.
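The cache math, spelled out as a trivial sketch; the per-core L2 and shared L3 sizes are simply the ones quoted above.

```python
# Thuban cache total as quoted above: 6 cores x 512KB L2 plus a shared 6MB L3.
cores = 6
l2_per_core_kb = 512
l3_shared_mb = 6

l2_total_mb = cores * l2_per_core_kb / 1024   # 3 MB of L2 in total
total_mb = l2_total_mb + l3_shared_mb         # 9 MB "total cache" on the spec sheet

print(f"L2 total: {l2_total_mb:.0f} MB, L3: {l3_shared_mb} MB, combined: {total_mb:.0f} MB")
```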
I see, I completely forgot about that. Owners of a 6-core Phenom will be getting higher 3DMark Vantage scores than an i7 930, am I correct?
According to Anandtech, the x4 965BE beats every Clarkdale in all but two gaming benchmarks (Dawn of War II, World of Warcraft), and that's at stock.
No it doesn't. You are only comparing average framerates. When comparing high end processors, the focus should be on minimum framerates, not just averages, because that's where the architectural differences are most prevalent in contributing to a smooth gaming experience. If you just look at average framerates, then you are more GPU limited since more or less every modern CPU can provide sufficient averages.
Umm, yeah. I have an old game that always reports a minimum frame rate of 0 fps because it starts measuring immediately after the game starts up (i.e. while things are still loading; the game is Black & White, for the inquisitive). Does that mean we have had no progress in hardware since the release of that game?
Your observation would be correct if we could always say that the CPU is solely responsible for the minimum frame rate. However, that just isn't true. There are just as many factors to minimum frame rate as there are to average and maximum frame rate.
If you only change one component, or at the very least keep component changes to a minimum, then average framerate across multiple systems is a pretty good indicator of that component's speed compared to the other tested components. It is only when you make dramatic changes that things start to get hairy.
In other words, ideal testing keeps the GPU, RAM, OS, and if possible the motherboard constant while swapping the CPUs out. And since there are sometimes significant differences between average speeds, we can safely conclude that CPU speed does in fact affect the average.
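As a sketch of that methodology: all the numbers below are made-up placeholders, only the structure matters. One component (the CPU) changes, everything else stays fixed, and the average is computed the same way for every run.

```python
# Sketch of controlled-variable CPU testing: the GPU, RAM, OS and settings stay
# fixed across runs; only the CPU label changes. Frame times are hypothetical
# placeholders standing in for a real per-frame log from FRAPS or similar.

def average_fps(frame_times_ms):
    """Average FPS over a run = total frames / total seconds."""
    total_seconds = sum(frame_times_ms) / 1000.0
    return len(frame_times_ms) / total_seconds

# One entry per CPU under test; in a real review each list would hold
# thousands of measured frame times from the same game, scene and settings.
runs = {
    "CPU A (placeholder)": [16.0, 17.0, 15.5, 16.5] * 250,
    "CPU B (placeholder)": [20.0, 21.0, 19.5, 20.5] * 250,
}

for cpu, frame_times in runs.items():
    print(f"{cpu}: {average_fps(frame_times):.1f} avg FPS")
```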
Uhm... real-life gaming tests with various CPUs/frequencies (click here).
Before you say those are just 4 games, I have other measurements from the past and also some more for a forthcoming review.
http://www.anandtech.com/show/2972/...-pentium-g6950-core-i5-650-660-670-reviewed/7
Ahem.. Real life games with average FPS...
http://www.guru3d.com/article/core-i5-650-660-661-review-test/17
And another if you don't believe me.
Heck, just about every trusted reviewer out there uses average FPS as their measurement of choice. Lower resolutions especially show CPU disparity, not minimum FPS.
Minimum FPS is a particularly bad measurement because any number of system anomalies can happen which would cause the benchmark to report a lower than expected FPS value. It is one measurement versus thousands.
The test you posted was invalid because the reviewer chose maximum resolutions and graphical settings. That taxes the video card, forcing the CPU to wait on it. To properly review the CPU's capabilities, you want it the other way around. Minimum FPS doesn't do anything to solve that problem. All it shows is that most CPUs are fast enough to feed overtaxed graphics cards.
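To illustrate the "one measurement versus thousands" point, here is a small sketch with synthetic frame times: a single background hiccup tanks the reported minimum while barely moving the average.

```python
# Synthetic illustration of why minimum FPS is fragile: thousands of ordinary
# frames plus one hiccup (e.g. a background task or disk access).
frame_times_ms = [16.7] * 5000        # a steady ~60 FPS run
frame_times_ms[2500] = 250.0          # one 250ms stall injected mid-run

avg_fps = len(frame_times_ms) / (sum(frame_times_ms) / 1000.0)
min_fps = 1000.0 / max(frame_times_ms)

print(f"average: {avg_fps:.1f} FPS")  # still ~59.7 FPS
print(f"minimum: {min_fps:.1f} FPS")  # reported as 4 FPS because of one frame
```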
Well, there needs to be a graph showing just how long or how many times those minimums were reached. Even with a wimpy 4670 I had better playability in many games with my E8500 than with my 5000 X2. For instance, in Warhead there were a few spots that tanked the framerate into the low teens when using the 5000 X2, while it never went below the mid twenties with the E8500. An average framerate alone would not have made that apparent, even though it was quite clear when actually playing the game.
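That kind of graph can be boiled down to a couple of numbers if you have per-frame times for both CPUs. A sketch follows; the two traces are synthetic stand-ins shaped like the Warhead example (similar averages, very different time spent under a playability floor).

```python
# Sketch: quantify how long and how often a run drops below a playability
# floor, given per-frame times. The two traces are synthetic stand-ins for
# the X2 5000 vs E8500 example: similar averages, different dips.

def summarize(frame_times_ms, floor_fps=25.0):
    limit_ms = 1000.0 / floor_fps
    avg = len(frame_times_ms) / (sum(frame_times_ms) / 1000.0)
    slow = [t for t in frame_times_ms if t > limit_ms]
    time_below = sum(slow) / 1000.0
    return avg, len(slow), time_below

steady    = [22.0] * 3000                          # ~45 FPS throughout
with_dips = [20.0] * 2800 + [70.0] * 200           # mostly ~50 FPS, but dips to ~14 FPS

for name, trace in (("E8500-like", steady), ("X2 5000-like", with_dips)):
    avg, n_slow, secs = summarize(trace)
    print(f"{name}: {avg:.1f} avg FPS, {n_slow} frames below 25 FPS ({secs:.1f}s)")
```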
I was pretty sure that you'd try to dodge this, but unfortunately for you I don't like dodgeball.
Dodge what? Your review of a system at maximum settings trying to be used as a CPU benchmark, which you try to use as proof that minimum FPS should be the be-all, end-all benchmark tool for gamers? Heck no, it is unreliable; run those benchmarks a second time and I can almost guarantee you'll get different results.
Lower resolutions with low/medium graphics settings move most of the stress to the CPU since the GPU tasks are pretty easy for a modern GPU to process, thus you'll see the minimum/average/maximum frame-rates scale with better CPU architectures, higher L2/L3 cache, higher frequencies, etc.
When it comes to reality, to real-life gaming, those tests are as worthy as the dust inside your computer case.
The settings aren't the point of the test; the point of the test is to see which CPU performs better. Those settings, more than anything, indicate the overall performance of a CPU for games.
Unless you like spending $400 on high-end VGAs and running your games at low resolutions with low or medium graphics details and no AA/AF.
Seriously, who games at those settings? (except the unlucky people who can't afford a decent graphics card)
That's why you make sure the testing environment is kept as stable as possible, and equip the testing rig with the best/appropriate components and OS setup & configuration to make sure you don't have any anomalies when running a benchmark.
For testing a CPU, no, those tests are worthless. If your GPU is bottlenecking performance, then the benchmark is worthless for CPU speed. It is quite conceivable to see a slower CPU perform better than a faster CPU in those types of benchmarks (which, BTW, happens in a couple of those benchmarks you posted earlier).
And of course you run the benchmark at least 3 times and average the frame rates, check for any inconsistencies or anomalies, and if you see any anomalies check the FPS graphs to see if it's a "glitch" and re-run / exclude that minimum rate.
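A rough sketch of that "run it at least three times and sanity-check the runs" procedure; the FPS values and the 5% tolerance below are arbitrary examples, not anyone's actual methodology.

```python
# Sketch: average at least three benchmark runs and flag any run that deviates
# from the median by more than a tolerance, so it can be inspected and re-run
# or excluded. The numbers and the 5% tolerance are arbitrary examples.
from statistics import mean, median

def check_runs(avg_fps_per_run, tolerance=0.05):
    mid = median(avg_fps_per_run)
    flagged = [fps for fps in avg_fps_per_run
               if abs(fps - mid) / mid > tolerance]
    return mean(avg_fps_per_run), flagged

runs = [61.2, 60.8, 52.4]           # third run looks like a glitch
overall, suspects = check_runs(runs)
print(f"mean of runs: {overall:.1f} FPS")
if suspects:
    print(f"re-check / re-run these outliers: {suspects}")
```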
You have no freaking clue what synthetic means, do you? Synthetic benchmarks are those that run a bunch of pre-programmed functions for non-real applications, such as Sandra. They are programs that don't particularly do anything useful except benchmarking (which is questionable, as who is to say which instruction or function is more valuable than another). Games, in any setting, are NOT synthetic, as they are real products that do real things besides benchmarking.
It's invalid for who?
For the gamer? Effin no!
"Synthetic" or theoretical system power, as we can call it (running games at very low resolutions and graphics details), doesn't matter for a gamer, because a gamer doesn't run his games at that resolution, and especially not with low details and no Anti-Aliasing & Anisotropic Filtering (insert duh emoticon here!)
What are you measuring? If your tests are measuring GPU performance, then I agree with you, those tests are perfect for that. However, for measuring CPU performance, the only thing that slew of tests proves is that the CPU isn't causing a bottleneck for those games.
The tests that really matter are the ones run with the settings we normally use in gaming.
And at those settings, the differences are the ones you can see in the charts of the linked page.
From roughly 3.6GHz and up, the processors aren't bottlenecking any modern GPU, and hence there are no differences.
Number-wise, surely those kinds of measurements look better for the manufacturers, the shops, and some people who aren't aware of what these measurements mean and what they'd get in real-life gaming.
And since you were so quick to invalidate my review because you didn't like the real-life gaming tests, you can go to the previous page and check the nonsense numbers from CPU-limited resolutions/settings with nice blingy charts:
CPU Limited Scenario - Gamers look away page! (click click!)
Let me get things clear here. I said Thuban might challenge the lower-end i7s. What is wrong with that?
If that pisses people off, then you're a FANBOY, period, because I only said that AMD could challenge (I never said it would beat the i7).
Question: how the hell do you know it won't challenge an i7 920? Are you benching both now?