Sandy Bridge is 12-15% faster than Lynnfield: http://www.computerbase.de/artikel/prozessoren/2011/test-intel-sandy-bridge/47/
http://www.tomshardware.com/reviews/processor-architecture-benchmark,2974-15.html
You'll likely never get to see it in person beyond synthetic benchmarks like Sandra and LinX, but it's used outside of the consumer market. And HPC will definitely gobble it up when Xeon E5 launches.
The PrimeGrid distributed computing project is AVX-enabled, for a 20-50% performance gain on Intel CPUs. It pushes CPU power consumption up noticeably; the 2500K in my signature was running the AVX code path during our last race. It changed the time it took to crunch a WU from 1400 seconds to sub-900 seconds.
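For anyone wondering where that kind of gain comes from: 1400 s down to under 900 s per WU is roughly a 1.5-1.6x throughput improvement, which is plausible for a compute-bound kernel moving from scalar or 128-bit SSE math to 256-bit AVX. Here's a minimal sketch of what that vectorization looks like; it is not PrimeGrid's actual LLR/FFT code, and the saxpy kernel and function names are purely illustrative:

```cpp
// Minimal AVX illustration (assumed example, not PrimeGrid code):
// one 256-bit instruction operates on 8 single-precision floats at once.
#include <immintrin.h>   // AVX intrinsics (Sandy Bridge and later); compile with -mavx
#include <cstdio>

// Scalar version: one multiply and one add per element.
void saxpy_scalar(float a, const float* x, float* y, int n) {
    for (int i = 0; i < n; ++i) y[i] = a * x[i] + y[i];
}

// AVX version: 8 elements per iteration. Plain AVX has no FMA
// (that arrives with Haswell/AVX2), so multiply and add stay separate.
void saxpy_avx(float a, const float* x, float* y, int n) {
    __m256 va = _mm256_set1_ps(a);
    int i = 0;
    for (; i + 8 <= n; i += 8) {
        __m256 vx = _mm256_loadu_ps(x + i);
        __m256 vy = _mm256_loadu_ps(y + i);
        _mm256_storeu_ps(y + i, _mm256_add_ps(_mm256_mul_ps(va, vx), vy));
    }
    for (; i < n; ++i) y[i] = a * x[i] + y[i];   // scalar tail
}

int main() {
    float x[16], y[16];
    for (int i = 0; i < 16; ++i) { x[i] = float(i); y[i] = 1.0f; }
    saxpy_avx(2.0f, x, y, 16);
    printf("y[15] = %.1f\n", y[15]);   // expect 2*15 + 1 = 31.0
    return 0;
}
```

In practice the gain depends on how much of a WU is actually spent in vectorizable math and whether the data fits in cache, which is why real-world numbers land in that 20-50% band rather than a clean 2x.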
When did they update it? (what version?) I need to check it out.
I beta tested the AVX client for SETI@home and it only gave about a 5% performance gain. I do not think it was ever officially released.
> Sandy Bridge is 12-15% faster than Lynnfield: http://www.computerbase.de/artikel/prozessoren/2011/test-intel-sandy-bridge/47/
Ok, fair enough, I should have worded things differently...
So can you name any technology which would offer a 15% increase in IPC over Sandy Bridge without jeopardizing other metrics?
Cool. Let us know what overclocking is like with these CPUs.
Apparently there is no overclocking support. BCLK straps do not work on the Xeons. They're stuck at stock speeds, barring a small BCLK adjustment...
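To put "a small BCLK adjustment" in perspective: on this platform the core clock is just BCLK x multiplier, and BCLK only tolerates a few percent of movement before other buses fall over. A rough illustration; the 32x multiplier and the +3%/+5% BCLK figures are assumptions for the example, not measured limits of these Xeons:

```cpp
// Hypothetical numbers showing why BCLK-only tuning barely moves a locked CPU.
#include <cstdio>

int main() {
    const int multiplier = 32;                   // assumed locked 3.2 GHz part
    for (double bclk : {100.0, 103.0, 105.0}) {  // stock, ~+3%, ~+5%
        printf("BCLK %.0f MHz -> %.2f GHz core clock\n",
               bclk, bclk * multiplier / 1000.0);
    }
    return 0;
}
```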
> Have you not read any of the Haswell posts on these forums?
I have, but none of them explained how a 15% increase in IPC could be achieved.
There were some doubts about whether Haswell would be capable of two 256-bit FMA instructions per clock, but it was interesting to see how that could be achieved relatively easily from Sandy Bridge's existing execution cluster. Likewise, the explanation for doubling the cache bandwidth is very convincing. But unless I missed it, nobody has given a reasonable explanation for a 15% increase in IPC for legacy code.
Haswell sounds pretty much like Sandy Bridge + AVX2. If we assume that Sandy Bridge was tuned to near perfection, then there are no obvious ways to increase IPC. So anyone expecting a 15% increase must think otherwise, and all I'm asking is for someone to point out this substantial room for improvement which can be fixed without jeopardizing other design goals.
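On the two 256-bit FMAs per clock mentioned above: that's exactly where the uncontroversial part of Haswell's speedup lives. A quick back-of-envelope with the unit counts from this discussion (illustrative figures, not official spec-sheet numbers):

```cpp
// Peak single-precision FLOPs per core per clock, per the configurations
// discussed in the thread (back-of-envelope, not an official spec).
#include <cstdio>

int main() {
    const int lanes = 8;  // 32-bit floats per 256-bit AVX register
    // Sandy Bridge AVX: one 256-bit multiply plus one 256-bit add per clock.
    int sandy_bridge = lanes * 1 + lanes * 1;   // 16 FLOPs/clock
    // Haswell AVX2: two 256-bit FMAs per clock, each counting as mul + add.
    int haswell = 2 * lanes * 2;                // 32 FLOPs/clock
    printf("Sandy Bridge: %d SP FLOPs/clock/core\n", sandy_bridge);
    printf("Haswell:      %d SP FLOPs/clock/core\n", haswell);
    return 0;
}
```

None of that helps legacy scalar and integer code, though, which is the 15% IPC question being argued here.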
Correct me if I'm wrong, but Sandy Bridge primarily achieved that speedup over Nehalem by improving the cache latencies. Nehalem had obvious flaws on that front.
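For anyone who wants to see the cache-latency picture directly rather than infer it from reviews, the standard trick is a pointer-chasing microbenchmark: every load depends on the previous one, so L1/L2/L3/DRAM latency can't be hidden. A minimal sketch; the working-set sizes and iteration count are arbitrary choices, not tuned to any particular CPU:

```cpp
// Pointer-chasing latency sketch: dependent loads through a shuffled cycle.
#include <algorithm>
#include <chrono>
#include <cstddef>
#include <cstdio>
#include <numeric>
#include <random>
#include <vector>

double ns_per_load(std::size_t bytes, std::size_t iters) {
    std::size_t n = bytes / sizeof(std::size_t);
    std::vector<std::size_t> next(n), order(n);
    std::iota(order.begin(), order.end(), std::size_t{0});
    std::shuffle(order.begin() + 1, order.end(), std::mt19937_64{42});
    for (std::size_t i = 0; i + 1 < n; ++i) next[order[i]] = order[i + 1];
    next[order[n - 1]] = order[0];                        // close the cycle

    volatile std::size_t sink = 0;
    std::size_t p = sink;
    auto t0 = std::chrono::steady_clock::now();
    for (std::size_t i = 0; i < iters; ++i) p = next[p];  // dependent loads
    auto t1 = std::chrono::steady_clock::now();
    sink = p;                                             // keep result alive
    return std::chrono::duration<double, std::nano>(t1 - t0).count() / iters;
}

int main() {
    for (std::size_t kib : {32, 256, 4096, 131072})       // ~L1, L2, L3, DRAM
        printf("%7zu KiB working set: %.1f ns per load\n",
               kib, ns_per_load(kib * 1024, 10'000'000));
    return 0;
}
```

If the claim above is right, the Nehalem vs. Sandy Bridge differences should show up mainly in the larger working sets.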
> AMD probably wishes it had a 15% increase in IPC, but that simply wasn't possible without making the chip even bigger and more power hungry.
I don't think they did. Ideally it was all about Turbo Core related gains and clock + extra cores. It would have been fine, had their idea worked. Rest of the hype and claims were all marketing at work and fanboy fantasies.
> Mind you, Core 2 Duo didn't do much better than Sandy Bridge did. Advancement over Core Duo was 15-20% per clock.
Well, duh, Core 2 added a third arithmetic execution port!
> Sandy Bridge's performance gains are likely split around the doubled load port, improved cache architecture, and branch prediction/better OoO.
Alright, but what's left for Haswell then? You still haven't answered that question.
> Even Ivy Bridge, a mere shrink to 22nm, brings a 4-6% IPC increase. There's no reason to think Haswell, a significant architectural change, will benefit less than that.
Ivy Bridge is slightly more than a mere shrink. It executes MOV instructions at the register renaming stage, and since it can rename four registers per clock and doesn't occupy any of the three execution ports, this removes a potential bottleneck. It also dynamically partitions Hyper-Threading resources. So we know what offers the increase in IPC for Ivy Bridge.
What we don't know is how Haswell could possibly achieve 15% on top of that. In the past there were always people who pointed out some technology with the potential to realize such an increase, but now that performance/Watt is a crucial metric, that's not so obvious any more. So if you expect something as substantial as 15%, then I'd like you to put your money where your mouth is and take a stab at how it can be done.
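For anyone curious what MOV elimination actually buys: plenty of hot loops carry register-to-register copies that do no real work. A contrived sketch; the assembly in the comment is the typical shape a compiler emits for this loop, not output captured from any specific compiler:

```cpp
// Loop whose compiled form usually contains reg-reg MOVs. On Sandy Bridge each
// MOV occupies one of the three ALU ports for a cycle; on Ivy Bridge the
// renamer can satisfy it by remapping registers, costing no execution slot.
#include <cstdint>
#include <cstdio>

uint64_t fib(unsigned n) {
    uint64_t a = 0, b = 1;
    for (unsigned i = 0; i < n; ++i) {
        // Typical x86-64 for this body (a in rax, b in rbx):
        //   lea  rcx, [rax+rbx]   ; next = a + b
        //   mov  rax, rbx         ; a = b      <- reg-reg MOV, eliminable
        //   mov  rbx, rcx         ; b = next   <- reg-reg MOV, eliminable
        uint64_t next = a + b;
        a = b;
        b = next;
    }
    return a;
}

int main() {
    printf("fib(50) = %llu\n", (unsigned long long)fib(50));
    return 0;
}
```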
> I don't think they did. Ideally it was all about Turbo Core related gains and clock + extra cores. It would have been fine, had their idea worked. Rest of the hype and claims were all marketing at work and fanboy fantasies.
With all due respect, expecting a 15% increase in IPC for Haswell sounds like the same kind of fanboy fantasy to me, unless you can back it up with plausible ways to achieve that...
I'd rather be surprised that they do pull it off while nobody with technical insight expected it, instead of getting disappointed because people expected it solely based on Intel's past achievements. Unlike wine, technology doesn't improve over time by sitting on your hands. There have to be hard-earned advances.
The architects at Intel making >$150K/yr to figure this out will figure it out. And they've been working on Haswell since 2007/2008...
Remind me, how long did AMD work on Bulldozer?
Since the Pentium days, Intel, on their mainstream x86 side, has had only a single misstep (NetBurst).
I guess that's a clever attempt to skip Atom!
No, they've had problems with their forays into other areas (Larrabee, Atom, arguably Itanium). I wasn't trying to argue that. That's why I was clear about mainstream x86.
Not living up to what they market is the exception, as opposed to the norm. If they say 10% across the board, you can bet that for most things it will be around a 10% improvement. It's not like they say 10% and then the real improvement is +/- 5% depending upon the application.
As long as they keep meeting their claims for the next releases, we have no reason to disbelieve them. Now, any 5+ year forecasts, take with a grain of salt.
> The architects at Intel making >$150K/yr to figure this out will figure it out. And they've been working on Haswell since 2007/2008...
Sigh. If you don't have a clue, then just say you don't have a clue but that's what you hope will happen. No shame in that.
Why not bring at least 8 cores to the desktop enthusiast segment? We've been on 4 cores for over 5 years. OK, fine, HT gives 8 threads, but progress in the core-count department has gone nowhere. Sure, they made the cores faster, but for people who do DAW work it's about more cores, more RAM, and 128GB-capable boards.
Haha, it wasn't like NetBurst was a one-and-done thing. It dragged out over what, five or six years? From ~1.3 GHz to 3.8? SDRAM & RDRAM to DDR3?
