[THG] Core i7-4770K: Haswell's Performance, Previewed


SammichPG

Member
Aug 16, 2012
171
13
81
It still won't play these games well though...

All these comparisons are only true for selected mainstream games. As soon as you leave mainstream, you hit walls like this one:
http://www.anandtech.com/show/5771/the-intel-ivy-bridge-core-i7-3770k-review/17
It has even gotten worse: current Minecraft 1.5 will not run above 15 fps on an i7-3517U, even with OptiFine and everything tuned for speed. That's ridiculous considering that Trinity will run it fine even with Sonic shaders.

Granted, I ranted about a single game only, but I'm really disappointed so far. Pick something not covered by the media and performance is a gamble at best. Also, while you can squeeze another 10-30% out of Trinity with a slight OC, you're dead in the water if a game doesn't run well on your IB processor.

I agree as well; Intel drops support for older graphics chips too fast for my taste.
People are in for a surprise when they try playing less mainstream or older games with their Intel GPU.

Some older games don't run perfectly even on AMD hardware (not, ehm... the way it's meant to be played); guess how they'll behave with Intel under the hood... at best you'll have graphics glitches.
 
Aug 11, 2008
10,451
642
126
Only if you're silly enough to define everything by performance increase in existing applications. Thankfully for the rest of us, semiconductor firms are smarter than that.

A bird in the hand is worth two in the bush, don't count your chickens before they're hatched, etc. etc.
 

Enigmoid

Platinum Member
Sep 27, 2012
2,907
31
91
Being cynical here: At least it does hit its rated speed for a short time. That is more than what I can report.

You do know that previous mobile APUs (Llano) barely had working turbo boost at all, and that running Prime95 on a mobile A10 will actually result in downclocking no matter how much cooling it is given, let alone while also running something on the GPU? Intel's Turbo Boost is much better than the AMD equivalent.

Intel rates their ULV mobile IGP at 350 MHz, with turbo to 1 GHz-plus under specific conditions (thermals and power). Your ULV is hitting its rated speed (350 MHz); falling short of anything above that doesn't count as throttling. Intel can run the IGP at full clocks or the CPU at full clocks, but not both at the same time. It's not wrong for them to list the maximum GPU and CPU clocks separately.

Intel's ULV parts, provided proper cooling, have no trouble hitting max (or very close to max) CPU turbo clocks. AMD APUs have a little more trouble.

The A10-4600M has a base clock rate of 2.3 GHz and can automatically overclock all the way to 3.2 GHz when called for. With a TDP of 35W, the power consumption is also pretty reasonable. Earlier we mentioned that it is a “pseudo-quad-core” processor; what we mean by that is that it actually includes just two modules, with four integer cores and two floating-point units. Moreover, the Turbo Core 3.0 functionality isn’t as effective as Intel’s Turbo Boost, though single-threaded performance is considerably better than the Llano precursors.

Throttling

We leverage Prime95 and Furmark to impose maximal load on a system. Our prescribed testing here includes a CPU stress test, a GPU stress test, and finally, a combined stress test where both CPU and GPU are subjected to maximal load.

One of the improvements Trinity introduces is the ability to scale not only the CPU clock rate based on performance needs (which was already done in Llano), but also the GPU clock rate, provided there is thermal headroom available. It stands to reason that this should result in improved performance in many graphics-heavy applications, but the question remains, how effective is the performance when the system is under heavy load?

The Turbo Core 3.0 functionality seems to work as intended most of the time, but some of the observations were a bit puzzling. When only the CPU was stressed, just one of the four pseudo-cores actually remained near the base clock rate of 2.3 GHz; the others hovered closer to 2 GHz. Rarely did the clock rates even brush up against 2.7 GHz. When only the GPU was under stress, we observed all four cores running at 2.7 GHz (though in subsequent testing the clock rates jumped around more often). Meanwhile, the GPU clock rates fluctuated about as wildly as the CPU clock rates did in our CPU stress test. Thermals for both tests hovered between 65°C and 70°C.

Finally, under full system stress (GPU and CPU), we once again witnessed wild fluctuations in clock rates, with the CPU cores ultimately resting around a value of 1.6 GHz each. The GPU clock rate occasionally jumped to its maximum Turbo Core frequency of 686 MHz, but mostly remained at 497 MHz, which is the base clock rate. As you can see from the GPU-Z graph in the screenshot we posted, there were also periods where the GPU clock rate dropped below its base clock rate to around 335 MHz, which qualifies as throttling. However, again, this only seems to occur under very heavy system stress. Thermals once again remained near 70°C throughout all of this, occasionally rising slightly higher (at one point to nearly 75°C).

So the A10 can't even run Prime95 without throttling.
 
Last edited:

cytg111

Lifer
Mar 17, 2008
26,182
15,597
136
:-( .. the biggest sting to me is the absence of TSX on unlocked chips.
I want both goddamnit ..
 
Nov 26, 2005
15,194
403
126
Well, I had my i7 920 relaxing at 3.8GHz. I guess I'll bump it back up to 4.2GHz & 1.42 Vcore till 14nm.
 
Last edited:

Sweepr

Diamond Member
May 12, 2006
5,148
1,143
136
Richland won't be that much faster than Trinity, but Kaveri with its 512 SPs and GDDR5/DDR4/DDR3 support will ;). No more memory BW bottlenecks, Hybrid CF will finally scale as it should with APUs, and 3 modules/6 threads should be fast enough for just about everything (~MT throughput of the 83xx series). Haswell will have a lead in mobile though, for the whole H2 of this year, which is a lot of time.

Care to share some of your benches? Oh, you're probably under NDA. If you're not, I guess it's just wishful thinking, like your K10/Bulldozer predictions. Those predictions are not facts until we actually see some numbers.
PS: Did you finally give up on the idea of a new CPU-only (no IGP) Vision FX line based on Steamroller (which doesn't appear in any roadmap) after that 6-core APU rumour?

@ Haswell: Nice IPC improvement, and there's probably a clock advantage (OCs better on desktops? and higher stock/turbo clocks on mobile vs. IB). AVX2 will give them a long-term advantage over existing CPUs too.
 
Last edited:

Haserath

Senior member
Sep 12, 2010
793
1
81
Huh... Upgrade itch completely vanished. Anticipation trumps actual, I suppose.

If unlocked chips had GT3 and TSX, it would've been better. Oh well.
 

RaistlinZ

Diamond Member
Oct 15, 2001
7,470
9
91
That preview only made me want to upgrade to a 3930K and overclock it. I was hoping for a larger increase in performance from Haswell.

Buckling in my i7-930 for yet another year.
 

jpiniero

Lifer
Oct 1, 2010
16,828
7,276
136
Hmm, Tom reiterated the Broadwell BGA-only rumor, citing talks with mobo manufacturers. Will Intel admit to it at IDF?
 

Homeles

Platinum Member
Dec 9, 2011
2,580
0
0
A bird in the hand is worth two in the bush, don't count your chickens before they're hatched, etc. etc.
My comment was about counting chickens that have already hatched. The game plan has changed -- the name of the game, since 2006, has been about improving performance per watt. Caring solely about performance is completely asinine. You'd be much more content (and much more informed) with the state of things if you made comparisons at the same power draw.

Otherwise, by your demands, a processor that draws 200W and is 10% faster than one that draws 10W would be better than the 10W one. I doubt that you actually see it this way -- your expectations need revision.

So far, this more mobile-focused retargeting hasn't had substantial effects on maximum performance when power is disregarded (i.e., when overclocking). It's rather easy to see that the max-on-air overclock of Sandy Bridge thoroughly trounces the max-on-air overclocks of Nehalem and Westmere. You can repeat this back through history ad nauseam.

The only outlier in this case is Ivy Bridge. Its transistor design makes for a more mobile-focused, less overclock-friendly product. We only have to take that hit once. If we threw Ivy Bridge on a planar process, we'd find that it would have overclocked a couple of extra bins or so on air. The game has changed, but the progress has not. Intel could theoretically revert to a planar process and reacquire the better performance at higher switching speeds.

The change from 32nm to 22nm is a rather clear case of physics rearing its ugly head. You can complain all you'd like, but you're barking up the wrong tree by directing your frustrations at Intel.

There are only so many tricks left in the bag. Think about it -- there are a finite number of improvements possible. Some day, we will hit a wall. We will have mastered computer architecture. Transistors won't shrink anymore without ceasing to be useful.

Take the function of performance over time out to infinity. We're going to run into the asymptote eventually. To those of us that don't dabble in partisan semiconductor politics, it's rather clear that the steep gains in performance are tapering off.

There will, of course, be breakthrough technologies. There are terahertz transistors out there. Superconductors. All sorts of wonderful things that we might see even within a decade. But given that Intel is the undisputed leader in fab tech, it just doesn't make sense to blame them for the way things are going. They're already responsible for bringing us better technology to the market before anyone else. Why give them grief for doing their job so well?
 

Piroko

Senior member
Jan 10, 2013
905
79
91
You do know that previous mobile APUs (Llano) barely had working turbo boost at all, and that running Prime95 on a mobile A10 will actually result in downclocking no matter how much cooling it is given, let alone while also running something on the GPU? Intel's Turbo Boost is much better than the AMD equivalent.
Just to be clear, you (and mrmt) started the AMD CPU argument, I was talking about my own Intel GPU experience.
But just for fun I ran some tests (made sure that the CPU/GPU never exceeded 65°C):
FurMark only, no CPU load except TMonitor and GPU-Z: the GPU will hover between 1000 MHz and 1050 MHz. That's 10% lower than the listed maximum turbo.
Prime95 only, no GPU load: the CPU will hover at 2700 MHz, again 10% lower than maximum turbo.
FurMark + Prime95: the GPU will stay at 1000 MHz for a few seconds and then drop down to 700-750 MHz; the CPU drops down to 1900 MHz with "spikes" down to 800 MHz. The whole system feels unresponsive; opening the task manager takes several seconds longer than usual.

Intel rates their ULV mobile IGP at 350 MHz, with turbo to 1 GHz-plus under specific conditions (thermals and power). Your ULV is hitting its rated speed (350 MHz); falling short of anything above that doesn't count as throttling. Intel can run the IGP at full clocks or the CPU at full clocks, but not both at the same time. It's not wrong for them to list the maximum GPU and CPU clocks separately.
In theory that's absolutely right; in practice, I have yet to find a game light enough on the CPU that this Core i7 ULV will run its GPU at maximum turbo clock while plugged in and below 60°C. And honestly, I've stopped looking; it's quite frustrating when the games won't run well, period.
 

Wall Street

Senior member
Mar 28, 2012
691
44
91
I think that a lot of the performance disappointment the past few generations isn't IPC but is actually clock speed:

Chip      Base  Turbo
Q8400     2.67  N/A
i5-750    2.67  3.20
i5-2500K  3.30  3.70
i5-3570K  3.40  3.80
i5-4670K  3.40  3.80

Notice how Intel had IPC improvements each generation, but also added a few hundred MHz of base clock (or turbo, in the move to the i5) every generation up until Sandy Bridge.

I too am wondering how the TSX vs. K-series trade-off will matter. Guess I will sit on my i5-750 until the new USB 3.0 boards anyway. It isn't like I am an extreme overclocker, so I might get the non-K if I can still OC non-K chips by 4 bins.
 
Aug 11, 2008
10,451
642
126
I think that a lot of the performance disappointment the past few generations isn't IPC but is actually clock speed:

Chip      Base  Turbo
Q8400     2.67  N/A
i5-750    2.67  3.20
i5-2500K  3.30  3.70
i5-3570K  3.40  3.80
i5-4670K  3.40  3.80

Notice how Intel had IPC improvements each generation, but also added a few hundred MHz of base clock (or turbo, in the move to the i5) every generation up until Sandy Bridge.

I too am wondering how the TSX vs. K-series trade-off will matter. Guess I will sit on my i5-750 until the new USB 3.0 boards anyway. It isn't like I am an extreme overclocker, so I might get the non-K if I can still OC non-K chips by 4 bins.

Yes, that is the point I have been trying to make as well. I realize IPC improvements may be difficult on an already efficient processor, but I would think Intel could easily increase base clocks to something like 3.7 or 3.8 GHz and turbo to maybe 4 GHz or slightly higher.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
I think that a lot of the performance disappointment the past few generations isn't IPC but is actually clock speed:

Chip      Base  Turbo
Q8400     2.67  N/A
i5-750    2.67  3.20
i5-2500K  3.30  3.70
i5-3570K  3.40  3.80
i5-4670K  3.40  3.80

Notice how Intel had IPC improvements each generation, but also added a few hundred MHz of base clock (or turbo, in the move to the i5) every generation up until Sandy Bridge.

I too am wondering how the TSX vs. K-series trade-off will matter. Guess I will sit on my i5-750 until the new USB 3.0 boards anyway. It isn't like I am an extreme overclocker, so I might get the non-K if I can still OC non-K chips by 4 bins.

The key performance part is AVX2 and everything in the core being 256 bits wide.
 

Vesku

Diamond Member
Aug 25, 2005
3,743
28
86
Not too exciting unless its OC average is several hundred MHz higher than Ivy. New instructions take several years to show up in most retail software.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
Huh... Upgrade itch completely vanished. Anticipation trumps actual, I suppose.

If unlocked chips had GT3 and TSX, it would've been better. Oh well.

Won't comment on TSX. But for GT3, it's likely intentional.

A powerful iGPU on the desktop is worth far less. Discrete cards are easily upgradeable and far cheaper than their laptop counterparts, and barely anyone cares about power use on the desktop.

They are a business, so they need to do something to make their investments pay off. Their billions of dollars invested in perf/power-optimized processes (compared to the previous perf-only optimization) are better spent on laptops and mobiles.

That's why you see mobile CPUs nowadays rivaling desktop CPUs, with specs that are in some ways even better than on the desktop (like the presence of HD 3000 and 4000 on ALL laptop Core parts, for example). Of course, they still have a lot of work to do beyond that (especially in their Ultrabook initiative).

Honestly, how many traditional boxed-desktop users really care about 3D performance on the iGPU? Emerging form factors like the NUC and AIOs already use mobile chips, so that really leaves the enthusiast/gamer crowd on the boxed desktop.

They aren't going to invest a massive amount of die real estate and R&D into what may be perhaps 5% of volume, and maybe two-thirds of that in revenue.
 
Last edited:

Makaveli

Diamond Member
Feb 8, 2002
4,976
1,571
136
That preview only made me want to upgrade to a 3930K and overclock it. I was hoping for a larger increase in performance from Haswell.

Buckling in my i7-930 for yet another year.

Well, I had my i7 920 relaxing at 3.8GHz. I guess I'll bump it back up to 4.2GHz & 1.42 Vcore till 14nm.


Agreed with both.

I think replacing my 920 with a 970 was the best upgrade I did on this rig besides the SSD's.

And maybe this will be the first machine to last me 5 years.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
There's absolutely room for improvement and optimization.

Whether or not they've already made those improvements or not is an entirely different question.

Valid points. But one thing I can agree with Piroko on is that I have almost never seen driver-based performance improvements for Intel graphics. Perhaps they don't know how. Or maybe it's just that their hardware is at such a low performance point that drivers can't do more. Or they release everything they can at launch.

A 20-50% gain from 25% more EUs is very good. I was expecting 15-25%. It's also showing bandwidth constraints, based on the comparison of the gains at 1366x768 and 1920x1080.

That doesn't mean GT3 will automatically be fantastic. First of all, the clock speeds are going down significantly (early leaks indicate only 800 MHz for the quad-core GT3 parts). We'll see.

In theory that's absolutely right, in practice I have yet to find a game which is light enough on the CPU that this Core i7-u will run its GPU at maximum turbo clock while plugged in and below 60°C. And honestly I've stopped looking, it's quite frustrating when the games won't run well period.

Not running games well doesn't have much to do with your argument, though. The whole point is that you are saying Trinity not being able to reach base clocks is somehow justified, while Core chips not being able to reach peak turbo (which is far above base) is a sin.
 
Last edited:

TuxDave

Lifer
Oct 8, 2002
10,571
3
71
I'm kind of surprised everyone is so down on Haswell. This is pretty close to the performance I expected: 0% improvement in existing FPU-intensive apps, ~10% improvement in single-threaded integer apps, and ~20% improvement in integer apps with Hyper-Threading. Actually, I hoped for more improvement with HT - maybe it's bandwidth-limited, as was the cause of some of the other low numbers.

A lot of apps will require recompiling - though in many cases not redesigning - to take advantage of Haswell's AVX2 and FMA. Actually, I wouldn't be surprised to see a tool come out that converts the multiplies and adds in existing code to FMAs.

Even outside of recompiling, you should see more than a 0% improvement in FP due to the doubling of FMUL units, increased L1 bandwidth, and general buffer-size increases. The big caveat, of course, is that there will be some traces where these enhancements don't even matter.

Personally, I'm holding out on a 2nd opinion/review on Haswell.
 

fixbsod

Senior member
Jan 25, 2012
415
0
0
Not seeing this addressed --

I'm seeing a lot of support for Intel, noting that the reason for the overall small IPC increase is the ASSUMED bigger perf/watt improvement.

However, is it not true that the reason IB ran so HOT (which also DECREASES perf/watt) is its cheaped-out TIM? I am seeing threads about a 30-degree-Celsius drop in temps just from a TIM change, which is EYE-POPPING. Would Intel not have a cooler-running, less wattage-sucking chip with better TIM? And likewise, would Intel not be able to squeeze out even MORE IPC at the same wattage using better TIM?
 
Last edited:

OVerLoRDI

Diamond Member
Jan 22, 2006
5,490
4
81
The lack of competition is slowing the CPU market progression right down.

Nope. The fact is that CPU performance has reached the point of "good enough" for 90-95% (maybe more) of users.

Intel's highest priority is to sacrifice as little of its high margins as possible while fitting x86 into the mobile space that ARM dominates. Performance is not the chief concern.
 

Riotvale

Member
Dec 20, 2009
88
0
66
The question begs to be asked: coming from an i7 920 @ 3.6GHz, is it worth upgrading to a 4770K in the near future, or going straight to a 3930K now?

PC use = 2% Productivity 98% Streaming and Gaming

i7 920 @ 3.6GHz
12 GB DDR3
GTX 680 @ 2560x1440
 

Enigmoid

Platinum Member
Sep 27, 2012
2,907
31
91
Your point being? HD 4600 isn't a new architecture; HD 7000 was. Also, Intel's driver updates took a long time to arrive and did little for performance in the past.


Being cynical here: At least it does hit its rated speed for a short time. That is more than what I can report.

Just to be clear, you (and mrmt) started the AMD CPU argument, I was talking about my own Intel GPU experience.
But just for fun I ran some tests (made sure that the CPU/GPU never exceeded 65°C):
FurMark only, no CPU load except TMonitor and GPU-Z: the GPU will hover between 1000 MHz and 1050 MHz. That's 10% lower than the listed maximum turbo.
Prime95 only, no GPU load: the CPU will hover at 2700 MHz, again 10% lower than maximum turbo.
FurMark + Prime95: the GPU will stay at 1000 MHz for a few seconds and then drop down to 700-750 MHz; the CPU drops down to 1900 MHz with "spikes" down to 800 MHz. The whole system feels unresponsive; opening the task manager takes several seconds longer than usual.

In theory that's absolutely right; in practice, I have yet to find a game light enough on the CPU that this Core i7 ULV will run its GPU at maximum turbo clock while plugged in and below 60°C. And honestly, I've stopped looking; it's quite frustrating when the games won't run well, period.

Yes, Intel does clock down. I'm just trying to say that your ULV part is significantly better than any AMD APU (which has the same thermal problems with much worse performance). Intel isn't great, but AMD is worse at this.

The A4-4355M and A6-4455M get eaten by an Ivy ULV i5 (half the CPU performance), and their GPUs are similarly much worse off than the ULV HD 4000 (256 cores at 200-424ish MHz, though mostly running at 200 MHz, vs. the 384 cores at 424-686 MHz of the SV A10). I'm just saying the problem is not unique to Intel, and that AMD is having far more problems (its chips throttle below the nominal frequency).