
First complete review of Haswell i7-4770K

Consumers demand higher performance first and foremost.

Really? Most consumers use laptops. Laptops have very strict thermal limits. Better perf/W means you can shoehorn more overall performance into the same form factor. Perf/W is far more useful to the majority of consumers than peak performance.
 
Unfortunately this thread is about the i7-4770K (desktop) and not about mobile. We really have to distinguish between laptops and desktops from now on, because speaking in general is not productive any more.

Desktop users (the majority of AT users, by the way) care more about performance than power efficiency. It is the reason we buy $300-500 CPUs and $300-1K GPUs and then OC them. I don't believe people will care about 30-50W more on a system drawing 500-600W with dual GPUs.
 

The majority of desktop users still care about performance/watt and want smaller form factors. The people demanding raw performance are the niche, and the number of people OCing is a tiny, tiny niche.
 

This is the AT forums; the majority of users here have discrete GPUs and OC both their CPUs and GPUs.
And once again, this thread is about the high-end, overclockable, unlocked Core i7-4770K.
 
The majority of desktop users still care about performance/watt and want smaller form factors. The people demanding raw performance are the niche, and the number of people OCing is a tiny, tiny niche.

Are you sure that's the case? Maybe it's just that, since AMD/Intel cannot provide impressive performance gains anymore without ridiculous TDP increases, they focus on lowering TDP instead, since that's the only thing they can still improve with today's technology.

Honestly, how often do you see computer stores even specifying the power consumption of typical desktop PCs in their ads? If it were the main thing the average buyer was interested in when deciding which desktop PC to buy, don't you think it would be specified in the ads?
 
This is the AT forums; the majority of users here have discrete GPUs and OC both their CPUs and GPUs.
And once again, this thread is about the high-end, overclockable, unlocked Core i7-4770K.

Do you have any numbers to back that up? I don't see that many OCing here.
 
Are you sure that's the case? Maybe it's just that, since AMD/Intel cannot provide impressive performance gains anymore without ridiculous TDP increases, they focus on lowering TDP instead, since that's the only thing they can still improve with today's technology.

Honestly, how often do you see computer stores even specifying the power consumption of typical desktop PCs in their ads? If it were the main thing the average buyer was interested in when deciding which desktop PC to buy, don't you think it would be specified in the ads?

People are tired of big bulky cases. That's why ATX is close to dead and mATX is the new standard, with a huge influx of Mini-ITX and other custom form factors. Not to mention that companies request performance/watt as well for the part of their user base that is not on mobile.

It's not for fun that Intel made the new NUC to battle the Mac Mini and other machines of similar size.
 
Really? Most consumers use laptops. Laptops have very strict thermal limits. Better perf/W means you can shoehorn more overall performance into the same form factor. Perf/W is far more useful to the majority of consumers than peak performance.

Agree with you on mobile, but we are talking about desktop.
 
Do you have any numbers to back that up? I don't see that many OCing here.

You are posting in the CPUs and Overclocking sub-forum, in the topic about the unlocked high-end desktop Core i7-4770K, while owning an unlocked Core i5-3570K, and all of a sudden you don't care about OC. Right 🙄
 
Honestly, how often do you see computer stores even specifying the power consumption of typical desktop PCs in their ads? If it were the main thing the average buyer was interested in when deciding which desktop PC to buy, don't you think it would be specified in the ads?

Excellent point! Moreover, in mobile, end users do not really care about power consumption but about battery life. A chip with a bit more power consumption can be compensated for by a bigger battery or a different disk... the end user cares about how long the battery lasts, and that is what is specified in the ads for mobile PCs.
 

Hmm, higher power consumption = larger capacity battery = less portability. I don't get what you are trying to say.
 
What's the use for AVX2? For gaming, graphics, physics and such... can I get a dumbed-down version of how this is a good thing for performance? I mean, does it even impact gaming? (If it gets supported.)

And how or why is it different from other "technologies", and why should people care? We've got OpenCL, CUDA, OpenMP, C++ AMP, DirectCompute, HSA, etc.
 
Apples and oranges.

Chunks of code are written by people in programming languages, such as C++, CUDA, or OpenCL.

APIs are chunks of code packaged together for re-use, with an exposed interface, such as OpenMP.

Compilers generate code to be run on the hardware.

The hardware reads that code and executes it. AVX2 is that level of code: from the perspective of >99% of programmers, it's the implementation. Its use cases are wherever data can be aligned to cache lines and processed in sufficiently independent loop iterations that it's faster to process as a stream of vectors than as a stream of scalars.
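As a rough illustration (a minimal sketch, not from the original post; the function and array names are made up), this is what code at the AVX2 level looks like when written explicitly with compiler intrinsics in C:

```c
#include <immintrin.h>  // AVX/AVX2 intrinsics
#include <stddef.h>

// Hypothetical example: sum two int32 arrays eight elements at a time.
// Assumes n is a multiple of 8 and both arrays are 32-byte aligned,
// matching the cache-line-alignment point above.
static void add_i32_avx2(const int *a, const int *b, int *out, size_t n)
{
    for (size_t i = 0; i < n; i += 8) {
        __m256i va = _mm256_load_si256((const __m256i *)(a + i));
        __m256i vb = _mm256_load_si256((const __m256i *)(b + i));
        // One AVX2 instruction performs eight independent 32-bit adds.
        __m256i vs = _mm256_add_epi32(va, vb);
        _mm256_store_si256((__m256i *)(out + i), vs);
    }
}
```

In practice most programmers never write this by hand; a compiler targeting Haswell can emit the same instructions from a plain scalar loop, which is why AVX2 sits below the languages and APIs listed above.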

Salespeople make up big words and acronyms, get partners to join in their vision, and then don't actually do anything, but pat themselves on the back for having increased awareness or some other BS, when what people really wanted was some common standard to emerge. That would be HSA 🙂.
 
For all their touting of next-gen graphics capabilities, I don't care whether this is Iris or not, it's pretty terrible. It can't even come close to matching GF118, a tiny, severely underpowered 40nm GPU, in games? I'd rather Intel save the die space and pump up the CPU power instead of slapping crap iGPU functionality like this on a "high-end" desktop CPU. Even by their own admission, iGPU performance with their "best" desktop part won't even be twice as good as this... so in some instances, if these benchmarks are correct, performance will still be worse than the smallest Fermi 40nm GPU.

Utter crap.

And here's the funny part: Intel was touting GT 650M performance with Iris... but as we all know, it's not going to happen, not in real-world benchmarks. And with the GT 750M getting a solid 15-20% speed bump over the 650M, I'm willing to bet a "midrange" quad-core mobile CPU plus a GT 750M will still be cheaper than whatever Intel processors end up with Iris graphics. The only thing better you'll get is the thinner form factor. IGP graphics may be good enough for my sister, but it's still crap. I don't get the hype over buying $600-1000+ laptops that are "gaming capable" on low settings. I don't want to spend $750 or more on a laptop to play a game on its lowest settings.
 
Iris/Iris Pro hasn't been benchmarked anywhere yet. What you're looking at are benchmarks of the HD 4600, and a rumor at that. The desktop part does not include the good graphics, because desktop users don't care about the iGPU; the good graphics come only in BGA and mobile packages. The desktop will have the poorer HD 4600 graphics performance.

Additionally, most ultraportable users don't game. I know this will come as a shock, but if you want to game on a laptop, that's an entirely different category of machine that throws battery life out the window. Hell, most gaming laptops barely get an hour of battery time. Intel is focused on the mass market, which buys MacBook Airs and retina MacBook Pros by the millions. And those users want 10+ hours of battery life, and Haswell should come awfully close to that. Sorry, a machine with a GTX 780M will not get good battery life; that isn't what Intel is aiming its product at, and those buyers aren't primarily gamers. Don't focus your argument on gaming, because the mass market does not care. Aside from this, Iris Pro hasn't been benchmarked, and I'd barely consider these HD 4600 numbers benchmarks because they're from a non-reputable rumor website.

If you want a gaming laptop, you want a hot, large, and loud piece of junk that gets 30 minutes of battery life under load. Intel, OTOH, gives the mass market a machine with 10 hours of battery life and graphics performance good enough to properly drive retina displays without draining the battery at a laughable rate. Again, two different categories of machines.
 
Funny how GT2 (HD 4600) is faster than the GT 630 in synthetics (Vantage, 3DMark 11) but produces almost half the frames in real games.

It seems that GT3 will have a hard time beating the GT 630, yet people expect it to be close to the GT 650 because of Intel's 3DMark 11 slides.


As I said in the thread about iGPUs DESTROYING discrete soon... all the doom-and-gloomers keep forgetting a key point: sure, it may have fast embedded VRAM, but as soon as it touches REAL games that overflow the tiny 128MB, it's going to system RAM, and it's GG from there for iGPUs.

Now, unless Intel can embed 2GB of VRAM onto their CPUs in the future, they are going to need to put GDDR5 onto motherboards to compete. And when that happens, don't expect GPU makers to stand still with GDDR5; discrete will always be one or two steps ahead, simply because it has a huge PCB onto which makers can cram high-quality VRMs and chokes to supply the high TDP a fast GPU requires.
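A quick back-of-the-envelope check of that 128MB point (the numbers below are illustrative assumptions, not measurements from any real game):

```c
#include <stdio.h>

int main(void)
{
    /* Rough, assumed memory budget for a modern game at 1080p. */
    long w = 1920, h = 1080;
    long color   = w * h * 4;   /* one 32-bit color target                 */
    long gbuffer = 4 * color;   /* assume 4 targets in a deferred renderer */
    long depth   = w * h * 4;   /* 32-bit depth/stencil                    */
    long texture = 512L << 20;  /* assume ~512MB of resident textures      */

    long total_mb = (color + gbuffer + depth + texture) >> 20;
    printf("~%ld MB working set vs. 128 MB of embedded RAM\n", total_mb);
    return 0;
}
```

Under those assumptions the working set blows past 128MB several times over, so everything that doesn't fit spills to far slower system RAM, which is exactly the point being made above.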
 
You are posting in the CPUs and Overclocking sub-forum, in the topic about the unlocked high-end desktop Core i7-4770K, while owning an unlocked Core i5-3570K, and all of a sudden you don't care about OC. Right 🙄

So you don't know; you're just making random assumptions.
 
If GT3's 40 EUs are 2x the GT2's 20 EUs, then it's expected to perform twice as well at the same clocks, no? So my guess would be GT3e > GT3 > GT 630. But we don't know how the GT3e will perform compared to a GT3.
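Only under ideal scaling, though. Here is a minimal sketch of why doubling EUs rarely doubles frame rates (the FPS figure and the efficiency factor are assumptions for illustration, not benchmark data):

```c
#include <stdio.h>

int main(void)
{
    double gt2_fps    = 30.0;        /* hypothetical GT2 (20 EU) result */
    double eu_ratio   = 40.0 / 20.0; /* GT3 doubles the EU count        */
    double efficiency = 0.6;         /* assumed fraction of the extra EUs'
                                        throughput realized once shared
                                        memory bandwidth becomes the limit */

    double ideal   = gt2_fps * eu_ratio;
    double limited = gt2_fps * (1.0 + (eu_ratio - 1.0) * efficiency);
    printf("ideal: %.1f fps, bandwidth-limited: %.1f fps\n", ideal, limited);
    return 0;
}
```

That bandwidth ceiling is exactly what the GT3e's embedded RAM is supposed to raise, which is why GT3e vs. GT3 is the interesting comparison.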

I know that dGPUs will be better, probably GT 640 > GT3e, anyway. But I sure want to know where Intel fits in!

Question: can you overclock Intel iGPUs? I gave my Trinity two bumps in the BIOS and my score went up by 400... lol
 