Core i7-4770K is performance crippled


SOFTengCOMPelec

Platinum Member
May 9, 2013
It's worth noting that they used a database as the test application, and that will be one of the best cases.

I get the impression that it is a marketing attempt by Intel to STOP business users from avoiding expensive Xeon (etc.) server solutions by using cheaper overclocked CPUs. Hence hurting database performance is a good way of stopping business use.
 

ShintaiDK

Lifer
Apr 22, 2012
I get the impression that it is a marketing attempt by Intel to STOP business users from avoiding expensive Xeon (etc.) server solutions by using cheaper overclocked CPUs. Hence hurting database performance is a good way of stopping business use.

What business with the slightest respect for itself would use overclocked CPUs, with the reliability issues and data corruption that follow?

Everywhere I know of, such an action gets you fired, and it's considered one of the dumbest things you can possibly do.
 

Magic Carpet

Diamond Member
Oct 2, 2011
Why can't we just have one CPU with all the features? Call it Ultimate/Extreme Edition, whatever.

It's getting tediously boring with Intel :/
 

MisterMac

Senior member
Sep 16, 2011
I really dislike that they had to choose a mobile CPU with eDRAM.


That will definitely help opcode access in the cache significantly.
And I doubt we'll see Xeons with embedded RAM.


...but a database is a constant operation of locks and searches, hardly a real-world application scenario.

Would it be rough to assume a general game engine has ten times fewer lock issues than a busy, active DB?
 

SOFTengCOMPelec

Platinum Member
May 9, 2013
What business with the slightest respect for itself would use overclocked CPUs, with the reliability issues and data corruption that follow?

Everywhere I know of, such an action gets you fired, and it's considered one of the dumbest things you can possibly do.

I very much agree with you, but there must be some logic behind Intel's disabling of the TSX feature. If it's not to stop people using overclocked parts to avoid buying more expensive Intel chips, then why remove TSX?
 

beginner99

Diamond Member
Jun 2, 2009
The application also needs to have well-known locking issues for the feature to become relevant. Just because a game spawns multiple threads doesn't mean it has locking issues. It seems it may mostly affect database applications and such.

Exactly. I doubt consumer applications would profit from it even if they were coded to use it. TSX is not just for multi-threading, it is for heavy synchronization between threads, and hence parallel workloads which don't share common resources will not benefit from it at all.
And let's be honest, most multi-threading implementations in consumer apps are exactly such scenarios, like video encoding.

Simplified: HLE does not lock the common resource, on the assumption that it is not changed by another thread (similar to optimistic locking in a database). In case it was changed, the transaction is aborted and retried with the common resource actually locked.

However, if you already do proper locking (read or write locks, see http://docs.oracle.com/javase/1.5.0/docs/api/java/util/concurrent/locks/ReadWriteLock.html), the benefits from HLE get much, much smaller, or put another way, irrelevant for consumer apps.

http://www.anandtech.com/show/6290/...ll-transactional-synchronization-extensions/4
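
For anyone unfamiliar with that link, here is a minimal C++ sketch of the same reader/writer-locking idea (my own illustration, not code from the article or the Java docs): readers share the lock and only the writer is exclusive, so there is already very little contention left for lock elision to remove.

```cpp
// Minimal sketch of reader/writer locking (analogous to Java's ReadWriteLock).
// Readers take a shared lock and run concurrently; only the writer is
// exclusive, so the lock is rarely contended and HLE-style elision has
// little left to gain.
#include <map>
#include <mutex>
#include <shared_mutex>
#include <string>

class Cache {
    mutable std::shared_mutex mtx_;
    std::map<std::string, int> data_;
public:
    int get(const std::string& key) const {
        std::shared_lock lock(mtx_);          // shared: many readers at once
        auto it = data_.find(key);
        return it != data_.end() ? it->second : -1;
    }
    void put(const std::string& key, int value) {
        std::unique_lock lock(mtx_);          // exclusive: one writer
        data_[key] = value;
    }
};
```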
 

SPBHM

Diamond Member
Sep 12, 2012
What business with the slightest respect for itself would use overclocked CPUs, with the reliability issues and data corruption that follow?

Everywhere I know of, such an action gets you fired, and it's considered one of the dumbest things you can possibly do.

I don't know how all businesses work, but I can remember an article that showed a hardware company using lots of overclocked Intel CPUs, and they are probably not the only ones... even if it's a rare usage, I'm sure Intel has noticed something and is trying to prevent it from gaining more relevance...

Still, Intel has deactivated the wrong features in the past. I remember the Core 2 Quad Q8xxx and lower without VT-x support; as soon as VT-x became more relevant because of Win 7 and XP Mode, they were forced to refresh their entire lower-end lineup to add it, while the competitor already had this feature enabled on low-cost CPUs...

So yeah, I can't see this kind of tactic in a positive light: $300 CPUs with features with a lot of potential disabled... and at the same time they disabled the small turbo OC on non-K CPUs...
 

cytg111

Lifer
Mar 17, 2008
Five times faster sounds significant, even if it only translates to a 50% (or whatever) overall speed-up in practice.
Would it be rough to assume a general game engine has ten times fewer lock issues than a busy, active DB?

That's just it: for real-world numbers we have nothing yet, only vague indications; we need more data, more benches. I wish for application patches (games, apps, whatnot) soon... however unlikely that may be.

But I do know that it has put me in a holding position... I was pretty sure I was gonna get a first-gen Haswell K, but right now I can't decide between the K and the non-K... need more data.
 

bronxzv

Senior member
Jun 13, 2011
Beware! All of the K-model Haswell CPUs deliberately lack an important performance feature for multi-threading: TSX. Therefore the 'flagship' i7-4770K may actually be slower than the i7-4770, even when overclocked. D:

Note that hardware gather is also a big disappointment. I can't share the figures I have from pre-release hardware covered by the IPLA, but I'll be glad to share microbenchmark results in the coming days when I have a production CPU; it's way, way slower than you were expecting.
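
For context, "hardware gather" here means the new AVX2 VGATHER instructions. A rough, hypothetical sketch of the kind of microbenchmark comparison being hinted at (my own illustration, not the actual benchmark code) is simply the same eight indexed loads done with the gather intrinsic and with plain scalar loads, timed against each other:

```cpp
// Sketch of a gather microbenchmark: hardware gather vs. scalar loads.
// Compile with -mavx2. (Illustration only, not the benchmark under discussion.)
#include <immintrin.h>

// Hardware gather: one VGATHERDPS loads 8 floats from table[idx[i]].
__m256 gather_hw(const float* table, const int* idx) {
    __m256i vidx = _mm256_loadu_si256(reinterpret_cast<const __m256i*>(idx));
    return _mm256_i32gather_ps(table, vidx, 4);   // scale = 4 bytes per float
}

// Scalar fallback doing the same eight loads one by one.
__m256 gather_sw(const float* table, const int* idx) {
    alignas(32) float tmp[8];
    for (int i = 0; i < 8; ++i) tmp[i] = table[idx[i]];
    return _mm256_load_ps(tmp);
}
```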
 

ShintaiDK

Lifer
Apr 22, 2012
I very much agree with you, but there must be some logic behind Intel's disabling of the TSX feature. If it's not to stop people using overclocked parts to avoid buying more expensive Intel chips, then why remove TSX?

None of us knows if the issue lies in overclocking and validating the features. Imagine you overclock your K chip to 4.5 GHz with all features on, and you discover everything works, except VT-d breaks all your security (note VT-d only works on Q chipsets with LGA115x), TSX applications corrupt your data, and so on. People would scream and yell in a never-ending drama.

It's always easy to sit and nag that my X feature is missing. But look back at SB-E and PCIe 3.0 as an example.

I just don't see a marketing reason for it.
 

SOFTengCOMPelec

Platinum Member
May 9, 2013
Like with LGA2011 desktop chips?

The annoying thing about LGA2011 and its future releases is that it is usually the old/previous generation, e.g. Ivy Bridge arriving AFTER Haswell has come out, etc.

If we are paying the sky-high prices for LGA2011 (and future generations of it), and coping with its other disadvantages, such as high power consumption (hence potentially noisy cooling, room heating and expensive electricity bills), it could at least be the current generation and be released at the same time, rather than (something like) one year later.
 

NTMBK

Lifer
Nov 14, 2011
Like with LGA2011 desktop chips?

Yes, because you can buy an LGA2011 chip with all virtualization features, all cores enabled, and with overclocking enabled? Oh no wait, you can't. The only overclockable ones are the crippled Enthusiast parts, with a quarter of the cores disabled and no ECC support.
 

SPBHM

Diamond Member
Sep 12, 2012
None of us knows if the issue lies in overclocking and validating the features. Imagine you overclock your K chip to 4.5 GHz with all features on, and you discover everything works, except VT-d breaks all your security, TSX applications corrupt your data, and so on. People would scream and yell in a never-ending drama.


Isn't this just part of overclocking? As long as it all works at the default clock it should be fine... and I fail to see how unlocking the multiplier would make validating these specific features so much harder!?

This looks like just a marketing/strategic decision.
 

beginner99

Diamond Member
Jun 2, 2009
I really dislike that they had to choose a mobile CPU with eDRAM.

Agreed. That makes the test irrelevant, because the better performance could also be due to the huge L4 cache. They don't even mention the size of the DB. If the whole DB fits into that L4 cache... lol. Idiotic test.

Also, the fact that Sandy Bridge gets better performance with HLE should make you suspicious. It should not. HLE on a CPU without TSX is ignored, and it should behave exactly the same as "Software (Classic)" if the same code is used (which obviously isn't the case).
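
As a concrete (hypothetical) illustration of why that is, here is a minimal sketch of an HLE-elided spinlock using GCC's __atomic HLE hints (assuming GCC 4.8+ and -mhle; not code from the article): the XACQUIRE/XRELEASE hints are encoded as prefixes that pre-TSX CPUs simply ignore, so on Sandy Bridge this runs as an ordinary spinlock and should perform like the classic version.

```cpp
// Sketch of a lock-elision (HLE) spinlock using GCC's __atomic HLE hints
// (compile with -mhle). On a TSX-capable CPU the lock is elided and the
// critical section runs transactionally; on older CPUs the XACQUIRE/XRELEASE
// prefixes are ignored and this is just a normal spinlock, so there is no
// reason for it to be faster than the classic code path.
#include <immintrin.h>

static int lockvar = 0;

static void hle_lock(void) {
    while (__atomic_exchange_n(&lockvar, 1,
                               __ATOMIC_ACQUIRE | __ATOMIC_HLE_ACQUIRE))
        _mm_pause();   // spin politely while another thread holds the lock
}

static void hle_unlock(void) {
    __atomic_store_n(&lockvar, 0,
                     __ATOMIC_RELEASE | __ATOMIC_HLE_RELEASE);
}
```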
 

cytg111

Lifer
Mar 17, 2008
Note that hardware gather is also a big disappointment. I can't share the figures I have from pre-release hardware covered by the IPLA, but I'll be glad to share microbenchmark results in the coming days when I have a production CPU; it's way, way slower than you were expecting.

Very much looking forward to it.
 

NTMBK

Lifer
Nov 14, 2011
None of us knows if the issue lies in overclocking and validating the features. Imagine you overclock your K chip to 4.5 GHz with all features on, and you discover everything works, except VT-d breaks all your security (note VT-d only works on Q chipsets with LGA115x), TSX applications corrupt your data, and so on. People would scream and yell in a never-ending drama.

Overclocking reduces your system stability? Why did nobody say so before?!

Oh wait, everyone knows that.
 

SOFTengCOMPelec

Platinum Member
May 9, 2013
None of us knows if the issue lies in overclocking and validating the features. Imagine you overclock your K chip to 4.5 GHz with all features on, and you discover everything works, except VT-d breaks all your security (note VT-d only works on Q chipsets with LGA115x), TSX applications corrupt your data, and so on. People would scream and yell in a never-ending drama.

It's always easy to sit and nag that my X feature is missing. But look back at SB-E and PCIe 3.0 as an example.

I just don't see a marketing reason for it.

A counter-argument to what you say is that the LGA2011 series DOES allow both OVERCLOCKING and the FULL FEATURE SET at the same time, implying that the chips would work all right (OR it could be that the extra engineering time which goes into the LGA2011 series fixes any such problems).

I wish Intel would be less secretive about the real reasons why things are done the way they are, such as disabled options, poor heat-sink interface material, soldered-down CPUs, the best graphics rumoured NOT to be coming to desktop parts, etc.
 

ShintaiDK

Lifer
Apr 22, 2012
Still, Intel has deactivated the wrong features in the past. I remember the Core 2 Quad Q8xxx and lower without VT-x support; as soon as VT-x became more relevant because of Win 7 and XP Mode, they were forced to refresh their entire lower-end lineup to add it, while the competitor already had this feature enabled on low-cost CPUs...

So yeah, I can't see this kind of tactic in a positive light: $300 CPUs with features with a lot of potential disabled... and at the same time they disabled the small turbo OC on non-K CPUs...

While the Core 2 disabling actually made "somewhat" sense in marketing terms, it just doesn't make much sense for K chips.

I fully agree with you that disabling features is not nice. But that's how capitalism works: you get what you pay for. AMD would do the exact same if they were in front, or any other company for that matter. Lower-end GPUs from AMD and nVidia are another example: disabled filters and compute performance.
 

Magic Carpet

Diamond Member
Oct 2, 2011
Like with LGA2011 desktop chips?
Similar to that, yeah. I wonder why Intel intentionally fragments its market with a dozen different SKUs; it only gives headaches (choosing, predicting future uses), nothing else.

Isn't it laughable that one of the cheapest processors (e.g. the G2020) supports ECC memory, whilst options four times the price don't? Pathetic.

Now this. LMAO.
 

R0H1T

Platinum Member
Jan 12, 2013
None of us knows if the issue lies in overclocking and validating the features. Imagine you overclock your K chip to 4.5 GHz with all features on, and you discover everything works, except VT-d breaks all your security (note VT-d only works on Q chipsets with LGA115x), TSX applications corrupt your data, and so on. People would scream and yell in a never-ending drama.

It's always easy to sit and nag that my X feature is missing. But look back at SB-E and PCIe 3.0 as an example.

I just don't see a marketing reason for it.
Yeah right, when was the last time overclocking a processor broke features on a modern x86 CPU :rolleyes:

Sure, there are stability issues with some number of chips running at different clock speeds and varying voltages, and I assume this is part of the extensive validation process for servers, but overclocking breaking these features is totally unheard of!

Edit: if this is the case then AMD sure as hell is doing a way better job than Intel for the average consumer!
 

SOFTengCOMPelec

Platinum Member
May 9, 2013
Yet plenty of people claiming they have rock solid OCs. ;)

If a person is ONLY using the computer as a dedicated gaming machine, and is not worried about the risk of losing saved game progress, possibly shortening component life, and possibly getting crashes and other issues from time to time, then overclocking may be fine for them.

It's when the computer is used for serious purposes (even including serious gaming tournaments, where a crash at the wrong time could hurt the player's rankings), and for business use, that overclocking is a bad idea.