Is i7/HyperThreading performance improving for general use?


Duvie

Elite Member
Feb 5, 2001
16,215
0
71
To answer the author of this thread: in terms of professional programs like Revit, no noticeable, tangible difference. I could see a slight increase in rendering, in the neighborhood of 5-10%.

The funny thing is that most of these programs recommend you disable HT. It's hard to see why they would do that if HT gave real, tangible results.

Since HT takes advantage of idle time in a core to schedule additional work, I wonder if improvements in application software and in Windows thread scheduling will eventually make it obsolete. So I guess I always felt this was a gimmick whose benefit would be minimized in the future.

I originally documented HT use in the P4 Northwood about 10 years ago. Back when we had single cores, I found some apps, like DivX encoding, saw 10-15% increases.
 

Duvie

Elite Member
Feb 5, 2001
16,215
0
71
These days I find myself limited by applications not being multithreaded enough to take advantage of more than 2 cores, let alone 4 or more. I'm rarely pegged at 100% usage now with my Sandy Bridge i5.

I went with the non-HT Sandy Bridge because of the potential limitation on my overclock, the added price, and finally the lack of apps I use that I felt could take advantage of the added virtual cores.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
It's not just utilizing time when the whole core is doing nothing, but utilizing time when as little as a single execution port would otherwise be doing nothing. Intel's current CPUs are not scheduling one thread when another is idle, but feeding both threads to be executed by shared hardware at the same time (some parts swap between threads, like the decoders, but they will do this regardless of activity level). Most of what only services one thread per cycle tends not to be much of a bottleneck (decoders and renamers, FI, are more of a latency and power bottleneck than a throughput bottleneck; they're made fast more so that they can idle often than because lots of front-end bandwidth is needed all the time).

As the CPU gets effectively wider and wider, SMT will get better and better, since each thread can only use so much at any given time. Also, as the CPU gets wider and wider, and buffers all around larger, the potential negative impact will become less and less, as the CPU ends up with much more hardware than it needs to just execute this one thread over here. At this point, Intel has gotten to where, while HT itself may only use a very small part of each core, it is definitely the reason they can justify making other parts bigger and more complicated (dual-cores w/ HT are really popular), so it's gotten to be kind of a vicious cycle (IE, HT only takes up a few %, but you really need HT to make use of all the available registers, execution ports, and instructions per port being added each generation).

If clock speeds had kept increasing, this would not be the case: faster clocks would let each thread run faster, so it would be worth improving clock speed instead of making the CPU much wider, so long as basic ALU and AGU operations could still complete in a single cycle.

As well, as we scale out further, you will not see most software pegging all your cores. You'll have one pegged, and others running the software but not at 100%. What more HW threads can offer is to reduce the amount of work the main thread or threads have to do relative to the other threads. Outside of embarrassingly parallel problems, the low-hanging fruit has been picked (we're just waiting for the improvements that can be had to spread, as there are unnecessary bottlenecks all over the place in existing code bases). Just as we went to dual-channel memory for 10-20% gains, we'll end up doing the same with numbers of cores, and for the same reason: faster is either impossible or more expensive.

As of today, the biggest problem is that synchronizing with HT is as bad as doing it with 2x the actual cores, and any IPC at all takes just as long, if not longer, too, even if you don't need to sync. Can software moving to work queues fix that (by trading fine grained parallelism with easier implementation of coarser grained parallelism)? Can HLE help that (by optimistically executing anyway)? Can RTM get around that (by going lockless for anything but physical IO)? Time will tell. But, better software and scheduling are much more likely to improve the performance of multithreaded systems (not merely HT), than to make them obsolete, especially with power being such a big issue, these days. MT like SMT, or AMD's CMT+SMT, are dealing with inefficiencies at individual clock cycles up to some tens of clock cycles, which are basically impossible for the OS to deal with.
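The trade Cerb describes, giving up fine-grained synchronization for coarser-grained work queues, can be sketched in a few lines. This is a hypothetical illustration (not from the thread): each worker owns its chunk outright, so threads (whether on real cores or HT siblings) never touch shared state until the single combine step at the end.

```python
from concurrent.futures import ThreadPoolExecutor
import os

def process_chunk(chunk):
    # Coarse-grained unit of work: the worker owns this chunk outright,
    # so there is no locking and no inter-thread chatter while it runs.
    return sum(x * x for x in chunk)

def sum_of_squares(data, workers=None):
    # One queue item per worker-sized chunk, instead of fine-grained
    # sharing of individual elements between threads.
    workers = workers or os.cpu_count() or 4
    step = max(1, len(data) // workers)
    chunks = [data[i:i + step] for i in range(0, len(data), step)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # The only synchronization point is combining per-chunk results.
        return sum(pool.map(process_chunk, chunks))

print(sum_of_squares(list(range(1000))))  # prints 332833500
```

The point of the structure, not the arithmetic: the expensive cross-core (or cross-HT-sibling) communication happens once per chunk rather than once per element.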
 
Last edited:

Gryz

Golden Member
Aug 28, 2010
1,551
204
106
The distinction here is whether you would call your approach "high end" or something decidedly less than high-end.
For some people high-end means buying a $1000 CPU and doing triple SLI Titan. You might think that spending $100 extra on an i7 is the least you could do for a high-end gaming machine. Someone else will insist a budget under $2000 can't be called high-end.

For me, giving advice on buying a (high-end) gaming PC is all about deciding where to spend the budget.

My algorithm for a high-end gaming PC would be:
1) Buy an i5-3570K, a cheap Z77 motherboard, and 8GB of RAM.
2) Buy a cheap case, a known-brand PSU, and a cheap HDD.
3) Spend all the rest of the budget on a single video card.
4) If you could afford a GTX 680 or HD 7970, congratulations, you've got a high-end gaming machine.

5) If you have budget left, buy a 128GB SSD.
6) If you have more budget left, upgrade that to a 256GB SSD.
7) If you have considerable budget left, you might want to consider going CF/SLI. You'd need to buy a more expensive motherboard and a 2nd video card.
8) Only if you still have budget left would I recommend buying an i7-3770K.

Yes, an i7-3770K is nice. Yes, it might make your high-end system a bit more future-proof. But to be honest, I would recommend a gamer spend his budget on many other things before buying an i7.
 
Last edited:

toyota

Lifer
Apr 15, 2001
12,957
1
0
Why keep saying future-proof when, as I have already shown you, it can be used RIGHT NOW in Crysis 3?
 

Gryz

Golden Member
Aug 28, 2010
1,551
204
106
Why keep saying future-proof when, as I have already shown you, it can be used RIGHT NOW in Crysis 3?
Because the effect of an i7-3770K can be seen in only one game (Crysis 3). All other benchmarks linked in this thread show the effect of HT to be much less dramatic. Therefore I think those extra $100 can be spent more efficiently by most people.

I must agree, I was surprised to see the 2nd comparison in http://maldotex.blogspot.nl/2013/02/hyperthreading-and-real-custom-graphics.html use the very high preset. I wonder what exact feature is responsible for using so much CPU power that HT has a 30% effect at 1920x1200.
 

toyota

Lifer
Apr 15, 2001
12,957
1
0
Because the effect of an i7-3770K can be seen in only one game (Crysis 3). All other benchmarks linked in this thread show the effect of HT to be much less dramatic. Therefore I think those extra $100 can be spent more efficiently by most people.

I must agree, I was surprised to see the 2nd comparison in http://maldotex.blogspot.nl/2013/02/hyperthreading-and-real-custom-graphics.html use the very high preset. I wonder what exact feature is responsible for using so much CPU power that HT has a 30% effect at 1920x1200.
Yeah, but Crysis 3 is the very type of game people build high-end comps for. Why on earth would you build one with a CPU that cannot even let you stay above 60 fps right from the start? In the spots that are CPU-limited, it won't matter how much GPU power you have, unless you just want to turn on more AA. For just a tiny increase in the overall price of a new build, getting an i7 will let you get 100% out of any video card now and have you ready for future games and GPU upgrades.

BTW, I know from my testing that you had better restart the game if you change settings, because some settings persist. I say that because when I went to low settings, my framerate did not go up in the area in my screenshot; restarting the game, however, gave me about a 50% increase. I saw this in other areas too, where I had to restart the game for some settings to fully take effect, even though the game does not require it except for textures.
 
Last edited:

MaLDoHD

Junior Member
Sep 13, 2011
4
0
0
I would like to see those numbers verified by a more experienced test site. I would like to see HT be more beneficial in games, but that increase seems awfully high.


That's because the physical grass needs a lot of threads in Crysis 3. The performance gain is only that high in levels with a lot of grass. If you do a bench in another level, you will not see that gain.
 

JimmiG

Platinum Member
Feb 24, 2005
2,024
112
106
Games would suffer horribly from HT, btw, as an HT thread is not as fast as a real core.

That's what I'm thinking too - Wouldn't games suffer from stutters and uneven framerates if their threads kept jumping between real and virtual cores?
 

MaLDoHD

Junior Member
Sep 13, 2011
4
0
0
That's what I'm thinking too - Wouldn't games suffer from stutters and uneven framerates if their threads kept jumping between real and virtual cores?

Will a well-programmed application that correctly uses a lot of threads suffer with HT on? No.

Same for a well-programmed videogame.
 

Borealis7

Platinum Member
Oct 19, 2006
2,901
205
106
Same for a well-programmed videogame.
Your assumption that video games are well programmed is weak at best ;)
Up till now, games have been programmed to run at 30 FPS @ 720p on 5-year-old hardware.

I want to buy a 4770K in June as a way of guaranteeing I can run future games properly, since I plan to hold on to Haswell for several years, but I struggle to justify the i7 over the i5 for the same reason.
 

TheRyuu

Diamond Member
Dec 3, 2005
5,479
14
81
Games would suffer horribly from HT, btw, as an HT thread is not as fast as a real core.

You clearly have no idea how HT works.

An operating system simply sees two logical processors per CPU core. If you're only utilizing one or the other, it's just as fast as a physical core.

Hence, keeping it enabled should cost at most a negligible amount of performance, and the benefits will outweigh any downsides.

I could see a slight increase in rendering in the neighborhood of 5-10%

Furthermore, encoding programs like x264 can see a ~20% increase in performance with HT.
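For what it's worth, the OS-level view described here is easy to see from Python's stdlib: the scheduler just exposes logical CPUs, twice the physical count when HT is on (the 4C/8T figure in the comment is an assumption about a typical i7, not something the code detects).

```python
import os

# Logical CPUs the OS schedules onto: 8 on a 4C/8T i7 with HT enabled,
# 4 on the same chip with HT disabled in the BIOS.
logical = os.cpu_count()
print(logical)

# On Linux, the set of logical CPUs this process is allowed to run on:
if hasattr(os, "sched_getaffinity"):
    print(sorted(os.sched_getaffinity(0)))
```

The OS cannot tell from this count alone which logical CPUs share a physical core; that topology lives elsewhere (e.g. /proc/cpuinfo on Linux).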
 
Last edited:

Lepton87

Platinum Member
Jul 28, 2009
2,544
9
81
From Nehalem to Ivy Bridge, the gains from HT got smaller and smaller. It's hard to predict whether the same will hold true for Haswell; the core is getting wider for the first time, so maybe not. We'll have to see actual benchmarks run on a 4770K with HT disabled. AFAIK no one has done so as of yet, even though there's a ton of 4770K benchmarks.
http://ixbtlabs.com/articles3/cpu/intel-ci7-123gen-p3.html
"Speaking of Hyper-Threading, let's calculate how efficient it is in general. This should be indicative, considering how much diverse software we use in tests. So, to the 1st generation of quad-core Intel Core processors, HT provided about 11.3% boost. For the 2nd generation, that result reduced to 10.5%. Finally, to the 3rd generation, it provides 9.3% more performance. The software is same, test conditions are the same, the technology itself hasn't become any worse, but it's getting less and less useful nevertheless—for each single application at least. A single thread loads a physical core so well that it leaves no resources for anything else. However, Intel has seemingly changed something in the 3rd generation in this regard, judging by our experimental test results. You might say they could've done it in the 2nd generation instead, but it was a large step forward already."
On the other hand, software is getting better threaded, so smaller gains from HT should be offset by more apps/games using more threads. In the end, the value proposition of i5 vs. i7 might remain similar to what it's been in the past.
 
Last edited:

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
Did they test the cases where programs run worse w/ HT on, across generations (likewise, what about programs that benefit massively from HT, like Photoshop?)? No. Did they test more popular processors, IE 2C/4T? No. Did they test cases that were slower in one generation than HT off to other generations? No. Did they show individual applications? No.

HT performance has been, and will remain, program-dependent. At some point, HT should reach a point where it is never slower on than off. If there's no downside to having it on, ever, and some programs can run 30-50% faster, then any debating would be moot. However, testing that well isn't done by averages across programs, especially of batches, as HT will almost always increase aggregate throughput.

SQL, FI, loves HT:
http://sqlblog.com/blogs/joe_chang/archive/2013/04/08/hyper-threading-performance.aspx
Notice that he's measuring time taken to complete the queries, rather than just total queries/second. In that case, the increase in time per query is relatively small, while the throughput advantage is relatively large, even when not using many threads for processing the data...but the increase is substantial, and cases not benefiting aren't suffering. The equivalent in game benchmarks, FI, would be minimum framerate comparisons (which can be greater w/ HT on, depending on game).
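A sketch of the two-dimensional measurement being described: from one benchmark run, report both aggregate throughput and the tail of per-item service times, rather than quoting one number from one test and the other from a different test. This is a hypothetical helper, not code from the linked article; `p99` uses the nearest-rank method.

```python
import math

def summarize(service_times, wall_seconds):
    # service_times: per-query (or per-frame) times from ONE run, in seconds.
    # wall_seconds: wall-clock duration of that same run.
    ordered = sorted(service_times)
    n = len(ordered)
    p99_idx = min(n - 1, math.ceil(0.99 * n) - 1)  # nearest-rank 99th percentile
    return {
        "throughput_per_s": n / wall_seconds,   # the "queries per second" view
        "mean_s": sum(ordered) / n,             # average service time
        "p99_s": ordered[p99_idx],              # tail latency
        "worst_s": ordered[-1],                 # the "worst frame time" analogue
    }

# E.g. HT on might complete more items per second while the worst item
# gets slightly slower; both facts show up in the same summary.
stats = summarize([0.010] * 99 + [0.030], wall_seconds=0.60)
print(round(stats["throughput_per_s"], 1), stats["worst_s"])  # 166.7 0.03
```

Comparing HT on vs. off with a summary like this catches the case Cerb warns about: throughput up, service times also up.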

P.S. Regarding the games test:
http://forums.anandtech.com/showthread.php?t=2274887
That is good testing. iXBT has very high FPS numbers, leading me to believe they are using unrealistic settings (useless), and probably measuring average FPS rather than minimum FPS over some runs. They describe testing and results for 3D modeling, FI, but not much else. Regardless of whether the average FPS increases or not, comparing MT without including both throughput and service times does not do justice to what is being measured. It's fine for batch cases, like video encoding, but for something like a game, worst frame times are important.

What I'm saying is that one-dimensional results cannot tell enough. If the other useful dimension (service times) correlates with throughput, that's good, but that is in no way assured, without getting those results from the same testing runs.
 
Last edited:

Lepton87

Platinum Member
Jul 28, 2009
2,544
9
81
I don't know what you're trying to dispute; all they did was show that enabling HT on Nehalem gave a greater increase in performance than enabling it on IB. They didn't dispute its usefulness, as one could infer from your post. And yes, they tested scenarios where enabling HT had a negative impact on performance: they tested 5 games, and on average it caused a slight drop in performance on SB and IB, but it didn't impact Lynnfield.
 

JimmiG

Platinum Member
Feb 24, 2005
2,024
112
106
your assumption that video games are well programmed is weak at best ;)
up-til-now, games have been programmed to run 30FPS@720p on 5 year old hardware.

i want to buy a 4770K in June as a way of guaranteeing i could run future games properly since i plan to hold on to Haswell for several years, but i struggle to justify the i7 over the i5 for the same reason.

Same here. My current CPU is from 2009, and I'm glad I went with the X4 over the X3, which was the budget alternative back then. The extra money was well spent. However that gave you an extra *real* core. I have a hard time justifying the extra cost for virtual cores. It even seems many get improved performance by disabling HT.
 

Remobz

Platinum Member
Jun 9, 2005
2,564
37
91
Same here. My current CPU is from 2009, and I'm glad I went with the X4 over the X3, which was the budget alternative back then. The extra money was well spent. However that gave you an extra *real* core. I have a hard time justifying the extra cost for virtual cores. It even seems many get improved performance by disabling HT.

Interesting. Something else for me to take into account before a CPU purchase.
 

Concillian

Diamond Member
May 26, 2004
3,751
8
81
It even seems many get improved performance by disabling HT.

I think this experience is limited to software that doesn't support as many threads as the CPU can handle. In applications that are multithreaded enough -- comfortable handling 8+ threads -- there is a definite advantage to HT, to the point that HT on a quad core has similar value to a full fifth core.

Three years from now, don't you expect the applications that only handle 4 threads well to move toward handling 8+ with newer versions? They certainly aren't going to be moving the other way. With both the PS4 and XBOne looking to have 8 cores, we're pretty much guaranteed that any new gaming engines will handle 8 cores relatively well.

I see HT as a 'future proofing' addition for anyone who doesn't have an application that specifically makes use of it now. In the past I've been pretty anti-future-proofing, because you could always count on putting your future-proofing money toward significantly better hardware later, shortening your build's life expectancy instead. However, I don't see that being the case anymore. The last two years have seen fairly marginal improvements; even SB was only a moderate improvement. I can see myself advocating HT as a future-proofing purchase, even at the rather large $100 adder.
 

Magic Carpet

Diamond Member
Oct 2, 2011
3,477
233
106
For some people high-end means buying a $1000 CPU and doing triple SLI Titan. You might think that spending $100 extra on an i7 is the least you could do for a high-end gaming machine. Someone else will insist a budget under $2000 can't be called high-end.

For me, giving advice on buying a (high-end) gaming PC is all about deciding where to spend the budget.

My algorithm for a high-end gaming PC would be:
1) Buy an i5-3570K, a cheap Z77 motherboard, and 8GB of RAM.
2) Buy a cheap case, a known-brand PSU, and a cheap HDD.
3) Spend all the rest of the budget on a single video card.
4) If you could afford a GTX 680 or HD 7970, congratulations, you've got a high-end gaming machine.

5) If you have budget left, buy a 128GB SSD.
6) If you have more budget left, upgrade that to a 256GB SSD.
7) If you have considerable budget left, you might want to consider going CF/SLI. You'd need to buy a more expensive motherboard and a 2nd video card.
8) Only if you still have budget left would I recommend buying an i7-3770K.

Yes, an i7-3770K is nice. Yes, it might make your high-end system a bit more future-proof. But to be honest, I would recommend a gamer spend his budget on many other things before buying an i7.
Good points. However, high-end is a matter of opinion. Check out AdamK47's rig, in my opinion, that is high-end. (1156/1155/1150 - that's pretty much mainstream).
 

willomz

Senior member
Sep 12, 2012
334
0
0
7) If you have considerable budget left, you might want to consider going CF/SLI. You'd need to buy a more expensive motherboard and a 2nd video card.
8) Only if you still have budget left would I recommend buying an i7-3770K.

The thing is you can always add a second card later, whereas you can't just add a new processor without wasting money.

The way things are going, a 4770K could last 10 years, while video cards still need replacing every few years. Spending an extra $100 on the CPU is small change compared to all the money spent on GPUs.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Good points. However, high-end is a matter of opinion. Check out AdamK47's rig, in my opinion, that is high-end. (1156/1155/1150 - that's pretty much mainstream).

Unless I'm mistaken, technically that is one-step above "high end" as it is intentionally branded "Extreme".
 

cytg111

Lifer
Mar 17, 2008
26,199
15,604
136
Someone must have an HT Pentium 4 lying around... and a few generations of i7s. Pretty simple: benchmark some multithreaded app on one core with and without HT (SuperPi, mess with affinity first), and document the performance gain per core generation. If that percentage is rising, then the answer to the OP's question would be yes, right?
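The affinity part of that experiment can be sketched with stdlib Python on Linux: pin the process to one logical CPU, then to one physical core's two HT siblings, and compare. The sibling numbering ({0} vs. {0, 4}) is an assumption; check /proc/cpuinfo or lscpu for your actual topology. hashlib is used for the workload because it releases the GIL on large buffers, so two Python threads can genuinely run in parallel.

```python
import hashlib
import os
import threading
import time

def busy_work(buf=b"x" * (1 << 20), rounds=64):
    # sha256 over a large buffer releases the GIL, letting two threads
    # actually execute at the same time on two logical CPUs.
    h = hashlib.sha256()
    for _ in range(rounds):
        h.update(buf)
    return h.hexdigest()

def timed_run(threads=2):
    # Wall-clock time for N threads doing the same fixed workload.
    ts = [threading.Thread(target=busy_work) for _ in range(threads)]
    t0 = time.perf_counter()
    for t in ts:
        t.start()
    for t in ts:
        t.join()
    return time.perf_counter() - t0

def bench(cpus):
    # Linux-only: restrict this process to the given logical CPUs.
    # {0} = one logical CPU; {0, 4} = assumed HT siblings of core 0.
    if hasattr(os, "sched_setaffinity"):
        os.sched_setaffinity(0, cpus)
    return timed_run()

# Ratio near 2.0 = the HT sibling added nothing; nearer 1.0 = big HT gain.
# print(bench({0}) / bench({0, 4}))
```

Run the comparison per CPU generation and the ratio gives exactly the per-generation HT percentage the post asks about, for this one workload.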
 

Magic Carpet

Diamond Member
Oct 2, 2011
3,477
233
106
Unless I'm mistaken, technically that is one-step above "high end" as it is intentionally branded "Extreme".
Technically? This is not an exact science, I'm afraid. High-end is the top product for a specific market; in our case here, the computer enthusiast market.

The previous high-end Extreme family of Intel Core 2 processors shared the same platform with Celerons and Pentiums. Since then, they've moved the Extreme family to a server platform with additional features, and that's what relegates the regular i7 to a 2nd-class platform (better value, just as fast, uses less energy; the list goes on, but it's not a high-end enthusiast platform, more like a great all-around platform for the price-conscious market). People should be thankful Intel allowed a "cheap" entry with the "non-Extreme" 3930K model :)
 
Last edited: