Drawbacks from Hyper-Threading

Page 2 - AnandTech Forums

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
In Skylake, Intel duplicated some of the back-end resources and gave each of the two threads a dedicated copy. It would now be theoretically possible for a defect to kill the second thread while not hurting the first.

I don't know if Intel uses this to bin any chips, though.

Even if possible, do you think they'd try to harvest an extremely low number of chips for that? It sounds like more trouble than it's worth.
 

zir_blazer

Golden Member
Jun 6, 2013
1,164
406
136
Technically, Hyper-Threading is quite cost-effective in both die size and power consumption relative to the extra performance. Somewhat surprisingly, 10 years ago the claim was that Intel NEEDED Hyper-Threading because NetBurst had a very deep pipeline and HT was a great latency-hiding technique, which is supposedly why Conroe didn't bother with it. Yet it was good enough to make a comeback in Nehalem and later, and to make a difference, even though those aren't NetBurst either.
A funny thing is that supposedly AMD held the SMT (Simultaneous Multi-Threading) patent but never bothered to use it.

Cons of HT? Probably that it increases DPC latency. Some DPC-latency optimization guides I recall seeing call for disabling HT, along with Turbo and power saving. I don't know how much of a difference it makes, or whether the issue is on the OS task-scheduler side, which would have more to think about than if all the cores were physical.
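Not a DPC-latency measurement as such, but a rough user-space sketch of the kind of jitter those guides worry about is easy to do in Python. It only records how far timed sleeps overshoot their target under the current scheduler; the interval and iteration count here are arbitrary choices, not anything from a guide:

```python
import statistics
import time

def measure_jitter(n=200, interval_s=0.001):
    """Sleep for a fixed interval n times and record how far each wakeup
    overshoots the target -- a crude user-space proxy for scheduling
    jitter, not a direct DPC latency measurement."""
    overshoots = []
    for _ in range(n):
        start = time.perf_counter()
        time.sleep(interval_s)
        overshoots.append(time.perf_counter() - start - interval_s)
    return statistics.median(overshoots), max(overshoots)

typical, worst = measure_jitter()
print(f"median overshoot {typical * 1e6:.0f} us, worst {worst * 1e6:.0f} us")
```

Running this with HT, Turbo, and power saving toggled in the BIOS would show whether any of them actually move the worst-case number on a given machine.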
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
Technically, Hyper-Threading is quite cost-effective in both die size and power consumption relative to the extra performance. Somewhat surprisingly, 10 years ago the claim was that Intel NEEDED Hyper-Threading because NetBurst had a very deep pipeline and HT was a great latency-hiding technique, which is supposedly why Conroe didn't bother with it. Yet it was good enough to make a comeback in Nehalem and later, and to make a difference, even though those aren't NetBurst either.

That claim was probably made by people who didn't fully know the Pentium 4 architecture, or by Intel people misleading others because they weren't willing to admit mistakes in the design.

The theory seemed pretty valid, but all of that was basically thrown out when we later found out it had a nice little thing called replay. Replay made Hyper-Threading a lot less effective than it could have been: the replay hardware was already stealing the limited execution resources, and adding Hyper-Threading on top of that gets you... well, you know.

Core 2 never had HT, enabled or not.

On this subject, am I right in saying that all Skylake/Haswell etc. cores have the HT hardware built into them, and it's only disabled on the i5s/Celerons/Pentiums? So it's not that the core was designed and then HT was added for the i3s and i7s; rather, the core was very much designed with HT in mind, but in such a way that the circuits could be disabled where needed for certain models.

Are you talking about the i5s that are quad-core? That's purely segmentation. The mobile i5s do have Hyper-Threading.

On the desktop it's basically:
i3 - dual core + HT
i5 - quad core
i7 - quad core + HT
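The lineup above can be checked from software by comparing logical CPUs against physical cores. Here is a minimal sketch that parses /proc/cpuinfo-style text (Linux's format; the sample data below is made up for illustration). With HT on, the logical count is double the physical count:

```python
def count_cores(cpuinfo_text):
    """Return (physical_cores, logical_cpus) from /proc/cpuinfo-style text.

    Logical CPUs = number of "processor" entries; physical cores = unique
    (physical id, core id) pairs. With HT enabled, logical > physical.
    """
    logical = 0
    cores = set()
    phys_id = None
    for line in cpuinfo_text.splitlines():
        key, _, val = line.partition(":")
        key, val = key.strip(), val.strip()
        if key == "processor":
            logical += 1
        elif key == "physical id":
            phys_id = val
        elif key == "core id":
            cores.add((phys_id, val))
    return len(cores), logical

# Made-up sample: one socket, 2 cores, 2 threads each (an i3-style 2C/4T).
SAMPLE = """\
processor : 0
physical id : 0
core id : 0
processor : 1
physical id : 0
core id : 1
processor : 2
physical id : 0
core id : 0
processor : 3
physical id : 0
core id : 1
"""
print(count_cores(SAMPLE))  # (2, 4): 2 physical cores, 4 logical CPUs
```

On a real Linux box you would feed it `open("/proc/cpuinfo").read()` instead of the canned sample.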
 

Tuna-Fish

Golden Member
Mar 4, 2011
1,349
1,534
136
Even if possible, you think they try harvest an extremely low number of chips for that? It sounds like more trouble than worth.

They sort of get it for free. All chips obviously must be validated for both threads, so they know if any of their chips have a defect that only affects the second thread, and the chips already have a mechanism for completely disabling the second thread. I would be very surprised if they didn't bin those chips as non-HT, as the effort of doing that is basically writing a document for the policy to do so.

The more interesting question is, how do they bin chips that have a defect in the first thread? Salvaging them would require some way of spoofing the thread id, and making the chip boot on the second one. I doubt they'd bother.
 
Apr 30, 2015
131
10
81
Are there any major consumer-relevant drawbacks/cons in the implementation of Hyper-Threading? Essentially, what has to be sacrificed to double a chip's total threads? Similarly, Hyper-Threading and competing virtual-core solutions seem like a no-brainer for amplifying a chip's performance in certain scenarios without increasing the die size, so why is this not yet a commodity in the chip market?

I know there are probably major flaws in the logic behind my assumptions, but I have failed to find a suitable answer to this question, which has been gnawing at my brain regarding the entire industry!

There is some discussion in:
https://community.arm.com/servlet/J...Power Consumption for Mobile Applications.pdf
 

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
They sort of get it for free. All chips obviously must be validated for both threads, so they know if any of their chips have a defect that only affects the second thread, and the chips already have a mechanism for completely disabling the second thread. I would be very surprised if they didn't bin those chips as non-HT, as the effort of doing that is basically writing a document for the policy to do so.

The more interesting question is, how do they bin chips that have a defect in the first thread? Salvaging them would require some way of spoofing the thread id, and making the chip boot on the second one. I doubt they'd bother.

There is no second thread. There are two threads that share the same hardware. If the hardware won't run two threads, it won't run one thread.
 

TheELF

Diamond Member
Dec 22, 2012
3,973
730
126
There is no second thread. There are two threads that share the same hardware. If the hardware won't run two threads, it won't run one thread.
No, there is only one core plus some extra logic that makes it possible to use the otherwise wasted execution slots of that one core by running a second thread. If the extra logic is busted, the core itself would still run normally.
(In theory; in practice I have no idea.)
 

MajinCry

Platinum Member
Jul 28, 2015
2,495
571
136
Well, back on the TaleOfTwoWastelands forum, one of the users was having some fairly nasty stutter with his New Vegas install. It turns out that disabling Hyper-Threading on his i7 2600K fixed the issue.

So, uh, it may incur stutter in some vidya games?
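A softer workaround than disabling HT in the BIOS is pinning just the game to one logical CPU per physical core. Below is a small sketch that parses the format of Linux's thread_siblings_list files to pick one sibling per core; the sample strings are made up, and on a real system they would come from /sys/devices/system/cpu/cpu*/topology/thread_siblings_list:

```python
def primary_threads(sibling_lists):
    """Given thread_siblings_list strings (one per logical CPU), return one
    logical CPU id per physical core -- the first listed sibling of each.

    Kernel formatting varies: entries can look like "0,4" or "0-1".
    """
    primaries = set()
    for entry in sibling_lists:
        first = entry.replace("-", ",").split(",")[0]
        primaries.add(int(first))
    return primaries

# Made-up 4C/8T Intel-style numbering: core n's siblings are n and n+4.
siblings = ["0,4", "1,5", "2,6", "3,7", "0,4", "1,5", "2,6", "3,7"]
print(sorted(primary_threads(siblings)))  # [0, 1, 2, 3]
```

On Linux, `os.sched_setaffinity(pid, primary_threads(...))` would then confine that one process to those CPUs, approximating HT-off for the game while leaving it on for everything else.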
 

2is

Diamond Member
Apr 8, 2012
4,281
131
106
No, there is only one core plus some extra logic that makes it possible to use the otherwise wasted execution slots of that one core by running a second thread. If the extra logic is busted, the core itself would still run normally.
(In theory; in practice I have no idea.)

No, it wouldn't, because that "extra" stuff isn't always available. Some threads will use the extra stuff while others won't, so if you have busted "extra stuff" you have a busted physical core that wouldn't pass validation as either an HT CPU or a non-HT CPU. The exception is if what was busted was simply the HT logic itself.
 

Tuna-Fish

Golden Member
Mar 4, 2011
1,349
1,534
136
There is no second thread. There are two threads that share the same hardware. If the hardware won't run two threads, it won't run one thread.

Since Skylake, some of the back-end resources are duplicated, with a dedicated structure per thread. A defect there would kill one thread without impacting the other.
 

podspi

Golden Member
Jan 11, 2011
1,965
71
91
Pentium-M didn't have it.

Not going to dig up a source on a 10-year-old part.
Because it's not true?


Core and Core 2 (the lost generations of Core) did not have SMT. Would love to be proven wrong, doubly so if someone knew how to bring such a feature to life.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
Since Skylake, some of the back-end resources are duplicated, with a dedicated structure per thread. A defect there would kill one thread without impacting the other.

I would think the chances are low, no? The extra structures required still take up a very small part of the core, and Intel was known to have high standards for yields. That doesn't mean it can't happen.

I can think of other binning scenarios though:
1. Can't clock at full speeds with HT on, only with HT off
2. Consumes more power than normal with HT on, but ok with HT off
 

VirtualLarry

No Lifer
Aug 25, 2001
56,339
10,044
126
Because it's not true?


Core and Core 2 (the lost generations of Core) did not have SMT. Would love to be proven wrong, doubly so if someone knew how to bring such a feature to life.

AFAIK, it's not true either.

The only thing that gives me pause is the screenshots of beta BIOSes with a "Core Multi-Plexing: ENABLE / DISABLE" setting. Whether that disables one of the two cores of a dual-core Core 2 die, or whether it had something to do with Hyper-Threading, I don't know.