Should AMD implement HT for nothing more than a cooler running core?

MadRat

Lifer
Oct 14, 1999
11,910
238
106
Hyperthreading appears to have the benefit of keeping HT-enabled P4 cores cooler than non-HT P4's. I'd think that AMD could take a page from Intel and use HT technology to keep their cores running cooler. The principle appears to be that when parts of the chip are used in alternation, it has a quantitatively positive effect on the overall heat emissions of the core. The argument is not whether HT would benefit the AMD design more than an Intel design; it's whether AMD should use it to run their chips cooler.
 

DerwenArtos12

Diamond Member
Apr 7, 2003
4,278
0
0
Well, the HT name is trademarked, for one, so they would have to come up with a new name, but the technology should be implemented if they can. There is no good reason why a performance-enhancing technology such as HT or an equivalent should not be implemented.

BTW: just imagine what it would do to their rating system then! The new Barton 4000+ that runs at 2.5GHz!! LOL!
 

INemtsev

Senior member
Jul 24, 2003
260
0
0
Yeah, the P4 made lots of improvements, but AMD keeps its biggest advantage: three floating-point pipelines. I think their new x86 extensions oughta do the trick. They rated it the #1 CPU of the year in one of the magazines because of its preparation for 64-bit applications.
 

AndyHui

Administrator Emeritus / Elite Member / AT FAQ M
Oct 9, 1999
13,140
6
81
HyperThreading INCREASES the heat output on the Pentium 4 processor as more of the die is in use. It doesn't produce a cooler running CPU.

AMD would need a significant redesign of the core if they wanted to do a single core implementation of HT.
 

Soulkeeper

Diamond Member
Nov 23, 2001
6,712
142
106
Yeah, AMD will take their own route.
There are many ways to compete with any performance gain that HT offers.
 

MadRat

Lifer
Oct 14, 1999
11,910
238
106
Originally posted by: AndyHui
HyperThreading INCREASES the heat output on the Pentium 4 processor as more of the die is in use. It doesn't produce a cooler running CPU.

I've only heard the opposite. It should take more energy to increase heat from the core; with HT enabled the P4 chip is rated identically to the non-HT P4. However, I've been hearing nothing from forum to forum but that the temperatures decrease when HT is enabled.
 

RaynorWolfcastle

Diamond Member
Feb 8, 2001
8,968
16
81
Originally posted by: MadRat
Originally posted by: AndyHui
HyperThreading INCREASES the heat output on the Pentium 4 processor as more of the die is in use. It doesn't produce a cooler running CPU.

I've only heard the opposite. It should take more energy to increase heat from the core; with HT enabled the P4 chip is rated identically to the non-HT P4. However, I've been hearing nothing from forum to forum but that the temperatures decrease when HT is enabled.

See for yourself here Intel P4 datasheets
If you look under the thermal specifications you find the following:
2.4A (Vcore: 1.525V, 400 MHz bus, no HT) - 59.8W TDP
2.4B (Vcore: 1.525V, 533 MHz bus, no HT) - 59.8W TDP
2.4C (Vcore: 1.525V, 800 MHz bus, HT-enabled) - 66.2W TDP

Obviously, since HT increases the efficiency of the CPU, it must be increasing the amount of work done per cycle. Since all useful work comes with heat losses, the additional work being done every cycle means additional heat, and that is why HT-enabled P4's run hotter than their non-HT brothers (die area is the same but power output increases).
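For what it's worth, the gap in those figures is small but real. Here's a trivial back-of-the-envelope check using nothing but the two TDP numbers quoted above (my own sketch, not anything else from the datasheet):

```c
#include <stdio.h>

int main(void)
{
    /* TDP figures quoted above from the Intel P4 datasheets (watts) */
    double tdp_no_ht = 59.8;   /* 2.4A / 2.4B, HT disabled */
    double tdp_ht    = 66.2;   /* 2.4C, HT enabled         */

    double delta = tdp_ht - tdp_no_ht;
    printf("Extra TDP with HT: %.1f W (about %.0f%% more)\n",
           delta, 100.0 * delta / tdp_no_ht);   /* 6.4 W, roughly 11% */
    return 0;
}
```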
 

ahfung

Golden Member
Oct 20, 1999
1,418
0
0
Various methods exist for running an AMD chip cooler at idle. WPCREDIT is one of them. Currently HLT doesn't work for AMD without tweaking. Who's to blame: M$? The mobo? Or AMD itself?
 

Duvie

Elite Member
Feb 5, 2001
16,215
0
71
They rated it the #1 CPU of the year in one of the magazines because of its preparation for 64-bit applications.

And that is a f-ing joke....LOL!!!! We will soon all see, but I think many are going to be quite disappointed after a 2-year wait when AMD only manages a slight lead over the current P4 3.2GHz in 32-bit apps, because there will be no 64-bit desktop applications from Windows ready. Then of course Prescott will be released at 3.4GHz with a host of improvements and they will take back the lead, and all that BS and hype will have been a waste. If Intel ran a bullshit PR rating scheme like AMD, then the Prescott could likely be called a 3600-3700+ Northwood.
 

mechBgon

Super Moderator / Elite Member
Oct 31, 1999
30,699
1
0
Duvie, the same could've been said of the Pentium4 and its useless (at the time) SSE2 instruction set, back in 2000. Give 'em a chance. :(
 

Duvie

Elite Member
Feb 5, 2001
16,215
0
71
Originally posted by: mechBgon
Duvie, the same could've been said of the Pentium4 and its useless (at the time) SSE2 instruction set, back in 2000. Give 'em a chance. :(


I agree when it comes to 64bit, but how long are ppl going to wait??? I mean a lot of ppl in this forum change cpus and systems like underwear and they are not going to sit around and wait for promises. AMD can get that gain later but what if the p4 prescott comes out good and then they plain just outramp AMD like they have over the last 1-1/2 years???
 

mechBgon

Super Moderator / Elite Member
Oct 31, 1999
30,699
1
0
You've got a good point there. AMD needs to really hit the ground running. I'm optimistic that 64-bit Windows and 64-bit game code will give them at least something to showcase the AMD64 technology. Remember that Tim Sweeney of Epic is predicting about a 15% boost from the 64-bit OS/code combo, and is planning to release 64-bit versions of games. So that's hopeful to me.

I'm not going around with my eyes closed. From the previews we've seen so far, like at XBit, it's clear that there will be stuff that the Athlon 64's high IPC just doesn't help with; they need MHz too. Here's hoping they manage to wring some good MHz bumps out of those CPUs; everyone benefits from the competition. :cool:
 

Jeff7181

Lifer
Aug 21, 2002
18,368
11
81
Originally posted by: MadRat
Originally posted by: AndyHui
HyperThreading INCREASES the heat output on the Pentium 4 processor as more of the die is in use. It doesn't produce a cooler running CPU.

I've only heard the opposite. It should take more energy to increase heat from the core; with HT enabled the P4 chip is rated identically to the non-HT P4. However, I've been hearing nothing from forum to forum but that the temperatures decrease when HT is enabled.

Think about it: does that really seem logical? Hyper-Threading makes use of otherwise unused parts of the processor... so it's doing more work. Using more of the CPU and doing more work will create more heat... not less. Not sure where you heard that it makes it run cooler, but that just sounds ridiculous to me.

... did you read it in an article published by the Inquirer? ;)
 

orion7144

Diamond Member
Oct 8, 2002
4,425
0
0
Originally posted by: Duvie
Originally posted by: mechBgon
Duvie, the same could've been said of the Pentium4 and its useless (at the time) SSE2 instruction set, back in 2000. Give 'em a chance. :(


I agree when it comes to 64bit, but how long are ppl going to wait??? I mean a lot of ppl in this forum change cpus and systems like underwear and they are not going to sit around and wait for promises. AMD can get that gain later but what if the p4 prescott comes out good and then they plain just outramp AMD like they have over the last 1-1/2 years???

Intel just announced that they are going to change the stepping on the Prescott so they can decrease the heat dissipation down from 106W, so I guess they have been keeping up on the forums and listening to all of us talk about how hot they are going to be. They also said this will not delay the release of the 90nm Prescotts.
 

AndyHui

Administrator Emeritus / Elite Member / AT FAQ M
Oct 9, 1999
13,140
6
81
They do a respin/redesign of the core for the new stepping to decrease heat. Wingznut probably has a more detailed explanation.
 

KF

Golden Member
Dec 3, 1999
1,371
0
0
>Intel just announced ...
>so I guess they have been keeping up on the forums and listening to all of us
> talk about how hot they are going to be

You're kidding, right? Intel knows what they want to do about designing CPUs without consulting a tiny bunch of kibitzers.

>I agree when it comes to 64bit, but how long are ppl going to wait??? I mean a lot of ppl in this
> forum change cpus and systems like underwear and they are not going to sit around and wait for promises.

I'm sure AMD would like to keep everybody happy and interested, including the people that spread the good word for AMD on their own, but AMD sells CPUs to a mass market. That has to be their first business concern. Both Intel and AMD are AHEAD of the CPU mass market, due to mutual competition. That's why prices are so depressed. Come on; under $200 for a great-performing Intel CPU? That's something Intel never wanted to get into. That's where they want AMD.

AMD is putting 64-bit capability out there now on the theory that the capability has to be present before developers will develop for it. The market for 64-bit use is slight to non-existent (except in servers), and will be until there is a lot of software support. Just because of the way people behave, AMD would have preferred Intel to lead and set the standard for 64-bit CPUs, but Intel has a different idea: Itanium. So AMD is leading as best they can, and hoping it works out. If it works, it will be a first. In a market-share sense, everyone always goes along with Intel, not AMD. AMD designed the Athlon 64 so that the 64-bit capability does not add greatly to the transistor count. Therefore having it present but unused is a minor hit on chip cost. It probably accounts for fewer transistors than AMD's super-duper FPU, which goes practically unused in 98% of programs except for games.

I read Intel's papers on Hyper-Threading. Unfortunately, an explanation of why the concept was possible to implement and actually boost performance, rather than hurt it (due to resource conflicts), was not present. It is simply a fact that the designers' simple methods of resolving conflicts work in practice. One key statistic from Intel stands out. Intel claims that actual measurements (without HT) show that the instructions-per-cycle on real software has been ONE statistically, despite the fact that the P4 is theoretically capable of multiple instructions per cycle. (Kind of a shock to me. I didn't know commercial programmers were that incompetent.) So real resource usage is so sparse that hyperthreading almost has to help.
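To make that IPC point concrete, here's a toy sketch of my own (plain C, not from Intel's paper): a loop whose every iteration depends on the previous result can't keep a wide core busy no matter how many execution units it has, while independent work can. A compiler at -O2 may optimize much of this away, so treat it as an illustration of the dependence pattern, not a benchmark.

```c
#include <stdio.h>

#define N 100000000UL

int main(void)
{
    /* Serially dependent chain: each iteration needs the previous
       result, so even a wide superscalar core issues roughly one
       useful arithmetic op per cycle here; its other units sit idle. */
    unsigned long chained = 1;
    for (unsigned long i = 0; i < N; i++)
        chained = chained * 3 + 1;

    /* Independent streams: four accumulators with no dependences
       between them, so the same core can overlap them and push the
       IPC on this fragment well above one. */
    unsigned long a = 0, b = 0, c = 0, d = 0;
    for (unsigned long i = 0; i < N; i++) {
        a += i; b += i * 2; c += i * 3; d += i * 5;
    }

    printf("chained=%lu independent=%lu\n", chained, a + b + c + d);
    return 0;
}
```

The idle issue slots in the first loop are exactly the slack a second hardware thread can soak up, which is why HT almost has to help on that kind of code.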

As I think people accept, AMD does get better instructions-per-cycle numbers. Without that, their CPU clock would be too slow to compete. And AMD has already committed to improving the performance of the Athlon's successor, the Athlon 64, by executing more instructions per cycle. Because of that choice, hyperthreading can't be expected to help the Athlon 64 the way it helps P4s.

Intel does not claim they invented HT. They outline that, as a concept, it has been around in papers, although obviously those didn't use Intel's brand name for it. Intel says they chose Hyper-Threading because every other possibility for improving performance increases the transistor count tremendously more. But HT was a very risky choice. The complexity of verifying that a design is correctly keeping track of two virtual CPUs using the same collection of resources is squared, and with it so is the possibility of missing a design mistake. There is in fact no humanly possible way to verify every CPU1-vs-CPU2 state. They had to devise new checking techniques, among them mathematical proofs of the algorithms. BTW, the verification team always finds some design mistakes while testing. They always attempt to find the errors by running a software model on arrays of supercomputers before they go to silicon, but they always find they have missed a few when the silicon is first run. It is that fact of life, combined with the impossibility of checking everything, that made HT so risky. (Remember the legendary FPU bug?)

Intel says that this is just their first implementation of HT, and now that they know it can work, they are going to do it more cleverly and effectively. I suppose we can look forward to 4 and 16 virtual CPUs on a chip.

Considering how difficult a correct HT design is to achieve, and how long Intel worked on it, I don't think AMD is going to try it just to reduce heat, even if it would. The number of people Intel can put on a project, if necessary, has to be fabulously larger than what AMD can. Still, I would think something from the way Intel did HT can be adapted to the way AMD handles instruction streams. AMD is somehow getting many more instructions per cycle out of the Athlon 64, when everyone, including Intel, was saying that avenue had passed its useful limit years ago. They said there was not that much "parallelism" in instruction streams. So how does AMD do it, when P4s are only getting one instruction per cycle?
 

yhelothar

Lifer
Dec 11, 2002
18,408
39
91
I saw this thing where it showed that AMD already has patents for HT, or something similar. They had it patented way before P4s were out.
Here it is
 

orion7144

Diamond Member
Oct 8, 2002
4,425
0
0
Originally posted by: virtualgames0
I saw this thing where it showed that AMD already has patents for HT, or something similar. They had it patented way before P4s were out.
Here it is

Well before the P4's HT was out, but not before Intel had its own patent back in '94. I guess the 5 years it took AMD to come up with something Intel already did isn't that bad, considering it is AMD.
 

Wingznut

Elite Member
Dec 28, 1999
16,968
2
0
Originally posted by: virtualgames0
I saw this thing where it showed that AMD already has patents for HT, or something similar. They had it patented way before P4s were out.
Here it is
Considering that Intel and AMD have a technology cross-licensing agreement, there's no reason AMD couldn't implement some sort of HT.

Guys, don't forget it takes years to design and manufacture a cpu. It's not like AMD could decide to add HT... And then have it done by Xmas.

 

orion7144

Diamond Member
Oct 8, 2002
4,425
0
0
Originally posted by: Wingznut
Originally posted by: virtualgames0
I saw this thing where it showed that AMD already has patents for HT, or something similar. They had it patented way before P4s were out.
Here it is
Considering that Intel and AMD have a technology cross-licensing agreement, there's no reason AMD couldn't implement some sort of HT.

Guys, don't forget it takes years to design and manufacture a cpu. It's not like AMD could decide to add HT... And then have it done by Xmas.

Good point
 

MadRat

Lifer
Oct 14, 1999
11,910
238
106
The idea of SMT isn't owned by Intel, nor would the algorithms used by Intel's SMT approach (Hyper-Threading = Intel's SMT) necessarily be included in their technology exchange.
 

KF

Golden Member
Dec 3, 1999
1,371
0
0
The Inquirer says about the patent:

>It specifically says that in one test, a CPU executed at least two threads concurrently

Hmmm... makes it sound as if AMD has a real CPU with HT. Doesn't sound very likely, does it? The Inquirer's link to the patent doesn't get there. This is the text of that link:

http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&u=/netahtml/search-adv.htm&r=1&p=1&f=G&l=50&d=ft00&S1=5,944,816.WKU.&OS=PN/5,944,816&RS=PN/5,944,816

I doubt if I'd understand the patent anyway. (I've looked at patents before!)

One thing about patents: one party can patent one aspect needed to implement a solution, and another party a second, and another party a third. By the time anyone gets to making a real version, they might need all three patents.

In their paper, Intel gives the ways to execute threads concurrently. It kind of depends on what counts as simultaneous. Here's my interpretation, listed by how much is shared (a minimal two-thread sketch follows the list):

1) Two entirely separate computers can execute parts of the same program.

2) Dual processors sharing the same bus and memory, and maybe even the same chip, can do it.

3) Simplest of all: one processor can be switched between the several parts (this is what Windows has normally done since Windows 95). It can be done by just having a timer interrupt the CPU. Before that, Windows had voluntary switching. Each program was supposed to have spots, while it was executing, where it returned control to the OS, which in turn switched to another program. Needless to say, programmers being the way they are, every program wanted to hog everything for itself.

4) The hardest: one chip can implement two virtual processors sharing the same CPU resources (cache, buffers, registers, execution units.)
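From the software side, all four arrangements look the same: a program just starts two threads and lets the OS and hardware decide whether they run truly simultaneously (#1, #2, #4) or get time-sliced (#3). A minimal POSIX threads sketch of my own, not tied to any of the patents or papers mentioned above:

```c
#include <pthread.h>
#include <stdio.h>

/* Each thread sums half of a range. On an HT-enabled or dual CPU the
   two halves can genuinely run at the same time; on a single non-HT
   CPU the OS just switches between them on a timer (case 3 above). */
static void *sum_range(void *arg)
{
    unsigned long *job = arg;   /* job[0]=start, job[1]=end, job[2]=result */
    unsigned long total = 0;
    for (unsigned long i = job[0]; i < job[1]; i++)
        total += i;
    job[2] = total;
    return NULL;
}

int main(void)
{
    unsigned long lo[3] = { 0UL,        50000000UL,  0UL };
    unsigned long hi[3] = { 50000000UL, 100000000UL, 0UL };
    pthread_t t1, t2;

    pthread_create(&t1, NULL, sum_range, lo);
    pthread_create(&t2, NULL, sum_range, hi);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    printf("sum = %lu\n", lo[2] + hi[2]);   /* build with: gcc -pthread */
    return 0;
}
```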

Because of the way x86 CPUs are already implemented as virtual CPUs, #4 is not as incredible as it once must have seemed. How can two virtual CPUs share the same 8 registers? They can't, really. CPUs already have a lot of registers behind the scenes, and those registers are renamed as needed, even with one virtual CPU. How can two virtual CPUs use the same execution unit at the same time? They can't. There are several execution units already available to be used simultaneously, even with one virtual CPU.
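A crude way to picture that renaming (purely my own toy model in C, nothing like the real hardware): each virtual CPU's eight architectural registers are just names that get mapped onto a much larger physical register file, so writes from the two threads never actually collide.

```c
#include <stdio.h>

#define ARCH_REGS 8      /* architectural registers visible to x86 code     */
#define PHYS_REGS 128    /* assumed larger physical file behind the scenes  */
#define THREADS   2      /* two virtual CPUs sharing the same physical file */

/* Rename table: (virtual CPU, architectural register) -> physical register.
   Real hardware recycles physical registers through a free list; this toy
   just hands them out round-robin to show the idea. */
static int rename_table[THREADS][ARCH_REGS];
static int next_free;

static int rename_dest(int thread, int arch_reg)
{
    rename_table[thread][arch_reg] = next_free;
    next_free = (next_free + 1) % PHYS_REGS;
    return rename_table[thread][arch_reg];
}

int main(void)
{
    /* Both virtual CPUs write "EAX" (architectural register 0), but each
       write lands in a different physical register. */
    printf("vCPU 0 writes EAX -> p%d\n", rename_dest(0, 0));
    printf("vCPU 1 writes EAX -> p%d\n", rename_dest(1, 0));
    printf("vCPU 0 writes EAX -> p%d\n", rename_dest(0, 0));
    return 0;
}
```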

Sharing the cache does seem to cut the cache per CPU in half, and you would think that would be a serious performance hit. It also opens the possibility that one thread would kick out all the key instructions and data from the cache that the other needs. Both threads could be doing it to each other, in fact. In that case, you would get cache thrashing and effective speed would slow to a crawl, worse than if you had no cache. Intel simply put a limit on how much a virtual CPU can hog. They don't go into how they managed to do it.

Keeping track of which virtual CPU is using what seems difficult to me. Imagine trying to cook up two complex dishes simultaneously, when either recipe can use any measuring cup, bowl, or pot, and a given container sometimes has things for one recipe and sometimes for the other. Obviously everything has to have a temporary label, but even so, how is the CPU keeping the two sequences separate?

Since both recipes might need the same pot when it is occupied by the other, how can you be sure that one recipe does not block the other from progressing? Intel resolved this by always giving the opposite CPU the next turn, without worrying about whether this produced optimal speed. Keeping both programs progressing is the priority.
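Boiled down to a toy, that "give the other guy the next turn" policy looks something like this (my own sketch of the idea, not Intel's actual arbiter):

```c
#include <stdio.h>

int main(void)
{
    int last = 1;             /* virtual CPU that got the previous slot */
    int ready[2] = { 1, 1 };  /* pretend both have work every cycle     */

    for (int cycle = 0; cycle < 8; cycle++) {
        int pick;
        if (ready[0] && ready[1])
            pick = 1 - last;              /* both ready: strictly alternate   */
        else
            pick = ready[0] ? 0 : 1;      /* only one ready: it gets the slot */
        last = pick;
        printf("cycle %d: issue slot goes to virtual CPU %d\n", cycle, pick);
    }
    return 0;
}
```

Strict alternation isn't necessarily the fastest schedule, but it guarantees neither virtual CPU can starve the other, which matches the priority described above.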

Back to the patent: I wonder if it is really HT in the Intel sense?

Patents can be conflicting. The patent applicant pays for a patent search, which hopefully turns up conflicts. Just because it may not doesn't mean the patent office's decision guarantees your rights. It can be overturned in court, and there are a lot of important patents that were. The patent on the automobile was overturned. I think the patent on the laser, or at least some lasers, was overturned.