Intel announces reverse HyperThreading...


SunnyD

Belgian Waffler
Jan 2, 2001
32,675
146
106
www.neftastic.com
Itanium was a better architecture than x86 (on paper). Unfortunately, VLIW/EPIC requires such massive support from the compiler that it failed miserably. The irony is that Intel is actually far better at writing compilers for its own CPUs (and, amusingly, even better ones for AMD's) than almost any other compiler developer out there, yet it still couldn't make Itanium work right.
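To make the compiler burden concrete, here's a toy C++ sketch (my own illustration, nothing from Intel's actual toolchain). EPIC asks the compiler to find instruction-level parallelism statically at compile time; when a loop body has independent operations that's easy, but a loop-carried dependence leaves the wide issue slots empty, and there's no out-of-order hardware to fill them:

Code:
#include <cstddef>

// Case the compiler wins: no statement reads another's result (assume
// the pointers don't alias), so an EPIC/VLIW compiler can statically
// pack the four multiplies into wide bundles that issue in parallel.
void bundleable(float* a, float* b, float* c, float* d, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i) {
        a[i] *= 2.0f;
        b[i] *= 2.0f;
        c[i] *= 2.0f;
        d[i] *= 2.0f;
    }
}

// Case the compiler loses: every iteration needs the previous result
// (a loop-carried dependence on acc), so there is nothing to bundle
// and most of the wide issue slots go to waste.
float serial_chain(const float* a, std::size_t n) {
    float acc = 1.0f;
    for (std::size_t i = 0; i < n; ++i)
        acc = acc * a[i] + 1.0f;
    return acc;
}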
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
It's called a "GPU". You should look it up sometime.

GPU, a la APU, is for well-parallelizable code... quite the opposite of this thread's topic (which is itself a repost of an Xbit article, which reposts an Intel Europe blog post, which in turn merely reposts information Intel published last year!).
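To illustrate the distinction with a toy C++ example (mine, not from the article): a GPU eats the first loop below alive, because every element is independent; the second loop is exactly the kind of serial dependence chain that speculative multithreading targets and a GPU can't help with:

Code:
#include <cstddef>

// Well-parallelizable: every element is independent, so thousands of
// GPU threads could each handle one i with no coordination at all.
void saxpy(float* y, const float* x, float a, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}

// Inherently serial: iteration i cannot start until iteration i-1 has
// finished (Newton's method for sqrt(2)). No number of GPU cores helps;
// this is the single-threaded case the thread's topic is about.
float newton_sqrt2(float x0, std::size_t steps) {
    float x = x0;
    for (std::size_t i = 0; i < steps; ++i)
        x = 0.5f * (x + 2.0f / x);
    return x;
}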
 

SunnyD

Belgian Waffler
Jan 2, 2001
32,675
146
106
www.neftastic.com
What research is AMD conducting in this area?

A GPU is speculative multithreading?

Learn something new every day.

No, a GPU is a massively parallelizable device that could potentially take advantage of techniques identical to Intel's project in order to enhance single-threaded applications. Try thinking instead of using sarcasm.

GPU, a la APU, is for well-parallelizable code... quite the opposite of this thread's topic (which is itself a repost of an Xbit article, which reposts an Intel Europe blog post, which in turn merely reposts information Intel published last year!).

See above - AMD's Fusion goal likely involves research into how to use the parallel talents of the fused GPU to augment traditional CPU tasks. GPU cores, which incidentally grow more general-purpose with every iteration, naturally lend themselves to a hybridized design where resources on the GPU side can execute speculative threads for the CPU.
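For the skeptics, here's roughly what speculative threading means, as a toy sketch in plain C++ threads (my own simplification; hardware proposals like Intel's do this transparently, with value prediction and automatic squashing). A helper thread starts a later stage of a single-threaded program early, on a guessed input, and the result is kept only if the guess checks out:

Code:
#include <future>
#include <iostream>

// Two dependent stages of a single-threaded program: stage2's input
// (its "live-in") is stage1's output, so normally they cannot overlap.
long stage1(long seed) {
    for (int i = 0; i < 50000000; ++i) seed = seed * 31 + 7;
    return seed;
}
long stage2(long livein) {
    for (int i = 0; i < 50000000; ++i) livein = livein * 17 + 3;
    return livein;
}

int main() {
    // Guess stage2's live-in before stage1 finishes. Real designs use
    // hardware value prediction; a hard-coded guess stands in here.
    const long predicted = 0;

    // The speculative thread starts stage2 early on the guess...
    std::future<long> spec = std::async(std::launch::async, stage2, predicted);

    // ...while the main thread runs stage1 concurrently.
    const long actual = stage1(1);

    long result;
    if (actual == predicted) {
        result = spec.get();      // guess was right: commit; stages overlapped
    } else {
        spec.wait();              // guess was wrong: squash the speculative work
        result = stage2(actual);  // and re-execute with the real input
    }
    std::cout << "result = " << result << '\n';
}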

To answer Phynaz's question more directly: just because nothing has been announced, do you assume AMD isn't working on it?
 

JFAMD

Senior member
May 16, 2009
565
0
0
What research is AMD conducting in this area?

We are doing things, don't worry about that.

If IntelUser is right and this shows up in the 2015 timeframe, I wonder about the value. The world is moving to more and more cores, and software will come to expect that. The trend is toward more cores and more threads, and away from fat, dependent threads.

This feels a lot like looking backwards instead of forwards. It might be interesting today, but software gets more threaded with every passing day, so eventually the value falls apart.

Remember when Windows dropped support for 16-bit apps? There were all kinds of tricks people came up with to keep those things running beyond that point, but eventually all of that fell away, and nobody asks about 16-bit apps anymore.

That will happen to single-threaded apps eventually; it is just a matter of time.

Legacy support is great to a point, but eventually it holds you back. Would you rather people spend time and money trying to make 2015 platforms deal with pre-2005 problems or have 2015 platforms better utilize 2015 software?
 

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
We are doing things, don't worry about that.

If IntelUser is right and this shows up in the 2015 timeframe, I wonder about the value. The world is moving to more and more cores, and software will come to expect that. The trend is toward more cores and more threads, and away from fat, dependent threads.

This feels a lot like looking backwards instead of forwards. It might be interesting today, but software gets more threaded with every passing day, so eventually the value falls apart.

Remember when Windows dropped support for 16-bit apps? There were all kinds of tricks people came up with to keep those things running beyond that point, but eventually all of that fell away, and nobody asks about 16-bit apps anymore.

That will happen to single-threaded apps eventually; it is just a matter of time.

Legacy support is great to a point, but eventually it holds you back. Would you rather people spend time and money trying to make 2015 platforms deal with pre-2005 problems or have 2015 platforms better utilize 2015 software?

JF, there are many, many workloads that are sequential in nature. Surely AMD knows this. In particular, I can think of Business Intelligence applications as an example. Can you comment on any particular initiative underway at AMD to address these types of workloads?

I find it interesting that you say AMD is working on this, yet it involves "looking backwards". Applying logic to your statement, it then follows that AMD is looking backwards :)
 

JFAMD

Senior member
May 16, 2009
565
0
0
I didn't say we were working on THAT specifically. I was replying to the "what is AMD doing in this area" question.

As you can imagine we won't get into specifics.

For instance, before we revealed Magny-Cours, when people asked whether we were doing HT, the answer would have been "we are working on things." We just go about getting more threads in a different way.

There are apps that will gain more performance in this manner. The point I was making was that as time goes on, this is a technology with diminishing returns because each year will have fewer single threaded apps than the year before.

If you are trying to make movies better for flat-panel TVs, do you focus on a VHS improvement or a DVD/Blu-ray improvement? Is a 40% increase in VHS quality better than a 10% increase in Blu-ray quality?
 

Voo

Golden Member
Feb 27, 2009
1,684
0
76
There are apps that will gain more performance in this manner. The point I was making was that as time goes on, this is a technology with diminishing returns because each year will have fewer single threaded apps than the year before.
But you can't deny that there are lots and lots of applications that rely on purely sequential algorithms, where it's either hard (still a research problem; think sparse graphs, for example) or completely impossible to implement efficient parallel algorithms, and that won't change anytime soon. And don't forget all those algorithms that just don't scale well above 8 or 16 cores (or 32 - does that matter in the long run?). So that's something completely different from the 16-bit to 32-bit transition.
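Amdahl's law puts numbers on that scaling wall. A quick back-of-the-envelope in C++ (my own sketch): even with 90% of a program parallelized, the serial 10% caps the speedup below 10x no matter how many cores you throw at it:

Code:
#include <cstdio>

// Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n), where p is the
// parallel fraction of the program and n is the core count.
double speedup(double p, int n) { return 1.0 / ((1.0 - p) + p / n); }

int main() {
    const double p = 0.90;  // 90% of the work parallelizes
    const int cores[] = {1, 2, 4, 8, 16, 32, 64, 1024};
    for (int n : cores)
        std::printf("%4d cores -> %.2fx\n", n, speedup(p, n));
    // Prints roughly 1.00, 1.82, 3.08, 4.71, 6.40, 7.80, 8.77, 9.91:
    // past ~16 cores the leftover serial fraction dominates and extra
    // cores buy almost nothing.
}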

No doubt there are lots of applications that can be programmed to benefit from more cores, and we can make lots of advances with better libraries, tools, and whatnot (who wants the average programmer out there to think about hook and jump techniques?), but I don't think we should belittle the share of sequential workloads, especially in the consumer space.

So I don't think this kind of research is "backwards thinking"; you could even make the point that, since it's hard to get enough parallelism out of a lot of algorithms, this kind of research will only get more interesting with the vastly increasing core counts we'll no doubt see.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
This is a further-developed version of the "Mitosis" project they showcased a few years ago. AnandTech has an article on it.
 