Interesting Bulldozer speculation found

jones377

Senior member
May 2, 2004
463
64
91
http://citavia.blog.de/

Additional thoughts here (By Hans de Vries)

It all alludes to what AMD calls clustered multithreading. If Bulldozer does indeed come out with such a design, it will be quite different from most conventional architectures. It's hard to speculate about performance based on this, but it should allow for smaller cores while still maintaining most of the performance of bigger cores, no? Maybe this would let AMD compete with Intel in the core-count race even while being a year behind in process nodes.
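To picture what that sharing actually means, here's a toy C struct of how such a module might be organized: two integer clusters hanging off one shared front end and one shared FP/SIMD unit. Every name and unit count below is my own guess for illustration, not anything AMD has published.

```c
/* Speculative sketch of a "clustered multithreading" module:
 * two integer clusters share a front end and an FP/SIMD unit.
 * Field names and unit counts are guesses, not AMD's design. */
#include <stdio.h>

typedef struct {
    int alu_count;  /* per-cluster integer ALUs                 */
    int agu_count;  /* per-cluster address-generation units     */
    int l1d_kb;     /* each cluster keeps its own L1 data cache */
} IntCluster;

typedef struct {
    int shared_decode_width; /* one fetch/decode block feeds both clusters */
    int shared_fpu_width;    /* one FP/SIMD unit serves both threads       */
    int shared_l2_kb;        /* shared L2                                  */
    IntCluster cluster[2];   /* two "cores" worth of integer hardware      */
} CmtModule;

int main(void) {
    CmtModule m = {
        .shared_decode_width = 4,
        .shared_fpu_width    = 128,
        .shared_l2_kb        = 2048,
        .cluster = { { 2, 2, 64 }, { 2, 2, 64 } },
    };
    printf("2 integer clusters, %d KB shared L2\n", m.shared_l2_kb);
    return 0;
}
```

The point of the layout: you pay for the big, expensive blocks (decode, FPU, L2) once and get two integer threads out of them, which is why the cores can be smaller.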
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
"clustered multithreading"

AMD is opening themselves up to a host of "cluster f...ed" cliches if BD fails to be competitive with Sandy Bridge.
 

ilkhan

Golden Member
Jul 21, 2006
1,117
1
0
IDC: just to make sure my timelines are right.
We're expecting Bulldozer in 2011 vs. Sandy Bridge, and 45nm Larrabee will compete with GT300, yes?
 

SunnyD

Belgian Waffler
Jan 2, 2001
32,675
146
106
www.neftastic.com
This has me thinking: couldn't SIMD units effectively be replaced by a GPU, particularly if it's on-die with the CPU itself? And since GPUs inherently deal with a lot of floating point data and are optimized for it, you might as well replace the entire FPU with an on-die GPU, since the FPU generally handles the SIMD duties as well. This makes Fusion actually sound like a good idea: all you do is wrap the GP-CPU (ALUs, integer, memory, cache, branch, etc.) around a GPU...

Interesting idea for dual-purpose use - free video if you need it, fast FPU even if you don't.
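Just to make the "FPU handles the SIMD duties" part concrete, this is the sort of work an on-die GPU would have to absorb. A minimal SSE sketch (plain C with compiler intrinsics, nothing AMD-specific):

```c
/* 4-wide single-precision add on the CPU's SIMD/FPU hardware via SSE.
 * This is the class of workload being discussed for GPU offload. */
#include <stdio.h>
#include <xmmintrin.h>

int main(void) {
    float a[4] = { 1.0f, 2.0f, 3.0f, 4.0f };
    float b[4] = { 10.0f, 20.0f, 30.0f, 40.0f };
    float c[4];

    __m128 va = _mm_loadu_ps(a);     /* load four floats into one register */
    __m128 vb = _mm_loadu_ps(b);
    __m128 vc = _mm_add_ps(va, vb);  /* one instruction, four adds */
    _mm_storeu_ps(c, vc);

    printf("%.1f %.1f %.1f %.1f\n", c[0], c[1], c[2], c[3]);
    return 0;
}
```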
 

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
Originally posted by: SunnyD
This has me thinking: couldn't SIMD units effectively be replaced by a GPU, particularly if it's on-die with the CPU itself? And since GPUs inherently deal with a lot of floating point data and are optimized for it, you might as well replace the entire FPU with an on-die GPU, since the FPU generally handles the SIMD duties as well. This makes Fusion actually sound like a good idea: all you do is wrap the GP-CPU (ALUs, integer, memory, cache, branch, etc.) around a GPU...

Interesting idea for dual-purpose use - free video if you need it, fast FPU even if you don't.


No, that wouldn't work. GPUs are very fast at a limited set of functions, not general purpose math.
For example, that's why, even though GPUs are very fast at folding, they only run certain folding tasks: the ones that can be specifically designed to run on a GPU.
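To illustrate with toy C code: the first loop below is what a GPU eats for breakfast, since every iteration is independent and maps onto thousands of lanes; the second carries a serial dependency from one iteration to the next, so all those lanes buy you nothing. Folding kernels have to look like the first loop to be worth porting.

```c
/* GPU-friendly vs. GPU-hostile loops (illustrative only). */
#include <stddef.h>

/* Embarrassingly parallel: each i is independent, so it maps
 * directly onto GPU lanes. */
void gpu_friendly(const float *a, const float *b, float *out, size_t n) {
    for (size_t i = 0; i < n; i++)
        out[i] = a[i] * b[i];
}

/* Serial dependency: each step needs the previous result, so extra
 * parallel hardware cannot speed it up. */
float gpu_hostile(const float *a, size_t n) {
    float x = 0.0f;
    for (size_t i = 0; i < n; i++)
        x = x * 0.5f + a[i];
    return x;
}
```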
 

SunnyD

Belgian Waffler
Jan 2, 2001
32,675
146
106
www.neftastic.com
Originally posted by: Phynaz
Originally posted by: SunnyD
This has me thinking: couldn't SIMD units effectively be replaced by a GPU, particularly if it's on-die with the CPU itself? And since GPUs inherently deal with a lot of floating point data and are optimized for it, you might as well replace the entire FPU with an on-die GPU, since the FPU generally handles the SIMD duties as well. This makes Fusion actually sound like a good idea: all you do is wrap the GP-CPU (ALUs, integer, memory, cache, branch, etc.) around a GPU...

Interesting idea for dual-purpose use - free video if you need it, fast FPU even if you don't.


No, that wouldn't work. GPUs are very fast at a limited set of functions, not general purpose math.
For example, that's why, even though GPUs are very fast at folding, they only run certain folding tasks: the ones that can be specifically designed to run on a GPU.

When you consider that GPUs are trying to move more toward general purpose with every revision, I don't think it's as far-fetched as you may think. After all, given that Nvidia wants in on the x86 market, the common logic is that they'd do exactly this but in reverse: take their GPU and wrap all of the supporting CPU features around it.
 

jones377

Senior member
May 2, 2004
463
64
91
Originally posted by: SunnyD
Originally posted by: Phynaz
Originally posted by: SunnyD
This has me thinking: couldn't SIMD units effectively be replaced by a GPU, particularly if it's on-die with the CPU itself? And since GPUs inherently deal with a lot of floating point data and are optimized for it, you might as well replace the entire FPU with an on-die GPU, since the FPU generally handles the SIMD duties as well. This makes Fusion actually sound like a good idea: all you do is wrap the GP-CPU (ALUs, integer, memory, cache, branch, etc.) around a GPU...

Interesting idea for dual-purpose use - free video if you need it, fast FPU even if you don't.


No, that wouldn't work. GPUs are very fast at a limited set of functions, not general purpose math.
For example, that's why, even though GPUs are very fast at folding, they only run certain folding tasks: the ones that can be specifically designed to run on a GPU.

When you consider that GPUs are trying to move more toward general purpose with every revision, I don't think it's as far-fetched as you may think. After all, given that Nvidia wants in on the x86 market, the common logic is that they'd do exactly this but in reverse: take their GPU and wrap all of the supporting CPU features around it.

This is what Intel did with the Larrabee x86 ISA extension (not to be confused with the Larrabee microarchitecture). I believe Intel said that only 5% of the added instructions are graphics-only; the rest are general purpose (though some are presumably still used in the software renderer). So it should be possible for Intel to include the Larrabee instructions in a future CPU. Whether they actually will is another question (quick sketch of the vector model below). Anyway, this thread was about Bulldozer!
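For flavor, the Larrabee vector model was described as 16-wide single-precision SIMD with per-lane write masks. I can't vouch for the real intrinsics here, so this is just a plain-C emulation of the masked-lane idea, not actual code for the extension:

```c
/* Plain-C emulation of a 16-wide masked vector add, mimicking the
 * mask-register style described for Larrabee's vector ISA. */
#include <stdio.h>

#define VLEN 16

/* out[i] = a[i] + b[i] only where bit i of the mask is set */
static void vadd_masked(const float *a, const float *b, float *out,
                        unsigned short mask) {
    for (int i = 0; i < VLEN; i++)
        if (mask & (1u << i))
            out[i] = a[i] + b[i];
}

int main(void) {
    float a[VLEN], b[VLEN], out[VLEN] = { 0 };
    for (int i = 0; i < VLEN; i++) { a[i] = (float)i; b[i] = 1.0f; }

    vadd_masked(a, b, out, 0x00FF);  /* only the low 8 lanes execute */
    printf("out[0]=%.1f out[15]=%.1f\n", out[0], out[15]);
    return 0;
}
```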
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Originally posted by: ilkhan
IDC: just to make sure my timelines are right.
We're expecting Bulldozer in 2011 vs. Sandy Bridge, and 45nm Larrabee will compete with GT300, yes?

Yep, that's pretty much the sum of it. I don't know what AMD intends to field in the GPU market at that time, though.

But if you think about it, the only thing preventing Nvidia from fielding a Larrabee-like GPU (if it turns out that Larrabee is the bee's knees) is its lack of an x86 license, plus the experience of building architectures to support such an ISA...precisely the things AMD does have at its disposal, should it want to make a more Larrabee-like GPU for any reason.
 
soccerballtux

Dec 30, 2004
12,553
2
76
Originally posted by: Idontcare
Originally posted by: ilkhan
IDC: just to make sure my timelines are right.
We're expecting Bulldozer in 2011 vs. Sandy Bridge, and 45nm Larrabee will compete with GT300, yes?

Yep, that's pretty much the sum of it. I don't know what AMD intends to field in the GPU market at that time, though.

But if you think about it, the only thing preventing Nvidia from fielding a Larrabee-like GPU (if it turns out that Larrabee is the bee's knees) is its lack of an x86 license, plus the experience of building architectures to support such an ISA...precisely the things AMD does have at its disposal, should it want to make a more Larrabee-like GPU for any reason.

Why would they want to do a Larrabee-like GPU?
I thought most had decided that Larrabee wouldn't compete.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Originally posted by: soccerballtux
Why would they want to do a Larrabee-like GPU?

I'm not saying they DO want to; I'm saying there really isn't anything preventing AMD from doing a Larrabee, when you consider what is preventing NV from doing one.

In other words, it is NO surprise that NV is anti-Larrabee. It is competition first and foremost, but even more worrisome for NV is that Larrabee represents a potential shift in the GPGPU architecture landscape, one that could shut NV out simply for lack of an x86 license and the experience to go with it.

This isn't the case for AMD. And surely you have noticed that AMD hasn't uttered a word regarding Larrabee's assured failure.

If you were AMD, even if you hadn't thought about a Larrabee-styled GPU, the moment you heard of the possibility of such a beast, wouldn't you create a skunkworks crack team of maybe 5-10 people to flesh out the barest of details of what manner of contender AMD could create and field? Sit on the fence a little, let the market decide which GPU path you take?

My point is simply that IF Larrabee turns out to be a superior approach to the GPU segment, then NV is screwed (and they know it, hence all the sabre-rattling from them), but AMD has the ability to ensure it remains in the game as a competitor, because it has everything it needs to make its own Larrabee-styled GPU.

Originally posted by: soccerballtux
I thought most had decided that Larrabee wouldn't compete.

I have yet to read an article/review from anyone who has actually held Larrabee or had it in their labs to test in which they derided its capability.

The people you refer to as "most had decided" are (a) the competition composed mainly of Nvidia employees, and (b) some near-obsessively biased forum members who seem quite zealous about ensuring the world knows their opinion on a product they know little to nothing about.

The forum rhetoric (on both sides) regarding Larrabee reminds me deeply of the exact same mentality that existed pre-Conroe in the CPU forum in 2006. That is not to say I think Larrabee is going to be a success or a failure; I fully admit I haven't seen the specs I need to see in order to form an opinion on the matter. But it is to say that I recognize zealot posting activity and mentality when it rears its ugly head.

And there has been a lot of head rearing lately, but that is all it is.