what happened to that amd fusion thing?

IlllI

Diamond Member
Feb 12, 2002
4,927
11
81
memory is foggy but i think i recall reading about it a couple years ago.. it was supposed to combine a gpu and cpu onto one chip, i guess for laptops etc.

was supposed to be the 'next big thing' and where the future was going to be etc etc...

then... nothing.


will it ever see the light of day? or was it shelved?

i've not read anything about it in a very long time



 

TuxDave

Lifer
Oct 8, 2002
10,571
3
71
Takes time to figure out what performance levels you want in a given die size. More GPU at the expense of less CPU or vice versa and you can't get both without raising the $$$ of the chip.
 

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
Viditor's response to this is going to be great!

"No baiting other members into an altercation."

One of the guidelines. I'll advise Viditor to ignore your post here.

Anandtech Moderator - Keysplayr
 

Idontcare

Elite Member
Oct 10, 1999
21,110
59
91
Originally posted by: TuxDave
Takes time to figure out what performance levels you want in a given die size. More GPU at the expense of less CPU or vice versa and you can't get both without raising the $$$ of the chip.

And TDP for the socket, which probably makes things really challenging for the high-performance segment.
 

VirtualLarry

No Lifer
Aug 25, 2001
56,578
10,215
126
Originally posted by: Phynaz
Viditor's response to this is going to be great!

Careful. I got a warning from the mods about "Calling out" someone, when I posted something similar about someone else a while ago.
 

heyheybooboo

Diamond Member
Jun 29, 2007
6,278
0
0
More Cliffs:

1) In August, 2007, AMD introduced the SSE5 instruction set to optimize the Fusion arch;
2) Said introduction of SSE5 was greeted with a collective thud (except for maybe Sun);
3) In March, 2008, Intel announced the AVX instruction set;
4) In May, 2009, SSE5 was reworked & renamed XOP, made "more compatible" with Intel's AVX instructions; and
5) In Q409 here comes Clarks (-field?)(-dale?) with AVX.

I'm not certain what part(s) of AVX the 32nm Clarks(?) will initially implement, but this is a fairly big step forward - hopefully Bulldozer will build on it in reasonable time.


Edit ---

Originally posted by: IlllI
memory is foggy but i think i recall reading about it a couple years ago.. it was supposed to combine a gpu and cpu onto one chip, i guess for laptops etc.

Does not really have much to do (AFAIK) with that --- the intention is to harness the parallel processing capability of the GPU and use it in conjunction with the CPU to improve IPC.

Here is the Intel White Paper (pdf) on AVX. XOP instruction stuff here.
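If you just want a feel for what the wide-vector stuff looks like from C, here's a minimal sketch using a couple of the AVX intrinsics (assuming a compiler that ships immintrin.h, built with something like -mavx; this is just an illustration, not anything lifted from the white paper):

/* Minimal AVX sketch: add two arrays of 8 floats with one 256-bit operation. */
#include <immintrin.h>
#include <stdio.h>

int main(void) {
    float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    float b[8] = {8, 7, 6, 5, 4, 3, 2, 1};
    float c[8];

    __m256 va = _mm256_loadu_ps(a);     /* load 8 floats (unaligned load is fine) */
    __m256 vb = _mm256_loadu_ps(b);
    __m256 vc = _mm256_add_ps(va, vb);  /* one vaddps handles all 8 lanes at once */
    _mm256_storeu_ps(c, vc);

    for (int i = 0; i < 8; i++) printf("%.0f ", c[i]);   /* prints 9 eight times */
    printf("\n");
    return 0;
}

One 256-bit add replacing eight scalar adds is basically the whole pitch.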

Enjoy :D



 

IlllI

Diamond Member
Feb 12, 2002
4,927
11
81
Originally posted by: Aluvus
http://www.xbitlabs.com/news/c...rocessors_to_2011.html

Cliffs: Implementing it is hard.


thanks for that. too many code names to keep track of or remember lol

something i noticed about that article was strange though:

"Another interesting chip due in 2011 is code-named Ontario, which has two general-purpose x86 cores, built-in graphics processing engine, 1MB of cache and DDR3 memory support. The chip will be based on the code-named Bobcat micro-architecture, which is projected to be very power efficient, and will also be among the first "Fusion" processors that combine x86 and graphics processing on the same chip."


...so is ontario supposed to be the first fusion released? the wording is a little confusing b/c it seems like a bobcat will come out first followed by ontario, but in that picture i saw no timeline for a 'bobcat'.. or did the bobcat "evolve" into ontario? :confused:

 

ilkhan

Golden Member
Jul 21, 2006
1,117
1
0
um, westmere (clarkdale/arrandale/gulftown) doesn't include AVX, sandy (ivy?) does. westmere includes AES.
 

alyarb

Platinum Member
Jan 25, 2009
2,425
0
76
Originally posted by: Idontcare
Originally posted by: TuxDave
Takes time to figure out what performance levels you want in a given die size. More GPU at the expense of less CPU or vice versa and you can't get both without raising the $$$ of the chip.

And TDP for the socket, which probably makes things really challenging for the high-performance segment.

who said fusion was a performance part? they're just bringing the IGP onto the CPU. it's still an IGP. it's going to be small. it's going to share system RAM. it's going to be slow.

I highly doubt that excessive TDP is where the struggle lies.
 

TuxDave

Lifer
Oct 8, 2002
10,571
3
71
Originally posted by: alyarb
Originally posted by: Idontcare
Originally posted by: TuxDave
Takes time to figure out what performance levels you want in a given die size. More GPU at the expense of less CPU or vice versa and you can't get both without raising the $$$ of the chip.

And TDP for the socket, which probably makes things really challenging for the high-performance segment.

who said fusion was a performance part? they're just bringing the IGP onto the CPU. it's still an IGP. it's going to be small. it's going to share system RAM. it's going to be slow.

I highly doubt that excessive TDP is where the struggle lies.

I would venture to guess an integrated CPU/GPU in a socket wouldn't compete with high end discrete cards, but should compete against a CPU + $100 GPU at best. And agreed that memory bandwidth is a serious issue that I'm sure they're working on improving through other means.
 

alyarb

Platinum Member
Jan 25, 2009
2,425
0
76
$100 GPUs are in the 100+ watt ballpark and have a ~70 GB/s memory subsystem. system memory would have to take a huge leap in bus width for this to be effective, and how are you going to fit ~600-800 shaders onto a quad-core CPU and keep the whole package under 200 watts? a fully-integrated high-performance part is a long way off (and by that time, discrete radeons will have ~2000 shaders or more). early fusion iterations are going to be just like IGPs. low end, but cheap and integrated. plenty of power to accelerate GUIs and do the windows 7 dx-compute transcoding/decoding.
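as a rough back-of-envelope (assuming a plain dual-channel DDR3-1333 desktop, which is just a guess at a typical config):

/* Back-of-envelope bandwidth comparison. Numbers are ballpark assumptions, not measurements. */
#include <stdio.h>

int main(void) {
    double channels  = 2.0;     /* dual-channel platform */
    double bytes_per = 8.0;     /* 64-bit channel = 8 bytes per transfer */
    double rate      = 1333e6;  /* DDR3-1333 transfers per second */

    double shared_bw   = channels * bytes_per * rate / 1e9;  /* ~21.3 GB/s, shared with the CPU */
    double discrete_bw = 70.0;                               /* ~GB/s for the $100 card above */

    printf("shared system RAM : ~%.1f GB/s\n", shared_bw);
    printf("$100 discrete card: ~%.1f GB/s\n", discrete_bw);
    return 0;
}

so even before you count die area or watts, the on-die part starts with less than a third of the bandwidth of the card it would be replacing, and it has to share that with the CPU.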
 

Viditor

Diamond Member
Oct 25, 1999
3,290
0
0
The first Fusion is for ULP (Ultra Low Power) scenarios (laptops for example). Power usage for the system is supposed to be dramatically lower when the GPU and CPU share resources...
That said, in its first iterations, Fusion will also be less powerful than a separate IGP (sort of like the Intel IGPs are now...). :)
 

Idontcare

Elite Member
Oct 10, 1999
21,110
59
91
Originally posted by: alyarb
Originally posted by: Idontcare
Originally posted by: TuxDave
Takes time to figure out what performance levels you want in a given die size. More GPU at the expense of less CPU or vice versa and you can't get both without raising the $$$ of the chip.

And TDP for the socket, which probably makes things really challenging for the high-performance segment.

who said fusion was a performance part? they're just bringing the IGP onto the CPU. it's still an IGP. it's going to be small. it's going to share system RAM. it's going to be slow.

I highly doubt that excessive TDP is where the struggle lies.

Nobody did, and nobody said it wouldn't be either.

So obviously I am speaking to the condition in which the IGP's power consumption eats into the "TDP budget", to the detriment of the clockspeed and performance of a CPU that now has to operate at a lower TDP because of the presence of the IGP.

By "high performance" I am not explicitly referring to high-performance GPU, I am talking about high-performance CPU which itself is already TDP limited (think i7 or i9 coupled with an IGP).

Integrated graphics gain no synergy by being on-die or sharing the same socket (MCM) until such time that their processing units actually become virtually integrated by software compilers into the overall computing resource budget. Just like the transition from math co-processor to integrated fp units. CUDA is a brute-force transition.

Specifically I am talking about the actual performance synergy that will come from integrating GPU and CPU compute units as Goto-san captures in his graphic here (see far right).
 

heyheybooboo

Diamond Member
Jun 29, 2007
6,278
0
0
Originally posted by: ilkhan
um, westmere (clarkdale/arrandale/gulftown) doesn't include AVX, sandy (ivy?) does. westmere includes AES.

Whatever it is called and whenever it comes out ....

A GPU on die with a CPU will require AVX/XOP instructions.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
59
91
Originally posted by: heyheybooboo
Originally posted by: ilkhan
um, westmere (clarkdale/arrandale/gulftown) doesn't include AVX, sandy (ivy?) does. westmere includes AES.

Whatever it is called and whenever it comes out ....

A GPU on die with a CPU will require AVX/XOP instructions.

When you say this, are you referring to the prospects of using the GPU for GPGPU stuff or for video/graphics GPU stuff?

For GPU stuff I don't see why an on-die GPU would require AVX/XOP support in the CPU's ISA.

I'm thinking ISA versus Architecture here...integrating a GPU on-die or even within-socket can be nothing more than an architecture transition without any ISA changes.

The performance advantages and improvements with an architecture transition like this are likely to be limited to nothing more than some latency improvements, same idea as moving the memory controller into the socket.

The benefit of architecture improvements is that they can be transparent to software, meaning you don't necessarily have to recompile anything in order to have your software benefit from the enhanced architecture (think IMC here again).

However if you want to take it one step further and create synergy between the GPU and CPU then you do need ISA improvements...which AVX/XOP can address.

The disadvantage to ISA improvements of course is that they require software to be aware of the new ISA before the new ISA can deliver on its potential of improving performance. SSE4.2 and 3DNow! for example.

(I know you know this, am simply expounding on it for general contemplation of what Fusion means versus what it can potentially mean after a few iterations)
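To put a concrete face on that last point, here is a minimal sketch of the kind of ISA-aware dispatch software has to do before a new extension pays off (assuming GCC and its __builtin_cpu_supports helper; the function and messages are purely illustrative):

/* Minimal sketch: software must know about an ISA extension before it can benefit from it. */
#include <stdio.h>

static void add_scalar(const float *a, const float *b, float *out, int n) {
    for (int i = 0; i < n; i++) out[i] = a[i] + b[i];   /* legacy path, runs on any x86 */
}

int main(void) {
    __builtin_cpu_init();   /* harmless here; only strictly needed in very early init code */

    if (__builtin_cpu_supports("avx"))
        printf("CPUID reports AVX: a recompiled 256-bit code path could be dispatched\n");
    else
        printf("No AVX: stay on the scalar/SSE path\n");

    float a[4] = {1, 2, 3, 4}, b[4] = {5, 6, 7, 8}, c[4];
    add_scalar(a, b, c, 4);
    printf("%.0f %.0f %.0f %.0f\n", c[0], c[1], c[2], c[3]);
    return 0;
}

Architecture changes (IMC, on-die IGP) need none of this; ISA changes do, which is exactly why they take a few iterations to pay off.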
 

alyarb

Platinum Member
Jan 25, 2009
2,425
0
76
integration is going to come first to cut manufacturing cost and make better use of fab space. synergy comes later, probably much later. low end computers are already fast enough for low end computer users. less than 10% of the buyers in that segment will stumble upon a useful GPGPU app. considering that number is 0% right now, that's being generous. at least you said "after a few iterations" because i think that would be the absolute earliest. right now they just want to integrate.
 

zephyrprime

Diamond Member
Feb 18, 2001
7,512
2
81
Originally posted by: heyheybooboo
Does not really have much to do (AFAIK) with that --- the intention is to harness the parallel processing capability of the GPU and use it in conjunction with the CPU to improve IPC.

Here is the Intel White Paper (pdf) on AVX. XOP instruction stuff here.

Enjoy :D
I don't see that AVX and XOP have anything to do with combined cpu+gpu or with GPU compute at all. AVX and XOP are just more extensions to the vector instruction sets in x86.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
59
91
Originally posted by: zephyrprime
Originally posted by: heyheybooboo
Does not really have much to do (AFAIK) with that --- the intention is to harness the parallel processing capability of the GPU and use it in conjunction with the CPU to improve IPC.

Here is the Intel White Paper (pdf) on AVX. XOP instruction stuff here.

Enjoy :D
I don't see that AVX and XOP have anything to do with combined cpu+gpu or with GPU compute at all. AVX and XOP are just more extensions to the vector instruction sets in x86.

The instructions themselves (an ISA) lend themselves nicely to implementation/execution on a massively parallel architecture like that of a GPU. At least that is what I get out of it.
 

TuxDave

Lifer
Oct 8, 2002
10,571
3
71
Originally posted by: alyarb
$100 GPUs are in the 100+ watt ballpark and have ~70 GB/s memory system. system memory would have to incur a huge leap in the width of its bus for this to be effective, and how are you going to fit ~600-800 shaders onto a quad-core CPU and keep the whole package under 200 watts? a fully-integrated high-performance part is a long way off (and by that time, discrete radeons will have ~2000 shaders or more). early fusion iterations are going to be just like IGPs. low end, but cheap and integrated. plenty of power to accelerate GUIs and do the windows 7 dx-compute transcoding/decoding.

I did qualify myself with "at best" the $100 GPU market. There will definitely be some really cheap laptop version, which probably has 2 cores and competes with the $20 GPU. That's the one they really need to crunch down, because even with that little functionality it will probably be twice the size of a 2-core CPU without a GPU.

But it would definitely pay off if they can harness GPGPU functions in the server market, which has those very same problems: high TDP and high bandwidth requirements. Who knows how much EDRAM will add, but that or something similar will be a requirement.
 

heyheybooboo

Diamond Member
Jun 29, 2007
6,278
0
0
Originally posted by: Idontcare
Originally posted by: zephyrprime
Originally posted by: heyheybooboo
Does not really have much to do (AFAIK) with that --- the intention is to harness the parallel processing capability of the GPU and use it in conjunction with the CPU to improve IPC.

Here is the Intel White Paper (pdf) on AVX. XOP instruction stuff here.

Enjoy :D
I don't see that AVX and XOP have anything to do with combined cpu+gpu or with GPU compute at all. AVX and XOP are just more extensions to the vector instruction sets in x86.

The instructions themselves (an ISA) lend themselves nicely to implementation/execution on a massively parallel architecture like that of a GPU. At least that is what I get out of it.


Pretty much this:

Originally posted by: Idontcare
..... create synergy between the GPU and CPU then you do need ISA improvements...which AVX/XOP can address.

The disadvantage to ISA improvements of course is that they require software to be aware of the new ISA before the new ISA can deliver on its potential of improving performance. SSE4.2 and 3DNow! for example .....

:D

A gpu is a ginormous parallel processing vector machine. We will have to wait for software to catch up in the 'mainstream' - but hopefully we will quickly see security/encryption improvements.
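(As one example of the encryption side: the AES instructions ilkhan mentioned for Westmere do a whole AES round per instruction. A minimal sketch, assuming a compiler with wmmintrin.h and an -maes flag - key schedule omitted, values are placeholders:)

/* Minimal AES-NI sketch: one AES encryption round on a 128-bit block.
   The block and "round key" are placeholder values, not a real key schedule. */
#include <wmmintrin.h>
#include <stdio.h>

int main(void) {
    __m128i block    = _mm_set1_epi32(0x11223344);        /* placeholder plaintext block */
    __m128i roundkey = _mm_set1_epi32((int)0xdeadbeef);   /* placeholder round key       */

    block = _mm_aesenc_si128(block, roundkey);            /* one full AES round, one instruction */

    unsigned char out[16];
    _mm_storeu_si128((__m128i *)out, block);
    for (int i = 0; i < 16; i++) printf("%02x", out[i]);
    printf("\n");
    return 0;
}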

Sorry I have to go ---- my personal stalker has put me on vacation again. :laugh:
 

MODEL3

Senior member
Jul 22, 2009
528
0
0
Maybe AMD could do it (theoretically) in Q1 2007!

Brisbane (X2) at 65 nm was only 125mm2 and the whole RS690 northbridge (classic northbridge+GPU) at 80nm was 49mm2!

it would be something like 160-170mm2 which is fine!

But i guess the ATI buyout was too recent to enable that!

Today maybe they could do it with Athlon II X2 (117mm2 at 45nm) and with the 785G-based GPU (probably the GPU part is around 60-70mm2 at 55nm).

Does AMD have some technical problems?
Or is it purely a design choice/roadmap execution problem?






 

Idontcare

Elite Member
Oct 10, 1999
21,110
59
91
I don't know if any of it has to do with problems of any sort...could just be that until recently it simply made no sense to create a fusion-like product as it would have had little-to-no net benefits to the end-user and little-to-no gross margin benefits to the manufacturer.