
AMD Fusion

Originally posted by: Inspector Jihad
From what I understand it's the GPU and CPU on the same die. Is this a good idea? Seems like it would limit upgrading potential.

1. The first iteration of Fusion will be for mobile and server computers. The main draw there is that Fusion requires much less power to drive the graphics than an independent on-board graphics chip does for equal results.
2. The second iteration will be AMD's "accelerated computing" systems...these are essentially modular chips (remember that even when the first Fusion is released in 2009, AMD will be at 45nm and 8-16 cores), with many times the number of SKUs that they have today (multiple numbers/combinations of CPU, GPU, and possibly things like physics co-processors all on one chip).
3. Also keep in mind that AMD successfully tested Z-RAM at 45nm in 2006. Z-RAM is capable of 5 times the density of their current cache SRAM (i.e. instead of a 2MB cache, a 10MB cache in the same space). I would guess that Z-RAM will be used only in the L3 cache, but who knows...?
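To put rough numbers on point 3, here's a minimal back-of-envelope sketch; the 5x figure comes from the post above, while the density and area values are assumptions picked purely so the arithmetic lands on the 2MB/10MB example:

```python
# Back-of-envelope: same cache area, 5x the bit density. All figures are
# illustrative assumptions, not process data.
sram_density_mbit_per_mm2 = 0.5                            # assumed conventional SRAM density
zram_density_mbit_per_mm2 = 5 * sram_density_mbit_per_mm2  # the "5 times the density" claim
cache_area_mm2 = 32.0                                      # assumed cache area budget

sram_capacity_mb = sram_density_mbit_per_mm2 * cache_area_mm2 / 8
zram_capacity_mb = zram_density_mbit_per_mm2 * cache_area_mm2 / 8

print(f"SRAM cache in {cache_area_mm2:.0f} mm^2 : {sram_capacity_mb:.0f} MB")
print(f"Z-RAM cache in the same area  : {zram_capacity_mb:.0f} MB")
# -> 2 MB vs 10 MB, matching the "instead of 2MB cache, 10MB cache" example.
```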
 
Originally posted by: Inspector Jihad
From what I understand it's the GPU and CPU on the same die. Is this a good idea? Seems like it would limit upgrading potential.

Initial Fusion products are likely to be a CPU and a GPU in the same socket, but on separate dies, similar to Intel's current quad-core incarnation.
 
Originally posted by: PlasmaBomb
Originally posted by: Inspector Jihad
From what I understand it's the GPU and CPU on the same die. Is this a good idea? Seems like it would limit upgrading potential.

Initial Fusion products are likely to be a CPU and a GPU in the same socket, but on separate dies, similar to Intel's current quad-core incarnation.

I don't think that's correct...
The reason Intel's quad cores are on separate dies is that they must communicate through the MCH, off the chip (that's also why Intel requires a much greater cache size).
AMD uses a Crossbar Switch for communication, which means they have to be on the same die (that's also why we didn't see a quad core Opteron using 2xDC Opterons).
ArsTechnica has a decent article on it with a block diagram of what we're expecting...
Article
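For a rough feel of why on-die communication matters here, below is a crude back-of-envelope comparing a core-to-core transfer through an on-die crossbar with one that has to cross the front-side bus to the MCH and back. Every latency figure is an assumption made up for illustration, not a measurement of any real chip:

```python
# Crude model of one core fetching a cache line held by a core on the "other" die/pair.
# All latencies are illustrative assumptions (nanoseconds), not measurements.
on_die_crossbar_ns = 15      # assumed: snoop + transfer through an on-die crossbar
fsb_one_way_ns = 35          # assumed: one trip across the front-side bus
mch_overhead_ns = 20         # assumed: MCH arbitration/lookup

# MCM-style quad core: request goes die A -> FSB -> MCH -> FSB -> die B,
# and the data makes the same trip in reverse.
mcm_via_mch_ns = 2 * (2 * fsb_one_way_ns + mch_overhead_ns)

print(f"On-die crossbar transfer  : ~{on_die_crossbar_ns} ns")
print(f"Die-to-die via FSB and MCH: ~{mcm_via_mch_ns} ns")
# The roughly order-of-magnitude gap is why larger shared caches (fewer off-die
# trips) make the MCM approach workable for Intel.
```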
 
Originally posted by: Viditor
Originally posted by: PlasmaBomb
Originally posted by: Inspector Jihad
From what I understand it's the GPU and CPU on the same die. Is this a good idea? Seems like it would limit upgrading potential.

Initial Fusion products are likely to be a CPU and a GPU in the same socket, but on separate dies, similar to Intel's current quad-core incarnation.

I don't think that's correct...
The reason Intel's quad cores are on separate dies is that they must communicate through the MCH, off the chip (that's also why Intel requires a much greater cache size).
AMD uses a Crossbar Switch for communication, which means they have to be on the same die (that's also why we didn't see a quad core Opteron using 2xDC Opterons).
ArsTechnica has a decent article on it with a block diagram of what we're expecting...
Article

Actually he's 100% correct. The 2009 version of Fusion is a low-end GPU and a mobile CPU in the same package.



 
Originally posted by: Phynaz
Originally posted by: Viditor
Originally posted by: PlasmaBomb
Originally posted by: Inspector Jihad
From what I understand it's the GPU and CPU on the same die. Is this a good idea? Seems like it would limit upgrading potential.

Initial Fusion products are likely to be a CPU and a GPU in the same socket, but on separate dies, similar to Intel's current quad-core incarnation.

I don't think that's correct...
The reason Intel's quad cores are on separate dies is that they must communicate through the MCH, off the chip (that's also why Intel requires a much greater cache size).
AMD uses a Crossbar Switch for communication, which means they have to be on the same die (that's also why we didn't see a quad core Opteron using 2xDC Opterons).
ArsTechnica has a decent article on it with a block diagram of what we're expecting...
Article

Actually he's 100% correct. The 2009 version of Fusion is a low-end GPU and a mobile CPU in the same package.

I found a block diagram and a photo of the die if you guys are interested (BTW, the GPU will still be DX 10.1).
Block Diagram
Die Shot
 
From my limited knowledge it seems like it's a dual-core processor, but instead of 2 CPU cores there is one CPU and one GPU. Is this the basic concept of it? I guess this suits mobile computing because it'll reduce energy consumption and upgrading the graphics isn't important. But how exactly would this affect gaming desktops, or is it not designed for that at all?
 
Originally posted by: Viditor
Originally posted by: Phynaz
Originally posted by: Viditor
Originally posted by: PlasmaBomb
Originally posted by: Inspector Jihad
From what I understand it's the GPU and CPU on the same die. Is this a good idea? Seems like it would limit upgrading potential.

Initial Fusion products are likely to be a CPU and a GPU in the same socket, but on separate dies, similar to Intel's current quad-core incarnation.

I don't think that's correct...
The reason Intel's quad cores are on separate dies is that they must communicate through the MCH, off the chip (that's also why Intel requires a much greater cache size).
AMD uses a Crossbar Switch for communication, which means they have to be on the same die (that's also why we didn't see a quad core Opteron using 2xDC Opterons).
ArsTechnica has a decent article on it with a block diagram of what we're expecting...
Article

Actually he's 100% correct. The 2009 version of Fusion is a low-end GPU and a mobile CPU in the same package.

I found a block diagram and a photo of the die if you guys are interested (BTW, the GPU will still be DX 10.1).
Block Diagram
Die Shot

To be honest I kinda wish that AMD would go with an MCM solution for the CPU-GPU, at least for the first generation of products. I doubt very much that there will be much difference in performance, even less so than the difference between a true quad core and two dual cores glued together. And secondly, having two separate smaller dies would increase overall yields.
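The yield point can be made concrete with the standard Poisson defect model, Y = exp(-D*A). The defect density and die areas below are assumptions chosen only to illustrate the shape of the argument:

```python
import math

# Poisson yield model: Y = exp(-defect_density * die_area). Illustrative numbers only.
defect_density_per_cm2 = 0.5      # assumed defects per cm^2
cpu_die_cm2 = 1.0                 # assumed CPU die area
gpu_die_cm2 = 1.0                 # assumed GPU die area

def die_yield(area_cm2: float, d: float = defect_density_per_cm2) -> float:
    return math.exp(-d * area_cm2)

monolithic_yield = die_yield(cpu_die_cm2 + gpu_die_cm2)   # one fused 2.0 cm^2 die
small_die_yield = die_yield(cpu_die_cm2)                  # each 1.0 cm^2 die

print(f"Monolithic 2.0 cm^2 die yield: {monolithic_yield:.0%}")   # ~37%
print(f"Separate 1.0 cm^2 die yield  : {small_die_yield:.0%}")    # ~61%
# In an MCM, any good CPU die can be paired with any good GPU die, so the usable
# fraction of silicon tracks the ~61% per-die figure instead of ~37%.
```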
 
Originally posted by: Inspector Jihad
From my limited knowledge it seems like it's a dual-core processor, but instead of 2 CPU cores there is one CPU and one GPU. Is this the basic concept of it? I guess this suits mobile computing because it'll reduce energy consumption and upgrading the graphics isn't important. But how exactly would this affect gaming desktops, or is it not designed for that at all?

You have the basics...except that it will be 2x2 cores (and then 4x4 or 8x8).
The first iteration (Bulldozer) isn't designed for gaming desktops (at least as far as we know).
My own opinion is that building for mobiles and servers is much easier than for high performance, so they will get Bulldozer out the door before releasing a more refined and powerful version...
 
Originally posted by: jones377
Originally posted by: Viditor
Originally posted by: Phynaz
Originally posted by: Viditor
Originally posted by: PlasmaBomb
Originally posted by: Inspector Jihad
From what I understand it's the GPU and CPU on the same die. Is this a good idea? Seems like it would limit upgrading potential.

Initial Fusion products are likely to be a CPU and a GPU in the same socket, but on separate dies, similar to Intel's current quad-core incarnation.

I don't think that's correct...
The reason Intel's quad cores are on separate dies is that they must communicate through the MCH, off the chip (that's also why Intel requires a much greater cache size).
AMD uses a Crossbar Switch for communication, which means they have to be on the same die (that's also why we didn't see a quad core Opteron using 2xDC Opterons).
ArsTechnica has a decent article on it with a block diagram of what we're expecting...
Article

Actually he's 100% correct. The 2009 version of Fusion is a low-end GPU and a mobile CPU in the same package.

I found a block diagram and a photo of the die if you guys are interested (BTW, the GPU will still be DX 10.1).
Block Diagram
Die Shot

To be honest I kinda wish that AMD would go with an MCM solution for the CPU-GPU, at least for the first generation of products. I doubt very much that there will be much difference in performance, even less so than the difference between a true quad core and two dual cores glued together. And secondly, having two separate smaller dies would increase overall yields.

I really don't think that they can afford it...
For AMD to create an MCM, they'd have to radically alter their design. This is a place where the on-die MC and DCA make changes difficult...
I also think that latency is much more of an issue with AMD, so the performance hit would be far more significant than it is with Intel.
 
Originally posted by: Viditor
Originally posted by: jones377
Originally posted by: Viditor
Originally posted by: Phynaz
Originally posted by: Viditor
Originally posted by: PlasmaBomb
Originally posted by: Inspector Jihad
From what I understand it's the GPU and CPU on the same die. Is this a good idea? Seems like it would limit upgrading potential.

Initial Fusion products are likely to be a CPU and a GPU in the same socket, but on separate dies, similar to Intel's current quad-core incarnation.

I don't think that's correct...
The reason Intel's quad cores are on separate dies is that they must communicate through the MCH, off the chip (that's also why Intel requires a much greater cache size).
AMD uses a Crossbar Switch for communication, which means they have to be on the same die (that's also why we didn't see a quad core Opteron using 2xDC Opterons).
ArsTechnica has a decent article on it with a block diagram of what we're expecting...
Article

Actually he's 100% correct. The 2009 version of Fusion is a low-end GPU and a mobile CPU in the same package.

I found a block diagram and a photo of the die if you guys are interested (BTW, the GPU will still be DX 10.1).
Block Diagram
Die Shot

To be honest I kinda wish that AMD would go with an MCM solution for the CPU-GPU, at least for the first generation of products. I doubt very much that there will be much difference in performance, even less so than the difference between a true quad core and two dual cores glued together. And secondly, having two separate smaller dies would increase overall yields.

I really don't think that they can afford it...
For AMD to create an MCM, they'd have to radically alter their design. This is a place where the on-die MC and DCA make changes difficult...
I also think that latency is much more of an issue with AMD, so the performance hit would be far more significant than it is with Intel.
HyperTransport = super easy inter-chip communication.
 
Originally posted by: Viditor
Originally posted by: jones377
Originally posted by: Viditor
Originally posted by: Phynaz
Originally posted by: Viditor
Originally posted by: PlasmaBomb
Originally posted by: Inspector Jihad
From what I understand it's the GPU and CPU on the same die. Is this a good idea? Seems like it would limit upgrading potential.

Initial Fusion products are likely to be a CPU and a GPU in the same socket, but on separate dies, similar to Intel's current quad-core incarnation.

I don't think that's correct...
The reason Intel's quad cores are on separate dies is that they must communicate through the MCH, off the chip (that's also why Intel requires a much greater cache size).
AMD uses a Crossbar Switch for communication, which means they have to be on the same die (that's also why we didn't see a quad core Opteron using 2xDC Opterons).
ArsTechnica has a decent article on it with a block diagram of what we're expecting...
Article

Actually he's 100% correct. The 2009 version of Fusion is a low-end GPU and a mobile CPU in the same package.

I found a block diagram and a photo of the die if you guys are interested (BTW, the GPU will still be DX 10.1).
Block Diagram
Die Shot

To be honest I kinda wish that AMD would go with an MCM solution for the CPU-GPU, at least for the first generation of products. I doubt very much that there will be much difference in performance, even less so than the difference between a true quad core and two dual cores glued together. And secondly, having two separate smaller dies would increase overall yields.

I really don't think that they can afford it...
For AMD to create an MCM, they'd have to radically alter their design. This is a place where the on-die MC and DCA make changes difficult...
I also think that latency is much more of an issue with AMD, so the performance hit would be far more significant than it is with Intel.

As opposed to being able to afford the development of numerous versions with different numbers of CPU cores along with a GPU core? All of these would require different masks for production. Masks are expensive and get more expensive with each new node.

What latency impact would there be? The CPU and GPU do not need to be cache-coherent. All it would need is a dedicated bus between the CPU and GPU. The memory controller can stay on the CPU. Basically the overall architecture wouldn't be much different from current integrated graphics solutions, since the GPU is allegedly targeted at the low end anyway. The latency between the GPU and system memory would actually be lower than with existing integrated graphics solutions for the K8 platform, and they are doing just fine as it is.

Then there is the issue of merging two different development teams that have different cultures and tools. Not an easy task.

I'm not saying it is impossible, but the risks become higher for virtually no gain.
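To illustrate the "memory controller stays on the CPU" point, here's a crude latency-path comparison for a GPU read of system memory. Every figure is an assumption invented for the sketch, not a measurement of any K8 chipset or Fusion part:

```python
# Crude path model for a GPU reading system memory (assumed figures, in nanoseconds).
dram_access_ns = 50        # assumed DRAM access time once the request reaches the controller
chipset_igp_path_ns = 60   # assumed: IGP in the northbridge -> HT hop to the CPU's MC and back
mcm_gpu_path_ns = 20       # assumed: package-internal link from the GPU die to the CPU die's MC
on_die_gpu_path_ns = 10    # assumed: shared on-die crossbar straight to the MC

paths = {
    "Chipset IGP (today's integrated graphics)": chipset_igp_path_ns + dram_access_ns,
    "MCM GPU next to the CPU die": mcm_gpu_path_ns + dram_access_ns,
    "GPU on the CPU die (full Fusion)": on_die_gpu_path_ns + dram_access_ns,
}
for name, total_ns in paths.items():
    print(f"{name:<45} ~{total_ns} ns")
# Most of the win over a chipset IGP comes from the first step, which is the
# argument for keeping the first generation a simple MCM.
```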
 
Originally posted by: Inspector Jihad
what is MCM?

Multi-Chip Module. Kentsfield (Core 2 Quad) is an MCM, for example, but there are others too. IBM uses MCMs extensively in their POWER products.
 
Originally posted by: jones377

As opposed to being able to afford the development of numerous versions with different numbers of CPU cores along with a GPU core? All of these would require different masks for production. Masks are expensive and get more expensive with each new node.

A damned good point...it does make me wonder if AMD is looking at a different manufacturing technique. (speculation mode on) Do you think it's possible that AMD is developing a form of "modular masking"? I know that the Crossbar Switch functions much like an Ethernet switch does (I realise that this is an over-simplification; a toy sketch of the analogy follows this post), so would it be possible? (any chip architects out there??) Other than that, I must admit that the thought hadn't occurred to me till you said it...

What latency impact would there be? The CPU and GPU do not need to be cache-coherent. All it would need is a dedicated bus between the CPU and GPU. The memory controller can stay on the CPU. Basically the overall architecture wouldn't be much different from current integrated graphics solutions, since the GPU is allegedly targeted at the low end anyway. The latency between the GPU and system memory would actually be lower than with existing integrated graphics solutions for the K8 platform, and they are doing just fine as it is.

I believe that the goal is to rewrite the ISA so that the GPU does indeed require cache coherency...not my field at all, but with the CTM project and Torrenza, this appears to me to be the goal. If that is the case, then there would be a latency issue there...
BTW, the GPU is also targeted for the high-end...
Here's something from that article I linked:

"To support CPU/GPU integration at either level of complexity (i.e. the modular core level or something deeper), AMD has already stated that they'll need to add a graphics-specific extension to the x86 ISA. Indeed, a future GPU-oriented ISA extension may form part of the reason for the company's recently announced "close to metal" (CTM) initiative. By exposing the low-level hardware of its ATI GPUs to coders, AMD can accomplish two goals. First, they can get the low-level ISA out there and in use, thereby creating a "legacy" code base for it and moving it further toward being a de facto standard. Second, they can get feedback from the industry on what coders want to see in a graphics-specific ISA"

Then there is the issue of merging two different development teams that have different cultures and tools. Not an easy task.

I'm not saying it is impossible, but the risks become higher for virtually no gain.

I personally think that Fusion will be a HUGE gain for AMD (is that what you were referring to?).
The power/performance numbers alone will be a very hard-to-resist draw for both mobile and server OEMs and ODMs.
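On the "the Crossbar Switch works like an Ethernet switch" analogy, here's a toy sketch of what that means; it is purely illustrative (the port names are made up and it is not a model of AMD's actual K8 crossbar):

```python
# Toy crossbar: several requesters can each be connected to a different target in
# the same cycle, like ports on an Ethernet switch. Purely illustrative.
from typing import Dict, List, Tuple

REQUESTERS = ["core0", "core1", "gpu"]            # hypothetical on-die clients
TARGETS = ["mem_ctrl", "ht_link0", "ht_link1"]    # hypothetical on-die resources

def arbitrate(requests: List[Tuple[str, str]]) -> Dict[str, str]:
    """Grant at most one requester per target per cycle (first come, first served)."""
    grants: Dict[str, str] = {}
    busy_targets = set()
    for requester, target in requests:
        assert requester in REQUESTERS and target in TARGETS, "unknown port"
        if requester not in grants and target not in busy_targets:
            grants[requester] = target
            busy_targets.add(target)
    return grants

# Example cycle: all three clients are serviced at once because they want different
# targets, which is the "non-blocking" property the switch analogy is pointing at.
print(arbitrate([("core0", "mem_ctrl"), ("core1", "ht_link0"), ("gpu", "ht_link1")]))
```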
 
Originally posted by: jones377
Originally posted by: Inspector Jihad
what is MCM?

Multi-Chip Module. Kentsfield (Core 2 Quad) is an MCM, for example, but there are others too. IBM uses MCMs extensively in their POWER products.

Exactly...in other words (for example), putting 2 dual-core dies in one package (Kentsfield).
 
In the long run an integrated GPU could assist with physics processing and other parallel calculations, and would therefore also be of use in high-end systems with a powerful video card.
 
There are rumors of AMD putting out a tri-core chip (the Inq!); might this work into their plans for Fusion? Seems like it might be an adaptable solution for dual-core + graphics.
 
Originally posted by: Viditor
Originally posted by: jones377

As opposed to being able to afford the development of numerous versions with different numbers of CPU cores along with a GPU core? All of these would require different masks for production. Masks are expensive and get more expensive with each new node.

A damned good point...it does make me wonder if AMD is looking at a different manufacturing technique. (speculation mode on) Do you think it's possible that AMD is developing a form of "modular masking"? I know that the Crossbar Switch functions much like an Ethernet switch does (I realise that this is an over-simplification), so would it be possible? (any chip architects out there??) Other than that, I must admit that the thought hadn't occurred to me till you said it...

What latency impact would there be? The CPU and GPU do not need to be cache-coherent. All it would need is a dedicated bus between the CPU and GPU. The memory controller can stay on the CPU. Basically the overall architecture wouldn't be much different from current integrated graphics solutions, since the GPU is allegedly targeted at the low end anyway. The latency between the GPU and system memory would actually be lower than with existing integrated graphics solutions for the K8 platform, and they are doing just fine as it is.

I believe that the goal is to rewrite the ISA so that the GPU does indeed require cache coherency...not my field at all, but with the CTM project and Torrenza, this appears to me to be the goal. If that is the case, then there would be a latency issue there...
BTW, the GPU is also targeted for the high-end...
Here's something from that article I linked:

"To support CPU/GPU integration at either level of complexity (i.e. the modular core level or something deeper), AMD has already stated that they'll need to add a graphics-specific extension to the x86 ISA. Indeed, a future GPU-oriented ISA extension may form part of the reason for the company's recently announced "close to metal" (CTM) initiative. By exposing the low-level hardware of its ATI GPUs to coders, AMD can accomplish two goals. First, they can get the low-level ISA out there and in use, thereby creating a "legacy" code base for it and moving it further toward being a de facto standard. Second, they can get feedback from the industry on what coders want to see in a graphics-specific ISA"

Then there is the issue of merging two different development teams that have different cultures and tools. Not an easy task.

I'm not saying it is impossible, but the risks become higher for virtually no gain.

I personally think that Fusion will be a HUGE gain for AMD (is that what you were referring to?).
The power/performance numbers alone will be a very hard-to-resist draw for both mobile and server OEMs and ODMs.

I meant no performance gain.

There is the saying that before you learn to walk you need to learn to crawl. I don't see how you can put a high-end GPU on the same die as the CPU, if we define high-end as comparable with the best discrete graphics on the market at the time of release. Current high-end GPUs have a die budget even bigger than most CPUs, consume 100W+ and have 100 GB/s of memory bandwidth (a back-of-envelope on that figure follows this post).

I remember seeing slides from an AMD presentation about integration of the CPU and GPU where they described it in several steps, the first being through an MCM and the last one being total integration of the functional units through an extension of the ISA. However, that presentation was more abstract and didn't really indicate what they are planning. I have also read reports that suggest the first Fusion product will be a single die but with separate functional units. I just can't find the link to it anymore; maybe you remember it too?

I merely suggested that the first generation would benefit from being as simple as possible.
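A quick back-of-envelope on that 100 GB/s figure; the bus widths and data rates below are representative examples of the era, not specs of any particular product:

```python
# Memory bandwidth = (bus width in bytes) x (transfers per second). Example figures only.
def bandwidth_gb_s(bus_bits: int, gigatransfers_per_s: float) -> float:
    return bus_bits / 8 * gigatransfers_per_s

print(f"Dual-channel DDR2-800 (128-bit @ 0.8 GT/s): {bandwidth_gb_s(128, 0.8):.1f} GB/s")  # 12.8
print(f"Mid-range GDDR3 card (256-bit @ 1.6 GT/s) : {bandwidth_gb_s(256, 1.6):.1f} GB/s")  # 51.2
print(f"High-end GDDR card (384-bit @ 2.0 GT/s)   : {bandwidth_gb_s(384, 2.0):.1f} GB/s")  # 96.0
# A fused high-end GPU would have to share the first number with the CPU cores,
# which is why the first Fusion parts make more sense at the low end.
```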
 
I should add that it has also been reported that Intel is planning to integrate a GPU into some of their Nehalem line of products. If Intel again uses the MCM approach, it is possible that they will again beat AMD to market. Fusion is planned for 2009 while Nehalem is coming out in 2H08. It is not entirely impossible that Nehalem+GPU comes out before Fusion, but it is too early to say anything for certain.
 
This would all sound more realistic if 2009 were 10 years away. I've just got a quad-core CPU and the heat this thing gives off is incredible; I'm using a cooler the size of a child's shoebox to keep it cool, so the thought of having 8 cores and a couple of GPUs, never mind 16-core CPUs, just seems so unrealistic for 2009. 🙂
 
Originally posted by: jones377
I should add that it has also been reported that Intel is planning to integrate a GPU into some of their Nehalem line of products. If Intel again uses the MCM approach, it is possible that they will again beat AMD to market. Fusion is planned for 2009 while Nehalem is coming out in 2H08. It is not entirely impossible that Nehalem+GPU comes out before Fusion, but it is too early to say anything for certain.

Well, a few comments...
1. I believe the model AMD has for Fusion is to put several GPUs on each chip, so high-end may be closer than you think...I don't know, but it certainly is possible.

2. I personally don't think Intel is going to introduce graphics OR MCMs to Nehalem in its first year...it's a radically different architecture compared to Core, and as you said, "before you learn to walk you need to learn to crawl".

3. While Intel is the largest graphics manufacturer (from their on-board graphics), they have never had success with discrete high-end graphics. Their only real stab at it was a dismal failure...and while I have heard of their new upcoming attempt, I can't say I'm very optimistic.
The same can't be said for ATI/AMD, however, and while your point on integration is well taken, it's apparent that discussion of this project began well before the buyout occurred.
 