Inspector Jihad
Lifer
from what i understand its the gpu and cpu on the same die. is this a good idea, seems like it would limit upgrading potential.
Originally posted by: PlasmaBomb
Originally posted by: Inspector Jihad
from what i understand its the gpu and cpu on the same die. is this a good idea, seems like it would limit upgrading potential.
Initial Fusion products are likely to be a CPU and a GPU in the same socket, but on separate dies, similar to Intel's current quad-core incarnation.
Originally posted by: Viditor
Originally posted by: PlasmaBomb
Initial Fusion products are likely to be a CPU and a GPU in the same socket, but on separate dies, similar to Intel's current quad-core incarnation.
I don't think that's correct...
The reason Intel's quad cores are on separate dies is that they must communicate through the MCH, away from the chip (that's also why Intel requires a much larger cache).
AMD uses a Crossbar Switch for communication, which means they have to be on the same die (that's also why we didn't see a quad core Opteron using 2xDC Opterons).
ArsTechnica has a decent article on it with a block diagram of what we're expecting...
Article
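The crossbar point above can be sketched with a toy model (this is purely illustrative, not AMD's actual arbiter design): any core can be wired to any port, and transfers on distinct input/output pairs go through in parallel, unlike a shared front-side bus where everything serializes through the MCH.

```python
# Toy model of an on-die crossbar switch (hypothetical, illustration only).
class Crossbar:
    def __init__(self, ports):
        self.ports = ports

    def schedule(self, requests):
        """requests: list of (src, dst) pairs for one cycle.
        Grants each request whose src and dst are still free this cycle;
        the rest must retry (a real arbiter would queue them)."""
        used_src, used_dst, granted = set(), set(), []
        for src, dst in requests:
            if src not in used_src and dst not in used_dst:
                used_src.add(src)
                used_dst.add(dst)
                granted.append((src, dst))
        return granted

xbar = Crossbar(4)
# Two cores talking to two different ports both go through in one cycle:
print(xbar.schedule([(0, 2), (1, 3)]))  # [(0, 2), (1, 3)]
# Two cores contending for the same port serialize:
print(xbar.schedule([(0, 2), (1, 2)]))  # [(0, 2)]
```

The key property is non-blocking parallelism on disjoint paths, which is why the on-die pieces need to hang off the same switch.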
Originally posted by: Phynaz
Originally posted by: Viditor
I don't think that's correct...
The reason Intel's quad cores are on separate dies is that they must communicate through the MCH, away from the chip (that's also why Intel requires a much larger cache).
AMD uses a Crossbar Switch for communication, which means they have to be on the same die (that's also why we didn't see a quad core Opteron using 2xDC Opterons).
ArsTechnica has a decent article on it with a block diagram of what we're expecting...
Article
Actually he's 100% correct. The 2009 version of Fusion is a low-end GPU and a mobile CPU in the same package.
Originally posted by: bryanW1995
proof?
We were replying to Phynaz at the same time. Your post hit before mine.
Originally posted by: Viditor
Originally posted by: bryanW1995
proof?
Ummm...proof of what and who are you asking? 🙂
Originally posted by: Viditor
Originally posted by: Phynaz
Actually he's 100% correct. The 2009 version of Fusion is a low-end GPU and a mobile CPU in the same package.
I found a block diagram and a photo of the die if you guys are interested (BTW, the GPU will still be DX 10.1).
Block Diagram
Die Shot
Originally posted by: Inspector Jihad
from my limited knowledge it seems like it's a dual-core processor, but instead of 2 CPU cores there's one CPU and one GPU. is that the basic concept? i guess this suits mobile computing because it'll reduce energy consumption and upgrading the graphics isn't important. but how exactly would this affect gaming desktops, or is it not designed for that at all?
Originally posted by: jones377
Originally posted by: Viditor
I found a block diagram and a photo of the die if you guys are interested (BTW, the GPU will still be DX 10.1).
Block Diagram
Die Shot
To be honest I kinda wish that AMD would go with an MCM solution for the CPU-GPU, at least for the first generation of products. I doubt very much that there will be much difference in performance, even less than the difference between a true quad and two dual-cores glued together. And secondly, having two separate smaller dies would increase overall yields.
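The yield argument can be illustrated with a first-order Poisson defect model (the defect density, die sizes, and wafer area below are all made up for illustration, not real process data):

```python
import math

D0 = 0.5  # hypothetical defect density, defects per cm^2

# Poisson yield model: probability a die of the given area is defect-free.
def die_yield(area_cm2, d0=D0):
    return math.exp(-d0 * area_cm2)

# Good parts per cm^2 of silicon.
# Monolithic 2.0 cm^2 CPU+GPU die: one defect anywhere kills the whole part.
mono_per_cm2 = die_yield(2.0) / 2.0            # ~0.184

# MCM: a 1.0 cm^2 CPU die paired with a 1.0 cm^2 GPU die (2 cm^2 total),
# but defective dies are discarded individually before pairing, so each
# defect wastes only one small die instead of the whole big part.
mcm_per_cm2 = die_yield(1.0) / 2.0             # ~0.303

print(mono_per_cm2 < mcm_per_cm2)  # True
```

Under this toy model the MCM gets noticeably more sellable parts out of the same silicon, which is the yield advantage being described.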
HyperTransport = super easy inter-chip communication.
Originally posted by: Viditor
Originally posted by: jones377
To be honest I kinda wish that AMD would go with an MCM solution for the CPU-GPU, at least for the first generation of products. I doubt very much that there will be much difference in performance, even less than the difference between a true quad and two dual-cores glued together. And secondly, having two separate smaller dies would increase overall yields.
I really don't think that they can afford it...
For AMD to create an MCM, they'd have to radically alter their design. This is a place where the on-die MC and DCA make changes difficult...
I also think that latency is much more of an issue with AMD, so the performance hit would be far more significant than it is with Intel.
Originally posted by: Inspector Jihad
what is MCM?
Originally posted by: jones377
As opposed to being able to afford the development of numerous versions with different numbers of CPU cores along with a GPU core? All of these would require different masks for production. Masks are expensive, and get more expensive at each new node.
What latency impact would there be? The CPU and GPU don't need to be cache-coherent. All it would need is a dedicated bus between the CPU and GPU; the memory controller can stay on the CPU. Basically, the overall architecture wouldn't be much different from current integrated graphics solutions, since the GPU is allegedly targeted at the low end anyway. The latency between the GPU and system memory would actually be lower than with existing integrated graphics solutions for the K8 platform, and those are doing just fine as it is.
Then there is the issue of merging two different development teams that have different cultures and tools. Not an easy task.
I'm not saying it is impossible, but the risks become higher for virtually no gain.
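A back-of-the-envelope sketch of why the GPU-to-memory path improves either way (every number here is invented purely for illustration; these are not real measurements of any chipset):

```python
# Hypothetical hop latencies in nanoseconds (made-up figures):
HT_HOP = 40        # HyperTransport hop between northbridge and CPU
ON_DIE_LINK = 5    # dedicated on-package/on-die CPU<->GPU link
DRAM_ACCESS = 50   # memory controller + DRAM access

# Today's integrated graphics on K8: the GPU sits in the northbridge,
# so a texture fetch crosses HyperTransport to reach the CPU's
# on-die memory controller.
chipset_igp = HT_HOP + DRAM_ACCESS   # 90 ns

# Fusion-style part: the GPU shares the CPU's on-die memory controller,
# so only the short internal link is traversed.
fused_gpu = ON_DIE_LINK + DRAM_ACCESS  # 55 ns

print(chipset_igp, fused_gpu)  # 90 55
```

The point being debated is only how big that gap is, and whether adding cache coherency on top of it would eat into the win.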
Originally posted by: jones377
Originally posted by: Inspector Jihad
what is MCM?
Multi-Chip Module. Kentsfield (Core 2 Quad) is an MCM, for example, but there are others too. IBM uses MCMs extensively in their POWER products.
Originally posted by: Viditor
Originally posted by: jones377
As opposed to being able to afford the development of numerous versions with different numbers of CPU cores along with a GPU core? All of these would require different masks for production. Masks are expensive, and get more expensive at each new node.
A damned good point...it does make me wonder if AMD is looking at a different manufacturing technique. (speculation mode on) Do you think it's possible that AMD is developing a form of "modular masking"? I know that the Crossbar Switch functions much like an Ethernet switch does (I realise that this is an over-simplification), so would it be possible? (any chip architects out there??) Other than that, I must admit the thought hadn't occurred to me till you said it...
What latency impact would there be? The CPU and GPU don't need to be cache-coherent. All it would need is a dedicated bus between the CPU and GPU; the memory controller can stay on the CPU. Basically, the overall architecture wouldn't be much different from current integrated graphics solutions, since the GPU is allegedly targeted at the low end anyway. The latency between the GPU and system memory would actually be lower than with existing integrated graphics solutions for the K8 platform, and those are doing just fine as it is.
I believe that the goal is to rewrite the ISA so that the GPU does indeed require cache coherency...not my field at all, but with the CTM project and Torrenza, this appears to me to be the goal. If that's the case, then there would be a latency issue...
BTW, the GPU is also targeted for the high-end...
Here's something from that article I linked:
"To support CPU/GPU integration at either level of complexity (i.e. the modular core level or something deeper), AMD has already stated that they'll need to add a graphics-specific extension to the x86 ISA. Indeed, a future GPU-oriented ISA extension may form part of the reason for the company's recently announced "close to metal" (CTM) initiative. By exposing the low-level hardware of its ATI GPUs to coders, AMD can accomplish two goals. First, they can get the low-level ISA out there and in use, thereby creating a "legacy" code base for it and moving it further toward being a de facto standard. Second, they can get feedback from the industry on what coders want to see in a graphics-specific ISA"
Then there is the issue of merging two different development teams that have different cultures and tools. Not an easy task.
I'm not saying it is impossible, but the risks become higher for virtually no gain.
I personally think that Fusion will be a HUGE gain for AMD (is that what you were referring to?).
The power/performance numbers alone will be a very hard-to-resist draw for both mobile and server OEMs and ODMs.
Originally posted by: jones377
I should add that it has also been reported that Intel is planning to integrate a GPU into some of their Nehalem line of products. If Intel again uses the MCM approach, it's possible they'll beat AMD to market again. Fusion is planned for 2009, while Nehalem is coming out in 2H08. It's not entirely impossible that Nehalem+GPU comes out before Fusion, but it's too early to say anything for certain.