That's silly. My 8800GTS is around the same size as an MATX motherboard.
Originally posted by: SexyK
Such a motherboard would be way more expensive than current boards. Right now we have 2x 64-bit memory interfaces. If you go to a 2x 256-bit configuration you are quadrupling the number of traces on the board, which is going to mean more layers and more cost. Then, assuming you can even get DIMMs to function at clock speeds equivalent to GDDR4 speeds (which from what I've read seems very, very unlikely due to the longer traces used on motherboards as opposed to graphics cards, and the noise introduced by the slot interface), you're going to need to either 1) use 8 64-bit DIMMs in every system to saturate a 512-bit interface, or 2) create 256-bit DIMMs and use pairs, which would vastly increase the cost of memory modules. Considering the hurdles this type of implementation brings with it, the only realistic option is to integrate the memory into the motherboard itself, which leads to the lock-in problem whereby you can upgrade the core but not the memory. Either way, there's no way a Fusion-type system will offer the same performance as a discrete add-in card before 2010 at the absolute earliest - there are just too many hurdles to overcome right now.
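For reference, the bus-width arithmetic in the post above can be sketched in a few lines. This is a rough back-of-the-envelope model; the transfer rates are illustrative round numbers, not vendor specs:

```python
# Peak theoretical bandwidth: bytes/s = (bus width in bits / 8) * transfers/s.
# All figures below are illustrative assumptions, not measured or official specs.

def bandwidth_gbs(bus_bits: int, mega_transfers: float) -> float:
    """Peak theoretical bandwidth in GB/s for a given bus width and data rate."""
    return bus_bits / 8 * mega_transfers * 1e6 / 1e9

# Dual-channel desktop memory: 2 x 64-bit at DDR2-800-like 800 MT/s.
desktop = bandwidth_gbs(2 * 64, 800)    # 12.8 GB/s

# A 512-bit graphics-style interface at a GDDR4-like 2000 MT/s.
wide = bandwidth_gbs(512, 2000)         # 128.0 GB/s

# How many 64-bit DIMMs it takes to populate a 512-bit interface.
dimms_needed = 512 // 64                # 8

print(desktop, wide, dimms_needed)
```

The gap between the two configurations (roughly 10x here) is the core of the argument: closing it on a motherboard means either many more DIMM slots or much wider modules.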
Originally posted by: SexyK
Originally posted by: apoppin
isn't it all really speculation - at this point?
there is really very little hard info available ... i expect everything to be much clearer by the end of this year
Agreed apoppin - it is all speculation, but I think people predicting that Fusion-based GPUs will outperform discrete add-in boards, or even the demise of discrete add-in GPUs entirely, are off-base. Neither of those scenarios will be happening any time soon.
That's fair, but I think it's more likely that nVidia will either die or be absorbed by a CPU company than it is that AMD will go out of business within the next 5 years.
Originally posted by: SexyK
Agreed apoppin - it is all speculation, but I think people predicting that Fusion-based GPUs will outperform discrete add-in boards, or even the demise of discrete add-in GPUs entirely, are off-base. Neither of those scenarios will be happening any time soon.
It was, like I said, just an example.
As for your point about the UltraSPARC-T1's memory interface, need I point out that an entry-level system based on the T1 (with a single 1GHz CPU) costs $9,995? That hardly seems like technology that will be available in affordable desktop systems anytime soon. As I noted in my previous post, right now this type of implementation is cost-prohibitive, if not technically infeasible.
Originally posted by: SickBeast
That's silly. My 8800GTS is around the same size as an MATX motherboard.
Originally posted by: SexyK
<snip>
There is no way they will solder the memory to the motherboard. That's only done due to space constraints in devices like consoles. They do it on graphics cards because there is usually no need for the user to upgrade the memory (and it's not like they sell GDDR4 in stores).
I'm certain that the memory bus width is very 'low tech' at this point and will be one of the simplest issues to overcome. The only 'cost issue' that I foresee is having to buy GDDR4 for your entire rig, not just the graphics card. That should be offset by the fact that you're sharing the memory and not having an add-in board. IMO high-end systems will be $100 to $200 *cheaper* than current high-end setups because of this type of implementation.
You still have not refuted my comment re: the Xbox 360. That's an integrated GPU, and it was faster than any graphics card on the market....in 2006.
Cache and prefetch are examples of creative solutions that engineers come up with when they hit hard technological limitations (latency).
Originally posted by: SexyK
If you think a cache/prefetch system will alleviate the latency/bandwidth problems in a high-end graphics subsystem, then please check out the performance of nVidia TurboCache or ATI/AMD HyperMemory add-in cards. Again, we are talking about an integrated GPU, not a CPU.
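The TurboCache/HyperMemory point can be illustrated with a simple time-weighted bandwidth model: traffic that misses the small local pool has to cross the expansion bus, whose bandwidth is far below local VRAM. This is a sketch with assumed, illustrative bandwidth figures, not measurements of either product:

```python
# Rough model of shared-memory graphics: average bandwidth when a fraction
# of traffic hits local memory and the rest crosses the expansion bus.
# Bandwidth figures are illustrative assumptions, not measured values.

def effective_bw(local_fraction: float, local_bw: float, bus_bw: float) -> float:
    """Time-weighted average bandwidth (GB/s): total bytes divided by the
    time spent serving local traffic plus the time spent on the bus."""
    time_per_byte = local_fraction / local_bw + (1 - local_fraction) / bus_bw
    return 1 / time_per_byte

local_vram = 6.4   # GB/s, a narrow 64-bit local interface (assumed)
bus = 4.0          # GB/s, a first-generation PCIe x16-class link (assumed)

print(effective_bw(1.0, local_vram, bus))   # all traffic local: ~6.4
print(effective_bw(0.5, local_vram, bus))   # half remote: ~4.9
```

Even with a generous bus figure, the blended number sits below the already-narrow local interface, which matches the observed performance of these cards.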
Well, I was just reading that AMD is working on 4 versions of Fusion:
Originally posted by: SexyK
Originally posted by: SickBeast
<snip>
As for the Xbox 360, the GPU has access to ~25GB/s of memory bandwidth; however, the memory is not in DIMM form - it is attached to the motherboard, making the Xbox 360 graphics solution more like an add-in card than an integrated solution.
I also am not disagreeing that in 5 years we may have a Fusion-type product that can compete on the high end. However, many people around here seem to think that the first-gen Fusion products will make discrete add-in cards obsolete overnight, which will not be the case.
Originally posted by: Wreckage
http://dictionary.reference.com/browse/derailed
Originally posted by: SickBeast
Well, I was just reading that AMD is working on 4 versions of Fusion:
Originally posted by: SexyK
<snip>
- data centric
- graphics centric
- media centric
- general use
What I take from this is that the 'graphics centric' version will probably be a pretty good performer.
They're saying that AMD's main ambition for making Fusion was to create laptops with longer battery life and much better graphics performance, so you may well be correct to a degree.
I'm thinking Fusion will be better than most midrange graphics cards when it is released. That's not to say that it won't be possible to beat out the high-end; it's just that very few people require that level of graphics horsepower.
LoL!! You still have not refuted my comment re: the Xbox 360. That's an integrated GPU, and it was faster than any graphics card on the market... in 2006.
Originally posted by: josh6079
LoL!! You still have not refuted my comment re: the Xbox 360. That's an integrated GPU, and it was faster than any graphics card on the market... in 2006.
Sorry, but the 360's Xenos GPU was not faster than any card on the market in 2006. Hell, the G80 launched in 2006.
Even before the G80, the X1900 series and G7x series could play some of the same games at higher resolution, with higher IQ, and still get playable frame rates.
I still have yet to see a single Xbox 360 game that actually uses decent levels of AA and AF. (Granted, there could be one out now. I haven't played the 360 for a few months)
Part of the reason they probably don't use higher IQ in their games is the bandwidth constraints.
Wiki Link
Originally posted by: SexyK
That's interesting information on Fusion - do you have a link?
Yeah, TODAY there are better graphics cards than the Xbox 360's GPU. When it was released, there were not.
Originally posted by: Wreckage
<snip>
The 360 does not even come close to matching the image quality or resolution of a separate video card. Not to mention that all it has to do is run games customized just for the GPU and a minimal operating system.
I don't think a CGPU or an integrated GPU will ever replace a separate graphics card. Why else would the R600 be so huge if they were moving in the direction of making them smaller?
The only benefit of a combined CPU/GPU is cost.
You said 2006. The X1900 cards weren't out yet, and I'm not certain if even the X1800 cards were.
Originally posted by: SickBeast
Yeah, TODAY there are better graphics cards than the Xbox 360's GPU. When it was released, there were not.
The X1900 cards weren't out yet, and I'm not certain if even the X1800 cards were.
Originally posted by: apoppin
intel has their own solution
even the OP is OFF topic
:roll:
what's that about "derail" again, Wreckage?
Having a higher bit width would require a motherboard with more layers. That would add cost. For comparison, an 8800GTX has 12 PCB layers and a motherboard has 4-8, depending on how fancy and how compact the board is.
Originally posted by: SickBeast
I'm certain that the memory bus width is very 'low tech' at this point and will be one of the simplest issues to overcome.
Originally posted by: kobymu
My point is the argument "I don't see any CPU with a high-bandwidth memory subsystem NOW" is flawed because CPUs don't NEED it.
If CPUs NEEDED high bandwidth, you would have seen high-bandwidth memory subsystems.
Saying that it doesn't exist NOW is a moot point. It isn't needed NOW.
...
Originally posted by: Janooo
Originally posted by: kobymu
<snip>
CPUs NEED high bandwidth! It's just not realistic (meaning much more expensive) to get it. That's why there are tricks around it (cache, prefetch, ...). The ultimate ideal state would be the whole of RAM in the form of cache.
Some tasks don't need more than 1MB of memory. They run from cache and they are fast, but there are many tasks that need to go to main memory, and they would benefit from high bandwidth.
GPUs need high bandwidth because, by the nature of the task at hand, they need more than 1MB (2, 4, 8, ... whatever cache size would be possible) of memory.
CPUs are a little bit different. They execute different types of tasks, and many of them fit into cache; that's why they appear not to need high bandwidth. But if there were no cache, they would starve for bandwidth.
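The cache argument above amounts to the standard average-memory-access-time formula: a high hit rate makes a CPU look as if it barely needs bandwidth, until the working set stops fitting. A minimal sketch, with assumed illustrative latencies (in cycles, not measured from any real part):

```python
# Average memory access time (AMAT) given a cache hit rate.
# Latency figures are illustrative assumptions, not real CPU numbers.

def amat(hit_rate: float, cache_latency: float, memory_latency: float) -> float:
    """Expected per-access latency: hits pay the cache cost, misses pay DRAM."""
    return hit_rate * cache_latency + (1 - hit_rate) * memory_latency

cache_cyc, mem_cyc = 3, 200  # assumed L2 hit vs. main-memory latencies

print(amat(0.99, cache_cyc, mem_cyc))  # working set fits: ~5 cycles
print(amat(0.50, cache_cyc, mem_cyc))  # GPU-like streaming access: ~101.5 cycles
```

With a 99% hit rate the memory subsystem is almost invisible; at 50% (a streaming, GPU-like access pattern) the average access is dominated by DRAM, which is exactly the "starve without cache" scenario.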
AMD just announced lower earnings for its first financial quarter; it is now set to take in $1.225 billion.
It blames the revenue shortfall on the price war with Intel, although it won't directly say so. Since the quarter is already over, one would guess this number is pretty accurate.
In any case, to make up for it, it will lower capex by about $500M in 2007, but said this will not change any numbers for the current year. AMD is also going to stop hiring in all but critical positions, and slow discretionary spending.
Originally posted by: kobymu
Originally posted by: Janooo
<snip>
CPUs NEED high bandwidth! It's just not realistic (meaning much more expensive) to get it. That's why there are tricks around it (cache, prefetch, ...). The ultimate ideal state would be the whole of RAM in the form of cache.
Some tasks don't need more than 1MB of memory. They run from cache and they are fast, but there are many tasks that need to go to main memory, and they would benefit from high bandwidth.
GPUs need high bandwidth because, by the nature of the task at hand, they need more than 1MB (2, 4, 8, ... whatever cache size would be possible) of memory.
CPUs are a little bit different. They execute different types of tasks, and many of them fit into cache; that's why they appear not to need high bandwidth. But if there were no cache, they would starve for bandwidth.
From Anand's latest article:
http://www.anandtech.com/cpuchipsets/intel/showdoc.aspx?i=2963&p=6
All the Core 2 Duo varieties had the SAME bandwidth.
All the Athlon 64 X2 varieties had the SAME bandwidth.
And that is from the 3D RENDERING Performance page.
Even when you look at 2-core CPUs only, you see a delta of 50% if not more.
What does that tell you?
Have you ever programmed a real-world application? Do you have any idea what the hell you are talking about?
For every application you can find that is bottlenecked by bandwidth, I can find you 10 that are bottlenecked by other subsystems. 10!
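The bandwidth-versus-other-bottlenecks argument can be made concrete with a simple roofline-style calculation: a kernel is bandwidth-bound only when its arithmetic intensity (operations per byte moved) is low relative to the machine's compute/bandwidth ratio. The peak figures here are assumed, illustrative numbers, not any specific CPU's:

```python
# Roofline-style bound: attainable throughput is the lower of the compute
# roof and the bandwidth roof scaled by arithmetic intensity.
# Peak figures are illustrative assumptions, not real hardware specs.

def attainable_gflops(intensity: float, peak_gflops: float, peak_bw_gbs: float) -> float:
    """min(compute roof, memory roof): intensity is in FLOPs per byte."""
    return min(peak_gflops, peak_bw_gbs * intensity)

peak_flops, peak_bw = 20.0, 10.0  # assumed GFLOP/s and GB/s

print(attainable_gflops(0.5, peak_flops, peak_bw))  # streaming kernel: 5.0 (bandwidth-bound)
print(attainable_gflops(8.0, peak_flops, peak_bw))  # cache-friendly kernel: 20.0 (compute-bound)
```

Kernels that reuse data from cache sit well to the right of the ridge point and never see the memory wall, which is one way to read the "10 other bottlenecks for every bandwidth bottleneck" claim.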