Because we're getting a bit off topic in the GT200/4xxx threads (sorry about that, Virge) I thought it was time to start this thread.
In this thread we can discuss the advantages, pitfalls, opinions and facts, and whether you think multi-GPU is the way of the future.
I'll start with my thoughts on the issue:
The first major point made against single monolithic cores is the fact that die size is increasing despite process improvements. While this is certainly true, my response is that although the GT200 will be a big die @ 65 nm, it should be much more reasonable once it shrinks to 55 nm, and it'll likely be mid-range by the time it hits 45 nm. Also, we haven't even touched alternative manufacturing techniques like laser and organic.
I think the major problem at the moment is that GPU vendors are pursuing performance at the expense of everything else, much like Intel was during the P4 days, and it backfired on them once they started hitting thermal limits. Even throwing multiple cores at the problem couldn't dig them out of that hole because their single cores were already at the limit. It was only once they started designing chips with a focus on doing as much work per watt as possible (i.e. the Core architecture) that things got much more reasonable.
Moving on from this, slapping together multiple smaller cores may seem like a good idea from yield and thermal perspectives, but it's absolutely horrid from a performance and compatibility perspective. Adding multiple cores doesn't guarantee any kind of speedup over a single core. None whatsoever. You need to get the driver in there and start optimizing on a per-application basis to get scaling working and to work around the individual quirks of today's complex games.
Sure, an NV40 chip @ 65 nm would be tiny, and a card with eight such cores might be quite easy to build compared to a GT200, but will it be a performance match for a GT200? I doubt it, not unless you get reasonably perfect 8-way GPU scaling in most games, which of course isn't going to happen anytime soon. And this is ignoring the other issues associated with multi-GPU drivers such as input lag, micro-stutter, and general driver incompatibilities.
Also, single cores form the basis of multi-core, so if you're hitting limits on them your multi-GPU won't be viable either. Case in point: the R600 before the 55 nm shrink. A 3870 X2 simply wasn't possible with that core.
You still need more performance from single cores, so if you hit a wall there you can't really make multi-core faster either, unless you keep adding more and more cores and rely on the driver to provide n-way scaling. Given it's quite easy to see 2-way scaling fail, what chance does n-way have?
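To put some rough numbers on why n-way scaling falls off, here's a minimal sketch using a simple Amdahl-style model. The 10% "serial" fraction is purely an illustrative assumption (driver overhead, inter-GPU synchronization, duplicated work), not a measured figure:

```python
# Amdahl-style model of multi-GPU scaling.
# 'serial' is the assumed fraction of per-frame work that cannot be
# split across GPUs (driver overhead, sync, duplicated geometry work).

def speedup(gpus, serial):
    """Speedup over one GPU when a 'serial' fraction doesn't scale."""
    return 1.0 / (serial + (1.0 - serial) / gpus)

for n in (1, 2, 4, 8):
    print(f"{n} GPUs: {speedup(n, 0.10):.2f}x")
```

Even with only 10% of the work stuck outside the parallel path, eight GPUs yield well under a 5x speedup, so perfect 8-way scaling requires near-zero per-frame overhead.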
The final example given is the Voodoo 5. While that is indeed a shining example of multi-GPU "just working", it's also not really relevant to today's world. Firstly, the card used SFR (up to 128 scanlines per GPU), and while this is more compatible than AFR, it won't scale as well because, among other things, vertex performance isn't increased with this method. Both ATi and nVidia are currently pursuing AFR whenever possible.
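To illustrate the difference between the two modes, here's a toy sketch. The band size, frame height, and round-robin assignment are assumptions for illustration, not the Voodoo 5's or any driver's actual dispatch logic:

```python
# Toy illustration of how SFR and AFR divide rendering work.
# SFR: one frame is split into horizontal scanline bands,
#      assigned to GPUs round-robin.
# AFR: whole frames alternate between GPUs.

def sfr_assign(frame_height, band_height, num_gpus):
    """Map each scanline band's starting line to a GPU (round-robin)."""
    bands = range(0, frame_height, band_height)
    return {start: i % num_gpus for i, start in enumerate(bands)}

def afr_assign(frame_number, num_gpus):
    """Map a whole frame to a GPU."""
    return frame_number % num_gpus

# SFR: a 480-line frame in 128-line bands across 2 GPUs
print(sfr_assign(480, 128, 2))
# AFR: frames 0..3 across 2 GPUs
print([afr_assign(f, 2) for f in range(4)])
```

The key point: under SFR each GPU still has to process the frame's full geometry before discarding the scanlines it doesn't own, which is why vertex performance doesn't scale, whereas AFR hands each GPU a complete independent frame.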
The other point is that games, APIs and drivers are much more advanced now. We have multiple render targets, complex shaders and the like, while the Voodoo 5 didn't even have T&L, so it was basically just a simple rasterizer running very simple games. To get proper scaling today, a lot more work is required from the driver.
Edit: added third poll option.