A question posed by Soccerman in Dave's current thread led me to post this topic.
"Tilers", such as the PowerVR/Kyro boards are much better at handling what has always been considered the most important aspects of 3D graphics cards. With 3dfx poised to launch their Gigapixel based part in the not too distant future(I'm assuming by the end of 01) I feel that it is fairly safe to say that they will likely dominate in the traditional performance standard, mainly effective fillrate.
I don't think it will matter very much.
Since the launch of the Voodoo1, graphics boards have been dealing with the CPU as their main counterpart, be it as the limiting factor in some cases ("Crusher"-type situations) or as a possible boost in others (SIMD in general).
In terms of the graphics boards themselves, memory bandwidth has increasingly been a limiting factor for rasterizers, particularly for the current and more than likely the upcoming nVidia parts (and likely the RadeonII and Rampage, though not enough is known to be sure).
Now we are at the point where AMD and Intel are upping the ante on CPUs faster than any developer would likely have predicted eighteen months ago, and we are well into the territory of performance held by Crays and the like at the dawn of 3D PC gaming (circa 1996 with the Voodoo1). This, combined with the current generation of 3D accelerators offloading certain functions from the CPU, with more to follow in the upcoming generation, has rendered CPU speed pretty much a non-factor at the moment. I suspect that CPUs will continue to outpace gaming advancements, particularly with so many DX8 titles likely to target XBox-level hardware.
In terms of the graphics boards themselves, memory bandwidth is definitely rearing its head. Unlike CPUs, which are having tasks offloaded with increasing frequency, memory bandwidth requirements are going up extremely fast, particularly when compared to the relative power increases of rasterizers. The next generation of parts (Rampage, NV20, RadeonII) I assume will all be using at least some sort of primitive HSR, saving them some effective bandwidth. This, combined with MSAA's reduced memory needs, has us very close to hitting a wall that moves very, very slowly... monitors.
Right now, you can buy a board that can push Quake3, still one of the most fillrate-intensive games on the market, at 1600x1200 in 32-bit color at nearly 60 FPS. The next generation should both have more actual bandwidth and utilize the available bandwidth more efficiently.
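As a rough sanity check on those numbers (the overdraw figure and per-fragment traffic are my own assumptions, not measurements), here's the back-of-the-envelope math for brute-force rendering at that resolution:

[code]
#include <stdio.h>

/* Back-of-the-envelope framebuffer traffic for a brute-force rasterizer.
   The overdraw figure is an assumption for a Quake3-class scene. */
int main(void)
{
    double pixels   = 1600.0 * 1200.0;  /* visible pixels per frame         */
    double overdraw = 3.0;              /* assumed average depth complexity */
    double fps      = 60.0;
    /* Per fragment: 4-byte color write + 4-byte Z read + 4-byte Z write. */
    double bytes_per_fragment = 12.0;

    double gb_per_sec = pixels * overdraw * bytes_per_fragment * fps / 1e9;
    printf("Raw color+Z traffic: %.1f GB/s (before any texture reads)\n",
           gb_per_sec);
    return 0;
}
[/code]

That works out to roughly 4 GB/s, in the neighborhood of the total memory bandwidth of today's top boards before a single texel is fetched, which is exactly why HSR and tiling look so attractive.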
How far off are we from 16x12 with 4X FSAA in the then-current games? I'm sure the GP technology will give us that with plenty to spare, but what do we need more fillrate for?
The first answer to that is more advanced rendering techniques and increased texture passes. We have all heard about Doom3, and some rumblings have it using as many as ten texture passes at once. That will certainly require some serious fillrate, and bandwidth, but how much more than what we will have in current offerings? I think it is safe to assume that the level of "HSR" and similar techniques (eDRAM) will have progressed by that point, so how much of an edge will an effective 3 GTexel fill be over 2 GTexels? Even if we up that to 10 GTexels, what good will it do us with 1600x1200 being the limit for the foreseeable future?
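To put a rough number on it (the pass count is from the rumor, the overdraw and framerate are my own assumptions):

[code]
#include <stdio.h>

/* Rough texel fill needed for a heavily multi-passed game.
   Overdraw and framerate are illustrative assumptions. */
int main(void)
{
    double pixels   = 1600.0 * 1200.0;
    double overdraw = 3.0;   /* assumed depth complexity           */
    double passes   = 10.0;  /* the rumored Doom3-class pass count */
    double fps      = 60.0;

    double gtexels = pixels * overdraw * passes * fps / 1e9;
    printf("Required fill: %.1f GTexels/s\n", gtexels);
    return 0;
}
[/code]

Even that worst case lands around 3.5 GTexels/s, which is why I question what a 10 GTexel part buys us at a fixed 1600x1200.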
Increased FSAA samples? Of course this is definitely a possibility, but FSAA has very quickly diminishing returns once you pass 4x. Telling the difference between 4x and 9x is fairly easy (nothing like 2x and 4x, though), 9x to 16x gets a bit tougher, particularly at higher resolutions. From 16x to 32x, I am willing to bet you would need a trained eye, even when zooming in on a still, particularly if we are dealing with 1600x1200 resolution anyway.
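One crude way to see the diminishing returns, assuming a simple ordered-grid pattern (real implementations differ): along a near-horizontal edge, an NxN grid only yields N+1 distinct coverage levels, so each big jump in cost buys a smaller step-size improvement.

[code]
#include <stdio.h>

/* Coverage levels a near-horizontal edge can take on per pixel,
   assuming an ordered NxN sample grid (an illustrative simplification). */
int main(void)
{
    int grids[] = { 2, 3, 4, 6 };  /* 4x, 9x, 16x, 36x FSAA */
    for (int i = 0; i < 4; i++) {
        int n = grids[i];
        printf("%2dx FSAA: %2d coverage levels, step size %.3f\n",
               n * n, n + 1, 1.0 / n);
    }
    return 0;
}
[/code]

Going from 16x to 36x more than doubles the cost but only shrinks the step from 0.25 to 0.167, and at 1600x1200 each of those steps is already a tiny fraction of the screen.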
Is this going to change? Without a major technological breakthrough in monitors, it is extremely unlikely. We are going to hit the limits of monitors sooner than many think. Sure, you could go out and pick up a real high-end Sony that offers 20xx+ resolution, but that certainly isn't what many, if any, gamers are going to want to do to improve visual quality.
Look to CGI. Gaming has been following several years behind CGI for some time now, and in that area increasing resolution and texture passes isn't the norm, not at all. Look at the difference between some fixed-resolution DVDs as an example, even using two now-aging ones, Toy Story and A Bug's Life (you're d@mn straight Robo, what good would a thread be if I didn't bring up TS). Viewing them both on DVD at a set resolution, it is extremely clear which has the superior visuals, and neither of them uses ten-pass rendering or anything like it; they use procedural textures.
Procedural textures, where a mathematical function rather than a stored image defines the texture map, are one direction that should be looked at. Why go through ten-pass texturing when you can calculate all of your desired effects and handle everything in a single pass? Not only can this produce vastly superior visual results, it also saves *considerably* on bandwidth needs (which aren't likely to be too much of a concern by that point).
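For those who haven't played with them, here's the flavor of the idea; a toy sketch of my own, nothing like what Pixar actually uses: the "texture" below is pure math evaluated per pixel, with no stored image anywhere.

[code]
#include <math.h>
#include <stdio.h>

/* Toy procedural "wood ring" texture: intensity computed per (u,v)
   from a formula instead of fetched from a stored texture map. */
float wood(float u, float v)
{
    float r     = sqrtf(u * u + v * v);               /* distance from center */
    float grain = 0.05f * sinf(20.0f * atan2f(v, u)); /* perturb the rings    */
    return 0.5f + 0.5f * sinf(6.2831853f * 12.0f * (r + grain));
}

int main(void)
{
    /* Sample a few texels; in hardware this would run per fragment. */
    for (float u = 0.1f; u < 1.0f; u += 0.3f)
        printf("wood(%.1f, 0.5) = %.3f\n", u, wood(u, 0.5f));
    return 0;
}
[/code]

Infinite resolution, no mip chain, and a whole multi-pass blend collapses into one evaluation.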
Another area is a given: increase the d@mn poly counts. Not this little 15%-25% a year BS either. DX8/X-Box should give us some significantly improved game visuals due to target hardware vs. development costs; having a console to cover the dev costs should make developers *ignoring* the average eMachine user a bit more acceptable to the publishers. This is another area that needs to be improved upon significantly. Current T&L solutions have a lot more to offer than what we have today, but by the time we are dealing with GP and NV25 they will be far off the cutting edge. We need poly rates, and real-world poly rates, in the hundreds of millions of polys range, and sooner rather than later. This is one area that I am wondering about with the GP technology, but Mr. T and the guys I'm sure have this covered (and have stated they do, feel free to fill us in on any particulars Dave).
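For a sense of scale (the framerate and resolution are my assumptions): at those rates triangles shrink to around pixel size, which is roughly where piling on more polys stops paying off visually.

[code]
#include <stdio.h>

/* Where "hundreds of millions of polys" lands per frame; 60 fps and
   1600x1200 are illustrative assumptions. */
int main(void)
{
    double polys_per_sec = 150e6;
    double fps           = 60.0;
    double pixels        = 1600.0 * 1200.0;

    double polys_per_frame = polys_per_sec / fps;   /* 2.5M triangles */
    printf("Polys per frame: %.1fM\n", polys_per_frame / 1e6);
    printf("Pixels per poly: %.1f\n", pixels / polys_per_frame);
    return 0;
}
[/code]

Sub-pixel triangles at 1600x1200, in other words, so geometry is the one axis where we genuinely have headroom to burn.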
After that, we should be looking at real-time raytracing. A few months, even a few weeks ago, if anyone had asked me about this, I would have said (and did, in at least one thread) that it was several years off. Since then I have learned that at least one application is shipping *this month* with support for real-time raytracing in a 3D environment (I'll let you know how it goes as soon as I get my hands on it). This doesn't lead me to believe that it will be reasonable in a game within the next six months, but I am cutting back on the amount of time I think it will take to get this up and running in hardware.
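For anyone wondering why it's so expensive: the heart of a raytracer is solving an intersection like the one below for every ray, per pixel, per bounce. A minimal textbook ray-sphere test, nothing to do with that shipping application:

[code]
#include <math.h>
#include <stdio.h>

/* Minimal ray-sphere intersection: the kernel a real-time raytracer
   evaluates per ray, per object, per bounce. Assumes dir is normalized.
   Returns distance to the nearest hit, or -1 on a miss. */
float ray_sphere(const float o[3], const float dir[3],
                 const float c[3], float radius)
{
    float oc[3] = { o[0] - c[0], o[1] - c[1], o[2] - c[2] };
    float b  = oc[0] * dir[0] + oc[1] * dir[1] + oc[2] * dir[2];
    float cc = oc[0] * oc[0] + oc[1] * oc[1] + oc[2] * oc[2]
             - radius * radius;
    float disc = b * b - cc;
    if (disc < 0.0f) return -1.0f;  /* ray misses the sphere */
    float t = -b - sqrtf(disc);     /* nearest intersection  */
    return (t > 0.0f) ? t : -1.0f;
}

int main(void)
{
    float o[3] = { 0, 0, 0 }, d[3] = { 0, 0, 1 }, c[3] = { 0, 0, 5 };
    printf("hit at t = %.2f\n", ray_sphere(o, d, c, 1.0f));
    return 0;
}
[/code]

Multiply that by two million pixels, plus shadow and reflection rays, and it's clear why dedicated hardware is the only way it happens at game framerates.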
In summary, the GP will likely be the fillrate king by a wide margin, but will that be worth much by the time it is in production? nVidia has made it clear in many statements, and by certain hiring practices, where they are going. The Canucks seem to be following the market leader at this point (well, who knows what Matrox is doing), and I think they are more likely to move closer to the above direction than to stay in the fillrate-is-king mindset (the ArtX acquisition only reinforces this in my mind).
Next fall/winter should be very, very interesting: NV25, Gigapixel, the G800, and ATi's offering based on GameCube technology fighting it out for supremacy.
The only question in my mind is where the developers will go. I am absolute in terms of where I think graphics technology *should* go, and it isn't keeping with the same fillrate-is-king mentality that has ruled the 3D graphics card market for so long.
BTW- I'm sure I have dozens of typos in this post... No, I don't feel like fixing them either
"Tilers", such as the PowerVR/Kyro boards are much better at handling what has always been considered the most important aspects of 3D graphics cards. With 3dfx poised to launch their Gigapixel based part in the not too distant future(I'm assuming by the end of 01) I feel that it is fairly safe to say that they will likely dominate in the traditional performance standard, mainly effective fillrate.
I don't think it will matter very much.
Since the launch of the Voodoo1 graphics boards have been dealing with the CPU as their main counterpart, be it the limiting factor in some cases("Crusher" type situations) or a possible boost in others(SIMD in general).
In terms of the graphics boards themselves, memory bandwith has been increasingly a limiting factor for rasterizers, particularly the current and more then likely upcoming nVidia parts(and likely the RadeonII Rampage, though not enough is known to be sure).
Now we are at the point in time with AMD and Intel upping the ante faster on CPUs then any developer would have likely predicted eighteen months ago, and we are well into the territory of performance held by Crays and the like at the dawn of 3D PC gaming(circa 1996 with the Voodoo1). This, combined with the current generation of 3D accelerators offloading certain functions from the CPU, with more to follow in the upcoming generation, has rendered CPU speed pretty much a non factor currently. I suspect that CPUs will continue to outpace gaming advancements, particularly with so many DX8 titles likely to target XBox level hardware.
In terms of graphics boards themselves, memory bandwith is definitely rearing its' head. Unlike CPUs which are having tasks offloaded with increasing frequency, memory bandwith requirement are going up extremely fast, particularly when compared to the relative power increases of rasterizers. The next generation of parts(Rampage, NV20, RadeonII) I assume will all be using at least some sort of primitive HSR saving them some effective bandwith. This combined with MSAA's reduced memory needs has us very close to hitting a wall that moves very, very slowly.......monitors.
Right now, you can buy a board that can push Quake3, still one of the most fillrate intensive games on the market, 1600x1200 32bit color at nearly 60FPS. The next generation should have an edge in terms of both actual bandwith and more efficiently utilize the available bandwith.
How far off are we from 16x12 4X FSAA with the at the time current games? I'm sure the GP technology will give us that with plenty to spare, but what do we need more fillrate for?
The first answer to that is more advanced rendering techniques and increased texture passes. We have all heard about Doom3 and some rumbling have it using as many as ten texture passes at once, this certainly will require some serious fillrate, and bandwith, but how much more then what we will have in current offerings? I think it is safe to assume that the level of "HSR" and like techniques(eDRAM) will have progressed by that point, how much of an edge will having an effective 3GTexels fill be over 2GTexels? Even if we up that to 10GTexels, what good will it do us with 1600x1200 being the limit for the forseeable future?
Increased FSAA samples? Of course this is definately a possibility, but FSAA has very quickly diminishing returns when you pass 4x. Telling the difference between 4x and 9x is fairly easy(nothing like 2x and 4x though), 9x to 16x gets a bit tougher, particularly at higher resolutions. 16x to 32x and I am willing to bet you would need a trained eye, even when zooming in on a still, particularly if we are dealing with 1600x1200 resolution anyway.
Is this going to change? Without a major technilogical breakthrough in monitors it is extremely unlikely. We are going to hit the limits of monitors sooner then many think. Sure, you could go out and pick up a real high end Sony that offers 20xx+ resolution, but that certainly won't be what many, if any, gamers are going to want to do to improve visual quality.
Look to CGI. Gaming has been following several years behind CGI for some time now, and in that area increasing resolution and texture passes isn't the norm, not at all. Look at the difference between some fixed resolution DVDs for an example. Even using two now aging examples, Toy Story and A Bug's Life(your d@mn straight Robo, what good would a thread be if I didn't bring up TS
Procedural textures, a mathematical equation instead of a stored image for texture maps is one direction that should be looked at. Why go through ten pass texturing when you can calculate all of your desired effects and handle everything in a single pass? Not only can this produce vastly superior visual results, it also saves *considerably* on bandwith needs(which aren't likely to be too much of a concern by that point).
Another area is a given, increase the d@mn poly counts. Not this little 15%-25% a year BS either. DX8/X-Box should give us some significantly improved game visuals due to target hardware vs developments costs. Having a console to cover the dev costs should make developers *ignoring* the average eMachine user a bit more acceptable to the publishers. This is another area that needs to be improved upon significantly. Current T&L solutions have a lot more to offer then what we have today, but by the time we are dealing with GP and NV25 they will be far off the cutting edge. We need poly rates, and real world poly rates, in the hundreds of millions of polys range, and sooner rather then later. This is one area that I am wondering on with the GP technology, but Mr T
After that, we should be looking at real time RayTracing. A few months, even a few weeks ago if anyone asked me about this I would and did in at least one thread say this was several years off. Since then I have learned that at least on application is shipping *this month* with support for real time RayTracing in a 3D environment(I'll let you know how it goes as soon as I get my hands on it). This doesn't lead me to believe that it will be reasonable in a game within the next six months, but I am cutting back on the amount of time I think it will take to get this up and running in hardware.
In summary, the GP will likely be the fillrate king by a wide margin, but will that be worth much by the time it is in production? nVidia has made it clear in many statements, and by certain hiring practices, where they are going. The Cannucks seem to be following the market leader at this point(well, who knows what Matrox is doing), and I think they are more likely to move closer to the above direction then in the fillrate is king mindset(the ArtX acquistion only reinforces this in my mind).
Next fall winter should be very, very interesting. NV25, Gigapixel, G800 and ATi's offering based on GameCube technology fighting it out for supremacy.
The only question in my mind is where will developers go? I am absolute in terms of where I think graphics technology *should* go, and it isn't keeping with the same fillrate is king mentality that has ruled the 3D grphics card market for so long.
BTW- I'm sure I have dozens of typos in this post... No, I don't feel like fixing them either