This thread doesn't deal with issues pertaining to one company over another, so those only interested in such comparisons should look elsewhere.
A trend I have noticed building over the last several years is people's growing lack of enthusiasm for the progress being made in game engines in terms of the evolution of real time 3D. The recent release of FEAR has really driven this home, although it has been evident for quite a few years now. People are very displeased with the small visual gains we are seeing relative to the enormous amounts of performance it takes to drive them- and this gap is only going to get more drastic.
Back at the beginnings of real time 3D we had software-rasterized, super low resolution titles with point filtering and 8-bit color. Hardware acceleration came along and we were able to quadruple the pixel counts we were using, move to 16-bit color and add the wonders of bilinear filtering. Going from 320x240 with point filtering and 256 colors to 640x480 with bilinear filtering and 65,536 colors was a quantum leap in visual quality. We could actually make out what those texture maps were supposed to look like- and we were all thrilled. We went from a fairly nondescript collection of blocks to smoothly blended images.
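For anyone who never saw the transition firsthand, the difference between point filtering and bilinear filtering comes down to a few lines of math. Here's a minimal sketch in Python- the texture, function names and coordinates are all made up for illustration, not taken from any real API:

```python
# Minimal sketch: point (nearest-texel) vs. bilinear texture sampling on a
# tiny 2x2 grayscale "texture". Names and values are illustrative only.

def sample_point(tex, u, v):
    # Snap to the nearest texel -- the blocky look of early software renderers.
    h, w = len(tex), len(tex[0])
    x = min(int(u * w), w - 1)
    y = min(int(v * h), h - 1)
    return tex[y][x]

def sample_bilinear(tex, u, v):
    # Blend the four surrounding texels by their fractional distances.
    h, w = len(tex), len(tex[0])
    fx, fy = u * w - 0.5, v * h - 0.5
    x0 = max(0, min(int(fx), w - 2))
    y0 = max(0, min(int(fy), h - 2))
    tx, ty = fx - x0, fy - y0
    top = tex[y0][x0] * (1 - tx) + tex[y0][x0 + 1] * tx
    bot = tex[y0 + 1][x0] * (1 - tx) + tex[y0 + 1][x0 + 1] * tx
    return top * (1 - ty) + bot * ty

texture = [[0, 255],
           [255, 0]]
print(sample_point(texture, 0.7, 0.2))     # hard texel edge: 255
print(sample_bilinear(texture, 0.5, 0.5))  # smooth blend of all four: 127.5
```

The point sampler returns whatever texel you land on, which is why magnified textures looked like mosaics; the bilinear sampler does three linear blends per lookup, which is exactly the kind of repetitive arithmetic that dedicated hardware made free.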
No shift from that point forward is going to be as large- there is nothing that will arrive anytime in the future that will have that kind of impact on real time 3D.
Moving forward we saw continued evolution of the basic principles established by the first 3D hardware until we hit a tipping point and moved from rasterizers to GPUs. The move to GPUs allowed graphics functions outside of basic rasterization to benefit from the same greater-than-linear performance increases that come from adding dedicated hardware for features previously approximated in software. This move started us on our current evolutionary path.
Early 3D titles gave us low bit depth, low resolution, low quality textures placed on top of low poly objects with crude animation and little interactivity. The evolution of rasterizers has taken care of the first set of problems, while the overlapping evolution of GPUs is taking care of many elements of the second. We never saw a slap-in-the-face transition to GPUs as we did with hardware rasterizers, since the market was still split, but the impact is easily seen today in the enormously increased model complexity, animation and shader effects being applied to games.
For model complexity we started off seeing models of dozens or hundreds of polys at the most. Moving from that level to tens of thousands of polys is a staggering difference- going from there to millions of polys is a much smaller step in terms of end visuals, and moving to hundreds of millions is a smaller step still. Each of these steps demands an exponential increase in computational power- yet each one delivers smaller and smaller visual returns.
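A quick back-of-the-envelope calculation shows why the returns shrink: at a fixed screen resolution, the pixels available per triangle collapse as poly counts climb. The numbers below are illustrative assumptions, not measurements from any engine:

```python
# Back-of-the-envelope sketch of diminishing returns: at a fixed resolution,
# past a certain poly count the average triangle covers less than one pixel,
# so extra geometry simply cannot show up on screen.

PIXELS = 640 * 480  # a typical resolution of the era

for polys in (100, 10_000, 1_000_000, 100_000_000):
    px_per_tri = PIXELS / polys
    print(f"{polys:>11,} polys -> {px_per_tri:>9.3f} pixels per triangle")
```

At a hundred polys each triangle has thousands of pixels to fill- plenty of room for visible improvement. By a million polys you are under a pixel per triangle, and the jump to a hundred million buys essentially nothing the eye can resolve.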
Animations are the same. When you move from a crude 'bounding box' model bouncing off air and clipping through walls to some of the slick movements we are seeing out of characters today, the transition seems huge. But adding finer nuances such as muscular deformation and cloth simulation, while very cool to the geek set looking for them, is a much smaller transition. Going from that level up to per-vertex accurate animation on a multi-million-vertex model will yield a significantly smaller improvement in end visual impact as well.
Shaders are a major function of GPUs now, and they have taken center stage as the focal point of performance in real time 3D. This is a very good thing, as realistically shader hardware is still very weak compared to what we need to reach the second-level progress the other elements have already achieved. That said, the move from no shaders at all to what we are currently seeing in titles like DooM3, Quake4 and FEAR has certainly been an enormous one. Objects have been given a depth and luster that were never there before- we have some primitive lighting working its way into titles and are starting to see one of the last major hurdles cleared.
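The "depth and luster" largely comes down to per-pixel lighting. Here is a rough sketch of the classic diffuse-plus-specular math that pixel shaders of this generation evaluate at every pixel- all vectors and constants here are invented for illustration, and real engines do this on the GPU, not in Python:

```python
# Sketch of per-pixel lighting in the Lambert + Phong style: a diffuse term
# from the surface normal and light direction, plus a tight specular
# highlight. Inputs and the shininess constant are illustrative assumptions.

import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return [c / length for c in v]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def shade(normal, to_light, to_eye, shininess=32):
    n, l, e = normalize(normal), normalize(to_light), normalize(to_eye)
    diffuse = max(dot(n, l), 0.0)
    # Reflect the light direction about the normal for the specular term.
    r = [2 * dot(n, l) * n_i - l_i for n_i, l_i in zip(n, l)]
    specular = max(dot(r, e), 0.0) ** shininess
    return diffuse + specular  # combined brightness, clamped later for display

print(shade([0, 0, 1], [0, 0, 1], [0, 0, 1]))  # light and eye head-on: 2.0
```

Evaluating even this simple model at every pixel, every frame, for every light is why shader throughput is the bottleneck it is- and fancier materials only multiply the per-pixel work.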
So what do you do when you have 10K-100K poly models with high res textures, running at high resolutions with great animation and pixels shaded all over the screen? Where do you go from there?
For anything in the near future the answer is more of the same. The problem is that the power required to offer even a modest bump under the current circumstances grows exponentially relative to the end visual impact. That isn't to say certain engines won't prove they can stand out on the same hardware- Carmack, Sweeney and the other top tier developers have always excelled at figuring out which aspect to push the hardest to get the best return per cycle.
This is not to say other developers are writing sloppy code- in fact a lackluster-looking engine may have code worthy of worship behind it, but if the proper choices were not made in where to spend computational resources, the end product won't look all that good relative to how it performs. Getting this right will likely require a very close working relationship between the artists and the coders from very early in engine development (more so than is common currently).
There are two fairly large areas I see where a relatively considerable leap can be made that will really stand out- physics and lighting. With physics we know the answer is on the horizon- PPUs, or perhaps GPUs, will end up providing the power we need to handle significantly higher levels of interactivity than we are used to. This development should not be underestimated; it is the last major shift we are going to see in real time interactive 3D for some time. The transition will likely play out much as it did with GPUs, the full impact not readily apparent for a while as the haves and have-nots split developer attention until sizeable market penetration becomes a reality.
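To give a flavor of the kind of work a PPU would offload: game physics is mostly thousands of small, repetitive integration steps like the toy example below- a ball falling under gravity and bouncing off a floor at a fixed timestep. All constants here are illustrative, and real engines juggle thousands of interacting bodies, not one:

```python
# Toy sketch of a fixed-timestep physics loop: semi-implicit Euler
# integration of a bouncing ball. Constants are illustrative assumptions.

GRAVITY = -9.8   # m/s^2
DT      = 0.01   # 10 ms physics tick
BOUNCE  = 0.8    # fraction of speed kept after hitting the floor

y, vy = 5.0, 0.0            # start 5 m up, at rest
for step in range(1000):    # simulate 10 seconds
    vy += GRAVITY * DT      # accumulate gravity
    y  += vy * DT           # move the ball
    if y < 0.0:             # hit the floor: reflect and damp the velocity
        y  = 0.0
        vy = -vy * BOUNCE

print(f"after 10s of simulated time: y = {y:.3f}")
```

Multiply this by every debris chunk, ragdoll joint and cloth node in a scene, add collision detection between all of them, and it becomes obvious why a dedicated processor starts to make sense.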
The final issue, lighting, is already being approximated by shaders to some extent, but the real holy grail- and almost certainly the final major hurdle real time 3D will face- is radiosity. Radiosity, for those who don't know, is in simplified terms accurately modeled light. Not just 'I turn on a flashlight and the circle on the wall lights up', but the actual calculations worked out for the interaction of light as it bounces around a scene, the way it happens in the real world. This is an enormously complex problem, and one that is still a long way off (perhaps decades, hopefully less), but outside of physics it is really all we have left in terms of major pushes in real time 3D.
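For the curious, the classic radiosity formulation solves B_i = E_i + rho_i * sum_j F_ij * B_j for every surface patch i: its brightness B is its own emission E plus the reflected fraction rho of everything arriving from every other patch, weighted by form factors F. Below is a deliberately tiny sketch- three patches, made-up numbers- just to show the bounce-by-bounce iteration; real scenes need this over thousands of patches, which is exactly why it's so expensive:

```python
# Toy radiosity solve: B_i = E_i + rho_i * sum_j F_ij * B_j, iterated until
# the bounced light settles. Emission, reflectance and form-factor values
# below are made up purely for illustration.

E   = [1.0, 0.0, 0.0]          # patch 0 is a light source
rho = [0.0, 0.5, 0.5]          # fraction of incoming light each patch reflects
F = [                          # F[i][j]: coupling between patches i and j
    [0.0, 0.3, 0.3],
    [0.3, 0.0, 0.4],
    [0.3, 0.4, 0.0],
]

B = E[:]                       # start with direct emission only
for bounce in range(50):       # each pass propagates one more light bounce
    B = [E[i] + rho[i] * sum(F[i][j] * B[j] for j in range(3))
         for i in range(3)]

print(B)  # patches 1 and 2 end up lit entirely by bounced, indirect light
```

Notice that patches 1 and 2 emit nothing yet end up visibly lit- that indirect illumination is precisely what shader tricks approximate and true radiosity computes, and why it scales so brutally with scene complexity.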
I don't want any of this to be misleading- a slow, continued evolution of exactly what we have now is not a bad thing by any means. But it could very well turn out over the next few years that you will need to wait multiple generations of video cards before you see a major improvement in end results. Your high end Q1 '08 board may look nigh identical to a Q4 '10 board in a particular game- even though the '10 part lets you set the detail levels higher, the difference may not be worth much, or even noticeable, to the majority of people.
This is not some trend I am claiming to see by peering into the future- it is already happening, and the situation has been escalating for some time now.