Next gen 3d h/w approaches - but where's the s/w?

gday

Junior Member
Oct 17, 2001
Once again a new generation of 3D cards is approaching fast. We are months away from the R420 and NV40, which may deliver dazzling performance compared to our 9800 Pros and 5950 Ultras. But does anyone feel that software able to utilise their capabilities is any closer to appearing within the lifetime of this next generation of video cards than it was for their predecessors?

At least the industry now seems to have adopted scalable graphics processing in the most modern 3D game engines (e.g. Source, Krass, CryEngine). These engines fall back to simpler shaders if they judge that you don't have enough CPU/GPU power to run the most demanding ones. I imagine these engines will also make it easier for game developers to dial up the graphics load for any new DX9 or OGL card that is detected and assessed as having performance way beyond what we have today - maybe???

But does anyone else feel we are getting closer to the time when leading-edge cards' capabilities can actually be satisfactorily used within their lifetime?

Or, asked another way: if 2004 delivers us all of the NV40, NV45, R420 and R500 - a possibility - will any game engine scale far enough to actually push them? Are game engines and shader development teams good enough to deliver superb shaders that are held in reserve, simply waiting for detection of a card fast enough to run them? Or is developing such scalable shader effects simply viewed as too much wasted effort, even if the best 3D engines could elegantly detect the hardware and deliver the effects?

What are your thoughts?
 

Jeff7181

Lifer
Aug 21, 2002
I think software can always be created to bring hardware to its knees. And that's the order it should happen in... hardware should be slightly ahead of software at all times. Otherwise you end up with software that's a pain in the ass to use because the hardware can't run it as intended to create a positive experience.
 

jackschmittusa

Diamond Member
Apr 16, 2003
You can't write software with capabilities that are not supported by hardware. The hardware always advances first and offers new features. Then the software developers have to figure out the best way to exploit those features in their code. They may try things that have unexpected results, recommend changes to the hardware drivers, then start all over.

There is also the marketing angle. It's hard to market "the greatest graphics in the universe" if it only looks ordinary on 96% of the hardware out there. When hard drives were small, many games ran from CD. For years, 4x was the standard drive-speed optimization, because 4x drives were far more common than faster drives and no one wanted to bypass so large a market segment.

Video card product cycles are now so short that software can't possibly catch up. Start developing a game for today's hardware, and by the time it's done, the hardware has cycled up again.
 

BenSkywalker

Diamond Member
Oct 9, 1999
But does anyone feel that software able to utilise their capabilities is any closer to appearing within the lifetime of this next generation of video cards than it was for their predecessors?

It really depends on which features you want to see utilized and how you want to see them implemented. The GeForce3 (and all of the newer boards) still has at least one feature I can think of that hasn't been used yet (3D textures). For the upcoming parts, the big change over the current generation will be the expanded functionality of their shader units via PS/VS 3.0, but this is not a radical evolution from current standards by any means, nor will it significantly change what is possible versus today's high-end parts.

When it comes to making a game engine that scales to the point that even the highest-end boards can't handle it at launch, the issue is whether the developers got their desired effect in the end product. Take a look at titles like Doom3 and Half-Life2 and ask yourself what more they should be doing in terms of utilizing features to enhance visuals. With HL2 you can make the point, and it is a valid one, that it utilizes all of the latest features, and it would be hard to gauge what other features they could use since we don't have them in hardware yet; but Doom3 is mainly a DX7-level feature-set game, and it easily, and significantly, surpasses the visual standards of anything shipping right now.

Are game engines and shader development teams good enough to deliver superb shaders that are held in reserve, simply waiting for detection of a card fast enough to run them?

Likely all of them could do it; the question is whether it's worth the time and effort. The Doom3 engine, as an example, will be with us for years. It is capable of quite a bit more than what we will be seeing with Doom3, but what advantage would there be to exploiting that increased flexibility if no hardware around can utilize it for a game shipping anytime soon? If there were a particular effect that developers wanted to see implemented but the hardware didn't support it, it would actually be fairly simple to set the game up to enable it once a card ships that can handle it.

That said, new features are overwhelmingly too slow to be truly viable. The R9800 and FX5900 both have shader limits well beyond what it is reasonable for them to actually pull off in real time. Both can handle shaders that are thousands of instructions long; that level of functionality can get you close to CGI-quality output, although the performance would be nowhere near viable for real-time 3D.

The upcoming generation of parts may end up supporting combined VS/PS functionality, which could increase shader performance considerably if developers aren't pushing both ends hard, but they still won't be fast enough to handle in real time what current parts are already capable of. The level of programmability we already have in GPUs is quite impressive (although there is still a way to go, to be sure); what we don't have right now is a level of performance anywhere close to what's needed to actually push those limits.