Will Doom 3 spur a revival in the computer hardware industry?


bendixG15

Diamond Member
Mar 9, 2001
3,483
0
0
Best Laugh I had today..
Is this an April Fools thing ?????

In case you haven't noticed, gamers do not make the computer industry..

Tomorrow is April 2.............
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: KnightBreed
When DX9 was released, every DX8 video card didn't suddenly stop working.
Thanks for the news flash.
How is that any different from any previous DX release? It still doesn't change the fact that DX8 hardware will not be able to run DX9 features in hardware.

What's stopping PC developers from focusing heavily on a DX8-optimized title?
That's a good question; you should ask them, since I haven't seen a DX8-optimized title to date. My guess, however, is that dev houses know, based on sales figures, market research, surveys (like when your system specs are polled when you install a game), and online feedback (such as 3DMark ORB), that the overwhelming majority of GPUs and systems are incapable of running a game requiring a fully hardware-compliant DX8-based engine.

You seem to forget that DirectX revisions are backwards compatible. The userbase for a particular DX revision is cumulative with future revisions.
Where in my post did I give that impression? If anything, full forward (in software) and backward compatibility only compounds the problem, as again, it leads to unoptimized code and the need for programming on multiple platforms and hardware. We're on our 3rd DX version since DX7, yet the majority of games still run on DX7 engines. Any time a developer wants to implement new features in hardware, they have the daunting task of exponentially increasing their workload in order to assure compatibility with older parts. The quick and dirty solution would be to have hardware-level requirements to run a game, but considering the slim margins and user-base in PC gaming, that's simply not an option. What we end up with is a bunch of bastardized titles that run on old, unoptimized engines that offer the option of current hardware-level features. Longer product cycles and time between API versions will allow game devs and PCs to catch up to the current API and the hardware best suited for it. If you keep raising the bar without need, you'll never reach a common ground.

1) Longer times between API revisions will make the transition from one revision to another that much more difficult.
What makes you think that? The last major API improvement that we've seen in games is hardware T&L. We haven't seen anything as significant on the hardware level other than the use of shaders, which still haven't seen widespread use in games. Shader ops were introduced with DX8, but they won't even be the standard until DX9 parts and engines become mainstream and take advantage of fully programmable shaders using HLSL, Cg, etc. I expect a rather acute shift from DX7 compliant hardware straight to DX9 level hardware as the base API for future games, and it seems MS and IHVs are shooting for the same goal.
2) Longer product cycles will slow the advancement of new features, since the market will be flooded with products that have the lesser featureset.
Again, I'm not following you here. When IHVs talk about increasing product cycles, they are referring to major core revisions that enable a new API feature set. That does not preclude them from coming out with product refreshes or even significant core changes every 6 months based on the same hardware requirements for a given API. A die shrink, increased clockspeeds, faster memory/memory interface, additional rendering pipelines, changes to the logical units can all yield performance boosts w/out changing the underlying technology of a particular GPU family.

Hardware support and actual performance aren't synonymous either. Look at the R200 core, a DX 8.1 hardware compliant part vs. NV25(GF4), a DX 8 hardware compliant part. The GF4 requires multiple passes to render a single scene in Doom 3 b/c of PS 1.1, the R8500 can render in a single pass b/c of PS 1.4, yet the GF4 still outperforms the R200 significantly. This is a perfect example of why enabling future technology instead of optimizing current technology isn't always preferable in current or future titles. We're seeing the opposite situation with the lower-end FX cards (5600 and 5200); from all indications, they've sacrificed performance in today's games (DX7 w/ some DX8 features) for DX9 hardware compliance. Personally, I feel it's a necessary evil in order to push the hardware curve towards DX9, but I'd hate to be the fellow who buys a 5200FX expecting it to run Doom3 better than a GF4Ti simply b/c the FX is a DX9 part.

And again, I have to ask, what's the point of enabling hardware level support of new features when it'll be 1-2 years before they are implemented in games, at which point the product you bought might very well be inadequate to use said features?

If you want a stable, non-volatile platform, buy a console.
Stable and non-volatile have nothing to do with this, but I do have an XBox. Btw, you haven't played Splinter Cell "the way it's meant to be played" until you've played it on XBox.

Chiz

 

BoomAM

Diamond Member
Sep 25, 2001
4,546
0
0
Originally posted by: chizow
Btw, you haven't played Splinter Cell "the way it's meant to be played" until you've played it on XBox.

Chiz

I've played the first few levels on Xbox (a friend's Xbox, not mine, I have a GC), and completed it about 3 times myself on PC.
While both versions are very, very good games, I feel that the PC version is better.
The controls are far better: easier to aim, sneak up on people, and control speed/character.
The sound is about the same, as they are both surround sound games.
The graphics, although similar, are more crisp on the PC version, due to the fact that a monitor is clearer/sharper and you can run it at a higher resolution. Textures seem the same as well.
The only thing that is a pro for the Xbox is that the "koji cell" extra level (I think that's what it's called) is Xbox/PS2/GC only for the moment. It's only a matter of time till it's converted somehow to PC.

On a side note (not getting at you, chiz), it really pisses me off when you see Xbox games that say things like "the way it's meant to be played" or "designed for Xbox".
Most of those games were meant to be PC games first, but due to a bit of cash from MS, they go to the Xbox first, leaving me to argue with ignorant prats who think Halo and the like was designed from the ground up to be an Xbox game. Yeah right.
 

The Sauce

Diamond Member
Oct 31, 1999
4,741
34
91
I have always felt that consoles are inferior to the PC as a gaming platform. I own an XBox and a high end PC and I have never found a reason to purchase a game for the XBox that was also out on the PC. A recent example that has convinced me further of this is Rayman 3, which has gotten rave reviews in terms of graphics and gameplay on the console platform. PC Gamer just reviewed it and it got like a 50%. What this tells me is that the bar is just so much lower for the console platform that marginal games get higher reviews and kudos, whereas when compared to the typical PC fare they look weak. I have seen this ad nauseam. Including Splinter Cell, which I think plays and looks 10 times better on the PC (yes, I have it for both platforms; the XBox version was a gift).
 

Looney

Lifer
Jun 13, 2000
21,938
5
0
Btw, you haven't played Splinter Cell "the way it's meant to be played" until you've played it on XBox.

I definitely preferred playing SC on my PC more than the Xbox.
 

Mallow

Diamond Member
Jul 25, 2001
6,108
1
0
Originally posted by: moonshinemadness
It will either make people go out and buy new hardware...or sign up here and gripe about the fact their hardware won't run Doom 3 ;)

I think the latter is more likely ;)
 
KnightBreed

Jun 18, 2000
11,209
775
126
Originally posted by: chizow
Thanks for the news flash.
How is that any different from any previous DX release? It still doesn't change the fact that DX8 hardware will not be able to run DX9 features in hardware.
But every single DX9 video card will be able to run all DX8 features in hardware, so why hold back hardware development?
That's a good question; you should ask them, since I haven't seen a DX8-optimized title to date. My guess, however, is that dev houses know, based on sales figures, market research, surveys (like when your system specs are polled when you install a game), and online feedback (such as 3DMark ORB), that the overwhelming majority of GPUs and systems are incapable of running a game requiring a fully hardware-compliant DX8-based engine.
You can blame the consumer for not buying new video cards, or you can blame the developers for not making games that need new or faster video cards. That still doesn't explain why extending the lifecycle of an API revision will somehow advance feature adoption.
Where in my post did I give that impression? If anything, full forward (in software) and backward compatibility only compounds the problem, as again, it leads to unoptimized code and the need for programming on multiple platforms and hardware. We're on our 3rd DX version since DX7, yet the majority of games still run on DX7 engines. Any time a developer wants to implement new features in hardware, they have the daunting task of exponentially increasing their workload in order to assure compatibility with older parts. The quick and dirty solution would be to have hardware-level requirements to run a game, but considering the slim margins and user-base in PC gaming, that's simply not an option. What we end up with is a bunch of bastardized titles that run on old, unoptimized engines that offer the option of current hardware-level features. Longer product cycles and time between API versions will allow game devs and PCs to catch up to the current API and the hardware best suited for it. If you keep raising the bar without need, you'll never reach a common ground.
That's bollocks. Unoptimized code? Daunting task? Global parameters are set at initialization through the DirectX caps. Creating code paths for various features is by no means trivial, but calling it daunting is a bit of a stretch. We're talking about 20- or 30-instruction fragment programs, not entire libraries' worth of code.
What makes you think that? The last major API improvement that we've seen in games is hardware T&L. We haven't seen anything as significant on the hardware level other than the use of shaders, which still haven't seen widespread use in games. Shader ops were introduced with DX8, but they won't even be the standard until DX9 parts and engines become mainstream and take advantage of fully programmable shaders using HLSL, Cg, etc. I expect a rather acute shift from DX7 compliant hardware straight to DX9 level hardware as the base API for future games, and it seems MS and IHVs are shooting for the same goal.
DX8 has been on the market for over 2 years, with dozens of PC products and a successful console that all support DX8. What on earth would lead you to believe that most devs would skip right over DX8-level pixel and vertex shaders? Given the current lighting trend using stencil operations and cube maps, I see no reason why DX8 is any less useful now that DX9 is available.
Again, I'm not following you here. When IHVs talk about increasing product cycles, they are referring to major core revisions that enable a new API feature set. That does not preclude them from coming out with product refreshes or even significant core changes every 6 months based on the same hardware requirements for a given API. A die shrink, increased clockspeeds, faster memory/memory interface, additional rendering pipelines, changes to the logical units can all yield performance boosts w/out changing the underlying technology of a particular GPU family.
Yes, I know exactly what IHVs mean by "increasing product cycles," which is why I said "since the market will be flooded with products that have the lesser featureset." Two years from now we'll be in the same situation. Developers will be coding for DX8 (or 7 or what-have-you), and IHVs will start releasing their next generation products. Most devs will continue to develop for the older API until the new products reach significant-enough market saturation.
Hardware support and actual performance aren't synonymous either. Look at the R200 core, a DX 8.1 hardware compliant part vs. NV25(GF4), a DX 8 hardware compliant part. The GF4 requires multiple passes to render a single scene in Doom 3 b/c of PS 1.1, the R8500 can render in a single pass b/c of PS 1.4, yet the GF4 still outperforms the R200 significantly.
That's speculation based on some tidbits of information given by Carmack. You haven't the slightest clue what the final performance will be for the R200 vs GF4
This is a perfect example of why enabling future technology instead of optimizing current technology isn't always preferable in current or future titles. We're seeing the opposite situation with the lower-end FX cards (5600 and 5200); from all indications, they've sacrificed performance in today's games (DX7 w/ some DX8 features) for DX9 hardware compliance. Personally, I feel it's a necessary evil in order to push the hardware curve towards DX9, but I'd hate to be the fellow who buys a 5200FX expecting it to run Doom3 better than a GF4Ti simply b/c the FX is a DX9 part.
Two years from now, would you rather have a video card that runs slowly, or a video card that doesn't run at all? I have a Kyro2 and it ran Unreal Tournament and Quake3 unbelievably well. Now, I'm stuck with a video card that doesn't support cube maps, and subsequently won't run any of the DX7, 8, or 9 tests in 3DMark03 and won't run Doom3. I've also had trouble running No One Lives Forever 2, among other games. The average consumer would rather have the game run slowly than not at all.
And again, I have to ask, what's the point of enabling hardware level support of new features when it'll be 1-2 years before they are implemented in games, at which point the product you bought might very well be inadequate to use said features?
A developer will never release a game without hardware support first. It's unfortunate, but that's the way the industry works. Holding back DX9 technology will do nothing but hurt the adoption of the new features.
Stable and non-volatile have nothing to do with this, but I do have an XBox. Btw, you haven't played Splinter Cell "the way it's meant to be played" until you've played it on XBox.

Chiz
Splinter Cell was a PC game from the very start. "The way it's meant to be played" is on a high-resolution monitor at 60+ fps. You can blame Ubi-Soft for its crappy ATi support.


You're missing the big picture.

If ATi sells 10 million DX8 video cards and 5 million DX9 video cards:
DX8 userbase: 15 million
DX9 userbase: 5 million

If ATi holds back the Radeon 9700 and releases a faster DX8 card, they may sell 15 million DX8-class cards:
DX8 userbase: 15 million
DX9 userbase: 0
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: BoomAM
While both versions are very, very good games, I feel that the PC version is better.
Of course, everyone is entitled to their own opinions. :) Those who prefer PC games or don't own consoles will obviously prefer it on their PC.
The controls are far better: easier to aim, sneak up on people, and control speed/character.
I agree, it's easier to aim, but I found the game grotesquely easy on the PC b/c of this. Using the joypad takes getting used to if you're a long time PC gamer, particularly if you're an FPSer or if you approach SC like an FPS. I don't consider it an FPS (it's not, by definition at the very least) and found the gamepad controls to be much more streamlined, natural, and responsive. The XBox analog sticks offer something ridiculous like 360 degrees of sensitivity (directional and degree of movement). A mouse scroll-wheel can't touch that level of sensitivity in terms of movement. After a few days of playing, I grew quite accustomed to the thumbstick response; SC is a slower paced game based on stealth, so I found myself having plenty of time to pull off headshots when necessary or sneaking up on baddies. The rumble features and trigger button (squeezing a trigger and shooting is fun in and of itself :D) also add to the immersiveness of the game, another dimension where the PC falls short.
The sound is about the same, as they are both surround sound games.
I'd have to disagree with you there. I've yet to find a PC game that rivals the DD 5.1 sound of my XBox titles. There are PC games that have exceptional positional sound (assuming you have an Audigy or Audigy 2 that supports EAX3), but the quality of the sound samples and the clarity/response of the LFE and Center channels on the XBox on a proper DD 5.1 system is cinema quality. I run my Audigy 2 through 5.1 analog inputs on the same set-up, so there's no unfair advantage either.

The graphics, although similar, are more crisp on the PC version, due to the fact that a monitor is clearer/sharper and you can run it at a higher resolution. Textures seem the same as well.
Yes, the graphics are sharper when compared to a TV and there is a bit more detail on extraneous objects (magazines, cans, computer monitors etc.), but I had to turn up AA to 4X and AF to 8X quality to get a similar image quality. At that point, I was shocked to see my frame rates drop into the 20's with very noticeable stuttering when scrolling quickly. I also prefer the low resolution natural AA for action games; I found the increased resolution and sharpness on the PC made the game look "artificial". I really don't need to see every flick of interference when I put on my NV goggles. Also, being able to run it on a 48'' HDTV is more visually impressive than running it at 1280x1024 on a 19'' monitor. I also found the lighting effects (which cause a significant penalty on the PC) to be better on the XBox, free of charge.

On a side note (not getting at you, chiz), it really pisses me off when you see Xbox games that say things like "the way it's meant to be played" or "designed for Xbox".
Most of those games were meant to be PC games first, but due to a bit of cash from MS, they go to the Xbox first, leaving me to argue with ignorant prats who think Halo and the like was designed from the ground up to be an Xbox game. Yeah right.
Np, I make a clear distinction between games I would buy for the XBox and what I would buy for the PC; SC is just a game I feel works better on the XBox. "The way it's meant....." is actually an nV slogan, and I don't think SC says it's an XBox exclusive. Halo would've been great on the PC (probably better IMO) b/c it is an FPS, but honestly, the hardware requirements to run it on a similar level as the XBox would be extreme. It's probably a good thing it's been delayed on the PC, b/c if it came out when it did for the XBox, I'm not sure how well it would've done considering the hardware available at the time.

Chiz
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: KnightBreed
But every single DX9 video card will be able to run all DX8 features in hardware, so why hold back hardware development?
What's the point in pushing additional features and new hardware if:
1) The new features won't be implemented for another 2 years b/c the limiting factor is the remnants of the overambitious "cutting-edge" cards from the previous generation.
2) The new hardware doesn't benefit from or isn't any more efficient in running older features.

That's my biggest gripe with new DX-whatever compliant hardware. They claim to have a slew of new features, yet, when push comes to shove, they simply don't live up to their billing (when a game finally comes along that would fully utilize its capabilities).

That still doesn't explain why extending the lifecycle of an API revision will somehow advance feature adoption.
You explained it yourself. Extending the API and hardware lifecycles would simply unify early and late adopters and place them on a more level playing field. Game developers push for maximum compatibility and IHVs push for 6-month product cycles. You don't see a conflict here? When you have a new DX-whatever compliant part coming out every year, you further fragment the user-base.
That's bollocks. Unoptimized code? Daunting task? Global parameters are set at initialization through the DirectX caps. Creating code paths for various features is by no means trivial, but calling it daunting is a bit of a stretch. We're talking about 20- or 30-instruction fragment programs, not entire libraries' worth of code.
Have you bothered reading dev interviews? Instead of focusing on optimizing code for a particular feature set in hardware (a luxury console developers are afforded), they spend their time figuring out what features they can implement on the hardware level without destroying their target market. Any features that need to be done in software need to be addressed to ensure the effect is acceptable visually and on a performance level. Even 20 to 30 fragment programs is considerable (although I think it's more than that) when you have code paths for each GPU family from several IHVs. The end result is that game developers end up spending their time on workarounds and compatibility issues instead of pushing any new features.
DX8 has been on the market for over 2 years, with dozens of PC products and a successful console that all support DX8. What on earth would lead you to believe that most devs would skip right over DX8-level pixel and vertex shaders? Given the current lighting trend using stencil operations and cube maps, I see no reason why DX8 is any less useful now that DX9 is available.
Unlike previous DX releases, DX9 doesn't introduce completely new features; it introduces more efficient and robust improvements on existing features. The writing is on the wall: the new mainstream/value parts are DX9 compliant (9200/9600 and 5200/5600).
That's speculation based on some tidbits of information given by Carmack. You haven't the slightest clue what the final performance will be for the R200 vs GF4
Considering Carmack shifted D3 development to R200 and ARB2 as the reference renderer after scrapping GF2 support, I'd say his comments a year and a half later are a pretty good indication of how the two will perform. Again, advanced technology or features does not always equate to performance, as we'll soon see again in the GF FX 5200/5600.
Two years from now, would you rather have a video card that runs slowly, or a video card that doesn't run at all? I have a Kyro2 and it ran Unreal Tournament and Quake3 unbelievably well. Now, I'm stuck with a video card that doesn't support cube maps, and subsequently won't run any of the DX7, 8, or 9 tests in 3DMark03 and won't run Doom3. I've also had trouble running No One Lives Forever 2, among other games. The average consumer would rather have the game run slowly than not at all.
I'd never be in that situation, but then again, I wouldn't be griping about a tile-based renderer not being able to run games that require cube mapping. Was that Kyro2 ever DX compliant? The average consumer is buying ATi or nVidia, and until Doom3 hits the shelves, their oldest 3D accelerators will run most any game on the market, albeit slow and probably in software.

A developer will never release a game without hardware support first. It's unfortunate, but that's the way the industry works. Holding back DX9 technology will do nothing but hurt the adoption of the new features.
Of course, but I've never advocated holding back APIs or technology, I simply stated that expanding their life cycles was a good move. As I said in my original post, the IHVs' decision to extend development cycles and MS's decision to freeze DX10 until Longhorn are good things. Or would you prefer to see DX10 and compliant hardware in 6 months, before the first DX9 game hits the shelf?

You're missing the big picture.
If you don't mind, I'll borrow the illustration:

If ATi/nVidia release a new feature-level card every year for 3 years and no games take advantage of either of the top two APIs, you might see:

DX6 or lower: 100 million
DX7 userbase: 90 million
DX8 userbase: 15 million
DX9 userbase: 5 million
DX10 userbase: 2 million
DX11 userbase: Bill Gates

If ATi/nVidia/MS extend the life cycle of DX9 and develop higher-performing parts based on existing APIs over 3 years, and there are actually games that use at least DX8, you might see:

DX7 userbase: 60 million
DX8 userbase: 50 million
DX9 userbase: 55 million

Considering nVidia and ATi are both moving to mainstream DX9 parts, there's a good chance the user base would shift more abruptly to DX8 and DX9 compliant parts, but the end-result would be the same. You'd have a much tighter user base, and as a result, you'd see more advanced features implemented in a much more polished package.

Chiz
 
KnightBreed

Jun 18, 2000
11,209
775
126
What's the point in pushing additional features and new hardware if:
1) The new features won't be implemented for another 2 years b/c the limiting factor is the remnants of the overambitious "cutting-edge" cards from the previous generation.
2) The new hardware doesn't benefit from or isn't any more efficient in running older features.
1) You're forgetting one HUGE factor here. It still takes time for new products to saturate the market. If there are 100 million DX8 cards on the market, developers are going to continue making games optimized for DX8 until DX9 cards reach similar sales numbers. I call it the Playstation2 syndrome. The Playstation2 is technically inferior to the GameCube and Xbox, but developers continue to make games for it simply because of the huge userbase. The larger the original userbase, the longer it'll take to move to a new generation.
2) More bollocks. The Radeon 9700 has double the pixel pipelines and a higher core clock. It may not be more "efficient" clock for clock, but it's still dramatically faster than the 8500 simply because it has nearly triple the fillrate.
That's my biggest gripe with new DX-whatever compliant hardware. They claim to have a slew of new features, yet, when push comes to shove, they simply don't live up to their billing (when a game finally comes along that would fully utilize its capabilities).
But at least the game runs. That is priority #1 when testing for compatibility!! If the game doesn't run, who cares how fast it renders the scene?

You explained it yourself. Extending the API and hardware lifecycles would simply unify early and late adopters and place them on a more level playing field. Game developers push for maximum compatibility and IHVs push for 6-month product cycles. You don't see a conflict here? When you have a new DX-whatever compliant part coming out every year, you further fragment the user-base.
Fragment the user-base? Uh, you have DirectX 7, 8, and 9. That's it. Even with longer API cycles, developers will still have to deal with technology stragglers that refuse to spend $100 for a new video card.

Have you bothered reading dev interviews? Instead of focusing on optimizing code for a particular feature set in hardware (a luxury console developers are afforded), they spend their time figuring out what features they can implement on the hardware level without destroying their target market. Any features that need to be done in software need to be addressed to ensure the effect is acceptable visually and on a performance level. Even 20 to 30 fragment programs is considerable (although I think it's more than that) when you have code paths for each GPU family from several IHVs. The end result is that game developers end up spending their time on workarounds and compatibility issues instead of pushing any new features.
1) The workarounds and compatibility issues usually stem from driver bugs and drivers that stray from the original DX spec. This is going to be an issue as long as you have several companies making competitive products. This isn't the issue it was in the mid/late 90's simply because nVidia and ATi dominate the market.
2) I think you misread what I wrote; I said 20- to 30-instruction fragment programs. A game may only have a dozen (maybe more or less) different fragment programs that need to be converted to the various formats. Realistically, it'll be DX7, 8, and 8.1 for most current games.

Unlike previous DX releases, DX9 doesn't introduce completely new features; it introduces more efficient and robust improvements on existing features. The writing is on the wall: the new mainstream/value parts are DX9 compliant (9200/9600 and 5200/5600).
1) The jump from PS1.4 -> 2.0 is larger than DX7 -> PS1.0. The difference in implementation and flexibility is massive - this will be especially apparent when Microsoft unveils PS/VS 3.0 (yep, another standard).
2) The Radeon 9200 is just a 9000 with AGP 8x compatibility. It's still a DX8 part.

Considering Carmack shifted D3 development to R200 and ARB2 as the reference renderer after scrapping GF2 support, I'd say his comments a year and a half later are a pretty good indication of how the two will perform. Again, advanced technology or features does not always equate to performance, as we'll soon see again in the GF FX 5200/5600.
Uh, where did you hear that rubbish? Doom3 will support 6 different rendering modes: NV10, NV20, NV30, R200, ARB, ARB2.

I'd never be in that situation, but then again, I wouldn't be griping about a tile-based renderer not being able to run games that require cube mapping. Was that Kyro2 ever DX compliant? The average consumer is buying ATi or nVidia, and until Doom3 hits the shelves, their oldest 3D accelerators will run most any game on the market, albeit slow and probably in software.
The only DX7 features it didn't support are cube maps and hardware T&L.

Of course, but I've never advocated holding back APIs or technology, I simply stated that expanding their life cycles was a good move. As I said in my original post, the IHVs' decision to extend development cycles and MS's decision to freeze DX10 until Longhorn are good things. Or would you prefer to see DX10 and compliant hardware in 6 months, before the first DX9 game hits the shelf?
Truth be told, I don't know what we're arguing about.

DX8 was released late 2000.
DX8.1 was released late 2001.
DX9 was released late 2002.
DX9.1 will be released late 2003 (Microsoft may not increment the version number with the release of PS/VS 3.0)
DX10 will be released with Longhorn sometime in 2005

Is the product cycle really that different?

If you don't mind, I'll borrow the illustration:

If ATi/nVidia release a new feature-level card every year for 3 years and no games take advantage of either of the top two APIs, you might see:

DX6 or lower: 100 million
DX7 userbase: 90 million
DX8 userbase: 15 million
DX9 userbase: 5 million
DX10 userbase: 2 million
DX11 userbase: Bill Gates

If ATi/nVidia/MS extend the life cycle of DX9 and develop higher-performing parts based on existing APIs over 3 years, and there are actually games that use at least DX8, you might see:

DX7 userbase: 60 million
DX8 userbase: 50 million
DX9 userbase: 55 million

Considering nVidia and ATi are both moving to mainstream DX9 parts, there's a good chance the user base would shift more abruptly to DX8 and DX9 compliant parts, but the end-result would be the same. You'd have a much tighter user base.
I'm sorry, but those numbers are completely unrealistic. Your numbers assume that when a new API is released, all IHVs will completely drop their current generation cards and release their next-gen cards. nVidia still sells TNT2 (DX6) and Geforce2 (DX7) cards simply because they are cheap to manufacture and ship in high volumes. But, I'm sure you know how to manufacture a DX9-based card as cheaply as a TNT2, right?
The FX5200 is a perfect example of what happens when you attempt to sell a high-tech part for a cheap price. Its performance is about on par with the Geforce3 Ti500. A card that's over 1.5 years old!
 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
Absolutely NOT. I believe Doom III will be an unreal disappointment because of expectations. :(

It won't be much more demanding than Unreal II . . . or ID is committing financial suicide.
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: KnightBreed
1) You're forgetting one HUGE factor here. It still takes time for new products to saturate the market. If there are 100 million DX8 cards on the market, developers are going to continue making games optimized for DX8 until DX9 cards reach similar sales numbers.
No, I haven't, which is why I've repeatedly emphasized the decision to stay with DX9 and the upcoming release of DX9 budget parts from nVidia (NV31) and to a lesser degree, ATi (RV350). The point is, there AREN'T 100 million DX8 cards on the market, because the budget cards have been dominated by lackluster DX7 parts like the GF4 MX series, or worse. The end result is a whole lot of nice looking features on paper and specifications wasted on DX7 games.

2) More bollocks. The Radeon 9700 has double the pixel pipelines and a higher core clock. It may not be more "efficient" clock for clock, but it's still dramatically faster than the 8500 simply because it has nearly triple the fillrate.
Of course, which is why DX9 compatibility is a non-factor IMO for the vast majority of GPUs that claim DX9 compatibility. Take away PS/VS 2.0 from the 9700pro and you still have a part that performs considerably better than its predecessor, simply b/c the feature is unused! Like I said earlier, stabilizing at DX9.0 doesn't mean products will stagnate, there are other ways to improve performance, with brute force/pixel pushing being the most obvious.
That's my biggest gripe with new DX-whatever compliant hardware. They claim to have a slew of new features, yet, when push comes to shove, they simply don't live up to their billing (when a game finally comes along that would fully utilize its capabilities).
But at least the game runs. That is priority #1 when testing for compatibility!! If the game doesn't run, who cares how fast it renders the scene?
:confused: I wasn't referring to compatibility, I was referring to the industry's fascination with supporting the next progression in features while struggling to remain competitive with its predecessor. Look at the 5600/5200 vs Ti4200/GF4 MX, or the R9000/R9600 vs R8500/R9500. Great, it supports the latest DX features, yet it runs worse than its predecessor in existing games and future games will run like a pig regardless of which card you use.

Fragment the user-base? Uh, you have DirectX 7, 8, and 9. That's it. Even with longer API cycles, developers will still have to deal with technology stragglers that refuse to spend $100 for a new video card.
That's where the other half of the equation, the IHVs get involved. Instead of focusing on the next API and throwing their resources in that direction, they can concentrate on bringing the current API into the mainstream by improving existing parts and making lower end versions of those parts available to the mass market at an affordable price point. I'd say $100 is still more than most PC users out there would spend on a video card, but a sub-$100 DX9 part is on the way in the 5200. It'll still take time, but it sure beats the hell out of the previous trickle-down effect of having to buy the top end card and then waiting 2 years for it to become obsolete before a game is released that fully uses the API it was designed around.

The workarounds and compatibility issues usually stem from driver bugs and drivers that stray from the original DX spec. This is going to be an issue as long as you have several companies making competitive products. This isn't the issue it was in the mid/late 90's simply because nVidia and ATi dominate the market.
It's still a significant issue. If you ever bothered to open up a Read.me and checked out a list of supported cards, you'd find much more in there than nVidia and ATi cards. Vendor specific code paths are a fact of life in PC game development; the problem is magnified when each vendor requires even more specific "fixes" b/c it has 5 different GPU families that require their own code paths b/c someone figured planning 2 DX revisions ahead was a good idea.


The jump from PS1.4 -> 2.0 is larger than DX7 -> PS1.0. The difference in implementation and flexibility is massive - this will be especially apparent when Microsoft unveils PS/VS 3.0 (yep, another standard).
Sweet, so we should see a game that implements PS/VS 3.0 in what? 2007? In reality, the difference in implementation and flexibility is zero, because by the time we see PS/VS 3.0 it'll be 2005 (DX10) and we'll be having this discussion again with R600/V60 vs NV30/R300 instead of R200 vs NV10.

Truth be told, I don't know what we're arguing about.
I don't either.

DX8 was released late 2000.
DX8.1 was released late 2001.
DX9 was released late 2002.
DX9.1 will be released late 2003 (Microsoft may not increment the version number with the release of PS/VS 3.0)
DX10 will be released with Longhorn sometime in 2005

Is the product cycle really that different?
It is in my opinion. It's 2x as long between API switches, which gets you away from the typical API-enabling hardware overhaul and 6-month refresh that nVidia made so famous. What does that mean? Instead of a new GPU family every year, you have one every 2 years, with 3 refreshes and 1 hardware-enabling pioneer. Each refresh would build on the previous version's performance at the same rate any new hardware-enabling core revision would for the API it was designed for. By the time the last refresh rolls out, a game for that API would be on the market, and all 4 revisions would benefit from a more polished game.

I'm sorry, but those numbers are completely unrealistic. Your numbers assume that when a new API is released, all IHVs will completely drop their current generation cards and release their next-gen cards.
Er, that's what they do. Seen any GF3 Ti500's lately? How about GF2 Ultra's? Think the 9500pro (a full-blown R300 core) will be around once the 9600pro is released? Ti4600s are already going out of circulation with the impending launch of the FX 5600 line. Once they're gone, they'll be gone forever from retail channels.
nVidia still sells TNT2 (DX6) and Geforce2 (DX7) cards simply because they are cheap to manufacture and ship in high volumes. But, I'm sure you know how to manufacture a DX9-based card as cheaply as a TNT2, right?
Yep, that's the name of the game: package low-end minimum spec cards in OEM boxes, which comprise the overwhelming majority of PCs. The low-end chips are cheap to produce and sold in mass quantities perpetually regardless of the latest API. The rest of your higher performing parts fill in as a trickle-down effect with lower prices enabling accessibility (and the resulting steep curve in my illustration). You'll never find a former flagship sell for budget card prices or shipped for next to nothing in a baseline OEM box, that's not how it works. You'll get the latest and greatest budget GPU, or you'll pay a premium to upgrade to a current part. If your OEM doesn't offer the choice, you're going into the underwhelming AIB market. As for selling a DX9-based card on the cheap, the 5200 might not be as cheap as a TNT2 to produce now, but more cores per wafer and improved yields should drive costs down over time.
The FX5200 is a perfect example of what happens when you attempt to sell a high-tech part for a cheap price. Its performance is about on par with the Geforce3 Ti500. A card that's over 1.5 years old!
So, you're complaining about technology stragglers and underperforming budget parts, yet a low-cost DX9 part that runs comparably to a former flagship GPU (that still runs most current games extremely well) is a bad thing? Some people are impossible to please! BRING ON DX15 and R1000!!!!


Chiz

 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
Snatchface-

A recent example that has convinced me further of this is Rayman 3, which has gotten rave reviews in terms of graphics and gameplay on the console platform. PC Gamer just reviewed it and it got like a 50%. What this tells me is that the bar is just so much lower for the console platform that marginal games get higher reviews and kudos whereas when compared to the typical PC fare they look weak.

JediKnightII got glowing reviews from the PC press and mediocre ones from the console reviews. Obviously as a game that has a lot of FPS elements in it the PC has an advantage. Rayman3 is a platformer. Platformers suck on PCs. What kind of reviews do you think CivIII would get on consoles? Same type of situation looking at Rayman3.
 
KnightBreed

Jun 18, 2000
11,209
775
126
This discussion is getting difficult to follow. Forgive me if I don't reply to all the points.:)
Originally posted by: chizow
No, I haven't, which is why I've repeatedly emphasized the decision to stay with DX9 and the upcoming release of DX9 budget parts from nVidia (NV31) and to a lesser degree, ATi (RV350). The point is, there AREN'T 100 million DX8 cards on the market, because the budget cards have been dominated by lackluster DX7 parts like the GF4 MX series, or worse. The end result is a whole lot of nice looking features on paper and specifications wasted on DX7 games.
But how can you say the paper specs are wasted, though? Even if the card gets replaced before its features are utilized, those paper specs are important to introduce the technology to the market.

Of course, which is why DX9 compatibility is a non-factor IMO for the vast majority of GPUs that claim DX9 compatibility. Take away PS/VS 2.0 from the 9700pro and you still have a part that performs considerably better than its predecessor, simply b/c the feature is unused! Like I said earlier, stabilizing at DX9.0 doesn't mean products will stagnate, there are other ways to improve performance, with brute force/pixel pushing being the most obvious.
That is so shortsighted. Take away the DX9 capability? Why would you do that? ATi released a product that is leaps and bounds faster than all DX8 products on the market and it has DX9 compliance to boot. What are you complaining about? Segmenting the market is an extremely weak argument simply because people who purchase the Radeon 9700 subsequently increase both the DX8 and DX9 userbase - which is important when a developer gets market information about product install-base. As long as the Radeon drivers adhere to DX WHQL standards, coding for the different chipsets should be a non-issue. Granted, that may be an ideal situation, but that's how it should be.

:confused: I wasn't referring to compatibility, I was referring to the industry's fascination with supporting the next progression in features while struggling to remain competitive with its predecessor. Look at the 5600/5200 vs Ti4200/GF4 MX, or the R9000/R9600 vs R8500/R9500. Great, it supports the latest DX features, yet it runs worse than its predecessor in existing games and future games will run like a pig regardless of which card you use.
Tis the name of the game. Push your flagship out as soon as possible to get attention for your company. It's similar to the old "Win on Sunday, sell on Monday" mantra in the auto industry. Having a top flagship product will entice people to purchase your lower end products.

It's still a significant issue. If you ever bothered to open up a Read.me and checked out a list of supported cards, you'd find much more in there than nVidia and ATi cards. Vendor specific code paths are a fact of life in PC game development; the problem is magnified when each vendor requires even more specific "fixes" b/c it has 5 different GPU families that require their own code paths b/c someone figured planning 2 DX revisions ahead was a good idea.
Two DX revisions ahead? :confused: DX7 was released alongside the Geforce2 SDR. DX8 was released alongside the Geforce3. DX9 was released alongside (more or less) the Radeon 9700.

Sweet, so we should see a game that implements PS/VS 3.0 in what? 2007? In reality, the difference in implementation and flexibility is zero, because by the time we see PS/VS 3.0 it'll be 2005 (DX10) and we'll be having this discussion again with R600/V60 vs NV30/R300 instead of R200 vs NV10.
I agree, it'll be years before we see games utilizing PS/VS 3.0 to any appreciable degree. What do you propose? The hardware has to be available first, and the economics of designing a DX9 part prevent ATi and nVidia from releasing speedy chips at good prices.

Is the product cycle really that different?
It is in my opinion. It's 2x as long between API switches, which gets you away from the typical API-enabling hardware overhaul and 6-month refresh that nVidia made so famous. What does that mean? Instead of a new GPU family every year, you have one every 2 years, with 3 refreshes and 1 hardware-enabling pioneer. Each refresh would build on the previous version's performance at the same rate any new hardware-enabling core revision would for the API it was designed for. By the time the last refresh rolls out, a game for that API would be on the market, and all 4 revisions would benefit from a more polished game.
2x as long? DX8 lasted 2 years with a single evolution after the first year. DX9 should last 3 years with a single evolution after the first year.

Er, that's what they do. Seen any GF3 Ti500's lately? How about GF2 Ultra's? Think the 9500pro will be around once the 9600pro is released?
The Ti500 was available long after the Geforce4 cards were released and the GF2 Ultra's were available long after the Geforce3 was released. Also, I can purchase a Radeon 9100 which is based off the 1.5 year old Radeon 8500. Unfortunately, manufacturers can't just drop their current product line. It isn't that simple, as it can potentially take months or years to clear the distribution channels of old product.

So, you're complaining about technology stragglers and underperforming budget parts, yet a low-cost DX9 part that runs comparably to a former flagship GPU (that still runs most current games extremely well) is a bad thing? Some people are impossible to please! BRING ON DX15 and R1000!!!!
I've said before I'd prefer to buy a card that supports future technology, even if it runs it slowly. The FX 5200, IMO, is the exception to that rule. It's a new card that runs current technology games slower than an old card released almost 2 years ago. Even with DX9 features, that is simply unacceptable.
 
KnightBreed

Jun 18, 2000
11,209
775
126
One last thing. All this talk about longer product cycles is rather confusing. The Geforce2 core was on the market for 1.5 years before the Geforce3 was released. And here we are, over 2 years after the introduction of the Geforce3, and the GeforceFX is finally hitting the market.

Long product cycles are not a new concept... not even to ATi. ATi milked the Rage Pro and Fury chips for some time. The Radeon 1 and 2 had short-lived product cycles simply because they were getting beat by nVidia, and ATi had no choice but to strike back with new products.
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
This discussion is getting difficult to follow. Forgive me if I don't reply to all the points.
I gave up trying to reply to everything a while ago. :D

Originally posted by: KnightBreed
But how can you say the paper specs are wasted, though? Even if the card gets replaced before its features are utilized, those paper specs are important to introduce the technology to the market.
Once again, it's wasted on paper b/c technology is useless unless it's implemented. It's a vicious cycle (much like this argument :p ;)), you introduce new technology before the hardware curve, and you find yourself twiddling your thumbs, so you decide to push more technology out the door before there's any real need. Instead of focusing efforts on a particular feature set, game developers find themselves constantly chasing a moving target. I'm just going to start spamming developer interviews, as the single point driven home in every single one I've read is that devs don't want constant DX and hardware changes; they want IHVs to focus on improving hardware around current features and API libraries to maximize the features of existing hardware.

That is so shortsighted. Take away the DX9 capability? Why would you do that? ATi released a product that is leaps and bounds faster than all DX8 products on the market and it has DX9 compliance to boot. What are you complaining about? Segmenting the market is an extremely weak argument simply because people who purchase the Radeon 9700 subsequently increase both the DX8 and DX9 userbase - which is important when a developer gets market information about product install-base. As long as the Radeon drivers adhere to DX WHQL standards, coding for the different chipsets should be a non-issue. Granted, that may be an ideal situation, but that's how it should be.
I was speaking figuratively, but the point I was trying to make is that the R9700pro would've been an outstanding part regardless of whether or not it supported PS/VS 2.0 and DX9. Why do people upgrade their video cards? Because they want better performance TODAY. It's short-sighted IMO to upgrade based on features that may or may not be implemented a year down the road with the expectation the part will perform to even your current expectations once a new feature set is implemented.

As for market segmentation being a weak argument:
- The US population based on the 2000 Census is 281 million and change.
- PC sales in the US are expected to hit 220 million in 2003.
- ATi shipped a whopping 1 million R300 GPUs GLOBALLY. Btw, they didn't distinguish if they were 9700's; as we all know, the much cheaper 9500's are still R300.
- nVidia, still the market leader, has yet to introduce their DX9 parts in quantity, but is expected to ship 1.5 million FX units in March.
- You can count on one hand the number of PC titles released annually that sell 1 million copies in their lifetime.
The reality is that the installed DX9 userbase isn't even a blip on game devs' radar, or even DX8 for that matter. All they see is the lowest common denominator, and until that denominator changes, that's what you're stuck with. Pushing future technology without bringing it to the masses only makes the problem worse, as there will never be a compelling reason to implement new features until an overwhelming majority of the target base meets the minimum specifications. What you end up with is a segmented user base and games that lag behind by a few feature sets, which is the situation we're in now.

What do you propose? The hardware has to be available first, and the economics of designing a DX9 part prevent ATi and nVidia from releasing speedy chips at good prices.
I don't need to propose anything (not like it would matter anyways :)), as it looks like the hardware industry and MS are finally listening to what game developers have been saying for some time. There's no need to keep pushing technology for technology's sake (actually, that's a quote from AMD's Ruiz); focus on maximizing the potential of current technology. Once that happens, we'll see more quality titles that can be universally accepted by a larger portion of the PC gaming base, which will lead to better titles down the road (instead of the hit-or-miss motley crew of titles we see today). If ATi and nVidia devoted resources to improving current DX parts and extending the life cycle of the underlying technology instead of constantly looking forward (leaving unpolished products in the rear-view mirror), the economics of producing cheaper parts (through die-shrinks or refreshes) would improve while simultaneously increasing the installed user-base for any given DX standard.

The Ti500 was available long after the Geforce4 cards were released and the GF2 Ultra's were available long after the Geforce3 was released. Also, I can purchase a Radeon 9100 which is based off the 1.5 year old Radeon 8500. Unfortunately, manufacturers can't just drop their current product line. It isn't that simple, as it can potentially take months or years to clear the distribution channels of old product.
Again, you're looking through the glasses of a typical AT/BBS user. As soon as the next generation part is released, B&M's and major OEMs will clear their inventory of older parts and restock with the newer parts. Of course online e-tailers and liquidators might carry the part for an extended period of time before their inventory turns, but that's an issue that IHVs really can't control. Ideally, IHVs would like every GPU fab'd to be spoken for before it's printed, but instead we get companies like 3Dfx going under and huge write-offs or adjustments on the financial statements of even the most established IHVs (check ATi's latest financials). All IHVs can do is shift their fabs to the new GPU; what's out there is out there, and if the product is selling, it won't be out there for long. The Radeon 9100 isn't an 8500 (different process w/ core changes), and certainly isn't the same retail card that was initially released. Again, improved yields and the lower costs associated with mature products, along with the need to fill a particular market segment, can result in product refreshes of older technology as long as the new part makes sense financially. Cost-cutting core revisions, RAM and PCB changes coupled with improved yields bring a high performing part to the masses at a fraction of the price. The same could be accomplished with high-end parts today if IHVs weren't constantly pushing the technology envelope.

I've said before I'd prefer to buy a card that supports future technology, even if it runs it slowly. The FX 5200, IMO, is the exception to that rule. It's a new card that runs current technology games slower than an old card released almost 2 years ago. Even with DX9 features, that is simply unacceptable.
I'd personally never buy a 5200 simply b/c there was never a time I would've found its performance acceptable. However, I still feel it's a necessary evil to consolidate the user base and give game developers some semblance of target hardware.

Chiz
 

Curley

Senior member
Oct 30, 1999
368
3
76
To get back to the basic question, I don't think it will "revive" PC gaming or PC sales. Most of the PCs that come across my bench are for upgrades in memory and hard drives. Parents have been telling me that they can't afford a vid card with every new game that comes out, especially if it involves a major upgrade, i.e. mobo/proc.

They are instead buying consoles that keep the kids more involved with games than the PC does. I think we as enthusiasts are the big kids of the industry who want and can deal with the patches, tweaks, and technological prowess required for PC gaming.

The upside to console gaming for children is the ease and ability to play together.
 

Looney

Lifer
Jun 13, 2000
21,938
5
0
I think it's possible for a game to spur the revival of the computer hardware industry, but I don't think Doom 3 is necessarily that game. Why do I believe it's possible? Because I've seen firsthand a demographic of people upgrading their computer hardware to play or enjoy a game better. And these aren't your teenagers either; they're your housewives and everyday neighbors. What game is this? It's Everquest.

I still remember how with each expansion, people would rush out to upgrade their computer... with the Shadows of Luclin expansion being the one where this was most evident. The SoL expansion included a lot of new models and graphics, and nearly doubled the system requirements of EQ. A lot of people I was playing with at the time were instantly upgrading their systems to handle the new expansion... and most of those I helped knew absolutely nothing about computers. So yes, I believe it's possible... but most likely it would be a game like EQ2.
 

vss1980

Platinum Member
Feb 29, 2000
2,944
0
76
Will Doom3 spur a revival in the computer industry?

Well, did Unreal Tournament 2003 and Unreal 2 cause a major uptick in purchases? Not particularly.
Sure, a lot of people did upgrade something to be able to play the game better, but it's hard to gauge that against the fact that some people upgrade to move with the times anyway regardless of game releases (although releases do sometimes encourage an upgrade sooner rather than later).

I think Doom3 will have more of an effect than the Unreal game releases, mainly because there is a bigger interest in the community overall and because of the Doom/Quake/id games brand name and marketing behind it. I do think it will make some people upgrade, but just some, not most and definitely not all people. I think the most likely upgraders, though, will be those planning to upgrade anyway, bringing their upgrade forward a bit.

So yeah, I think sales will increase, but it will not be a huge peak, just a noticeable one.
 

Curley

Senior member
Oct 30, 1999
368
3
76
I think Doom3 will play satisfactorily on way more computers than we think. I really don't think a game developer would sell a product that catered to 10% of computer users. If I were a game developer, I would want my game to play on as many computers as possible to ensure sales would at least cover the cost of making the game.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
I really don't think a game developer would sell a product that catered to 10% of the computer users.

We aren't talking about 'a' game developer though, we are talking about Carmack ;) Look back to Quake3. When that hit, no rig out there could handle it at High Quality settings, 1024x768 32-bit (and that isn't the highest quality setting), at 60FPS. It took the GeForce DDR hitting a few months later to do that. In today's terms, think of an R9800Pro not being able to hit 60FPS running 10x7 paired with a 3GHz P4 or Athlon XP 3000 and a GB of the fastest RAM you can buy. If the same situation holds, you would need an NV40 or R400 to hit 60FPS running 1024x768x32, and that is just assuming that he is only pushing as hard as he did last time and not harder.

There will certainly be a code path for older hardware as there was with Quake3 (well, not quite a code path in Q3, but the same end effect), but this engine is likely to be in use for at least three years (which id gets a kickback on for every game that uses it). I would expect it to crush systems when everything is cranked, at least certainly compared to the triple-digit framerates we have grown used to. The game will sell fairly well if for no other reason than showing off the graphics power of all those DX9 boards that will be circulating by then.
 

Smilin

Diamond Member
Mar 4, 2002
7,357
0
0
Originally posted by: apoppin
Absolutely NOT. I believe Doom III will be an unreal disappointment because of expectations. :(

It won't be much more demanding than Unreal II . . . or ID is committing financial suicide.

ID committing financial suicide with Doom? ROFLMFAO. That's one of the funnier ones I've heard in a while. They could put a chunk of sh1t in a collector's tin box and it would still get Carmack a new Ferrari.

Doom will be financially successful regardless of how the game actually turns out.
 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
Originally posted by: BenSkywalker
I really don't think a game developer would sell a product that catered to 10% of the computer users.

We aren't talking about 'a' game developer though, we are talking about Carmack ;) Look back to Quake3. When that hit, no rig out there could handle it at High Quality settings, 1024x768 32-bit (and that isn't the highest quality setting), at 60FPS. It took the GeForce DDR hitting a few months later to do that. In today's terms, think of an R9800Pro not being able to hit 60FPS running 10x7 paired with a 3GHz P4 or Athlon XP 3000 and a GB of the fastest RAM you can buy. If the same situation holds, you would need an NV40 or R400 to hit 60FPS running 1024x768x32, and that is just assuming that he is only pushing as hard as he did last time and not harder.

There will certainly be a code path for older hardware as there was with Quake3 (well, not quite a code path in Q3, but the same end effect), but this engine is likely to be in use for at least three years (which id gets a kickback on for every game that uses it). I would expect it to crush systems when everything is cranked, at least certainly compared to the triple-digit framerates we have grown used to. The game will sell fairly well if for no other reason than showing off the graphics power of all those DX9 boards that will be circulating by then.
Quake III didn't run badly when it was released. Doom III won't run badly "either" on an average quality system.

Carmack isn't "suicidal", or at least Id isn't.

And I maintain Doom III will be a disappointment because of unreal expectations.


 

Curley

Senior member
Oct 30, 1999
368
3
76
I have some guys playing 1942 on PIII 750s with Geforce 2 GTS 64MB at 800x600 with everything turned off and they are loving it. I also have some computers out there running Geforce4 MX440 64MB DDR cards that run all games satisfactorily, and at the price of a 9800 or 5800FX, they really don't care if they can see the individual fur on animals or the creases in some guy's head. Take a step back and look at yourself back when we didn't have AA, AF, z-buffering, or DXT texture formats. Most people put in the disk, load it, see it, play it.