Originally posted by: KnightBreed
When DX9 was released, every DX8 video card didn't suddenly stop working.

Thanks for the news flash.
What's stopping PC developers from focusing heavily on a DX8-optimized title?

That's a good question, you should ask them since I haven't seen a DX8-optimized title to date. My guess however, is because dev houses know that the overwhelming majority of GPUs and systems based on sales figures, market research, surveys (like when your system specs are polled when you install a game) and online feedback (such as 3DMark ORB) are incapable of running a game requiring a fully hardware compliant DX8 based engine.
You seem to forget that DirectX revisions are backwards compatible. The userbase for a particular DX revision is cumulative with future revisions.

Where in my post did I give that impression? If anything, full forward (in software) and backward compatibility only compounds the problem, as again, it leads to unoptimized code and the need for programming on multiple platforms and hardware. We're on our 3rd DX version since DX7, yet the majority of games still run on DX7 engines. Any time a developer wants to implement new features in hardware, they have the daunting task of exponentially increasing their workload in order to assure compatibility with older parts. The quick and dirty solution would be to have hardware-level requirements to run a game, but considering the slim margins and user-base in PC gaming, that's simply not an option. What we end up with is a bunch of bastardized titles that run on old, unoptimized engines that offer the option of current hardware-level features. Longer product cycles and time between API versions will allow game devs and PCs to catch up to a current API and the hardware best suited for it. If you keep raising the bar without need, you'll never reach a common ground.
1) Longer times between API revisions will make the transition from one revision to another that much more difficult.

What makes you think that? The last major API improvement that we've seen in games is hardware T&L. We haven't seen anything as significant on the hardware level other than the use of shaders, which still haven't seen widespread use in games. Shader ops were introduced with DX8, but they won't even be the standard until DX9 parts and engines become mainstream and take advantage of fully programmable shaders using HLSL, Cg etc. I expect a rather acute shift from DX7 compliant hardware straight to DX9 level hardware as the base API for future games, and it seems MS and IHVs are shooting for the same goal.
2) Longer product cycles will slow the advancement of new features, since the market will be flooded with products that have the lesser featureset.

Again, I'm not following you here. When IHVs talk about increasing product cycles, they are referring to major core revisions that enable a new API feature set. That does not preclude them from coming out with product refreshes or even significant core changes every 6 months based on the same hardware requirements for a given API. A die shrink, increased clockspeeds, faster memory/memory interface, additional rendering pipelines, changes to the logical units can all yield performance boosts w/out changing the underlying technology of a particular GPU family.
If you want a stable, non-volatile platform, buy a console.

Stable and non-volatile have nothing to do with this, but I do have an XBox. Btw, you haven't played Splinter Cell "the way its meant to be played" until you've played it on XBox.
Originally posted by: chizow
Btw, you haven't played Splinter Cell "the way its meant to be played" until you've played it on XBox.
Chiz
Originally posted by: moonshinemadness
It will either make people go out and buy new hardware...or sign up here and gripe about the fact their hardware won't run Doom 3!
Originally posted by: chizow
Thanks for the news flash.

But every single DX9 video card will be able to run all DX8 features in hardware, so why hold back hardware development? How is that any different from any previous DX release? It still doesn't change the fact that DX8 hardware will not be able to run DX9 features in hardware.
That's a good question, you should ask them since I haven't seen a DX8-optimized title to date. My guess however, is because dev houses know that the overwhelming majority of GPUs and systems based on sales figures, market research, surveys (like when your system specs are polled when you install a game) and online feedback (such as 3DMark ORB) are incapable of running a game requiring a fully hardware compliant DX8 based engine.

You can blame the consumer for not buying new video cards, or you can blame the developers for not making games that need new or faster video cards. That still doesn't explain why extending the lifecycle of an API revision will somehow advance feature adoption.
Where in my post did I give that impression? If anything, full forward (in software) and backward compatibility only compounds the problem, as again, it leads to unoptimized code and the need for programming on multiple platforms and hardware. We're on our 3rd DX version since DX7, yet the majority of games still run on DX7 engines. Any time a developer wants to implement new features in hardware, they have the daunting task of exponentially increasing their workload in order to assure compatibility with older parts. The quick and dirty solution would be to have hardware-level requirements to run a game, but considering the slim margins and user-base in PC gaming, that's simply not an option. What we end up with is a bunch of bastardized titles that run on old, unoptimized engines that offer the option of current hardware-level features. Longer product cycles and time between API versions will allow game devs and PCs to catch up to a current API and the hardware best suited for it. If you keep raising the bar without need, you'll never reach a common ground.

That's bollocks. Unoptimized code? Daunting task? Global parameters are set at initialization through the DirectX CAPS. Creating code paths for various features is by no means trivial, but calling it daunting is a tad of a stretch. We're talking about 20 or 30 instruction fragment programs, not entire libraries worth of code.
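For reference, the "CAPS" being referred to is a one-time device query at startup. Below is a minimal sketch of that kind of check written against the D3D9 interface (DX8 exposes the same idea through IDirect3D8::GetDeviceCaps and D3DCAPS8); the struct and the particular features queried are purely illustrative, not code from any actual title.

// Illustrative only: read the device caps once at startup and record which
// feature tiers the installed card actually supports.
#include <d3d9.h>

struct RendererFeatures {
    bool ps20;      // full DX9-class pixel shaders
    bool ps14;      // DX8.1-class shaders (Radeon 8500 style)
    bool ps11;      // DX8-class shaders (GeForce3/4 style)
    bool cubeMaps;  // required by a lot of per-pixel lighting tricks
};

RendererFeatures QueryFeatures(IDirect3D9* d3d)
{
    D3DCAPS9 caps;
    d3d->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &caps);

    RendererFeatures f;
    f.ps20     = caps.PixelShaderVersion >= D3DPS_VERSION(2, 0);
    f.ps14     = caps.PixelShaderVersion >= D3DPS_VERSION(1, 4);
    f.ps11     = caps.PixelShaderVersion >= D3DPS_VERSION(1, 1);
    f.cubeMaps = (caps.TextureCaps & D3DPTEXTURECAPS_CUBEMAP) != 0;
    return f;
}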
What makes you think that? The last major API improvement that we've seen in games is hardware T&L. We haven't seen anything as significant on the hardware level other than the use of shaders, which still haven't seen widespread use in games. Shader ops were introduced with DX8, but they won't even be the standard until DX9 parts and engines become mainstream and take advantage of fully programmable shaders using HLSL, Cg etc. I expect a rather acute shift from DX7 compliant hardware straight to DX9 level hardware as the base API for future games, and it seems MS and IHVs are shooting for the same goal.

DX8 has been on the market for over 2 years, with dozens of PC products and a successful console that all support DX8. What on earth would lead you to believe that most devs would skip right over DX8 level pixel and vertex shaders? Given the current lighting trend using stencil operations and cube maps, I see no reason why DX8 is any less useful now that DX9 is available.
Again, I'm not following you here. When IHVs talk about increasing product cycles, they are referring to major core revisions that enable a new API feature set. That does not preclude them from coming out with product refreshes or even significant core changes every 6 months based on the same hardware requirements for a given API. A die shrink, increased clockspeeds, faster memory/memory interface, additional rendering pipelines, changes to the logical units can all yield performance boosts w/out changing the underlying technology of a particular GPU family.

Yes, I know exactly what IHVs mean by "increasing product cycles," hence why I said "since the market will be flooded with products that have the lesser featureset." Two years from now we'll be in the same situation. Developers will be coding for DX8 (or 7 or what-have-you), and IHVs will start releasing their next generation products. Most devs will continue to develop for the older API until the new products reach significant-enough market saturation.
Hardware support and actual performance aren't synonymous either. Look at the R200 core, a DX 8.1 hardware compliant part, vs. NV25 (GF4), a DX 8 hardware compliant part. The GF4 requires multiple passes to render a single scene in Doom 3 b/c of PS 1.1, the R8500 can render in a single pass b/c of PS 1.4, yet the GF4 still outperforms the R200 significantly.

That's speculation based on some tidbits of information given by Carmack. You haven't the slightest clue what the final performance will be for the R200 vs the GF4.
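For anyone following the pass-count argument: the claim is that PS 1.4 hardware can evaluate the whole per-pixel lighting equation for a light in one pass, while PS 1.1 hardware has to accumulate it over several additive passes. Below is a rough sketch of how a renderer might pick the pass count from the reported shader version; the specific numbers are placeholders that depend entirely on the lighting model, and this is not taken from any shipping engine.

#include <d3d9.h>

// Illustrative only: choose how many additive passes one light costs,
// based on the pixel shader version the device reports.
int PassesPerLight(const D3DCAPS9& caps)
{
    if (caps.PixelShaderVersion >= D3DPS_VERSION(1, 4))
        return 1;   // R200-class: whole lighting equation in a single pass
    if (caps.PixelShaderVersion >= D3DPS_VERSION(1, 1))
        return 2;   // GeForce3/4-class: split across additive passes
    return 3;       // fixed-function fallback: more passes, fewer effects
}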
This is a perfect example of why enabling future technology instead of optimizing current technology isn't always preferable in current or future titles. We're seeing the opposite situation with the lower-end FX cards (5600 and 5200); from all indications, they've sacrificed performance in today's games (DX7 w/ some DX8 features) for DX9 hardware compliance. Personally, I feel its a necessary evil in order to push the hardware curve towards DX9, but I'd hate to be the fellow who buys a 5200FX expecting it to run Doom3 better than a GF4Ti simply b/c the FX is a DX9 part.

Two years from now, would you rather have a video card that runs slowly, or a video card that doesn't run at all? I have a Kyro2 and it ran Unreal Tournament and Quake3 unbelievably well. Now, I'm stuck with a video card that doesn't support cube maps, and subsequently won't run any of the DX7, 8, or 9 tests in 3DMark03 and won't run Doom3. I've also had trouble running No One Lives Forever 2, among other games. The average consumer would rather have the game run slowly than not at all.
And again, I have to ask, what's the point of enabling hardware level support of new features when it'll be 1-2 years before they are implemented in games, at which point the product you bought might very well be inadequate to use said features?

A developer will never release a game without hardware support first. Its unfortunate, but that's the way the industry works. Holding back DX9 technology will do nothing but hurt the adoption of the new features.
Stable and non-volatile have nothing to do with this, but I do have an XBox. Btw, you haven't played Splinter Cell "the way its meant to be played" until you've played it on XBox.

Splinter Cell was a PC game from the very start. "The way its meant to be played" was on a high resolution monitor with 60+ fps. You can blame Ubi-Soft for its crappy ATi support.
Chiz
Originally posted by: BoomAM
While both versions are very very good games, i feel that the pc version is better.

Of course, everyone is entitled to their own opinions.
The controls are far better, easier to aim, sneak up on people, control speed/character.

I agree, its easier to aim, but I found the game grotesquely easy on the PC b/c of this. Using the joypad takes getting used to if you're a long time PC gamer, particularly if you're an FPSer or if you approach SC like an FPS. I don't consider it an FPS (its not, by definition at the very least) and found the gamepad controls to be much more streamlined, natural, and responsive. The XBox analog sticks offer something ridiculous like 360 degrees of sensitivity (directional and degree of movement). A mouse scroll-wheel can't touch that level of sensitivity in terms of movement. After a few days of playing, I grew quite accustomed to the thumbstick response; SC is a slower paced game based on stealth, so I found myself having plenty of time to pull off headshots when necessary or sneaking up on baddies. The rumble features and trigger button (squeezing a trigger and shooting is fun in and of itself)...
The sound is about the same, as they are both surround sound games.

I'd have to disagree with you there. I've yet to find a PC game that rivals the DD 5.1 sound of my XBox titles. There are PC games that have exceptional positional sound (assuming you have an Audigy or Audigy 2 that supports EAX3), but the quality of the sound samples and the clarity/response of the LFE and Center channels on the XBox on a proper DD 5.1 system is cinema quality. I run my Audigy 2 through 5.1 analog inputs on the same set-up, so there's no unfair advantage either.
The graphics, although similar, are more crisp on the pc version, due to the fact that a monitor is clearer/sharper and you can run it at a higher resolution. Textures seem the same as well.

Yes, the graphics are sharper when compared to a TV and there is a bit more detail on extraneous objects (magazines, cans, computer monitors etc.), but I had to turn up AA to 4X and AF to 8X quality to get a similar image quality. At that point, I was shocked to see my frame rates drop into the 20's with very noticeable stuttering when scrolling quickly. I also prefer the low resolution natural AA for action games; I found the increased resolution and sharpness on the PC made the game look "artificial". I really don't need to see every flick of interference when I put on my NV goggles. Also, being able to run it on a 48'' HDTV is more visually impressive than running it at 1280x1024 on a 19'' monitor. I also found the lighting effects (which cause a significant penalty on the PC) to be better on the XBox, free of charge.
On a side note, not getting at you chiz.
It's that it really pisses me off when you see Xbox games that say things like "the way its meant to be played" or "designed for Xbox".
Most of those games were meant to be PC games first, but due to a bit of cash from MS, go to the Xbox first, leaving me to argue with ignorant prats that think Halo and the like were designed from the ground up to be an Xbox game. Yeah right.

Np, I make a clear distinction between games I would buy for the XBox and what I would buy for the PC, SC is just a game I feel works better on the XBox. "The way its meant....." is actually an nV slogan, and I don't think SC says its an XBox exclusive. Halo would've been great on the PC (probably better IMO) b/c it is an FPS, but honestly, the hardware requirements to run it on a similar level as the XBox would be extreme. Its probably a good thing its been delayed on the PC, b/c if it came out when it did for the XBox, I'm not sure how well it would've done considering the hardware available at the time.
Originally posted by: KnightBreed
But every single DX9 video card will be able to run all DX8 features in hardware, so why hold back hardware development?

What's the point in pushing additional features and new hardware if:
1) The new features won't be implemented for another 2 years b/c the limiting factor is the remnants of the overambitious "cutting-edge" cards from the previous generation.
2) The new hardware doesn't benefit from or isn't any more efficient in running older features.
That's my biggest gripe with new DX-whatever compliant hardware. They claim to have a slew of new features, yet, when push comes to shove, they simply don't live up to their billing (when a game finally comes along that would fully utilize its capabilities).
That still doesn't explain why extending the lifecycle of an API revision will somehow advance feature adoption.

You explained it yourself. Extending the API and hardware lifecycles would simply unify early and late adopters and place them on a more level playing field. Game developers push for maximum compatibility and IHVs push for 6-month product cycles. You don't see a conflict here? When you have a new DX-whatever compliant part coming out every year, you further fragment the user-base.
That's bollocks. Unoptimized code? Daunting task? Global parameters are set at initialization through the DirectX CAPS. Creating code paths for various features is by no means trivial, but calling it daunting is a tad of a stretch. We're talking about 20 or 30 instruction fragment programs, not entire libraries worth of code.

Have you bothered reading dev interviews? Instead of focusing on optimizing code for a particular feature set in hardware (a luxury console developers are afforded), they spend their time figuring out what features they can implement on the hardware level without destroying their target market. Any features that need to be done in software need to be addressed to ensure the effect is acceptable visually and on a performance level. Even 20 to 30 fragment programs is considerable (although I think its more than that) when you have code paths for each GPU family from several IHVs. The end result is that game developers end up spending their time on workarounds and compatibility issues instead of pushing any new features.
DX8 has been on the market for over 2 years, with dozens of PC products and a successful console that all support DX8. What on earth would lead you to believe that most devs would skip right over DX8 level pixel and vertex shaders? Given the current lighting trend using stencil operations and cube maps, I see no reason why DX8 is any less useful now that DX9 is available.

Unlike previous DX releases, DX9 doesn't introduce completely new features, it introduces more efficient and robust improvements on existing features. The writing is on the wall, the new mainstream/value parts are DX9 compliant (9200/9600 and 5200/5600).
That's speculation based on some tidbits of information given by Carmack. You haven't the slightest clue what the final performance will be for the R200 vs the GF4.

Considering Carmack shifted D3 development to R200 and ARB2 as the reference renderer after scrapping GF2 support, I'd say his comments a year and a half later are a pretty good indication of how the two will perform. Again, advanced technology or features does not always equate to performance, as we'll soon see again in the GF FX 5200/5600.
Two years from now, would you rather have a video card that runs slowly, or a video card that doesn't run at all? I have a Kyro2 and it ran Unreal Tournament and Quake3 unbelievably well. Now, I'm stuck with a video card that doesn't support cube maps, and subsequently won't run any of the DX7, 8, or 9 tests in 3DMark03 and won't run Doom3. I've also had trouble running No One Lives Forever 2, among other games. The average consumer would rather have the game run slowly than not at all.

I'd never be in that situation, but then again, I wouldn't be griping about a tile-based renderer not being able to run games that require cube mapping. Was that Kyro2 ever DX compliant? The average consumer is buying ATi or nVidia, and until Doom3 hits the shelves, their oldest 3D accelerators will run most any game on the market, albeit slow and probably in software.
A developer will never release a game without hardware support first. Its unfortunate, but that's the way the industry works. Holding back DX9 technology will do nothing but hurt the adoption of the new features.

Of course, but I've never advocated holding back APIs or technology, I simply stated that expanding their life cycles was a good move. As I said in my original post, IHVs' decision to extend development cycles and MS's decision to freeze DX10 until Longhorn is a good thing. Or would you prefer to see DX10 and compliant hardware in 6 months, before the first DX9 game hits the shelf???
You're missing the big picture.

If you don't mind if I borrow the illustration:
If ATi/nVidia releases a new featured card every year for 3 years and no games took advantage of either of the top two APIs you might see:
DX6 or lower: 100 million
DX7 userbase: 90 million
DX8 userbase: 15 million
DX9 userbase: 5 million
DX10 userbase: 2 million
DX11 userbase: Bill Gates
If ATi/nVidia/MS extends the life cycle of DX9 and develop higher performing parts based on existing APIs over 3 years and there were actually games that used at least DX8, you might see:
DX7 userbase: 60 million
DX8 userbase: 50 million
DX9 userbase: 55 million
Considering nVidia and ATi are both moving to mainstream DX9 parts, there's a good chance the user base would shift more abruptly to DX8 and DX9 compliant parts, but the end-result would be the same. You'd have a much tighter user base, and as a result, you'd see more advanced features implemented in a much more polished package.
Chiz
What's the point in pushing additional features and new hardware if:
1) The new features won't be implemented for another 2 years b/c the limiting factor is the remnants of the overambitious "cutting-edge" cards from the previous generation.
2) The new hardware doesn't benefit from or isn't any more efficient in running older features.

1) You're forgetting one HUGE factor here. It still takes time for new products to saturate the market. If there are 100 million DX8 cards on the market, developers are going to continue making games optimized for DX8 until DX9 cards reach similar sales numbers. I call it the Playstation2 syndrome. The Playstation2 is inferior to the GameCube and Xbox technically, but developers continue to make games for it simply because of the huge userbase. The larger the original userbase, the longer it'll take to move to a new generation.
That's my biggest gripe with new DX-whatever compliant hardware. They claim to have a slew of new features, yet, when push comes to shove, they simply don't live up to their billing (when a game finally comes along that would fully utilize its capabilities).

But at the least the game runs. That is priority #1 when testing for compatibility!! If the game doesn't run, who cares how fast it renders the scene?
You explained it yourself. Extending the API and hardware lifecycles would simply unify early and late adopters and place them on a more level playing field. Game developers push for maximum compatibility and IHVs push for 6-month product cycles. You don't see a conflict here? When you have a new DX-whatever compliant part coming out every year, you further fragment the user-base.

Fragment the user-base? Uh, you have DirectX 7, 8, and 9. That's it. Even with longer API cycles, developers will still have to deal with technology stragglers that refuse to spend $100 for a new video card.
Have you bothered reading dev interviews? Instead of focusing on optimizing code for a particular feature set in hardware (a luxury console developers are afforded), they spend their time figuring out what features they can implement on the hardware level without destroying their target market. Any features that need to be done in software need to be addressed to ensure the effect is acceptable visually and on a performance level. Even 20 to 30 fragment programs is considerable (although I think its more than that) when you have code paths for each GPU family from several IHVs. The end result is that game developers end up spending their time on workarounds and compatibility issues instead of pushing any new features.

1) The workarounds and compatibility issues usually stem from driver bugs and drivers that stray from the original DX spec. This is going to be an issue as long as you have several companies making competitive products. This isn't the issue it was in the mid/late 90's simply because nVidia and ATi dominate the market.
Unlike previous DX releases, DX9 doesn't introduce completely new features, it introduces more efficient and robust improvements on existing features. The writing is on the wall, the new mainstream/value parts are DX9 compliant (9200/9600 and 5200/5600).

1) The jump from PS1.4 -> 2.0 is larger than DX7 -> PS1.0. The difference in implementation and flexibility is massive - this will be especially apparent when Microsoft unveils PS/VS 3.0 (yep, another standard).
Considering Carmack shifted D3 development to R200 and ARB2 as the reference renderer after scrapping GF2 support, I'd say his comments a year and a half later are a pretty good indication of how the two will perform. Again, advanced technology or features does not always equate to performance, as we'll soon see again in the GF FX 5200/5600.

Uh, where did you hear that rubbish? Doom3 will support 6 different rendering modes: NV10, NV20, NV30, R200, ARB, ARB2.
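Those mode names are OpenGL back-ends chosen from the extensions the driver exposes. Purely as a hypothetical sketch (this is not id's source, and the detection order is simplified), the selection boils down to something like this:

#include <string>

enum RenderPath { PATH_ARB, PATH_NV10, PATH_NV20, PATH_R200, PATH_ARB2, PATH_NV30 };

// Hypothetical back-end selection keyed off the GL extension string.
RenderPath PickPath(const std::string& ext)
{
    if (ext.find("GL_NV_fragment_program") != std::string::npos)
        return PATH_NV30;   // GeForce FX fragment programs
    if (ext.find("GL_ARB_fragment_program") != std::string::npos)
        return PATH_ARB2;   // generic DX9-class path (R300 and up)
    if (ext.find("GL_ATI_fragment_shader") != std::string::npos)
        return PATH_R200;   // Radeon 8500-class, fewer passes per light
    if (ext.find("GL_NV_texture_shader") != std::string::npos)
        return PATH_NV20;   // GeForce3/4 texture shaders + register combiners
    if (ext.find("GL_NV_register_combiners") != std::string::npos)
        return PATH_NV10;   // GeForce 256/GeForce2 class
    return PATH_ARB;        // lowest common denominator multipass path
}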
I'd never be in that situation, but then again, I wouldn't be griping about a tile-based renderer not being able to run games that require cube mapping. Was that Kyro2 ever DX compliant? The average consumer is buying ATi or nVidia, and until Doom3 hits the shelves, their oldest 3D accelerators will run most any game on the market, albeit slow and probably in software.

The only DX7 features it didn't support are cube maps and hardware T&L.
Of course, but I've never advocated holding back APIs or technology, I simply stated that expanding their life cycles was a good move. As I said in my original post, IHVs' decision to extend development cycles and MS's decision to freeze DX10 until Longhorn is a good thing. Or would you prefer to see DX10 and compliant hardware in 6 months, before the first DX9 game hits the shelf???

Truth be told, I don't know what we're arguing about.
If you don't mind if I borrow the illustration:
If ATi/nVidia releases a new featured card every year for 3 years and no games took advantage of either of the top two APIs you might see:
DX6 or lower: 100 million
DX7 userbase: 90 million
DX8 userbase: 15 million
DX9 userbase: 5 million
DX10 userbase: 2 million
DX11 userbase: Bill Gates
If ATi/nVidia/MS extends the life cycle of DX9 and develop higher performing parts based on existing APIs over 3 years and there were actually games that used at least DX8, you might see:
DX7 userbase: 60 million
DX8 userbase: 50 million
DX9 userbase: 55 million
Considering nVidia and ATi are both moving to mainstream DX9 parts, there's a good chance the user base would shift more abruptly to DX8 and DX9 compliant parts, but the end-result would be the same. You'd have a much tighter user base.

I'm sorry, but those numbers are completely unrealistic. Your numbers assume that when a new API is released, all IHVs will completely drop their current generation cards and release their next-gen cards. nVidia still sells TNT2 (DX6) and Geforce2 (DX7) cards simply because they are cheap to manufacture and ship in high volumes. But, I'm sure you know how to manufacture a DX9-based card as cheaply as a TNT2, right?
Originally posted by: KnightBreed
1) You're forgetting one HUGE factor here. It still takes time for new products to saturate the market. If there are 100 million DX8 cards on the market, developers are going to continue making games optimized for DX8 until DX9 cards reach similar sales numbers.
2) More bollocks. The Radeon 9700 has double the pixel pipelines and a higher core clock. It may not be more "efficient" clock for clock, but it's still an order of magnitude faster than the 8500 simply because it has nearly triple the fillrate.

Of course, which is why DX9 compatibility is a non-factor IMO for the vast majority of GPUs that claim DX9 compatibility. Take away PS/VS 2.0 from the 9700pro and you still have a part that performs considerably better than its predecessor, simply b/c the feature is unused! Like I said earlier, stabilizing at DX9.0 doesn't mean products will stagnate, there are other ways to improve performance, with brute force/pixel pushing being the most obvious.
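For what it's worth, the "nearly triple" part of that claim is easy to sanity-check with the commonly quoted retail specs: the 8500 pushes 4 pixel pipelines at 275 MHz, roughly 1.1 Gpixels/s, while the 9700 Pro pushes 8 pipelines at 325 MHz, roughly 2.6 Gpixels/s, or about 2.4x the raw fillrate before the wider memory bus is even considered. A literal order of magnitude it is not, but the fillrate gap is real.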
Fragment the user-base? Uh, you have DirectX 7, 8, and 9. That's it. Even with longer API cycles, developers will still have to deal with technology stragglers that refuse to spend $100 for a new video card.

That's where the other half of the equation, the IHVs, gets involved. Instead of focusing on the next API and throwing their resources in that direction, they can concentrate on bringing the current API into the mainstream by improving existing parts and making lower end versions of those parts available to the mass market at an affordable price point. I'd say $100 is still more than most PC users out there would spend on a video card, but a sub-$100 DX9 part is on the way in the 5200. It'll still take time, but it sure beats the hell out of the previous trickle-down effect of having to buy the top end card and then waiting 2 years for it to become obsolete before a game is released that fully uses the API it was designed around.
The workarounds and compatibility issues usually stem from driver bugs and drivers that stray from the original DX spec. This is going to be an issue as long as you have several companies making competitive products. This isn't the issue it was in the mid/late 90's simply because nVidia and ATi dominate the market.

Its still a significant issue. If you ever bothered to open up a Read.me and checked out a list of supported cards, you'd find much more in there than nVidia and ATi cards. Vendor specific code paths are a fact of life in PC game development; the problem is magnified when each vendor requires even more specific "fixes" b/c it has 5 different GPU families that require their own code paths b/c someone figured planning 2 DX revisions ahead was a good idea.
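As a purely hypothetical sketch of what those vendor-specific "fixes" end up looking like in code (the two quirk flags are invented for illustration; only the adapter query and the standard PCI vendor IDs are real), a game typically identifies the adapter once at startup and flips workaround flags from a table like this:

#include <d3d9.h>

struct Quirks {
    bool avoidTextureFormatX;     // hypothetical per-family workaround
    bool forceExtraClearPerFrame; // hypothetical per-family workaround
};

Quirks DetectQuirks(IDirect3D9* d3d)
{
    D3DADAPTER_IDENTIFIER9 id;
    d3d->GetAdapterIdentifier(D3DADAPTER_DEFAULT, 0, &id);

    Quirks q = {};
    if (id.VendorId == 0x10DE) {          // NVIDIA
        q.avoidTextureFormatX = true;     // pretend one GPU family mishandles a format
    } else if (id.VendorId == 0x1002) {   // ATI
        q.forceExtraClearPerFrame = true; // pretend another needs a different workaround
    }
    // ...and so on for every other vendor listed in the read-me.
    return q;
}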
The jump from PS1.4 -> 2.0 is larger than DX7 -> PS1.0. The difference in implementation and flexibility is massive - this will be especially apparent when Microsoft unveils PS/VS 3.0 (yep, another standard).

Sweet, so we should see a game that implements PS/VS 3.0 in what? 2007? In reality, the difference in implementation and flexibility is zero, because by the time we see PS/VS 3.0 it'll be 2005 (DX10) and we'll be having this discussion again with R600/V60 vs NV30/R300 instead of R200 vs NV10.
Truth be told, I don't know what we're arguing about.

I don't either.
DX8 was released late 2000.
DX8.1 was released late 2001.
DX9 was released late 2002.
DX9.1 will be released late 2003 (Microsoft may not increment the version number with the release of PS/VS 3.0).
DX10 will be released with Longhorn sometime in 2005.
Is the product cycle really that different?

It is in my opinion. Its 2x as long between API switches, which gets you away from the typical API enabling hardware overhaul and 6 month refresh that nVidia made so famous. What does that mean? Instead of a new GPU family every year, you have one every 2 years, with 3 refreshes and 1 hardware enabling pioneer. Each refresh would build on the previous version's performance at the same rate any new hardware enabling core revision would for the API it was designed for. By the time the last refresh rolls out, a game for that API would be on the market, and all 4 revisions would benefit from a more polished game.
I'm sorry, but those numbers are completely unrealistic. Your numbers assume that when a new API is released, all IHVs will completely drop their current generation cards and release their next-gen cards.

Er, that's what they do. Seen any GF3 Ti500's lately? How about GF2 Ultras? Think the 9500pro (a full-blown R300 core) will be around once the 9600pro is released? Ti4600s are already going out of circulation with the impending launch of the FX 5600 line. Once they're gone, they'll be gone forever from retail channels.
nVidia still sells TNT2 (DX6) and Geforce2 (DX7) cards simply because they are cheap to manufacture and ship in high volumes. But, I'm sure you know how to manufacture a DX9-based card as cheaply as a TNT2, right?

Yep, that's the name of the game: package low-end minimum spec cards in OEM boxes, which comprise the overwhelming majority of PCs. The low-end chips are cheap to produce and sold in mass quantities perpetually regardless of the latest API. The rest of your higher performing parts fill in as a trickle-down effect with lower prices enabling accessibility (and the resulting steep curve in my illustration). You'll never find a former flagship sell for budget card prices or shipped for next to nothing in a baseline OEM box, that's not how it works. You'll get the latest and greatest budget GPU, or you'll pay a premium to upgrade to a current part. If your OEM doesn't offer the choice, you're going into the underwhelming AIB market. As for selling a DX9-based card on the cheap, the 5200 might not be as cheap as a TNT2 to produce now, but more cores per wafer and improved yields should drive costs down over time.
The FX5200 is a perfect example of what happens when you attempt to sell a high-tech part for a cheap price. Its performance is about on par with the Geforce3 Ti500. A card that's over 1.5 years old!

So, you're complaining about technology stragglers and underperforming budget parts, yet a low-cost DX9 part that runs comparably to a former flagship GPU (that still runs most current games extremely well) is a bad thing? Some people are impossible to please! BRING ON DX15 and R1000!!!!
A recent example that has convinced me further of this is Rayman 3, which has gotten rave reviews in terms of graphics and gameplay on the console platform. PC Gamer just reviewed it and it got like a 50%. What this tells me is that the bar is just so much lower for the console platform that marginal games get higher reviews and kudos whereas when compared to the typical PC fare they look weak.
Originally posted by: chizow
No, I haven't, which is why I've repeatedly emphasized the decision to stay with DX9 and the upcoming release of DX9 budget parts from nVidia (NV31) and to a lesser degree, ATi (RV350). The point is, there AREN'T 100 million DX8 cards on the market, because the budget cards have been dominated by lackluster DX7 parts like the GF4 MX series, or worse. The end result is a whole lot of nice looking features on paper and specifications wasted on DX7 games.

But how can you say the paper specs are wasted, though? Even if the card gets replaced before its features are utilized, those paper specs are important to introduce the technology to the market.
Of course, which is why DX9 compatibility is a non-factor IMO for the vast majority of GPUs that claim DX9 compatibility. Take away PS/VS 2.0 from the 9700pro and you still have a part that performs considerably better than its predecessor, simply b/c the feature is unused! Like I said earlier, stabilizing at DX9.0 doesn't mean products will stagnate, there are other ways to improve performance, with brute force/pixel pushing being the most obvious.

That is so shortsighted. Take away the DX9 capability? Why would you do that? ATi released a product that is leaps and bounds faster than all DX8 products on the market and it has DX9 compliance to boot. What are you complaining about? Segmenting the market is an extremely weak argument simply because people who purchase the Radeon 9700 subsequently increase both the DX8 and DX9 userbase - which is important when a developer gets market information about product install-base. As long as the Radeon drivers adhere to DX WHQL standards, coding for the different chipsets should be a non-issue. Granted, that may be an ideal situation, but that's how it should be.
I wasn't referring to compatibility, I was referring to the industry's fascination with supporting the next progression in features while struggling to remain competitive with its predecessor. Look at the 5600/5200 vs Ti4200/GF4 MX, or the R9000/R9600 vs R8500/R9500. Great, it supports the latest DX features, yet it runs worse than its predecessor in existing games and future games will run like a pig regardless of which card you use.

Tis the name of the game. Push your flagship out as soon as possible to get attention to your company. Its similar to the old "Win on Sunday, sell on Monday" mantra in the auto industry. Having a top flagship product will entice people to purchase your lower end products.
Its still a significant issue. If you ever bothered to open up a Read.me and checked out a list of supported cards, you'd find much more in there than nVidia and ATi cards. Vendor specific code paths are a fact of life in PC game development; the problem is magnified when each vendor requires even more specific "fixes" b/c it has 5 different GPU families that require their own code paths b/c someone figured planning 2 DX revisions ahead was a good idea.

Two DX revisions ahead?
Sweet, so we should see a game that implements PS/VS 3.0 in what? 2007? In reality, the difference in implementation and flexibility is zero, because by the time we see PS/VS 3.0 it'll be 2005 (DX10) and we'll be having this discussion again with R600/V60 vs NV30/R300 instead of R200 vs NV10.

I agree, it'll be years before we see games utilizing PS/VS 3.0 to any appreciable degree. What do you propose? The hardware has to be available first, and the economics of designing a DX9 part prevent ATi and nVidia from releasing speedy chips at good prices.
It is in my opinion. Its 2x as long between API switches, which gets you away from the typical API enabling hardware overhaul and 6 month refresh that nVidia made so famous. What does that mean? Instead of a new GPU family every year, you have one every 2 years, with 3 refreshes and 1 hardware enabling pioneer. Each refresh would build on the previous version's performance at the same rate any new hardware enabling core revision would for the API it was designed for. By the time the last refresh rolls out, a game for that API would be on the market, and all 4 revisions would benefit from a more polished game.
Is the product cycle really that different?

2x as long? DX8 lasted 2 years with a single evolution after the first year. DX9 should last 3 years with a single evolution after the first year.
Er, that's what they do. Seen any GF3 Ti500's lately? How about GF2 Ultras? Think the 9500pro will be around once the 9600pro is released?

The Ti500 was available long after the Geforce4 cards were released and the GF2 Ultras were available long after the Geforce3 was released. Also, I can purchase a Radeon 9100 which is based off the 1.5 year old Radeon 8500. Unfortunately, manufacturers can't just drop their current product line. It isn't that simple, as it can potentially take months or years to clear the distribution channels of old product.
So, you're complaining about technology stragglers and underperforming budget parts, yet a low-cost DX9 part that runs comparably to a former flagship GPU (that still runs most current games extremely well) is a bad thing? Some people are impossible to please! BRING ON DX15 and R1000!!!!

I've said before I'd prefer to buy a card that supports future technology, even if it runs slowly. The FX 5200, IMO, is the exception to that rule. Its a new card that runs current technology games slower than an old card released almost 2 years ago. Even with DX9 features, that is simply unacceptable.
This discussion is getting difficult to follow. Forgive me if I don't reply to all the points.

I gave up trying to reply to everything a while ago.
Originally posted by: KnightBreed
But how can you say the paper specs are wasted, though? Even if the card gets replaced before its features are utilized, those paper specs are important to introduce the technology to the market.

Once again, its wasted on paper b/c technology is useless unless its implemented. Its a vicious cycle (much like this argument).
That is so shortsighted. Take away the DX9 capability? Why would you do that? ATi released a product that is leaps and bounds faster than all DX8 products on the market and it has DX9 compliance to boot. What are you complaining about? Segmenting the market is an extremely weak argument simply because people who purchase the Radeon 9700 subsequently increase both the DX8 and DX9 userbase - which is important when a developer gets market information about product install-base. As long as the Radeon drivers adhere to DX WHQL standards, coding for the different chipsets should be a non-issue. Granted, that may be an ideal situation, but that's how it should be.

I was speaking figuratively, but the point I was trying to make is that the R9700pro would've been an outstanding part regardless of whether or not it supported PS/VS 2.0 and DX9. Why do people upgrade their video cards? Because they want better performance TODAY. Its short-sighted IMO to upgrade based on features that may or may not be implemented a year down the road with the expectation the part will perform to even your current expectations once a new feature set is implemented.
What do you propose? The hardware has to be available first, and the economics of designing a DX9 part prevent ATi and nVidia from releasing speedy chips at good prices.

I don't need to propose anything (not like it would matter anyways).
The Ti500 was available long after the Geforce4 cards were released and the GF2 Ultras were available long after the Geforce3 was released. Also, I can purchase a Radeon 9100 which is based off the 1.5 year old Radeon 8500. Unfortunately, manufacturers can't just drop their current product line. It isn't that simple, as it can potentially take months or years to clear the distribution channels of old product.

Again, you're looking through the glasses of a typical AT/BBS user. As soon as the next generation part is released, B&M's and major OEMs will clear their inventory of older parts and restock with the newer parts. Of course online e-tailers and liquidators might carry the part for an extended period of time before their inventory turns, but that's an issue that IHVs really can't control. Ideally, IHVs would like every GPU fab'd to be spoken for before it's printed, but instead we get companies like 3Dfx going under and huge write-offs or adjustments on the financial statements of even the most established IHVs (check ATi's latest financials). All IHVs can do is shift their fabs to the new GPU, what's out there is out there, and if the product is selling, it won't be out there for long. The Radeon 9100 isn't an 8500 (different process w/ core changes), and certainly isn't the same retail card that was initially released. Again, improved yields and the lower costs associated with mature products along with the need to fill a particular market segment can result in product refreshes of older technology as long as the new part makes sense financially. Cost-cutting core revisions, RAM and PCB changes coupled with improved yields bring a high performing part to the masses at a fraction of the price. The same could be accomplished with high-end parts today if IHVs weren't constantly pushing the technology envelope.
I've said before I'd prefer to buy a card that supports future technology, even if it runs slowly. The FX 5200, IMO, is the exception to that rule. Its a new card that runs current technology games slower than an old card released almost 2 years ago. Even with DX9 features, that is simply unacceptable.

I'd personally never buy a 5200 simply b/c there was never a time I would've found its performance acceptable. However, I still feel its a necessary evil to consolidate the user base and give game developers some semblance of target hardware.
Originally posted by: apoppin
Absolutely NOT. I believe Doom III will be an unreal disappointment because of expectations.
It won't be much more demanding than Unreal II . . . or ID is committing financial suicide.

I really don't think a game developer would sell a product that catered to 10% of the computer users.
Originally posted by: BenSkywalker
I really don't think a game developer would sell a product that catered to 10% of the computer users.

We aren't talking about 'a' game developer though, we are talking about Carmack. Look back to Quake3. When that hit, no rig out could handle it at High Quality settings 1024x768 32bit (and that isn't the highest quality settings) at 60FPS. It took the GeForce DDR hitting a few months later to do that. In today's terms, think of a R9800Pro not being able to hit 60FPS running 10x7 paired with a 3GHz P4 or Athlon XP 3000+ and a GB of the fastest RAM you can buy. If the same situation holds, you would need an NV40 or R400 to hit 60FPS running 1024x768x32bit, and that is just assuming that he is only pushing as hard as he did last time and not harder.
There will certainly be a code path for older hardware as there was with Quake3 (well, not quite a code path in Q3 but the same end effect), but this engine is likely to be in use for at least three years (which id gets a kickback on for every game using it). I would expect it to crush systems when everything is cranked, at least certainly compared to the triple digit framerates we have grown used to. The game will sell fairly well if for no other reason than showing off the graphics power of all those DX9 boards that will be circulating by then.

Quake III didn't run badly when it was released. Doom III won't run badly "either" on an average quality system.