Originally posted by: VirtualLarry
			Originally posted by: apoppin
The only thing I can say to ease your concerns, VirtualLarry, is that SM 3.0 will be implemented gradually, just as 2.0 was and 1.4 before it. Game designers know better than to bring out a game with graphics so advanced that no one can play it.
But see, that's the thing. The real, true value of the SM3.0 spec is that it is presented as a sort of "new universal baseline standard". The idea is that coding for SM3.0 makes the devs' lives easier and lets them get their jobs done faster. If the devs still have to pander to prior specs, and to the whole ATI = 24 bits of FP precision vs. NV = 16/32 bits thing, then nearly the entire value of SM3.0 (to the devs) is lost.

Remember, there's nothing (AFAIK) inherent in SM3.0 that cannot be done in SM2.0, in terms of IQ or the eventual final effect; it's just a (potentially) faster or easier way for the devs to get there. IOW, full, true SM3.0 support is nearly a sort of "SM3.0 or bust" thing.

Now, I don't see game devs ignoring the entire installed base of hardware out there, at least for the near-future crop of games. So I think they will adopt an "SM2.0++" approach: essentially SM2.0, but possibly using branching shader code (which requires SM3.0) in certain places, where it lets them do things like collapse multiple rendering passes. That would seem to be the most prudent and smartest thing to do. But it also implies that those games will not be using "SM3.0 from the ground up", just SM2.0 with some uses of SM3.0 thrown in; IOW, they wouldn't really be considered "true" SM3.0-using games.

Once the installed base catches up, I feel that devs will use SM3.0 as the baseline standard and not look back. I see that happening in around 1 to 1.5 years, possibly slightly sooner, depending on how fast the installed base upgrades. The next-gen mid-range parts from both ATI and NV, and their availability and pricing, should be a big factor in the game devs' direction.
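The "collapse multiple rendering passes" point can be sketched in plain Python. This is a hedged, made-up illustration, not real shader code: the light intensities and pixel values are invented, and real GPU passes involve much more state. It just shows that an SM2.0-style one-pass-per-light approach and an SM3.0-style in-shader loop produce the same image, which is the "same IQ, fewer passes" argument above.

```python
# Hypothetical per-light intensities and per-pixel base colors (invented data).
lights = [0.2, 0.5, 0.1]
pixels = [0.3, 0.6, 0.9]

def shade_multipass(pixels, lights):
    """SM2.0 style: one full render pass per light, each pass
    additively blended into the framebuffer."""
    framebuffer = [0.0] * len(pixels)
    for light in lights:                    # each iteration = one whole pass
        for i, base in enumerate(pixels):
            framebuffer[i] += base * light  # additive blend of this pass
    return framebuffer

def shade_singlepass(pixels, lights):
    """SM3.0 style: a dynamic loop *inside* one shader invocation,
    so all lights are accumulated in a single pass."""
    framebuffer = []
    for base in pixels:
        color = 0.0
        for light in lights:                # loop runs inside the "shader"
            color += base * light
        framebuffer.append(color)
    return framebuffer

# Both approaches yield the same final image.
print(shade_multipass(pixels, lights))
print(shade_singlepass(pixels, lights))
```

The interesting part is that the single-pass version touches each pixel once instead of once per light, which is exactly the kind of bandwidth saving devs would use SM3.0 branching for.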
			Originally posted by: apoppin
SM 3.0 is supposed to make shaders more efficient, not less efficient. Games are typically 1.5 to 3 years behind development. We just got SM 3.0 in the 6800 series . . . we already have an impressive number of titles, and the card is only ONE year old . . . now ADD ATI's support for SM 3.0 and I'd say 'case closed' for 3.0 being quickly adopted.
Most of the "efficiency" of SM3.0 is efficiency for the developers, not for the low-level hardware execution. It makes it easier for the devs to write the code in the first place; in many cases they just leave it up to the hardware guys to "make it faster", and that will likely require newer hardware than is currently out. The biggest advantages, IMHO, of ATI adopting SM3.0 are: 1) the devs no longer have to worry about the 16/24/32-bit FP thing, and can just code for 32-bit FP on all hardware, and 2) secondarily, ATI can now "properly" expose their geometry-instancing support in their DirectX drivers.
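The 16/24/32-bit FP headache can be illustrated with Python's `struct` module, which can round-trip a value through IEEE half precision (format `'e'`, the same 16-bit format as NV's FP16 "partial precision" hint) and single precision (`'f'`). A sketch, with an arbitrary shader-like intermediate value; 24-bit FP has no standard Python format, but it would land between the two:

```python
import struct

def roundtrip(value, fmt):
    """Pack a float at the given struct precision and read it back,
    showing what survives storage at that width."""
    return struct.unpack(fmt, struct.pack(fmt, value))[0]

texcoord = 1.0 / 3.0                 # arbitrary shader intermediate value

as_fp32 = roundtrip(texcoord, 'f')   # 32-bit float: ~7 decimal digits kept
as_fp16 = roundtrip(texcoord, 'e')   # 16-bit half: ~3 decimal digits kept

print(f"fp32: {as_fp32:.10f}")       # very close to 1/3
print(f"fp16: {as_fp16:.10f}")       # visibly off in the 4th decimal place
```

That per-format drift is why devs had to test the same shader per-vendor: an intermediate that was fine at 24 or 32 bits could band or shimmer at 16. A single 32-bit baseline makes the problem go away.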
			Originally posted by: apoppin
the final 'kicker' to my case is that it does not "hurt" if your "gamble" is wrong and you end up with a slow or useless SM 3.0 feature . . . it does not affect the rest of the videocard in any way.
For the most part, that's true, speaking of performance only... but you do pay extra up front for the additional feature, so the gamble isn't entirely "free". It's kind of like buying a S939 mobo instead of a S754 one, because of rumors that AMD would introduce dual-core S939 CPUs that would run on it. It could well be that, to get "proper" performance from a dual-core chip, you would need to move to DDR2 memory or something. So while it would "run" on your existing board, it wouldn't deliver the level of performance that one would ordinarily expect. I hypothesize that the current crop of SM3.0-capable hardware is much the same way.