Nvidia and AMD have constantly been regulated by MS, and only about half of what is left up to them is any good... and on Microsoft's part, they've always been behind the times. For example, the ROPs and depth units should have been programmable from the beginning (by now they should be vector FP units with FMA5 or FMA7, really fast FP64 and FX64 precision, and completely programmable), because that's where the user would have more of a choice: the drivers would operate at that low level, so they wouldn't have to expose uniform fixed functions and wouldn't always be slower. It would help backward compatibility without hurting the future or uniformity. The vendor with the better texture units (the only thing that should remain fixed-function hardware) would win out, or the better texture units would be copied. There's a rough sketch below of what a programmable ROP stage could look like.
Larrabee was probably scrapped in part due to the industry's focus on frames per second rather than what individual users want, and because its FP32 vector units may not have offered enough precision. It was also something Intel probably didn't have faith in, and perhaps they thought they could make more money in the short term by making crap iGPUs.
AMD also wanted to do a more software-based design, but the wrong way around: they wanted the ROPs and depth units to stay hardware while the texture units became shaders. That gives the end user less flexibility, and it isn't a net gain for all programmers either. Modern ROPs are also more scalable and have more features, so they cost more transistors than Nvidia's texture units... which would be even better if they dropped all lossy formats from the hardware, replaced them with a single lossless texture compression format, and supported raw uncompressed maps where necessary or when something new came up; DXT and the other lossy-format maps could comfortably be decoded on the compute cores. Maybe they could even raise the trilinear mipmapping calculation precision to quadruple-extended FP (i.e., FP160).
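To make the programmable-ROP idea concrete, here's a minimal sketch (my own illustration, not anything Nvidia or AMD actually ship): the depth test and blend that today's fixed-function ROPs perform, written as an ordinary CUDA kernel. The struct and kernel names are made up, and it ignores the per-pixel ordering and atomicity a real pipeline would have to guarantee.

[code]
// Hypothetical "programmable ROP": the depth test and blend stage
// expressed as plain CUDA code over a framebuffer, instead of
// fixed-function hardware. Simplified: assumes no two fragments
// in flight hit the same pixel (no atomics / ordering).
#include <cstdint>

struct Fragment {
    float r, g, b, a;   // shaded source color
    float depth;        // window-space depth in [0,1]
    int   x, y;         // target pixel
};

__global__ void programmable_rop(const Fragment* frags, int n,
                                 float4* color_buf, float* depth_buf,
                                 int width)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    Fragment f = frags[i];
    int idx = f.y * width + f.x;

    // Software depth test: the comparison is just code, so any
    // test (or none) could be swapped in per draw call.
    if (f.depth >= depth_buf[idx]) return;

    // Software "over" blend in full FP32; once this is code, the
    // blend equation isn't limited to the fixed set DX/GL enumerate.
    float4 dst = color_buf[idx];
    float4 out;
    out.x = f.r * f.a + dst.x * (1.0f - f.a);
    out.y = f.g * f.a + dst.y * (1.0f - f.a);
    out.z = f.b * f.a + dst.z * (1.0f - f.a);
    out.w = f.a       + dst.w * (1.0f - f.a);

    color_buf[idx] = out;
    depth_buf[idx] = f.depth;
}
[/code]

The point is that once this stage is just code, a driver could keep every old blend mode working forever (backward compatibility) while new modes are only a recompile away, which is exactly the flexibility I'm saying fixed-function ROPs deny us.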
Anyway, I think this all has roots back to 2002, when MS made it so ATi would get off the ground. If you'll remember, ATi did nothing new of their own with R300; they simply used clever marketing while making the filtering no better overall (worse, in my opinion, than what the 8500 was capable of), kept the 8500's IQ/compatibility problems by keeping the very aggressive, IQ-decreasing back-buffer optimizations, and built to the bare minimum specs Microsoft specified: FP24 pixel shader precision, short shader instruction lengths, a 24-bit fixed-point maximum z-buffer format, and no practical way to run games that would use the w-buffer... and they even used an integrated DAC before DVI had taken off, instead of using excellent output circuitry like Matrox did. Then Nvidia made too many dumb decisions with the GeForce FX, so after that they started listening to MS more and more; they also partnered with Industrial Light & Magic (yet ignored the best possible AA for the very part they spent so much effort on) and finally created CUDA. CUDA was not vulnerable to the market, and therefore not open, because of patents, and hardly any applications even use it, probably because Nvidia keeps it under such tight wraps... that is why OpenCL, or a better OpenGL, could never get ahead: because of that, and because OpenCL wanted to cooperate. That said, I think Nvidia was probably more innovative than ATi, but they were also just as wasteful, not in spite of, but because of their top-down and hyper-stable management.
AMD and MS never got the uniform standards they wanted because they sucked so badly at it: they got hooked on recycling and reworking old designs and ideas, used an inappropriate mishmash of hardware and software function, and were behind the times... they couldn't do anything outside the box.
OpenGL would've worked better because it was run by a board of individuals who shared power and worked with the IHVs... it wasn't run from the top down. That's largely why it would've been better and closer to uniform if MS hadn't tried to micromanage... they really sucked at it, but they sure did con most people into thinking DX was the best thing since sliced bread. I don't know whether MS thought it could make everything uniform and that everyone would be happy, or if they were just trying to be manipulative and make money (or both)... my money is on both, and it's not really their fault, since they were corrupted by the state like everyone is; but as an institution they were more corrupted than individuals are, and they were aggressive and even somewhat pro-state almost from the very beginning. Bill Gates said he would sue people for copying him even before he was taken to court by Apple, IIRC.
Sorry for the incoherence, but I can't believe I'm the only one who sees it this way. I just think the GPU industry is something incredibly corrupted by the State and by IP... it has held back innovation, and that's a shame. It may have pleased some people, but I've never thought Nvidia and AMD had it more than half right. They wouldn't have been able to make as much money, but then they waste money and resources more than I do.
Your thoughts?
			