I think the G400 was the better card because it had the stronger feature set: better circuitry (including the DAC), full fixed-point Z precision that could be forced via the driver, and full trilinear mipmapping that could likewise be forced via the driver. All it really lacked was Glide, and not designing it around DX6.1 was actually a good move, because Matrox's OpenGL ICD eventually delivered great performance and there was no risk of them implementing DXT poorly.

T&L wouldn't have been necessary if Intel had tried to design CPUs to be better for gaming (they had very little real competition thanks to superior marketing, so they didn't have to make the best and most groundbreaking products), although the same argument applies to every graphics processor with fixed-function hardware. We would be a lot better off with systems built from multiple general-purpose processor dies, some with an architecture tuned for graphics and some more balanced (like the GameCube's CPU); going to more than one core per die was a huge mistake, and it's a shame that can't be reversed now. For maximum framerates there is really no point in dedicated hardware anyway, because software can in principle be made as fast as the end user wants; it's more versatile even if it's more expensive (and without IP, prices would be lower, products more popular, and everything more efficient, because individuals would pay to have made what they want rather than relying on a business model where the inventor can't really know whether his product will turn a profit).
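Since I keep going on about "full trilinear", here is a minimal sketch of what the filter actually does: pick the two mip levels that bracket the LOD and blend between them. The sample_mip() function and its flat per-level shades are made up purely for illustration (real hardware takes bilinear taps inside each level per pixel), and this isn't a claim about how Matrox or anyone else literally wired it up:

```c
/* Minimal sketch of trilinear mip blending. sample_mip() is a stand-in:
   it pretends each mip level returns a flat shade. Real hardware does a
   bilinear fetch inside each of the two levels before blending. */
#include <math.h>
#include <stdio.h>

static float sample_mip(int level) { return 1.0f / (float)(1 << level); }

/* Trilinear: pick the two mip levels that bracket the LOD and blend linearly. */
static float trilinear(float lod)
{
    int   lo   = (int)floorf(lod);
    int   hi   = lo + 1;
    float frac = lod - (float)lo;
    return sample_mip(lo) * (1.0f - frac) + sample_mip(hi) * frac;
}

int main(void)
{
    /* LOD is normally derived from the screen-space texture footprint;
       a negative LOD bias (the thing drivers let you clamp) just shifts it. */
    for (float lod = 0.0f; lod <= 3.0f; lod += 0.5f)
        printf("lod %.1f -> %.4f\n", lod, trilinear(lod));
    return 0;
}
```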
NVIDIA wouldn't have a feature set that wasn't biased towards pure performance until the GeForce FX (and even that wasn't done right); G80 was really the first time they went all out on features: NVIO (or whatever the new display I/O chip was called), finally being able to do full fixed-point log Z buffers, AF angle-dependence handled just right for the first time (and to date nothing about it has needed changing), trilinear filtering that was perfect (I can't imagine how it could be any better eight years later, other than wishing they still let the end user force trilinear mipmaps and clamp negative texture LOD bias), SGSSAA added (even though it couldn't be selected until much later), floating-point render targets with any AA mode of the time, and full-precision integer/fixed-point math (if I'm not mistaken, AMD could do FP32 but only 24-bit fixed point). It's just a shame how NVIDIA's drivers have fallen, and that they put a programmable fuse into GK110 that can cripple FP64; GTX 780s and 780 Tis are basically all damaged parts. They'll probably charge at least $1.1k for a Maxwell with uncrippled FP64, and that's really low class because it will hold back progress.
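To make the log Z point concrete, here is a back-of-the-envelope comparison of how a plain 1/z-style depth buffer and a logarithmic one spread a 24-bit fixed-point budget across a scene. The near/far planes, the 24-bit assumption, and the crude numeric inversion are just example choices of mine, not a claim about how G80 actually stores depth:

```c
/* Rough look at depth precision: a 1/z depth buffer concentrates almost all
   of its precision right at the near plane, while a logarithmic mapping
   spreads it far more evenly over the view range. All numbers are arbitrary
   example values chosen for illustration. */
#include <math.h>
#include <stdio.h>

#define NEAR 0.1
#define FAR  10000.0
#define BITS 24                      /* assumed fixed-point depth buffer width */

/* Standard perspective depth: d = (1/z - 1/near) / (1/far - 1/near) */
static double hyperbolic_depth(double z)
{
    return (1.0 / z - 1.0 / NEAR) / (1.0 / FAR - 1.0 / NEAR);
}

/* Logarithmic depth: d = log(z/near) / log(far/near) */
static double log_depth(double z)
{
    return log(z / NEAR) / log(FAR / NEAR);
}

/* Roughly how far apart (in world units) are adjacent depth codes at z? */
static double step_size(double (*depth)(double), double z)
{
    double d    = depth(z);
    double code = 1.0 / (double)((1 << BITS) - 1);   /* one quantisation step */
    double dz   = 1e-6;
    /* crude numeric search: grow dz until depth moves by at least one code */
    while (fabs(depth(z + dz) - d) < code)
        dz *= 2.0;
    return dz;
}

int main(void)
{
    double samples[] = { 1.0, 10.0, 100.0, 1000.0, 9000.0 };
    for (int i = 0; i < 5; i++) {
        double z = samples[i];
        printf("z=%8.1f  1/z step ~%g   log step ~%g\n",
               z, step_size(hyperbolic_depth, z), step_size(log_depth, z));
    }
    return 0;
}
```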
As for ATi, there would have been nothing left for AMD to buy if cards had been sold on careful analysis of image quality and compatibility in reviews rather than on performance benchmarks. ATi, and now AMD, can thank the tech sites for that; if I'd had the brains to write an expert review back then, I'd have dug into the lack of features and given the R300 series no more than 50%. The performance was no doubt groundbreaking, and the size of that jump is legendary to this day, but the only good feature it had was properly rotated-grid AA.
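For anyone who hasn't seen why the rotated grid matters: with four samples on an ordered grid there are only two distinct x (and y) offsets, so a near-vertical edge can only produce coverage in steps of 1/2, while a rotated grid has four distinct offsets and gives steps of 1/4. The offsets below are representative patterns I picked for the sketch, not the exact positions R300 (or anything else) used:

```c
/* Why a rotated-grid 4x pattern resolves near-vertical edges better than an
   ordered grid: the rotated pattern has four distinct x offsets, the ordered
   one only two, so edge coverage ramps in 1/4 steps instead of 1/2 steps. */
#include <stdio.h>

typedef struct { double x, y; } sample;

static const sample ordered[4] = {
    {0.25, 0.25}, {0.75, 0.25}, {0.25, 0.75}, {0.75, 0.75}
};
static const sample rotated[4] = {
    {0.375, 0.125}, {0.875, 0.375}, {0.125, 0.625}, {0.625, 0.875}
};

/* Coverage of a vertical edge at x = t: fraction of samples left of the edge. */
static double coverage(const sample s[4], double t)
{
    int hit = 0;
    for (int i = 0; i < 4; i++)
        if (s[i].x < t) hit++;
    return hit / 4.0;
}

int main(void)
{
    for (double t = 0.0; t <= 1.0001; t += 0.125)
        printf("edge at %.3f  ordered %.2f  rotated %.2f\n",
               t, coverage(ordered, t), coverage(rotated, t));
    return 0;
}
```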
Anyway, which of the two do YOU think was the better product?