5150Joker
Diamond Member
Originally posted by: Cookie Monster
I have to agree with 5150joker on this one. The architectural side of the G70 is in fact quite different from the NV40. Different clock domains, and it behaves differently from the NV40 architecture (shown by the Beyond3D review where a 7800GTX reduced to 16 pipes performed worse than the 6800 Ultra), and I could go on with more differences. Simply put, you can't judge a book by its cover.
The 7800 isn't primitive; it's in fact more efficient, because on a performance/watt basis the 7800 beats the X1800. Not only that, but it took ATi 320 million transistors for a 16-pipe card against the 302 million transistor, 24-pipe GTX. The guys at NVIDIA worked on efficiency and used every trick in the book to increase performance, NOT insane clock speeds like the X1800 XT's 625MHz.
What features? Avivo? The 7 series PureVideo is fine, if not better. (More to come from AT, but NVIDIA is in the lead in de-interlacing.)
Adaptive AA? Transparency AA is in fact better. Some reviewers (Xbit Labs and HotHardware, for instance) mention that the AA on the 7800GTX is better than on the X1 series.
Big performance hit? The XT has 512MB at 1500MHz, compared to the GTX's 256MB at 1200MHz. Wait until the 512MB GTX arrives with faster memory, then conclude which card takes the bigger hit.
How can you compare the availability/price of the 512MB 6800 Ultra to the 512MB GTX? Looking at yields/availability, NVIDIA isn't suffering from any such issue. And since they themselves set the bar high on availability, I don't think they will shoot themselves in the foot.
IQ? Have you tried playing HL2 with 8xS? Of course there are differences, and different reviewers report different IQ, but to most people it looks the same. Some say AF on the 7 series is better, some say the X1 series is, so I don't see why people argue over IQ when the differences are minimal.
Watch him jump in here and tell us how having HDR+AA is so important these days and that the "primitive" 7800 lacking this is just horrible. Or he may even try to grasp at straws by saying ATi did SM 3.0 "right" because R520 has an ultra-threaded architecture and can execute one flow control instruction per cycle.
Of course none of that means jack sh!t when you look at the hard facts, and those are the benchmark results. In OpenGL, the "primitive" G70 wipes the floor with R520; with HDR, both take about the same hit (effectively making HDR+AA too much of a penalty); and in D3D the results go back and forth depending on which review you read, because some used reference-clocked G70 cards (which you can't even buy) while others used "stock" retail products with a 450MHz core.
Like you already mentioned, nVidia FSAA IQ is still the winner due to 8x+SS and the fact that adaptive AA doesn't look any better than TSAA. Finally, Driver Heaven (an ATi stronghold) themselves said 16x HQ AF showed no visual gains over standard 16x AF. So I'm left wondering: what is so special about the X1800 XT aside from its high power consumption, two-slot cooler design, and lack of availability?
Edit: Nice find on the missing vertex texture fetch. So much for the advanced R520.
Edit 2: Ah, it just hit me. He thinks R520 is advanced because it can do a lot more than just play games. For example, it can double as a compact leaf blower with the proper accessories: http://clan786.com/modules/coppermine/albums/userpics/10002/normal_ibiza2.jpg