The overall 3DMark06 scores are as follows: the default 8800 GTX scored 12,515, EVGA's ACS3 board scored 13,115, while the latest baby from Nvidia, the 8800 Ultra, scored 13,191. The main performance difference between the EVGA and Nvidia boards comes down to the clock of the 128 scalar shaders: even though the EVGA card has a 14 MHz core-clock advantage, Nvidia runs the higher shader clock (1,500 MHz versus 1,456 MHz).
Broken down by segments, the SM2.0 test yielded 5,131 for the 8800 GTX, 5,354 for EVGA's ACS3 and 5,438 for the 8800 Ultra. The SM3.0 test came in at 5,418 for both the default 8800 GTX and the 8800 Ultra, while the ACS3 scored 5,431, taking the lead here. The CPU score was practically equal across the board: the 8800 GTX and the Ultra shared the very same result (4,556), while the ACS3 came in marginally lower (4,546).
Where it gets interesting is the fill rate, where the ACS3 board actually beats the 8800 Ultra: the ACS3 churns out 7,541.83 MTexel/s in Single-Texturing and 19,499.91 MTexel/s in Multi-Texturing mode. The Ultra cannot count on its higher shader clock here, since fill rate follows the core clock, so the board churned out 7,370.04 MTexel/s in Single-Texturing and 19,085.74 MTexel/s in Multi-Texturing mode.
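To see how closely that single-texturing fill rate tracks the core clock, here is a minimal back-of-the-envelope sketch. It assumes the 8800 Ultra runs its stock 612 MHz core clock and that the ACS3's 14 MHz advantage mentioned above puts it at 626 MHz; those clock figures are assumptions, not measurements from this test.

    # Hedged sketch: does the single-texturing fill rate scale with the core clock?
    # Assumed core clocks: 8800 Ultra at its stock 612 MHz, EVGA ACS3 at 626 MHz
    # (the 14 MHz advantage mentioned above). Fill rates are the 3DMark06 results.
    acs3_clock, ultra_clock = 626.0, 612.0       # MHz, assumed
    acs3_fill, ultra_fill = 7541.83, 7370.04     # MTexel/s, measured

    clock_ratio = acs3_clock / ultra_clock       # ~1.023
    fill_ratio = acs3_fill / ultra_fill          # ~1.023

    print(f"core-clock ratio: {clock_ratio:.3f}")
    print(f"fill-rate ratio:  {fill_ratio:.3f}")

Both ratios land around 1.023, which is why the factory-overclocked GTX can out-fill the Ultra despite the Ultra's higher shader clock.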
Things also got heated in both the simple and complex Vertex Shader operations, where the 8800 GTX scored 107.29 fps, the ACS3 scored 118.12 fps and the 8800 Ultra trailed with 115.66 fps.
The 8800 Ultra's muscle showed in the Pixel Shader test, where it led the ACS3 by roughly 11 fps (518.54 vs. 507.70, with the regular 8800 GTX at 478.25 fps). The Shader Particles test showed who is the boss: the Ultra scored 180.90 fps, compared to 166.67 fps for the ACS3 and 161.19 fps for the default 8800 GTX.
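As a quick sanity check on those Pixel Shader numbers, here is a minimal sketch comparing the Ultra's shader-clock advantage with its lead over the ACS3, using only the clocks and scores quoted above:

    # Hedged sketch: compare the Ultra's shader-clock advantage over the ACS3
    # with its Pixel Shader lead, using the figures quoted in this article.
    ultra_shader, acs3_shader = 1500.0, 1456.0   # MHz, from the article
    ultra_ps, acs3_ps = 518.54, 507.70           # fps, 3DMark06 Pixel Shader test

    print(f"shader-clock advantage: {(ultra_shader / acs3_shader - 1) * 100:.1f}%")  # ~3.0%
    print(f"pixel-shader lead:      {(ultra_ps / acs3_ps - 1) * 100:.1f}%")          # ~2.1%

So the Ultra's roughly three per cent shader-clock edge translates into a roughly two per cent lead in this particular test.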