For my own benefit I decided to run some Batman: Arkham City benchmarks using some hardware I have lying around. My main rig is in my signature. I added a Zotac GT 430 (128-bit, 96 shaders, 16 TUs, 700MHz/1400MHz) and an EVGA GTX 570 SC (320-bit, 480 shaders, 60 TUs, 732MHz/1464MHz) to the system to see which card I wanted to install for full-time duty. My thinking was the GT 430 would be enough for PhysX only and I could sell the GTX 570. Boy, was I wrong!
Results are given as (Min, Max, Avg) FPS, averaged over 2 benchmark runs. All graphics options were set to their max values, resolution to 1920x1080, and PhysX to High. I should probably rerun the benchmark with VSync off to get a realistic Max value, but I was more interested in the Min and Avg results, because the game was unplayable at times with PhysX set to High on the Titan alone.
Titan Alone = 16, 60, 42 FPS
Titan + GT 430 = 13, 60, 49 FPS
Titan + GTX 650 = 32, 60, 54 FPS
Titan + GTX 570 SC = 34, 60, 55 FPS
Titan + GTX 770 SC = 35, 60, 56 FPS
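For clarity, here is a minimal sketch of how two runs get combined into one line above, assuming each metric (Min, Max, Avg) is simply averaged across the runs. The per-run values are placeholders for illustration, not my actual logged numbers.

```python
def combine_runs(run1, run2):
    """Average each metric (Min, Max, Avg FPS) across two benchmark runs."""
    return tuple(round((a + b) / 2) for a, b in zip(run1, run2))

# Placeholder run values, not actual logged data
run1 = (30, 60, 50)   # (Min, Max, Avg) FPS from a hypothetical first run
run2 = (34, 60, 54)   # hypothetical second run
print(combine_runs(run1, run2))   # -> (32, 60, 52)
```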
In practice, the jump in Min FPS from adding the GTX 570 SC turned the benchmark from looking like a slideshow at times into perfectly smooth. As an added bonus, the 570 SC will be doing Folding@home work once summertime finally passes us by. Right now it can make the room a bit too hot to leave Folding@home running 24/7 with both cards chugging away.
Given that the cards are running almost the same core and shader clock speeds, I would theorize that the number of shader units is making the large difference in FPS. However, some benchmarks I could find online comparing a GTX 650 vs. a GTX 650 Ti showed the 650 coming out ahead because it was clocked higher. Difference in architectures? I dunno. If anyone could shed some light on this for me, it would be much appreciated.
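To put a rough number on the shader-count theory, here's a back-of-the-envelope sketch comparing raw shader throughput (shader count × shader clock) for the two cards spec'd at the top of the post. It's only a crude proxy: it ignores architecture differences (both of these are Fermi parts, unlike the Kepler-based 650/770), memory bandwidth, and everything else.

```python
# Crude shader-throughput proxy: shader count * shader clock (MHz).
# Figures are the ones listed at the top of the post.
cards = {
    "GT 430":     (96,  1400),   # (shader count, shader clock in MHz)
    "GTX 570 SC": (480, 1464),
}

base_shaders, base_clock = cards["GT 430"]
base = base_shaders * base_clock

for name, (shaders, clock) in cards.items():
    throughput = shaders * clock              # arbitrary "shader-MHz" units
    print(f"{name:<11} {throughput:>9,} shader-MHz  ({throughput / base:.1f}x the GT 430)")
```

On that crude metric the 570 SC has roughly 5x the raw shader throughput of the GT 430 at nearly identical clocks, which at least lines up with the much better Min FPS.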