Hello! I'm quarantined at home and trying to figure out various GPU specs and their impact on real-world performance, as I don't have anything better to do at the moment!
Anyhow, I hit a wall while comparing GTX 770 with GTX 970 and need your opinions on the matter.
First, look at the screenshot below. As you can see, both the GTX 770 and the 970 are running at the same core (1,215MHz) and memory (3,506MHz) frequencies and have the same 256-bit memory bus, which means identical memory bandwidth and eliminates any bandwidth-related variables. So far so good.
Since we know that the GTX 770 has 1,536 CUDA cores whereas the 970 has 1,664, it's easy enough to calculate the theoretical GFLOPS of both GPUs at this 'exact' moment:
GTX 970: 0.002 x 1,215MHz x 1,664 cores = 4,043 GFLOPS.
GTX 770: 0.002 x 1,215MHz x 1,536 cores = 3,732 GFLOPS.
As you can see, the GTX 770 should be just ~8% slower than the GTX 970, yet the frame rate suggests that the 770 is actually ~43% slower!
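For anyone who wants to double-check or plug in other clocks and core counts, here's a tiny Python sketch of the same calculation (assuming the usual 2 FLOPs per CUDA core per clock, which is where the 0.002 factor comes from):

```python
# Peak single-precision throughput from clock speed and core count.
# Assumes 2 FLOPs per CUDA core per clock (one fused multiply-add),
# which is the "0.002" constant when converting MHz -> GFLOPS.

def theoretical_gflops(core_mhz, cores, flops_per_core_per_clock=2):
    """Peak GFLOPS = clock (MHz) * cores * FLOPs-per-clock / 1000."""
    return core_mhz * cores * flops_per_core_per_clock / 1000.0

gtx970 = theoretical_gflops(1215, 1664)   # ~4,043 GFLOPS
gtx770 = theoretical_gflops(1215, 1536)   # ~3,732 GFLOPS

print(f"GTX 970: {gtx970:.0f} GFLOPS")
print(f"GTX 770: {gtx770:.0f} GFLOPS")
print(f"GTX 770 deficit: {(1 - gtx770 / gtx970) * 100:.1f}%")  # ~7.7%
```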
My question is a simple 'why'? Why the huge difference? What am I missing here? They should perform within a ~10% margin because they have the exact same memory bandwidth and frequency, and yet...
It's just super confusing!
So, any ideas?
