Cookie Monster
Diamond Member
- May 7, 2005
The reduced memory interface is already balanced out by a high memory clock. You don't need such a wide memory interface if you can match it with a high memory clock. Samsung's latest GDDR4 really allows this, especially since games aren't that severely bandwidth limited unless you're thinking of 2560x1600 with AA/AF.
A 7600GT has 22.4GB/s of bandwidth. Now if the rumoured 8600GTS has a memory clock of 2000MHz (effective) on a 128-bit bus, that's 32GB/s. But likewise, as coldpower said, the performance gain from a wider memory interface doesn't outweigh the overall increase in the cost of the midrange GPU, the die size of the midrange GPU, and the much more complex PCB required to accommodate the memory and its interface. Not to mention the increase in power consumption. What matters is the efficiency of the architecture. Just take a look at the 7600GT (128-bit) vs the 6800 Ultra (256-bit). Or better, the X850XT PE (256-bit) vs the X1650XT (128-bit).
Anyway, why do you think the RV560, a.k.a. the X1650XT, with 21GB/s of bandwidth outperforms the 7600GT, which has 22.4GB/s, at higher resolutions with AA/AF?
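The bandwidth numbers above follow from bus width times effective data rate. A minimal sketch of that arithmetic (assuming the 7600GT's 22.4GB/s comes from a 128-bit bus at 1400MHz effective GDDR3, and the rumoured 8600GTS runs 128-bit at 2000MHz effective):

```python
def bandwidth_gbs(bus_width_bits: int, effective_clock_mhz: int) -> float:
    """Peak memory bandwidth in GB/s: (bus width / 8) bytes per transfer
    times the effective transfers per second."""
    return bus_width_bits / 8 * effective_clock_mhz * 1e6 / 1e9

# 7600GT: 128-bit bus, 1400MHz effective memory clock
print(bandwidth_gbs(128, 1400))  # 22.4

# Rumoured 8600GTS: same 128-bit bus, 2000MHz effective memory clock
print(bandwidth_gbs(128, 2000))  # 32.0
```

This makes the trade-off concrete: doubling the bus to 256-bit doubles bandwidth but also doubles the number of memory traces on the PCB, while raising the memory clock gets part of the way there with no board-complexity cost.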