- Jan 16, 2003
- 21,211
- 50
- 91
R600 512-bit from the INQ.
"If the 512 memory ring turns to be the real thing, we are talking about 128 GB/s of memory bandwidth with GDDR4 clocked at 2000MHz. We also learned that the R600 may use memory faster than 2000MHz as it will be available by Q1. If ATI keeps pushing the chip we might get even faster GDDR4 chips at production time.
Even the PCB of the R600 will be super complicated, as you need a lot of wires to make 512 bit memory to work. Overall it has the potential to beat Nvidia's G80, but yet again it will come at least three months after Nvidia. The G80's memory works at 384 bit as Nvidia pretty much dis-unified everything in G80 from shaders to memory controllers. Nvidia likes to make rules and probably could not get more than 384 bit wide controller in the chip, as the G80 is still a 90 nanometre chip. "
If true, all that would remain to be seen is whether the 64 unified-shader GPU can dish out the pain.
It will be an extremely complicated and expensive PCB, and that will probably be reflected in the price tag. I don't know who came up with the wider-bus idea first (Nvidia or ATI), but it could go either way.
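For what it's worth, the 128 GB/s figure checks out: peak bandwidth is just the bus width in bytes times the effective transfer rate. A quick sketch (the function name is mine, and the G80 numbers below assume the 8800 GTX's 384-bit bus with GDDR3 at an effective 1800 MT/s):

```python
def memory_bandwidth_gbps(bus_width_bits: int, effective_mtps: int) -> float:
    """Peak memory bandwidth in GB/s:
    (bus width in bytes) * (effective transfer rate in MT/s) / 1000."""
    return (bus_width_bits / 8) * effective_mtps / 1000

# Rumored R600: 512-bit bus, GDDR4 at an effective 2000 MT/s
print(memory_bandwidth_gbps(512, 2000))  # 128.0 GB/s

# G80 (8800 GTX): 384-bit bus, GDDR3 at an effective 1800 MT/s
print(memory_bandwidth_gbps(384, 1800))  # 86.4 GB/s
```

So even if the GDDR4 chips don't get any faster than 2000 MT/s, the wider bus alone would put the R600 roughly 50% ahead of the G80 on paper.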