Is the GeForce FX's DDR2 really DDR2?

daction

Senior member
Nov 18, 2000
388
0
0
I thought the main feature that defined DDR2, and its ability to double theoretical bandwidth, was its ability to prefetch 4 bits per clock instead of DDR's 2 bits per clock, and then output the extra data by doubling the I/O buffer frequency. This article explains it in detail.
But now I'm confused about what the GeForce FX's memory actually is after going back to AnandTech's preview, which says:

"Where NVIDIA manages to remain competitive is by implementing higher speed "DDR2" memories. We put "DDR2" in quotes because there is no official DDR2 spec for graphics memory, and the only difference between this memory and conventional DDR is that the electrical and signaling characteristics of the memory are borrowed from the JEDEC DDR2 specification. This memory does not transfer 4 times per clock but simply improves on the way data gets in and out of the chip, allowing for much higher clock rates. This should sound familiar as it is very similar to what ATI did with GDDR3."

Tomshardware.com says the complete opposite, though:

"The card is using DDR2 memory which means it´s using a prefetch of 4 and doubles the amount of data transfered again. If a card is running with 1 GHz DDR2 datarate, the modules can be run at a quarter of that: moderate 250 MHz. That´s what people mean when they say that DDR2 is a cheap solution with a lot headroom. You can also read that in the Jedec whitepaper on page 6.

NVIDIA is using Samsung DDR2 modules with a DRAM cell frequency of 500 MHz - only half the data frequency. This means that the DDR2 memory on the GeForce FX behaves just like DDR memory, only with higher clock frequencies."
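Working through the arithmetic in both quotes (just a sketch of the clocking relationships; the prefetch widths of 2 and 4 are the JEDEC definitions mentioned above):

```python
# Relationship between DRAM cell clock, prefetch width, and per-pin data rate:
#   data_rate = cell_clock * prefetch

def cell_clock_mhz(data_rate_mhz, prefetch):
    """DRAM core clock needed to sustain a given per-pin data rate."""
    return data_rate_mhz / prefetch

# Tom's reading: true DDR2 with a prefetch of 4, so the cells run at a
# quarter of the data rate.
print(cell_clock_mhz(1000, 4))  # 250.0 MHz core for a 1 GHz data rate

# Anand's reading: DDR-style prefetch of 2, so the cells run at half the
# data rate -- which matches the 500 MHz Samsung figure Tom quotes.
print(cell_clock_mhz(1000, 2))  # 500.0 MHz core for a 1 GHz data rate
```

By that math, the 500 MHz cell frequency Tom himself cites lines up with a prefetch of 2, not 4, which only confuses me more.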

Does anyone know for SURE what the truth is about the GeForce FX's "DDR2"?
 

dawks

Diamond Member
Oct 9, 1999
5,071
2
81
I'm not very educated on this topic so I could be wrong, but as far as I know, I'd say it's not DDR2.
Remember that the GeForce FX's 128-bit memory bus runs at 500MHz, but has a maximum bandwidth of just 16GB/sec. Meanwhile, the Radeon 9700's 256-bit memory interface accommodates 19.8GB/sec, even though it runs at just 325MHz.
(Taken from GeForce FX Benchmarks)
Wouldn't memory bandwidth be much higher if it was actually DDR2?
 

formulav8

Diamond Member
Sep 18, 2000
7,004
522
126
It's 2 to 1 against Tom's. I would go with the odds. :) Seriously, I don't know.



Jason
 

Snoop

Golden Member
Oct 11, 1999
1,424
0
76
Originally posted by: DaZ
I'm not very educated on this topic so I could be wrong, but as far as I know, I'd say it's not DDR2.
Remember that the GeForce FX's 128-bit memory bus runs at 500MHz, but has a maximum bandwidth of just 16GB/sec. Meanwhile, the Radeon 9700's 256-bit memory interface accommodates 19.8GB/sec, even though it runs at just 325MHz.
(Taken from GeForce FX Benchmarks)
Wouldn't memory bandwidth be much higher if it was actually DDR2?

No, the FX's memory bus is 128-bit while the Radeon's runs at 256-bit, which 'essentially' doubles the bandwidth.
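Quick back-of-the-envelope with the clocks quoted above (a sketch; note the 19.8GB/sec figure actually works out to a ~310MHz clock, not 325MHz):

```python
# Peak theoretical bandwidth = (bus width in bytes) * (effective data rate).
def peak_bw_gbs(bus_bits, clock_mhz, transfers_per_clock=2):
    """Peak bandwidth in GB/s for a double-data-rate bus."""
    return bus_bits / 8 * clock_mhz * transfers_per_clock / 1000

print(peak_bw_gbs(128, 500))  # GeForce FX:  16.0 GB/s
print(peak_bw_gbs(256, 310))  # Radeon 9700: ~19.8 GB/s (19.8 implies ~310MHz)
```

So the Radeon gets more bandwidth from a slower clock purely on bus width.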

I should have known Anand had already covered the DDR2 issue (he is the man :D)
Quoted from Anand's preview:
Where NVIDIA manages to remain competitive is by implementing higher speed "DDR2" memories. We put "DDR2" in quotes because there is no official DDR2 spec for graphics memory, and the only difference between this memory and conventional DDR is that the electrical and signaling characteristics of the memory are borrowed from the JEDEC DDR2 specification. This memory does not transfer 4 times per clock but simply improves on the way data gets in and out of the chip, allowing for much higher clock rates. This should sound familiar as it is very similar to what ATI did with GDDR3.

NVIDIA is shooting for around a 500MHz clock speed (effectively 1GHz) for the "DDR2" memory on the GeForce FX. NVIDIA partnered with Samsung to provide memory for the GeForce FX built to NVIDIA's specification.
 

NFS4

No Lifer
Oct 9, 1999
72,636
47
91
I was over at Anand's lab last month and we were chattin' about the GeForce FX. I was blabbin' about this and that and how I was amazed that the FX has DDR2 memory...

He kindly told me that it wasn't real DDR2 :) It's just a "refined" version of DDR
 

Mark R

Diamond Member
Oct 9, 1999
8,513
16
81
DDR2 memory is not automatically any faster than DDR memory - it still transfers 2 bits per pin per clock cycle. 200 MHz DDR2 is no faster than 200 MHz DDR - in fact it may even be slower in some circumstances.

The differences are mainly internal, and allow it to reach much higher clock speeds. It is much easier (and cheaper once production ramps up) to produce a 250 MHz DDR2 RAM than a 250 MHz DDR RAM. If the GeForce FX really does use 500 MHz RAM, I find it hard to believe that it could be DDR at that speed. DDR2 seems much more plausible.

DDR was only designed to reach a maximum speed of 166 MHz (PC2700). Due to advances in manufacturing technology, some expensive 200 MHz chips are now available for system RAM, and small amounts of very fast RAM (e.g. 250 MHz) have been available for graphics cards.

DDR2 RAM was designed to launch at 200 MHz (PC3200) with the potential to reach 333 MHz for mainstream system memory (niche memory such as graphics RAM will be considerably faster).

A similar upgrade, DDR3, is planned for even higher speeds.
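For reference, the module ratings map directly onto those clocks (a quick sketch of the arithmetic behind the PC names, assuming a standard 64-bit DIMM):

```python
# DDR module bandwidth in MB/s -- the number marketed in the 'PC' rating.
def module_rating_mbs(clock_mhz, bus_bits=64, transfers_per_clock=2):
    """Approximate DDR DIMM bandwidth: bus bytes * transfers per second."""
    return bus_bits / 8 * clock_mhz * transfers_per_clock

print(module_rating_mbs(166))  # ~2656 MB/s, marketed as PC2700 (DDR333)
print(module_rating_mbs(200))  # 3200 MB/s -> PC3200 (DDR400)
```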
 

GTaudiophile

Lifer
Oct 24, 2000
29,767
33
81
Originally posted by: NFS4
I was over at Anand's lab last month and we were chattin' about the GeForce FX. I was blabbin' about this and that and how I was amazed that the FX has DDR2 memory...

He kindly told me that it wasn't real DDR2 :) It's just a "refined" version of DDR

Oh, so you've seen the benches...and know exactly how wrong I am that the R300 is faster, right? ;)

And if he had the FX in his lab LAST MONTH, where is the review???
 

NFS4

No Lifer
Oct 9, 1999
72,636
47
91
Originally posted by: GTaudiophile
Originally posted by: NFS4
I was over at Anand's lab last month and we were chattin' about the GeForce FX. I was blabbin' about this and that and how I was amazed that the FX has DDR2 memory...

He kindly told me that it wasn't real DDR2 :) It's just a "refined" version of DDR

Oh, so you've seen the benches...and know exactly how wrong I am that the R300 is faster, right? ;)

And if he had the FX in his lab LAST MONTH, where is the review???

Where did I say anything about benches or having a card??? I said that we were chattin' about the GeForce FX. It had just recently been announced when we were talking about it.
 

Snoop

Golden Member
Oct 11, 1999
1,424
0
76
I wonder if anyone can explain Nvidia's reasoning for sticking with the 128-bit memory interface. It just doesn't seem to make any sense. What would motivate them to keep ramping memory speed rather than just redesigning the memory controller? Would they possibly break backwards compatibility with their drivers? Is the cost (or development time?) of redesigning the memory controller so prohibitive that it's more efficient to keep using the 128-bit controller with the high-cost RAM?

Thanks in advance

 

GTaudiophile

Lifer
Oct 24, 2000
29,767
33
81
Originally posted by: Snoop
I wonder if anyone can explain Nvidia's reasoning for sticking with the 128-bit memory interface. It just doesn't seem to make any sense. What would motivate them to keep ramping memory speed rather than just redesigning the memory controller? Would they possibly break backwards compatibility with their drivers? Is the cost (or development time?) of redesigning the memory controller so prohibitive that it's more efficient to keep using the 128-bit controller with the high-cost RAM?

I have the feeling NV30 was already well into the development cycle/pipeline before the Parhelia and R300 were announced. Maybe nVidia didn't count on Matrox and ATi adopting a 256-bit bus that soon, or maybe the ex-3dfx team has been working on it for THAT long. I think timing is the issue here, not money or technology.

Edit: Don't forget it was the ArtX team that developed the R300. I guess time will tell which company (nVidia/ATi) made the better buy (3dfx/ArtX).
 

Snoop

Golden Member
Oct 11, 1999
1,424
0
76
I just find it hard to imagine that Nvidia could have been caught this flat-footed by such a seemingly simple, yet powerful, change in technology. It just seems like a stretch that both ATI AND Matrox developed and used a 256-bit memory controller on what are now, relatively speaking, dated designs (last-generation compared to the FX).
 

andreasl

Senior member
Aug 25, 2000
419
0
0
Of course it's DDR2, but it's not the same kind of DDR2 that will go into modules for PC motherboards. The main difference is that memory for graphics cards is designed to operate on a point-to-point connection rather than a memory bus across a module connector. This allows the memory to be clocked higher than if it were used on regular modules. And it doesn't have to be designed to fit in a million and one combinations, just one.

Current DDR on graphics cards works the same way in relation to its DDR DIMM cousins.
 

daction

Senior member
Nov 18, 2000
388
0
0
Ahhh, so DDR2 for video cards isn't going to be the same as it will be for system RAM. I guess I got all confused after reading this article that's been circulating around the net.
 

Gstanfor

Banned
Oct 19, 1999
3,307
0
0
ExtremeTech - Inside the GeForceFX Architecture
At first it may seem odd that nVidia would come out with a powerful new GPU and still use a 128-bit memory interface (compared to 256-bit in the Radeon 9700). But the GeForceFX now uses DDR-2 memory, and its controller is designed to achieve its best efficiency with DDR-2. nVidia's Tony Tamasi explains:

Some of the ideas [from LMA2] have been improved, but it is essentially a new design. It's still multi-partitioned as NV25 was, but the difference is that everything has been built around full-speed operation of 4X FSAA. The horsepower in the ROP [GeForceFX's pixel sub-system that reads and writes pixels, checks z-buffers, does blending, does AA, etc.] and the frame-buffer has been sized accordingly, the caches have been sized accordingly, and the arbitration policies have been designed accordingly, so in that sense it's completely new. It's also running at 2X the clock rate, so the entire design had to be changed to take advantage of DDR-2.
Tamasi went on to explain how nVidia takes advantage of DDR2:

"There are fundamental differences between DDR1 and DDR2, and if you want to make good use of DDR2, you have to design around longer burst lengths on the memory, because that's how they're going faster. So the entire memory subsystem has to be designed to handle that. You might be able to hook up a chip that's built for DDR1memory to DDR2 memory, and even run it at a high frequency, but you get horrible utilization out of the memory, because that DDR1 memory subsystem is all built around Burst-Length 2 accesses. So you'll get a half the efficiency accessing the Burst-Length 4 memory device."

This graph shows that DDR-2 has more headroom to achieve better incremental clock speed gains as access latency times are reduced. DDR1 is approaching its physical limit in terms of further reducing access latency, and as a result, it's also approaching its maximum possible clock rate. DDR-2 appears to have good headroom to scale up over a 1GHz effective clock rate (500MHz real clock). Although DDR-2 isn't slated to be in volume production for system memory until the third quarter of 2003, board makers should be able to get the DDR-2 memory chips they'll need for the GeForceFX. One reason is that GeForceFX boards will likely be priced around $400, and as such will not be a high-volume product, so sufficient memories should be available when boards begin shipping in February.

When asked about GeForceFX's memory bandwidth versus Radeon 9700, both Kirk and Tamasi acknowledged that ATI holds a nearly 25% advantage in terms of peak memory bandwidth. But Kirk quickly claimed that the GeForceFX will be more efficient in terms of effective bandwidth achieved moving actual data in and out of the frame buffer. Tamasi added that they've seen 25-30% increases in effective available bandwidth in their labs. He attributed this increase to features like Z-occlusion and texture compression, and claimed that GeForceFX's effective maximum bandwidth may be closer to 20GB/sec.
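To make Tamasi's burst-length point concrete, here's a toy model (my own illustration, not from the article): a controller that issues burst-of-2 accesses against a burst-of-4 device throws away half of every burst.

```python
# Toy model of the burst-length mismatch Tamasi describes: each device burst
# delivers device_burst words whether or not the controller wanted them all.
def utilization(request_words, device_burst):
    """Fraction of each burst that carries data the controller asked for."""
    return min(request_words, device_burst) / device_burst

print(utilization(2, 2))  # DDR1 controller on DDR1 (BL2): 1.0  (100%)
print(utilization(2, 4))  # DDR1 controller on DDR2 (BL4): 0.5  (50%)
print(utilization(4, 4))  # controller redesigned for BL4: 1.0  (100%)
```

Which is presumably why the NV30 memory subsystem had to be rebuilt around longer bursts rather than just being hooked up to faster chips.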
 

Snoop

Golden Member
Oct 11, 1999
1,424
0
76
When asked about GeForceFX's memory bandwidth versus Radeon 9700, both Kirk and Tamasi acknowledged that ATI holds a nearly 25% advantage in terms of peak memory bandwidth. But Kirk quickly claimed that the GeForceFX will be more efficient in terms of effective bandwidth achieved moving actual data in and out of the frame buffer. Tamasi added that they've seen 25-30% increases in effective available bandwidth in their labs. He attributed this increase to features like Z-occlusion and texture compression, and claimed that GeForceFX's effective maximum bandwidth may be closer to 20GB/sec.
This sounds really bad. Basically, all the preceding crap about burst length, efficiency, latency, yada, yada, yada, means jack sh!t: any way you cut it, they are using more expensive memory and achieving less bandwidth. I must say, this is a little distressing to me as an Nvidia fan and stockholder.
 

Hockeyfan6781

Junior Member
Dec 28, 2002
19
0
0
I think nVidia probably knew about ATi's plans for the R350 and R400, and didn't wanna completely blow its load all in one shot.... I think the NV35, which nVidia has slated for September 2003, will incorporate a 256-bit memory controller... just something they're probably keeping in their back pocket, like they did with the GF4 after ATi released the 8500 series to beat out the GF3 line...
just a thought