GeForce 256 DDR vs GeForce2 MX 32MB

Ardeaem

Member
Mar 1, 2001
25
0
0
I am building a computer for my fiance, and would like to know which video card to keep in my system (i.e., which will play games better). I have a GeForce 256 DDR and a GeForce2 MX 32MB. Which will be better for gaming?


Ardeaem
 

SammyBoy

Diamond Member
Jan 7, 2001
3,570
1
0
What about the 64MB MX that ocie.com sells for 137 bucks? Would that beat the 256?
 

Biggs

Diamond Member
Dec 18, 2000
3,010
0
0
I doubt it; the GF2MX is limited by the bandwidth of the memory, not the quantity. I don't see how even a 128MB SDR video card could beat a 32MB DDR one. Note that the GF DDR is a power-hungry card, so better get at least a 300W PSU.
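Rough numbers behind the bandwidth point. This is only back-of-the-envelope math, assuming the usual reference boards of the time (128-bit bus, ~150MHz DDR memory on the GF256 DDR, ~166MHz SDR on the GF2 MX); adding more memory changes neither figure:

```cpp
#include <cstdio>

// Theoretical peak memory bandwidth, assuming common reference boards:
// GeForce 256 DDR: 128-bit bus, 150 MHz DDR (300 MHz effective transfers)
// GeForce2 MX:     128-bit bus, 166 MHz SDR
int main() {
    const double busBytes = 128.0 / 8.0;        // 16 bytes per transfer

    double gf256ddr = 300e6 * busBytes / 1e9;   // ~4.8 GB/s
    double gf2mx    = 166e6 * busBytes / 1e9;   // ~2.7 GB/s

    std::printf("GeForce 256 DDR: %.1f GB/s\n", gf256ddr);
    std::printf("GeForce2 MX:     %.1f GB/s\n", gf2mx);
    return 0;
}
```

So the DDR card has roughly 4.8 GB/s against roughly 2.7 GB/s for the MX, regardless of whether the MX carries 32MB or 64MB.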
 

Rand

Lifer
Oct 11, 1999
11,071
1
81
The DDR GeForce1 is faster. Even with 64MB the MX wouldn't win out unless you were running an extremely texture-intensive game at very high res, say 1600x1200x32, and even then both of them would be getting such low frame rates that it wouldn't matter anyway.

BTW, the DDR GeForce requires a lot of power and is known for causing problems on some low-end motherboards and/or with weak power supplies.
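To put rough numbers on the "very high res" point: at 1600x1200 with 32-bit color, the framebuffer alone eats a large chunk of a 32MB card. A quick sketch, assuming double buffering plus a 32-bit Z buffer (the exact layout varies by game and driver):

```cpp
#include <cstdio>

// Approximate framebuffer footprint at 1600x1200 with 32-bit color,
// double buffered, plus a 32-bit Z buffer (assumed setup).
int main() {
    const double pixels   = 1600.0 * 1200.0;
    const double bpp      = 4.0;                     // bytes per pixel at 32-bit
    const double surfaces = 3.0;                     // front + back + Z

    double mb = pixels * bpp * surfaces / (1024.0 * 1024.0);
    std::printf("Framebuffer use: %.1f MB\n", mb);   // roughly 22 MB
    return 0;
}
```

That is roughly 22MB before a single texture is loaded, leaving only about 10MB for textures on a 32MB card, which is where the extra memory on a 64MB MX could start to matter in a very texture-heavy game.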

 

Leo V

Diamond Member
Dec 4, 1999
3,123
0
0
Only two things going for the MX:
-Substantially cooler running. The GeForce1 dissipates ~15 watts of heat, while the GeForce2 MX dissipates ~4 watts, eliminating the need for a fan. That's my excuse ;)
-Somewhat faster T&L due to a 175MHz clock instead of 120MHz. I wouldn't call that a huge difference, however, and it won't help it beat the GF DDR in most cases.
 

arod324

Golden Member
Jan 21, 2001
1,182
0
0
The DDR will alleviate some of the memory bandwidth problems that the MX has. Go with the DDR; it is faster.
 

Mixxen

Golden Member
Mar 10, 2000
1,154
0
0
GF256 DDR has a sweet spot at 1024x768x32.

The GF2MX has a sweet spot at 800x600x32.

This is due to the higher memory bandwidth of the DDR (which is the bottleneck of all GF cards).

The MX supports the Shading Rasterizer (per-pixel shading), and the GF256 does not. I still opted to buy the GF256 DDR because I'd rather play at 1024x768x32.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
"The MX supports Shading Rasterizer (per pixel shading), and the GF256 does not."

No, both cards support NSR. nVidia hyped the "new" feature on the GF2, but it was always present on the GF1 boards. I know you can find articles saying otherwise if you look around; they are wrong.
 

ahfung

Golden Member
Oct 20, 1999
1,418
0
0


<< No, both cards support NSR. nVidia hyped the "new" feature on the GF2, but it was always present on the GF1 boards. I know you can find articles saying otherwise if you look around; they are wrong. >>

Ben, I'm not sure if it's correct. The Crytek X-Isle demo shows that the MX is faster than the DDR under any resolution and color depth, especially under cubemapping in refraction/reflection mode. I feel puzzled. :confused:
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
"Ben, I'm not sure if it's correct. The Crytek X-Isle demo shows that the MX is faster than the DDR under any resolution and color depth, especially under cubemapping in refraction/reflection mode. I feel puzzled."

It is correct, would I lie to you?:)

Several GeForce boards shipped with nVidia demos that included cube mapping for that particular feature; any of the rest you can check via nVidia's demos or with a DX caps viewer. The GF2 series offers no new features over the GF1 as far as 3D rendering goes. Why would the GF2MX be faster than the DDR in X-Isle with cubemapping? Raw MTexel fillrate, more than likely (~700 MTexels vs ~500 MTexels).
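For anyone who wants to query the caps directly instead of trusting a caps-viewer screenshot, a minimal DX8 sketch along those lines (assuming the DirectX 8 SDK headers and d3d8.lib are available; it only asks the installed driver what it reports for the primary HAL device):

```cpp
#include <windows.h>
#include <d3d8.h>
#include <cstdio>

// Query what the installed driver reports for the primary display adapter.
int main() {
    IDirect3D8* d3d = Direct3DCreate8(D3D_SDK_VERSION);
    if (!d3d) { std::printf("DirectX 8 runtime not available\n"); return 1; }

    D3DCAPS8 caps;
    if (FAILED(d3d->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &caps))) {
        d3d->Release();
        return 1;
    }

    // Cube environment mapping is exposed through the texture caps bits;
    // it should show up on both the GF1 and GF2 families.
    std::printf("Cube mapping: %s\n",
                (caps.TextureCaps & D3DPTEXTURECAPS_CUBEMAP) ? "yes" : "no");

    d3d->Release();
    return 0;
}
```

Both families should report the cube-mapping bit here, and the same D3DCAPS8 structure holds the rest of the texture and texture-op caps if you want to compare the two cards feature by feature.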
 

ahfung

Golden Member
Oct 20, 1999
1,418
0
0
Hey Ben, I know you've got a GF DDR. You should try the demo out. My DDR only got around 12fps under the cubemapping. It is very strange that no matter what resolution or color depth, the lowest fps is still stuck at about 12fps, and overclocking to 150/350 has no effect on the cubemapping fps. I have witnessed the MX doing over 25fps at the same spots. At 150/350 the DDR should have ~600 MTexels vs the MX's ~700. I don't think that would account for almost doubled fps, so I highly doubt it has anything to do with fillrate.

Isn't it weird?
 

Leo V

Diamond Member
Dec 4, 1999
3,123
0
0
ahfung, this is just a guess to explain the 12fps vs. 25fps difference: is there a chance that AGP2X was forced in the former case, and AGP4X in the latter? It would make a lot of sense in a high-geometry scene.
 

xtreme2k

Diamond Member
Jun 3, 2000
3,078
0
0
I don't think AGP 4x/2x is the issue, since on my GTS the whole demo at 1024/32-bit rarely drops below 40fps, and I am on a BX mobo.

I think it has to do with the architecture.
The GF DDR can do 4 x 1 textures?
The MX can do 2 x 2?

Also, it might have something to do with the T&L unit?

I am running the demo in Win2000 and it seems slower than in 98SE. It drops down to ~35fps during the cubemap.
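Putting numbers on the 4 x 1 versus 2 x 2 idea shows why raw texel fill alone is hard to blame. A quick sketch using the clocks mentioned in this thread (120MHz stock and the 150MHz overclock for the GF256, 175MHz for the MX); the pipeline counts are the usual figures for these chips:

```cpp
#include <cstdio>

// Theoretical fill rates from pipelines x TMUs x core clock.
// GeForce 256: 4 pipelines x 1 TMU; GeForce2 MX: 2 pipelines x 2 TMUs.
struct Card { const char* name; int pipes; int tmus; double mhz; };

int main() {
    const Card cards[] = {
        { "GeForce 256 @ 120MHz", 4, 1, 120.0 },
        { "GeForce 256 @ 150MHz", 4, 1, 150.0 },   // the overclock mentioned above
        { "GeForce2 MX @ 175MHz", 2, 2, 175.0 },
    };

    for (const Card& c : cards) {
        double mpixels = c.pipes * c.mhz;            // MPixels/s
        double mtexels = c.pipes * c.tmus * c.mhz;   // MTexels/s
        std::printf("%s: %4.0f MPixels/s, %4.0f MTexels/s\n",
                    c.name, mpixels, mtexels);
    }
    return 0;
}
```

So the MX leads in texel fill (~700 vs ~600 MTexels/s at 150MHz) but trails in pixel fill (350 vs 600 MPixels/s), which makes a clean doubling of frame rate from fillrate alone hard to explain.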
 

ahfung

Golden Member
Oct 20, 1999
1,418
0
0
AGP mode doesn't matter. Both the MX and the DDR ran on BX mobos. My system is a 550E @ 860 with 256MB RAM, while my friend's MX rig is a 550E @ 733, also with 256MB RAM. Both have the 6.50 drivers and DX8 installed.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
Hmmmm, I'll have to DL it and play around with it a bit. Faster T&L shouldn't be it either; with a DDR running at ~150MHz you should be as close in T&L performance as you are in raw fill.
 

Mixxen

Golden Member
Mar 10, 2000
1,154
0
0
Hey, I just tried the MFC Pixel Shader demo that comes with the DirectX 8 SDK... and the GF256 does not support pixel shading.

The GF256 does support cubemapping. I think the per-pixel shader and cubemapping are different entities.

Uhh... I just tried the Crytek demo and it seems per-pixel lighting works on my GF256 DDR... so my question is: are per-pixel shaders and per-pixel lighting the same thing?
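For what it's worth, the DX8 SDK demo is checking the programmable pixel shader version the driver reports, which is a separate cap from the fixed-function DOT3-style per-pixel lighting that GF1/GF2-class cards do expose. A small sketch of the two checks, again assuming the DirectX 8 SDK is installed:

```cpp
#include <windows.h>
#include <d3d8.h>
#include <cstdio>

// Compare the DX8 programmable pixel shader cap (what the SDK demo checks)
// against the fixed-function DOT3 texture op used for per-pixel lighting.
int main() {
    IDirect3D8* d3d = Direct3DCreate8(D3D_SDK_VERSION);
    if (!d3d) return 1;

    D3DCAPS8 caps;
    if (FAILED(d3d->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &caps))) {
        d3d->Release();
        return 1;
    }

    std::printf("Pixel shader version: %lu.%lu\n",
                (caps.PixelShaderVersion >> 8) & 0xFF,
                 caps.PixelShaderVersion & 0xFF);
    std::printf("Supports ps.1.0: %s\n",
                caps.PixelShaderVersion >= D3DPS_VERSION(1, 0) ? "yes" : "no");

    std::printf("DOT3 per-pixel lighting op: %s\n",
                (caps.TextureOpCaps & D3DTEXOPCAPS_DOTPRODUCT3) ? "yes" : "no");

    d3d->Release();
    return 0;
}
```

On a GeForce 256 or GeForce2 the reported pixel shader version comes back 0.0 while the DOT3 op is present, which would line up with the SDK demo refusing to run even though per-pixel lighting effects still work in the Crytek demo.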