<<
No, it doesn't. Since the GeForce4 Ti 4200 has two pixel shaders it makes use of the 128MB RAM. The Radeon 8500 with 128MB RAM will see maybe a 0.5% performance increase.
>>
The GF4 has only 1 pixel shader, not two.
Perhaps you're confusing that with its two vertex shaders, or its four pixel pipelines?
And FWIW, the Radeon 8500 (R200 core) also has two vertex shaders.
Really, the only area in which the extra memory would be of any benefit is when using 4X FSAA at high resolutions.
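To put a rough, back-of-the-envelope number on that (assuming 1600x1200 at 32-bit colour with a 32-bit Z/stencil buffer, and 4X FSAA done by rendering at twice the width and height; illustrative figures only, not benchmarks):

  front buffer:        1600 x 1200 x 4 bytes = ~7.3MB
  4X back buffer:      3200 x 2400 x 4 bytes = ~29.3MB
  4X Z/stencil buffer: 3200 x 2400 x 4 bytes = ~29.3MB
  buffers alone, before a single texture:    = ~66MB

Without FSAA the same setup needs roughly 22MB of buffers, so that's the scenario where 64MB starts to feel cramped and 128MB can actually pay off.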
<< Take a look at the GeForce3 Ti 200s that were equipped with 128MB RAM. Unless the architecture of the card uses the RAM then it won't make much use of it. There's no way that any Radeon 8500 will be able to beat a GeForce4 Ti's capability. >>
There is absolutely nothing preventing the GeForce3, or even the original GeForce, from using up to 256MB of RAM; that's the maximum the core is capable of addressing.
I'm not sure what the R200 core's maximum is in terms of DRAM it can utilize for the texture/frame buffer.
The fact that it has two vertex shaders (I presume you meant vertex and not pixel shaders in your statement) has no impact on whether the graphics core is capable of addressing and utilizing 128MB RAM.
Whether it needs 128MB of RAM is an entirely different matter, and that's debatable, but memory requirements for the GF4 are not much higher than they are for the GF3.
Actually, logically the R200 core would take better advantage of the extra memory, as its FSAA implementation consumes considerably more memory than the GF4's multi-sampled form of FSAA. In addition, the R200 core's address allocations in memory are slightly larger than those of the GF4 core.
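As a rough sketch of that reasoning (same 1600x1200, 32-bit assumptions as above, and taking the premise about the two FSAA schemes' memory appetites as stated rather than measured): the R200's 4X FSAA has to keep full-size 3200x2400 colour and Z buffers around, close to 60MB between them on top of the displayed frame, so if the GF4's multi-sampled mode really does get by with less per-sample storage, the supersampling card is the one with more to gain from having 128MB to play with.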