Yah, I guess there's no point to benchmarks, I mean, we should just take everything that we read on the internet at face value.
What in the world are you talking about?
Considering this isn't the first time we've disagreed on results from our own tests and observations, benchmarks are particularly relevant since they offer independent, reproducible, standardised results to compare against.
That's fine, and I've already told you to get a better selection of benchmarks to use. Obviously you must not be looking very hard if you can't find the freely available benchmarks on the web that agree with my statements.
I don't think I need to link anything for you, since I'm sure you've seen the results I'm talking about on any major review site.
You don't need to link anything; you need to go off and read some more reviews. A good place to start is Ti4200 and Radeon 8500 comparisons, because in those tests even the slower-clocked 128 MB cards beat the 64 MB cards. Also, as I already explained, you can pick benchmarks that show no differences and you'll still be wrong, because benchmarks don't tell the whole story.
Yes, the difference is that the texture memory requirement hasn't changed; it's simply the number of textures that need to be rendered per frame that has changed.
What in the world are you talking about? The average size and detail level of game textures has increased massively in the last few years. And even if the size hadn't increased, the fact that you've got more of them per scene increases memory storage requirements, since you need all of the textures in a given scene to be in the VRAM, otherwise you'll get texture swaps during the rendering process. Thus the requirements have gone up from two angles: bigger textures, and more of them.
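To put rough numbers on that, here's a quick back-of-the-envelope sketch. The per-scene texture counts and sizes are invented for illustration, and it assumes uncompressed 32-bit textures with full mipmap chains:

```python
def texture_mb(width, height, bytes_per_pixel=4, mipmaps=True):
    """Approximate VRAM footprint of one texture, in megabytes."""
    size = width * height * bytes_per_pixel
    if mipmaps:
        size = size * 4 // 3  # a full mipmap chain adds roughly one third
    return size / (1024 * 1024)

# Hypothetical older scene: 150 textures at 256x256
old_scene = 150 * texture_mb(256, 256)
# Hypothetical newer scene: 400 textures at 512x512
new_scene = 400 * texture_mb(512, 512)
print(f"old: ~{old_scene:.0f} MB  new: ~{new_scene:.0f} MB")  # old: ~50 MB  new: ~533 MB
```

Compression shrinks both figures, but the gap between the old and new requirements stays just as wide, which is the point.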
I've run every game you mentioned extensively at 1024 and max details with a GF3 64 MB, a Ti4200 128 MB (@1280 also), and a 9700 Pro 128 MB (@1280), and didn't notice any stuttering or slowdowns from texture swapping
And I've run games at 1152 x 864 on owned/tested 64 MB and 128 MB Ti4200s, a 64 MB GF3 Ti500 and a 128 MB Ti4600, and the 128 MB cards did much better than their 64 MB counterparts.
but then again, I never ran them at unrealistic resolutions either.
1152 x 864 is hardly unrealistic. Besides, this is a discussion about whether 64 MB vs 128 MB makes a difference, not what your subjective definition of unrealistic is.
I noticed lower minimum framerates when the card was overextended beyond its means.
Over-extended as in exceeding the card's VRAM, since 1024 x 768 is usually CPU-limited in most games with medium-speed cards.
Any real-time render will not miss the slight stutters or slowdowns you mention.
The render won't, but the observer might. Actually playing a game and feeling the controls and physics respond to you is much different from sitting back and watching a real-time timedemo run, much like the difference between TV FPS and game FPS. There's total interaction in one case and absolutely none in the other.
You'll be able to visually see such anomalies, and you'll certainly be able to catch them if you are monitoring minimum fps.
This is true and it's a good way to pick them up, but unfortunately not all games offer minimum-framerate measurements. Playing the game yourself is still by far the best way of catching such things.
Speaking of minimum framerates, why don't you benchmark Botmatch Anubis in UT2003 retail with maximum detail levels at both 1280 x 960 and 1600 x 1200 on your Radeon 9700 Pro (both playable settings), watch the demo carefully, and then look at the minimum framerate at both settings. You'll see that the minimums plummet at 1600 x 1200, far lower than the expected penalty of raising the resolution that one step. That is caused by texture swapping and is irrefutable proof that even 128 MB cards are starting to get squeezed in the newest games. And if you're up for it, try running the benchmark on a 64 MB Ti4200 vs a 128 MB Ti4200 and you'll see the 64 MB Ti4200 gets absolutely killed.
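As a rough sanity check on what the "expected" penalty of that resolution step looks like, here's the arithmetic; the framerate figures are made-up placeholders, purely to illustrate the reasoning:

```python
# 1600x1200 pushes about 1.56x more pixels than 1280x960, so on a purely
# fill-rate-limited card the minimum framerate should drop by roughly that factor.
pixel_ratio = (1600 * 1200) / (1280 * 960)      # 1.5625

min_fps_1280 = 40.0                             # hypothetical minimum at 1280x960
expected_min_1600 = min_fps_1280 / pixel_ratio  # ~25.6 fps expected
observed_min_1600 = 12.0                        # hypothetical minimum at 1600x1200

# If the observed minimum falls far below the expected value, the extra drop
# isn't coming from fill rate; it's coming from textures being fetched over
# AGP because they no longer fit in the VRAM.
print(expected_min_1600, observed_min_1600)
```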
I'm wrong b/c you said so, but published reports and a few examples I gave indicate otherwise.
No, I told you to look at some more benchmarks because the proof is out there (no pun intended). I also told you that benchmarks can mean nothing in this discussion depending on which ones are run, what the system specs are and how they're run.
They don't need to turn them into 128MB cards, b/c the extra memory simply isn't needed or used.
My goodness, it appears that the point of my comments whizzed completely past your head. I wasn't trying to "turn" a 64 MB card into a 128 MB one, I was merely illustrating that even the best memory management in the world always works better with more physical RAM.
Of course a 64MB system isn't going to behave like a 128MB system on any modern OS, but it sure as hell would if it was running Windows 3.1.
You're not running Windows 3.11 any more than you're running Quake III. It's 2003, not 1999.
the benefits of more system RAM would be much more tangible than additional video RAM
Not in a case such as texture thrashing, which is the most severe symptom of a lack of VRAM. It's a perfectly fine analogy, and it also illustrates how much more critical VRAM is than system RAM in a case like this, because textures are an all-or-nothing exercise, whereas normal data can be broken down as Windows pleases to make the most efficient use of available system resources.
It's a 4-year-old game that is the engine for 3 of the 5 games you mentioned.
Are you trying to be intentionally difficult, or do you honestly have trouble grasping the concept that games running on a given engine are not the same as the original game?
but so have compression techniques
Texture compression techniques are exactly the same as they were when the DirectX 6.0/7.0 spec defined them several years ago, namely 6:1 compression on noisy non-alpha textures and 4:1 ratios on most standard textures. If you're talking about Z/colour compression, that only works on data in the VRAM as well, which doesn't help data being loaded from the system memory. Indirectly it does free up more space and helps out, so I'll concede this point.
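For reference, those ratios fall straight out of the fixed S3TC/DXT block sizes the spec defined; a minimal sketch of the arithmetic (the 512 x 512 texture size is arbitrary):

```python
def dxt_kb(width, height, bytes_per_block):
    """Compressed size of one mip level: 4x4 pixel blocks, fixed bytes per block."""
    return (width // 4) * (height // 4) * bytes_per_block / 1024

w, h = 512, 512
rgb24  = w * h * 3 / 1024          # 768 KB uncompressed, no alpha
rgba32 = w * h * 4 / 1024          # 1024 KB uncompressed, with alpha
dxt1   = dxt_kb(w, h, 8)           # 128 KB -> 6:1 versus 24-bit RGB
dxt5   = dxt_kb(w, h, 16)          # 256 KB -> 4:1 versus 32-bit RGBA

print(rgb24 / dxt1, rgba32 / dxt5)  # 6.0 4.0
```

Those ratios are baked into the block format, which is why they haven't improved since the spec was written.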
and memory controller efficiency
The memory controller only works on data in the VRAM; it also doesn't help data coming from the system RAM, nor does it reduce the footprint of existing data in the VRAM. You're talking about saving bandwidth but the issue here is storage space, not bandwidth.
That's not true at all; I offered 3DMark2K3 as the only "gaming" application that would fully use 128 MB of VRAM, and I pointed out that only 100 MB would be used for textures, with the other 24 MB used for instructions, extensions and shader programs.
3DMark is not a game, and neither is it the "only" program that can squeeze 128 MB cards. Again, I refer you to the UT2003 example for your own reference. Also, this discussion is about 64 MB cards being obsolete, not about 128 MB cards being obsolete.
And as you stated, the need for VRAM will be less extensive once virtual texturing, programmable shaders, and compression techniques improve
Yeah, but that ain't here yet, and the compression techniques are always behind the developers in terms of how much they can compress versus how much demand the developers place on the card.
I have played those games start to finish, and guess what? It's not one long endless romp where the textures for the entire game are stored in VRAM. Every time a new level loads, the onboard and system cache is flushed and replaced with the necessary textures for that next level or area.
Exactly, and the problem comes when the textures for one level can't be fully loaded onto 64 MB cards. That's when you get stuttering and slowdowns, especially when you enter new areas that are textured differently to the old ones. Some games have "zones" where you can do 360-degree turns and get constant texture swaps, because everywhere you look requires textures not stored in the VRAM to be fetched from the system RAM. And when they're fetched there's not enough room to keep the old ones, so the old ones are swapped out, a process that continues for as long as you're in that area.
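A minimal sketch of why that happens, treating the VRAM as a simple LRU cache. The capacities, texture sizes and zone layout are invented for illustration; real drivers are cleverer, but the thrashing pattern is the same once the working set exceeds what fits:

```python
from collections import OrderedDict

class VramCache:
    """Toy LRU model of texture residency in VRAM."""
    def __init__(self, capacity_mb):
        self.capacity = capacity_mb
        self.used = 0.0
        self.resident = OrderedDict()  # texture id -> size in MB
        self.uploads = 0               # fetches over AGP from system RAM

    def bind(self, tex_id, size_mb):
        if tex_id in self.resident:
            self.resident.move_to_end(tex_id)   # already in VRAM, cheap
            return
        self.uploads += 1                       # must come from system RAM
        while self.used + size_mb > self.capacity and self.resident:
            _, evicted = self.resident.popitem(last=False)
            self.used -= evicted                # evict least recently used
        self.resident[tex_id] = size_mb
        self.used += size_mb

# A "zone" whose textures total 90 MB, viewed by spinning on the spot.
zone = [(f"tex{i}", 1.5) for i in range(60)]    # 60 textures x 1.5 MB = 90 MB

for capacity in (64, 128):
    cache = VramCache(capacity)
    for _ in range(5):                          # five full 360-degree turns
        for tex_id, size in zone:
            cache.bind(tex_id, size)
    print(capacity, "MB card ->", cache.uploads, "uploads")
# The 64 MB card re-uploads the whole zone on every turn;
# the 128 MB card uploads each texture exactly once.
```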
Again, memory issues aren't compounded if the same textures are accessed; the bottleneck is b/c more textures need to be rendered at the same time.
But there are more textures and they're bigger too, which makes them more likely to end up stored in the system RAM.
And I guess in your comprehensive testing, you also factored in any platform/processor/system RAM changes you made since running those extremely CPU/platform-bottlenecked games that you mentioned?
Of course, and in a lot of cases only the cards were changed. In addition, I ran many other tests where I changed a lot of other things, and this helped me get a clearer and broader picture of exactly what's happening in a wide variety of situations.
Did you do the same?
And I'm not saying 128 MB of memory is pointless, it's just pointless on slower GPUs (anything less than NV30 and R300). Current games don't need it (again, any links disputing that would be greatly appreciated),
Almost any card available in a 128 MB form is better than its 64 MB counterpart, going right back to the 8500/GF Ti4200 boards. Links to such results are freely available, so please try a search. Here's a good one to get you started. Most games already show a difference at 1024 x 768, and even Quake III shows a difference at the highest texture detail levels at 1600 x 1200; keep in mind this game was released when 16/32 MB cards were the standard.
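Part of the reason the gap widens at high resolution is that the framebuffer itself eats into the 64 MB before a single texture is stored. A rough calculation, assuming double-buffered 32-bit colour plus a 32-bit depth/stencil buffer:

```python
def framebuffer_mb(width, height, color_buffers=2, color_bytes=4, depth_bytes=4):
    """Approximate VRAM taken by the colour buffers plus the depth/stencil buffer."""
    pixels = width * height
    return (pixels * color_bytes * color_buffers + pixels * depth_bytes) / (1024 * 1024)

for res in ((1024, 768), (1600, 1200)):
    fb = framebuffer_mb(*res)
    print(res, f"~{fb:.0f} MB framebuffer, ~{64 - fb:.0f} MB left for textures on a 64 MB card")
# (1024, 768)  ~9 MB framebuffer, ~55 MB left for textures on a 64 MB card
# (1600, 1200) ~22 MB framebuffer, ~42 MB left for textures on a 64 MB card
```

Turn on antialiasing and the framebuffer figures grow several times over before textures get a look in.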
And again, like I said before, the benchmark results far understate the actual real-world benefit of more VRAM.