256MB FX 5600 Ultra coming

Page 3 - Seeking answers? Join the AnandTech community: where nearly half-a-million members share solutions and discuss the latest tech.

nRollo

Banned
Jan 11, 2002
10,460
0
0
"anyone who does have the problem only has themselves to blame for not resolving it, wether that means getting their money back or whatever."
LOL- that's a new one:
"It is the foolhardy buyer's fault the 9700 doesn't work right? Have they no silicon wafer fab, staff of engineers, and board assembly skills to remedy the issue?!"
 

kylebisme

Diamond Member
Mar 25, 2000
9,396
0
0
No, but it is foolhardy to think that everything you buy is going to work perfectly for you.
 

nRollo

Banned
Jan 11, 2002
10,460
0
0
" no, but it is fool-hearty to think that every thing you buy is going to work just perfect for you."

Let's see:
I provided links to hundreds of people who have scrolling wavy lines with their 9700s, am one of them on one of my three monitors, have never heard of this on nVidia-based cards, and your response is "It's foolish to think all video cards work right"?

LOL

Well, that would seem to be the case, but would you agree that might be one reason a person might consider the "suxorz" 5800 FX (as I've been trying to say all along)?

What if the 9700 buyer doesn't have 3 monitors to try like me, won't try new motherboards or buy a $100 PSU like me in an attempt to fix it? Should those guys just watch the gray bars roll through their screen every day because "the FX is late", "some of them are noisy", or "the 9700 is faster at some FSAA/aniso/resolutions"?

 

Guga

Member
Feb 21, 2003
74
0
0
When I made my first reply, I never said that the R8500 is faster or better than the Ti500.

I only criticized the fact that Rollo made the R8500 out to be a much worse card than it really is.

Rollo, excuse me, but it seems that you're a bit biased toward Nvidia.

I like good cards and the rest doesn't matter. I have 2 computers: one of them is my server (Win2k Server) with an R8500, the other is my workstation with XP Pro and a Creative Ti4400, and if you really want to know, in some games I don't see much difference that I can notice!

Yes, I play games on my 2k server. If I need a second PC, why not mix work with pleasure?

Just one more thing: you said you have lots of video cards, including a GF4 MX. Did all those cards except the R8500 please you?
Come on, are you telling us that you like the GF4 MX more than the R8500?

Maybe just in 2D quality... but that's just my opinion.



 

nRollo

Banned
Jan 11, 2002
10,460
0
0
" I remember making the same defense for 3dfx as Rollo is for Nvidia now..."
Pretty deep, man....like, 3dfx got bought out by nVidia, and now nVidia is putting out cards late with a brute force approach, just like 3dfx! And...and...the FX is the first card with 3dfx tech in it, that 3dfx engineers worked on, so it's like the whole 3dfx thing all over again....yup, nVidia will be going belly up any day now......


LOL, sure thing dude.
 

kylebisme

Diamond Member
Mar 25, 2000
9,396
0
0
Originally posted by: Rollo
" no, but it is fool-hearty to think that every thing you buy is going to work just perfect for you."

Let's see:
I provided links to hundreds of people who have scrolling wavy lines with their 9700s, am one of them on one of my three monitors, have never heard of this on nVidia-based cards, and your response is "It's foolish to think all video cards work right"?

LOL

Well, that would seem to be the case, but would you agree that might be one reason a person might consider the "suxorz" 5800 FX (as I've been trying to say all along)?

What if the 9700 buyer doesn't have 3 monitors to try like me, won't try new motherboards or buy a $100 PSU like me in an attempt to fix it? Should those guys just watch the gray bars roll through their screen every day because "the FX is late", "some of them are noisy", or "the 9700 is faster at some FSAA/aniso/resolutions"?

omg, a video card with scrolling lines, and it is an nVidia card too! Also, a guy who understands that life is not always as simple as we might want it to be, but that you can do your best to make it fair by returning products that don't work right for you and trying again. Ah, what a wonderful world. :D
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,000
126
64 is perfectly fine and 128 is more than we will need for a long time
Except that it isn't.

64 MB cards get killed in most of today's games (even at medium resolutions with texture compression enabled) and 128 MB cards are starting to get squeezed in UT2003 and Unreal 2 at high resolutions.
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: BFG10K
64 is perfectly fine and 128 is more than we will need for a long time
Except that it isn't.

64 MB cards get killed in most of today's games (even at medium resolutions with texture compression enabled) and 128 MB cards are starting to get squeezed in UT2003 and Unreal 2 at high resolutions.

Their poor performance is related to the engine itself not having enough horsepower, as review after review has shown that 128MB vs 64MB provides a slight performance gain in instances where neither card runs at a playable framerate. The most common cards that come in 64MB and 128MB flavors are the 8500 and the Ti4200, and there are few instances where the 128MB version outperforms the 64MB version. In most cases, the 64MB version wins due to higher-clocked RAM.

Even the more advanced R300 core doesn't benefit much from the extra memory (as seen in Anand's 64MB 9500 Pro preview), as games today simply don't require it. By the time it's needed, the current crop of cards will be obsolete and will perform poorly whether they have 64MB, 128MB, 256MB, or 1GB of RAM. There's simply no reason to purchase the 256MB version of the 5600 Ultra unless there are compelling reasons other than additional memory.
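For context, a back-of-envelope calculation shows where the memory actually goes. This is a rough sketch with assumed numbers (32-bit color, double buffering, a 32-bit depth/stencil buffer, no AA or alignment padding), not measured figures from any card:

```python
# Back-of-envelope VRAM budget for the framebuffer alone (assumed numbers).
# Assumes 32-bit color, double buffering, and a 32-bit depth/stencil buffer.

def framebuffer_mb(width, height, color_bytes=4, buffers=2, depth_bytes=4):
    """Raw framebuffer footprint in MB, ignoring AA and padding."""
    color = width * height * color_bytes * buffers
    depth = width * height * depth_bytes
    return (color + depth) / (1024 * 1024)

for w, h in [(1024, 768), (1280, 960), (1600, 1200)]:
    print(f"{w}x{h}: {framebuffer_mb(w, h):.1f} MB for buffers")
```

Even at 1600x1200 the buffers come to roughly 22 MB under these assumptions, which is why the argument above turns entirely on how much texture data a game actually keeps resident, not on the framebuffer itself.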

Chiz
 

nRollo

Banned
Jan 11, 2002
10,460
0
0
"omg, a videocard with scrolling lines and it is an nvidia card too! also,"
I didn't say no other video card in history has had scrolling lines; I just said it's a very common problem on the 9700s (and posted links to prove it). I doubt you can post similar links from nVidia forums; I just don't think the problem is nearly as widespread.
 

kylebisme

Diamond Member
Mar 25, 2000
9,396
0
0
Well, it is not like I was even looking for it; I just found it on the front page and thought it would be fitting to post it here. The point being that issues are all over the place with all sorts of products; when you don't like what you have bought, it is your own duty to do something about it. Sitting around and complaining that it should not happen is not going to help anything, and it is completely unrealistic as well.
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,000
126
Their poor performance is related to the engine itself not having enough horsepower,
Actually, the UT2003 engine is very fast, though Legend's implementation in Unreal II leaves a lot to be desired.

as review after review has shown that 128MB vs 64MB provides a slight performance gain in instances where neither card runs at a playable framerate.
The performance gain is more than slight.

Also benchmarks are only a small window into actual gameplay and don't always show the whole story. When actually playing the games 128 MB cards are much smoother than 64 MB cards because they don't stutter and texture swap as often. I've extensively tested both 64 MB and 128 MB cards in a wide range of games and I can tell you that there's a big difference between the two, especially in the games made in the last year.

Even a more advanced R300 core doesn't benefit much from the extra memory (as seen on Anand's 64MB 9500pro preview), as games today simply don't require it.
Yes they do - most games made after Quake III (eg JK2, RTCW, MOHAA, SOF2, UT2003 etc) will easily kill 64 MB cards, even at medium resolutions and even with texture compression enabled.
 

rogue1979

Diamond Member
Mar 14, 2001
3,062
0
0
I have been playing around with a lot of video cards lately: Radeon 8500-9100, 9500-9700, and GeForce4. While most of the benchmarks show a small improvement from 64MB to 128MB cards, they don't show the whole story. Watching that little fps counter shows little gain from more video memory, although in the most demanding situations the minimum framerate stays a little higher. Turn off that stupid counter we are all addicted to and you will notice that at higher resolutions the 128MB cards are smoother than their 64MB counterparts, no doubt about it.
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Actually the UT2003 engine is very fast though Legend's implementation in Unreal II leaves a lot to be desired.

I was actually referring to the card's GPU not having enough horsepower; I don't question UT2K3's engine being one of the most advanced game engines on the market today.

The performance gain is more than slight.

Yet there isn't a single benchmark that substantiates this, nor any comments from reviewers to support that statement. From what I've seen, they soften the impact of performance penalties at unrealistic/undesirable resolutions.

Also benchmarks are only a small window into actual gameplay and don't always show the whole story. When actually playing the games 128 MB cards are much smoother than 64 MB cards because they don't stutter and texture swap as often. I've extensively tested both 64 MB and 128 MB cards in a wide range of games and I can tell you that there's a big difference between the two, especially in the games made in the last year.

Some benchmarks are snapshots; some are indicative of what you'll find in normal gameplay. You make it sound as if texture swapping is like accessing the swap file when you run out of system RAM. That's simply not the case with 64MB in today's games, as efficient caching algorithms and compression techniques prevent texture dumps that cause noticeable slowdown or stuttering. UT2K3 is one of the few, if not the only, game that can use more than 64MB of onboard memory for textures (80MB, I believe, at max detail and resolutions). The only other "gaming" application that I know of using a full 128MB of video RAM is 3DMark2K3, which can use approximately 100MB for textures in some tests (according to their whitepapers).

Yes they do - most games made after Quake III (eg JK2, RTCW, MOHAA, SOF2, UT2003 etc) will easily kill 64 MB cards, even at medium resolutions and even with texture compression enabled.

That's interesting, since 3 of the games you listed use modified versions of the Q3 engine (JK2, SoF2, and RTCW) and, as such, are more CPU-dependent than most games on the market. Those games extensively use pre-cached textures and are some of the few that show slight performance benefits between 64 and 128MB. However, even those highly CPU-dependent games show that resolution has much more impact on performance than the amount of onboard RAM. You'll notice that the 64MB and 128MB versions run neck and neck until resolutions are increased to 1280 and beyond, at which point both fall to unplayable levels, negating any performance benefit from the extra RAM.

I've used numerous 64MB and 128MB cards myself (I didn't just go from an onboard Trident to a 9700 Pro overnight), and I've seen that at the same resolution and texture detail, the extra memory makes little difference. This is particularly evident in the games you mentioned, which are by and large CPU-dependent. Run a 64MB GF3 and a 128MB GF3 at 1024x768 with max details and you will see little difference in performance. Increase the resolution to 1600x1200 and both cards will struggle equally, not because of a lack of RAM, but simply because the GPU has become the bottleneck.

There might be more of a performance delta in higher-performing cards like the FX Ultra or the 9700 Pro if they came in 64MB and 128MB versions, as they might actually benefit from the additional memory. However, 64MB versions of those cards don't exist, as manufacturers have seen a need for more RAM in future games and are already pushing 256MB of RAM that simply isn't necessary. 128MB is certainly the standard for future gaming, but not on a card that is 2 to 3 generations old and will struggle running tomorrow's games regardless of how much memory it has. If you had a choice between two identical 5600FX Ultra cards, knowing they performed on par with a Ti4600 without AA and AF, the only differences being that one had 128MB and the other had 256MB and cost $100 more, which would you choose?

Chiz
 

BoberFett

Lifer
Oct 9, 1999
37,562
9
81
I think what chizow is saying is that by the time a game comes out that begs for 256 MB of onboard video RAM, the 5600 is going to be sucking so much wind that you could put a GB of RAM onboard and it wouldn't help. The bottleneck will be the GPU and not the amount of RAM.
 

Soulkeeper

Diamond Member
Nov 23, 2001
6,731
155
106
i got me a 128MB GF4 Ti4200 and am glad I did
my next card will prob have at least 256MB, if not 512

************ opinion below ************

it's better that they just load up on onboard memory so you have one less thing to worry about for a few GPU generations

256MB is a good move
once game developers have something to work with, they'll find a way to use it

*********** end of opinion *****************
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,000
126
I was actually referring to the card's GPU not having enough horsepower, as I don't question UT2K3's engine being one of the most advanced game engines on the market today.
And I was telling you that while the engine is advanced it's also extremely fast for the level of eye candy it delivers.

Yet there isn't a single benchmark that substantiates this nor any comments from reviewers to support that statement.
Then you really need to look harder at a wider range of benchmarks and review commentaries. You also need to test things yourself instead of relying on reviewers for everything.

From what I've seen, they soften the impact of performance penalties at unrealistic/undesirable resolutions.
There's a difference even at medium resolutions like 1152 x 864 (with texture compression enabled) in the games I mentioned. In particular there's reduced or non-existent stuttering when entering new areas or quickly changing your view from a new area to an old one and then back again.

Some benchmarks are snapshots, some are indicative of what you'll find in normal gameplay.
While some benchmarks are better than others all benchmarks are still snapshots because not one of them runs through the entire game.

Also looking at a benchmark is a lot different to actually playing the game and feeling the response you get from texture swapping. Slight stutters and slowdowns can be missed during benchmark runs but are certainly picked up when you're trying to do a fast turn, entering a new area or trying to time a complex set of jumps.

You make it sound as if texture swapping is like accessing the swap file when you run out of system RAM.
In its most severe form it's exactly like that. VRAM -> system RAM is a memory hierarchy just like L1/L2 cache -> system RAM -> HD. In fact VRAM is even more important, because the effects of spilling over are much worse than spilling over in the normal memory hierarchy: virtual memory, and all of the associated advanced features it brings to the table, is not available to video cards.
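The hierarchy argument can be sketched with a toy model. This is illustrative only: made-up texture sizes, a simple LRU eviction policy, and a synthetic access pattern, not how any real driver manages VRAM. The point it demonstrates is just the spillover effect: a pool large enough for the working set uploads each texture once, while a smaller pool thrashes.

```python
from collections import OrderedDict

def count_swaps(vram_mb, frame_textures, texture_mb=4):
    """Toy LRU model: count texture uploads from system RAM."""
    capacity = vram_mb // texture_mb   # how many textures fit in VRAM
    cache = OrderedDict()              # texture id -> resident flag
    swaps = 0
    for tex in frame_textures:
        if tex in cache:
            cache.move_to_end(tex)     # already resident: cheap
        else:
            swaps += 1                 # fetch over AGP: expensive
            if len(cache) >= capacity:
                cache.popitem(last=False)  # evict least recently used
            cache[tex] = True
    return swaps

# Several frames' worth of texture ids, cycling through a working set of 24
accesses = [i % 24 for i in range(200)]
print("64 MB card swaps:", count_swaps(64, accesses))    # thrashes
print("128 MB card swaps:", count_swaps(128, accesses))  # uploads once
```

With these invented numbers the 64 MB pool (16 texture slots) misses on every access of the 24-texture cycle, while the 128 MB pool only pays for the initial 24 uploads, which is the stutter-versus-smoothness difference being debated here.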

That's simply not the case with 64MB in today's games, as efficient caching algorithms and compression techniques prevent texture dumps that cause noticeable slowdown or stuttering.
You're wrong. All of those techniques you mentioned reduce the effects of spillover but they don't magically turn 64 MB cards into 128 MB cards. At the end of the day 64 MB VRAM is never going to be 128 MB VRAM, any more than virtual memory will make a 64 MB system act like a 128 MB system.

UT2K3 is one of the few if not the only game that can use more than 64MB of onboard memory for textures (80MB I believe at max detail and resolutions).
Uh no, even Quake III has 50 MB worth of uncompressed textures in Q3DM9. Granted, they'd probably compress to 12.5 MB, but Quake III is a four-year-old game, and texture sizes and detail levels have dramatically increased since then.

Also, you make it sound like textures are the only thing residing in the VRAM, when in reality 2 or 3 framebuffers are in there along with other things such as vertex/geometry data and pixel/vertex shader programs. If anything, arbitrary shader lengths are going to make the problem bigger, because if you run out of room the new instructions have to be loaded from system RAM, just like arbitrary-length applications can't rely on fitting exclusively into the caches.

Until we get virtual texturing on cards, the amount of VRAM is extremely critical, since the spillover effect usually means a massive performance hit.
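The texture-size arithmetic behind those figures is easy to sanity-check. A rough sketch with assumed texture counts and dimensions (the 4:1 ratio is the standard S3TC/DXT3/DXT5 compression ratio against 32-bit RGBA; mipmaps add about a third on top of the base level):

```python
# Rough texture footprint math (assumed sizes, not measured from any game).
# DXT3/DXT5 compress 32-bit RGBA at 4:1; a full mip chain adds ~1/3.

def texture_mb(width, height, bytes_per_texel=4, mipmaps=True, ratio=1):
    """Size of one texture in MB, optionally with mip chain and compression."""
    base = width * height * bytes_per_texel
    if mipmaps:
        base = base * 4 // 3   # mip levels sum to ~4/3 of the base level
    return base / ratio / (1024 * 1024)

# A hypothetical level with 100 unique 512x512 textures:
uncompressed = 100 * texture_mb(512, 512)
dxt = 100 * texture_mb(512, 512, ratio=4)
print(f"uncompressed: {uncompressed:.0f} MB, DXT-compressed: {dxt:.0f} MB")
```

Under these assumptions 100 such textures overflow a 64 MB card uncompressed but fit comfortably once compressed, which is why both sides of this exchange can point at the same games and draw different conclusions.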

Those games extensively use pre-cached textures
Of course they do, as does any other game out there. Leaving the textures on the HD until they're needed would be quite a nutty thing to do, and in fact even if you did, Windows would automatically move the data into the disk cache.

All games load as many textures as possible into the VRAM and store the rest in the AGP aperture/system RAM, to be swapped in when needed. I don't know of any game that doesn't precache textures to some degree.

You'll notice that 64MB and 128MB versions run neck and neck until resolutions are increased to 1280 and beyond,
And as I explained earlier that's exactly what I'd expect given that benchmarks don't tell the whole story.

But if you played the games from start to finish you'd see that 1152 x 864 and even 1024 x 768 are already starting to show differences between the cards, all the while delivering playable framerates. At that resolution the problem isn't the fillrate/bandwidth on 64 MB cards, it's mostly a lack of VRAM to store the textures and other data. This is what causes the odd stutters, delays and framerate fluctuations which detract from an otherwise smooth gaming experience.

Then, as the resolution increases, the texture storage problems get worse, but the primary bottleneck shifts more onto the GPU's fillrate/bandwidth, so the effects aren't noticed as much since you're already running at slideshow speeds anyway. To put it simply, the more powerful the card, the more dependent on VRAM it is. A 16 MB Radeon 9700 Pro would probably get beaten across the board by a 128 MB Ti4200, except maybe at low resolutions like 320 x 240 x 16 with all other details on minimum.

I've used numerous 64MB and 128MB cards myself (I didn't just go from an onboard Trident to a 9700pro overnight), and have seen that at the same resolution and texture detail the extra memory makes little difference.
Then I'm afraid your tests couldn't have been very comprehensive. I've replayed entire games after moving from one card to another and specifically targeted the areas I had problems with before. I also had the framerate counter going in those places (and in many more as well) to gauge exactly what was going on.

I can tell you for a fact that the 128 MB cards were much smoother and didn't fluctuate in framerate as much as the 64 MB cards did.

Run a 64MB GF3 and a 128MB GF3 at 1024x768 with max details and you will see little difference in performance.
Perhaps in the benchmarks, but not in actual gameplay. Also, "performance" isn't quite the right word to describe the difference; a better word would be "smoothness".

128MB is certainly the standard for future gaming, but not on a card that is 2 to 3 generations old that will struggle running tomorrow's games regardless of how much memory it has.
That's true, but at least if you lower the settings to become CPU-limited, the card with more VRAM will always do better, assuming identical or close fillrate/bandwidth capability.

If you had a choice between two identical 5600FX Ultra cards knowing it performed on par with a Ti4600 w/out AA and AF, the only differences being one had 128MB and the other had 256MB and cost $100 more, which would you choose?
The 128 MB one, of course. Hey, I'm not saying 256 MB is needed now; all I'm saying is that 128 MB is needed now and is already starting to get squeezed in the newest games. Widespread 256 MB cards aren't that far away and will perhaps even arrive with the next generation of boards. And as soon as the price delta between 256 MB cards and 128 MB cards gets low enough, it'll be a no-brainer to get them, much like it's a no-brainer to get 128 MB cards now.
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: BFG10K
Then you really need to look harder at a wider range of benchmarks and review commentaries. You also need to test things yourself instead of relying on reviewers for everything.
Yah, I guess there's no point to benchmarks, I mean, we should just take everything that we read on the internet at face value.
:rolleyes:
Considering this isn't the first time we've disagreed on results from our own tests and observations, benchmarks are particularly relevant since they offer independent, reproducible, standard results against which to compare. It's also not the first time you've made assertions that simply don't jibe with published reports, much less my own, so benchmarks from independent reviewers are the only objective method of evaluating your claims. Again, any independent reports/benchmarks or VRAM usage figures would be greatly appreciated, since we obviously don't see eye to eye on the subject. I don't think I need to link anything for you, since I'm sure you've seen the results I'm talking about on any major review site.

There's a difference even at medium resolutions like 1152 x 864 (with texture compression enabled) in the games I mentioned. In particular there's reduced/non-existant stuttering when entering new areas or quickly changing your view from a new area to an old one and then back again.
Yes, the difference is that the texture memory requirement hasn't changed; simply the number of textures that need to be rendered per frame has changed. I've run every game you mentioned extensively at 1024 and max details with a GF3 64MB, Ti4200 128MB (at 1280 also), and 9700 Pro 128MB (at 1280), and didn't notice any stuttering or slowdowns from texture swapping, but then again, I never ran them at unrealistic resolutions either. I noticed lower minimum framerates when the card was overextended beyond its means.

Slight stutters and slowdowns can be missed during benchmark runs but are certainly picked up when you're trying to do a fast turn, entering a new area or trying to time a complex set of jumps.
The only instances where I could see something being missed are time demos; any real-time render will not miss the slight stutters or slowdown you mention. You'll be able to visually see such anomalies and you'll certainly be able to catch it if you are monitoring minimum fps.

You're wrong. All of those techniques you mentioned reduce the effects of spillover but they don't magically turn 64 MB cards into 128 MB cards. At the end of the day 64 MB VRAM is never going to be 128 MB VRAM, any more than virtual memory will make a 64 MB system act like a 128 MB system.
I'm wrong b/c you said so, but published reports and a few examples I gave indicate otherwise.
:rolleyes:
They don't need to turn them into 128MB cards, because the extra memory simply isn't needed or used. Of course a 64MB system isn't going to behave like a 128MB system on any modern OS, but it sure as hell would if it were running Windows 3.1. Considering system RAM and VRAM behave very differently in application and requirements, that's a poor analogy to begin with. Textures are stored in and called as needed from system RAM; the benefits of more system RAM would be much more tangible than those of additional video RAM, as the comparison between video -> system RAM and system RAM -> swap file is like comparing a dirt road to a 16-lane superhighway.

Uh no, even Quake III has 50 MB worth of uncompressed textures in Q3DM9. Granted they'd probably compress to 12.5 MB but Quake III is a four year old game and texture sizes and detail levels have dramatically increased since then.
It's a 4-year-old game whose engine powers 3 of the 5 games you mentioned. Textures have become more advanced over time, but so have compression techniques and memory controller efficiency.

Also you make it sound like textures are the only thing residing in the VRAM when in reality 2 or 3 framebuffers are in there along with other things such as vertex/geometry data and pixel/vertex shading programs. If anything arbitrary shader lengths are going to make the problem bigger because if you run out of room the new instructions have to be loaded from the system RAM, just like arbitrary length applications can't rely on exclusively fitting into the caches.
That's not true at all. I offered 3DMark2K3 as the only "gaming" application that would fully use 128MB of VRAM, and I noted that only 100MB would be used for textures; the other 24MB would be used for instructions, extensions, and shader programs. And as you stated, the need for VRAM will be less pressing once virtual texturing, programmable shaders, and compression techniques improve.

And as I explained earlier that's exactly what I'd expect given that benchmarks don't tell the whole story.

But if you played the games from start to finish you'd see that 1152 x 864 and even 1024 x 768 are already starting to show differences between the cards, all the while delivering playable framerates. At that resolution the problem isn't the fillrate/bandwidth on 64 MB cards, it's mostly a lack of VRAM to store the textures and other data. This is what causes the odd stutters, delays and framerate fluctuations which detract from an otherwise smooth gaming experience.
I have played those games start to finish, and guess what? It's not one long endless romp where the textures for the entire game are stored in VRAM. Every time a new level loads, the onboard and system cache is flushed and replaced with the textures needed for the next level or area.

Then as the resolution increases the texture storage problems get worse but the primary bottleneck shifts more onto the GPU's fillrate/bandwidth so the effects aren't noticed as much as you're already running at slideshow speeds anyway. To put it simply the more powerful the card, the more dependent on VRAM it is. A 16 MB Radeon 9700 Pro would probably get beaten across the board by a 128 MB Ti4200, except maybe at low resolutions like 320 x 240 x 16 and all other details on minimum.
Again, memory issues aren't compounded if the same textures are accessed; the bottleneck arises because more textures need to be rendered at the same time.

Then I'm afraid your tests couldn't have been very comprehensive. I've replayed entire games after moving from one card to another and specifically targeted the areas I had problems with before. I also had the framerate counter going in those places (and in many more as well) to gauge exactly what was going on.

I can tell you for a fact that the 128 MB cards were much smoother and didn't fluctuate in framerate as much as the 64 MB cards did.
And I guess in your comprehensive testing, you also factored in any platform/processor/system RAM changes you made while running those extremely CPU/platform-bottlenecked games that you mentioned? There's a term for that: it's called displaced perception. Unless you ran them on the same platform and only changed the amount of onboard RAM, your results are null and void.

The 128 MB one of course. Hey, I'm not saying the 256 MB is needed now, all I'm saying is that 128 MB is needed now and is already starting to get squeezed in the newest games. Widespread 256 MB cards aren't that far away and will perhaps even arrive with the next generation of boards. And as soon as the price delta between 256 MB cards vs 128 MB cards gets low enough it'll be a no-brainer to get them, much like it's a no-brainer to get 128 MB cards now.

And I'm not saying 128MB of memory is pointless; it's just pointless on slower GPUs (anything less than NV30 and R300). Current games don't need it (again, any links disputing that would be greatly appreciated), and future games or higher resolutions in current games will simply require a faster GPU. No amount of onboard memory will make gameplay any smoother when you're chugging along at 25fps with minimum framerates dipping into the single digits.

Chiz
 

BoberFett

Lifer
Oct 9, 1999
37,562
9
81
And I'm not saying 128MB of memory is pointless; it's just pointless on slower GPUs (anything less than NV30 and R300). Current games don't need it (again, any links disputing that would be greatly appreciated), and future games or higher resolutions in current games will simply require a faster GPU. No amount of onboard memory will make gameplay any smoother when you're chugging along at 25fps with minimum framerates dipping into the single digits.

And that's the crux of the issue. More memory isn't going to speed up a card that's limping along on an ancient GPU.
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,000
126
Yah, I guess there's no point to benchmarks, I mean, we should just take everything that we read on the internet at face value.
What in the world are you talking about?

Considering this isn't the first time we've disagreed on results from our own tests and observations, benchmarks are particularly relevant since they offer independent, reproducible, standard results against which to compare.
That's fine, and I've already told you to get a better selection of benchmarks. Obviously you must not be looking very hard if you can't find the benchmarks freely available on the web that agree with my statements.

I don't think I need to link anything for you, since I'm sure you've seen the results I'm talking about on any major review site.
You don't need to link anything; you need to go off and read some more reviews. A good place to start is Ti4200 and Radeon 8500 comparisons, because in those tests even the slower-clocked 128 MB cards beat the 64 MB cards. Also, as I already explained, you can pick benchmarks that show no differences and you'll still be wrong, because benchmarks don't tell the true story.

Yes, the difference is that the texture memory requirement hasn't changed, simply the number of textures that need to be rendered per frame has changed.
What in the world are you talking about? The average size and detail level of game textures has increased by massive amounts in the last few years. And even if the sizes hadn't increased, the fact that you've got more textures per scene increases memory storage requirements, since you need all of the textures in a given scene to be in VRAM or you'll get texture swaps during the rendering process.

Thus the requirements have gone up from two angles, both from bigger textures and from having more of them.

I've run every game you mentioned extensively at 1024 and max details with a GF3 64MB, Ti4200 128MB(@1280 also), and 9700pro 128MB (@1280), and didn't notice any stuttering or slowdowns from texture swapping
And I've run games at 1152 x 864 on owned/tested 64 MB and 128 MB Ti4200s, a 64 MB GF3 Ti500 and a 128 MB Ti4600 and the 128 MB cards did much better than their 64 MB counterparts.

but then again, I never ran them at unrealistic resolutions either.
1152 x 864 is hardly unrealistic. Besides, this is a discussion about whether 64 MB vs 128 MB makes a difference, not what your subjective definition of unrealistic is.

I noticed lower minimum framerates when the card was overextended beyond its means.
Overextended as in exceeding the card's VRAM, since 1024 x 768 is usually CPU-limited in most games with medium-speed cards.

any real-time render will not miss the slight stutters or slowdown you mention.
The render won't, but the observer might. Actually playing a game and feeling the controls and physics respond to you is much different from sitting back and watching a realtime timedemo run, much like game FPS is a lot different from TV FPS. There's total interaction in one case and absolutely none in the other.

You'll be able to visually see such anomalies and you'll certainly be able to catch it if you are monitoring minimum fps.
This is true and it's a good way to pick them up, but unfortunately not all games report minimum framerates. Playing the game yourself is still by far the best way of catching such things.

Speaking of minimum framerates, why don't you benchmark Botmatch Anubis in UT2003 retail with maximum detail levels at both 1280 x 960 and 1600 x 1200 on your Radeon 9700 Pro (both playable settings), watch the demo carefully, then look at the minimum framerate at both settings. You'll see that the minimums plummet at 1600 x 1200 far further than the expected penalty of the resolution increase alone. That is caused by texture swapping and is irrefutable proof that even 128 MB cards are starting to get squeezed in the newest games. And if you're up for it, try running the benchmark on a 64 MB Ti4200 vs a 128 MB Ti4200 and you'll see the 64 MB Ti4200 gets absolutely killed.
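The reason the minimum is so much more sensitive than the average can be shown numerically. This is a sketch with invented frame times, not measured data:

```python
# Why texture swaps tank the minimum fps while barely moving the
# average: minimum fps comes from the single worst frame time, while
# the average is diluted across the whole run.

def fps_stats(frame_times_ms):
    avg_fps = 1000 * len(frame_times_ms) / sum(frame_times_ms)
    min_fps = 1000 / max(frame_times_ms)
    return avg_fps, min_fps

steady = [16.7] * 600                    # smooth ~60 fps run
hitchy = [16.7] * 598 + [100.0, 100.0]   # same run plus two swap hitches

print(fps_stats(steady))   # avg ~59.9, min ~59.9
print(fps_stats(hitchy))   # avg ~58.9, min 10.0
```

Two 100 ms texture-swap stalls drop the average by about one frame per second but cut the minimum in six, which matches how a thrashing card feels fine in an average-fps chart and terrible in play.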

I'm wrong b/c you said so, but published reports and a few examples I gave indicate otherwise.
No, I told you to look at some more benchmarks because the proof is out there (no pun intended :p). I also told you that benchmarks can mean nothing in this discussion depending on which ones are run, what the system specs are, and how they're run.

They don't need to turn them into 128MB cards, b/c the extra memory simply isn't needed or used.
My goodness, it appears that the point of my comments whizzed completely past your head. I wasn't trying to "turn" a 64 MB card into a 128 MB one, I was merely illustrating that even the best memory management in the world always works better with more physical RAM.

Of course a 64MB system isn't going to behave like a 128MB system on any modern OS, but it sure as hell would if it was running Windows 3.1.
You're not running Windows 3.11 any more than you're running Quake III. It's 2003, not 1999.

the benefits of more system RAM would be much more tangible than additional video RAM
Not in a case such as texture thrashing, which is the most severe consequence of a lack of VRAM. It's a perfectly fine analogy, and it also illustrates how much more critical VRAM is than system RAM in a case like this: textures are an all-or-nothing exercise, but normal data can be broken down as Windows pleases to make the most efficient use of available system resources.

Its a 4 year old game that is the engine for 3 of the 5 games you mentioned.
Are you trying to be intentionally difficult, or do you honestly have trouble grasping the concept that games running on a given engine are not the same as the original game?

but so have compression techniques
Texture compression techniques are exactly the same as they were when the DirectX 6.0/7.0 spec defined them several years ago, namely 6:1 compression on noisy non-alpha textures and 4:1 ratios on most standard textures. If you're talking about Z/colour compression, that only works on data in the VRAM as well, which doesn't help data being loaded from the system memory. Indirectly it does free up more space and helps out, so I'll concede this point.
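Taking those fixed ratios at face value, a quick sketch of what they buy; the texture-set sizes below are illustrative:

```python
# DXT-style ratios are fixed properties of the format, so a texture
# set's compressed size scales linearly with its uncompressed size;
# growing texture demands aren't offset by better compression later.

def compressed_mb(uncompressed_mb, ratio):
    return uncompressed_mb / ratio

for set_mb in (24, 48, 96):          # texture sets doubling over time
    print(set_mb,
          compressed_mb(set_mb, 6),  # all noisy non-alpha (6:1)
          compressed_mb(set_mb, 4))  # all standard (4:1)
# Doubling the uncompressed set always doubles the compressed set too.
```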

and memory controller efficiency
The memory controller only works on data in the VRAM; it doesn't help data coming from the system RAM, nor does it reduce the footprint of existing data in the VRAM. You're talking about saving bandwidth, but the issue here is storage space, not bandwidth.

That's not true at all, I offered 3DMark2K3 as the only "gaming" application that would fully use 128MB of VRAM, I distinguished that only 100MB would be used for textures, the other 24MB would be used for instructions, extensions and shader programs.
3DMark is not a game, and neither is it the "only" program that can squeeze 128 MB cards. Again I refer you to the UT2003 example for your own reference. Also, this discussion is about 64 MB cards being obsolete, not about 128 MB cards being obsolete.

And as you stated, the need for VRAM will be less extensive once virtual texturing, programmable shaders, and compression techniques improve
Yeah, but that ain't here yet, and the compression techniques are always behind the developers in terms of how much they can compress vs how much demand the developers put on the card.

I have played those games start to finish, and guess what? Its not one long endless romp where the textures for the entire game are stored in VRAM. Every time a new level loads, the onboard and system cache is flushed and replaced with the necessary textures for that next level or area.
Exactly, and the problem comes when the textures for one level can't be fully loaded onto 64 MB cards. That's when you get stuttering and slowdowns, especially when you enter new areas that are textured differently from the old ones. Some games have "zones" where you can do 360 degree turns and get constant texture swaps, because everywhere you look requires textures not stored in the VRAM to be fetched from the system RAM. And when they're fetched there's not enough room to keep the old ones, so they're swapped out, a process that continues for as long as you're in that area.
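That swap-forever behaviour is just a cache whose working set exceeds its capacity. Here's a toy LRU simulation; the texture IDs and slot counts are arbitrary stand-ins, not a model of any real driver:

```python
# Toy texture-thrashing simulation: an LRU-managed VRAM pool serving a
# repeating "360 degree turn" access pattern. When the working set
# fits, swaps stop after warm-up; when it doesn't, every single access
# forces a swap forever.
from collections import OrderedDict

def count_swaps(texture_ids, vram_slots):
    cache = OrderedDict()              # textures resident in VRAM
    swaps = 0
    for tex in texture_ids:
        if tex in cache:
            cache.move_to_end(tex)     # hit: no transfer needed
        else:
            swaps += 1                 # miss: fetch over the AGP bus
            if len(cache) >= vram_slots:
                cache.popitem(last=False)  # evict least recently used
            cache[tex] = True
    return swaps

# Spinning in place: the same 10 textures touched over and over.
view_loop = list(range(10)) * 100

print(count_swaps(view_loop, 12))  # 10: loaded once, then all hits
print(count_swaps(view_loop, 8))   # 1000: thrashes on every access
```

Shrinking the pool by just two slots below the working set turns a one-time load into a swap on every single texture access, which is the cliff-edge behaviour described above.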

Again, memory issues aren't compounded if the same textures are accessed, the bottleneck is b/c more textures need to be rendered at the same time.
But there are more textures and they're bigger too which makes them more likely to be stored in the system RAM.

And I guess in your comprehensive testing, you also factored in any platform/processor/system RAM changes you made since running those extremely CPU/platform bottlenecked games that you mentioned?
Of course and in a lot of cases only the cards were changed. In addition I also ran many other tests where I changed a lot of other things and this helped me to get a clearer and broader picture about exactly what's happening in a wide variety of situations.

Did you do the same?

And I'm not saying 128MB of memory is pointless, its just pointless on slower GPUs (anything less than NV30 and R300). Current games don't need it (again, any links disputing that would be greatly appreciated),
Almost any card available in a 128 MB form is better than its 64 MB counterpart, going right back to 8500/GF Ti4200 boards. Links to such results are freely available so please try a search. Here's a good one to get you started. Most games already show a difference at 1024 x 768 and even Quake III is showing a difference at the highest texture detail levels at 1600 x 1200; keep in mind this game was released when 16/32 MB cards were the standard.

And again like I said before, the benchmark results far underestimate reality in terms of the actual benefit of more VRAM.
 

BoberFett

Lifer
Oct 9, 1999
37,562
9
81
BFG10K

Ummm, you might want to go back and read the reviews again. From all the reviews I've read, 64 and 128 MB versions of the exact same card usually turn in almost identical scores in current games.

http://www.tomshardware.com/graphic/20030120/index.html

If you're talking about games of the future and their need for more memory, then you probably don't want to be buying what is currently an average to below-average card. (Remember, this thread is about the 256 MB 5600, a card that can barely beat the 4200.) Those future games that need 128-256 MB of VRAM are going to choke the GPU itself; running out of VRAM is the least of that card's concerns.
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: BFG10K
And I'm not saying 128MB of memory is pointless, its just pointless on slower GPUs (anything less than NV30 and R300). Current games don't need it (again, any links disputing that would be greatly appreciated),
Almost any card available in a 128 MB form is better than its 64 MB counterpart, going right back to 8500/GF Ti4200 boards. Links to such results are freely available so please try a search. Here's a good one to get you started. Most games already show a difference at 1024 x 768 and even Quake III is showing a difference at the highest texture detail levels at 1600 x 1200; keep in mind this game was released when 16/32 MB cards were the standard.

And again like I said before, the benchmark results far underestimate reality in terms of the actual benefit of more VRAM.

I'm not going to sort through the rest of the Theorycraft with you, I'll just post relevant benchmarks (from a simple search as you suggested) that confirm what my non-bionic eyes see. You must be using limited edition versions that produce results that can only be replicated on your monitor, b/c once again, you're making unsubstantiated claims about performance and requirements from your own super-keen observations.

GF4 64MB vs 128MB through all the games you mentioned in 800 through 1600

AT's sub-$200 Round-Up comparing the 8500 64MB vs. the 128MB, note the 1280 results

Pretty much every sub-GF4 variant at different clock speeds and resolutions

Even at the resolutions where you claim significant differences, there are none. Granted, there are also no minimum frame rate numbers, but if the effect were half as pronounced as what you describe, the average difference would be much greater than the 10% or less performance delta seen in the most extreme instances.

The link you provided has to be a joke, Codecreatures at 15fps vs 10fps? That's not a game, that's a science project. Again, we're talking playable framerates here. You're kidding yourself if you think someone is going to run 39 fps at 1600 if the option to drop down to 1280 and run at 85 fps is available. But then again, you've made similar assertions about high resolutions in the past, so I guess anything is possible.

The rest of the game tests like UT2k3 and JK2 provide a much clearer picture; why you didn't bother to link those escapes me.

I'm not factoring in secret gremlins that eat and steal frames and cause texture thrashing and stuttering, so keep that in mind when viewing those benchmarks.

Chiz
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
"If you're talking about Z/colour compression, that only works on data in the VRAM as well which doesn't help data being loaded from the system memory. Indirectly it does free up more space and helps out so I'll concede this point."

Actually, color compression doesn't save you any space, as the maximum potential size of the buffer must be reserved whether it is needed or not. You are giving too much validity to the "64MB boards are good enough" line ;)

Again, we're talking playable framerates here. You're kidding yourself if you think someone is going to run 39 fps at 1600 if the option to drop down to 1280 and run at 85 fps is available. But then again, you've made similar assertions about high resolutions in the past, so I guess anything is possible.

JKII UHQ 1600x1200 R8500 64MB- 73.8, 128MB- 96.9 according to the benches you linked to (TR): a 31% improvement, without FSAA to bump up the framebuffer and really start to stress the memory requirements (another ~7MB of textures to be swapped over the uber-fast AGP bus).

This entire discussion seems foolish. I can show benches where a 64MB board has no edge over a 32MB, or a 32MB over a 16MB board. How the hell does showing some benches that don't show a difference prove that a 128MB board won't always be faster? Worst case it very clearly is, and it is beneficial now at framerates that are very playable. There are instances now, with games that are actually a year old, that benefit from the additional RAM at framerates that are still very much playable. Not buying a 128MB board now assures that you will run into an increasing number of problems as time passes. Given the explosion in geometry complexity we are seeing with titles such as Unreal2, and the enormous shader load we are seeing with DooM3, it isn't too far off that 256MB boards will start to make their edge visible in games.
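The framebuffer reservation alluded to above can be estimated with simple arithmetic. This sketch assumes 32-bit colour, a 32-bit z/stencil buffer, double buffering, and that FSAA multiplies the back and z buffers; real cards' memory layouts vary, so treat the numbers as order-of-magnitude only:

```python
# Rough framebuffer cost at 1600x1200, showing how FSAA multiplies the
# fixed VRAM reservation and squeezes the space left for textures.

def framebuffer_mb(width, height, fsaa=1):
    front = width * height * 4          # displayed front buffer
    back = width * height * 4 * fsaa    # FSAA-sized back buffer
    depth = width * height * 4 * fsaa   # FSAA-sized z/stencil buffer
    return (front + back + depth) / 2**20

print(round(framebuffer_mb(1600, 1200), 1))          # ~22 MB, no FSAA
print(round(framebuffer_mb(1600, 1200, fsaa=4), 1))  # ~66 MB at 4x
# On a 64 MB card the 4x case leaves almost nothing for textures.
```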
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Ben, I never said 128MB was pointless; the discussion stemmed from the comment that 64MB --> 128MB is necessary in current and future games. We have to work within the limitations of what is available on the market today in 64MB and 128MB flavors. The two cards that come up in these discussions 9 times out of 10 are the GF4 Ti4200 and the Radeon 8500, and both cards already show their limitations when run at higher resolutions (where the extra memory begins to show tangible benefits). We're further limited by the software available at any given point in time.

The discussion also stems from the need for 256MB on an FX5600, given that early benchmarks show it performing on par with a GF4 in current games. Your example of comparing past cards is perfectly relevant; in fact it confirms the one point I have been trying to make.

I can show benches where a 64MB board has no edge over a 32MB, or a 32MB over a 16MB board. How the hell does showing some benches that don't show a difference prove that a 128MB board won't always be faster? Worse case it very clearly is, and it is beneficial now at framerates that are very playable.

Again, we are limited by the tools we have at our disposal today. In current applications, the need simply isn't there. Would you make the same argument for a 32MB Radeon DDR vs. a 64MB Radeon DDR after seeing how it runs DX8 games, or games released a year and a half afterwards? Would you also advocate a card that supports new features like DX9 but will realistically struggle with DX8 games, like the FX 5200 series? Would it be more beneficial to spend an extra $50 (as was the case when GF4 Ti4200s arrived) on a card with 2x as much memory, knowing that by the time it was a requirement (in a year and a half) you would need a faster card to keep up anyway? If you look historically at video card progression, you'll see that it's never a smart move to purchase something based on potential future benefits. It might buy you some time, but the card still gets slaughtered by the next generation of cards and games, which shows that any potential benefit simply becomes unused potential, money which would have been better invested in your next upgrade anyway.

Chiz
 

rogue1979

Diamond Member
Mar 14, 2001
3,062
0
0
I had two refurbished GeForce4 Ti4200 cards here in the house for my 11-year-old daughter: an AOpen Aeolus 128MB version and an Abit Siluro 64MB version. Cost was not an issue; I only paid $79 for the 64MB and $99 for the 128MB card. I did intensive testing to see which was better. The Siluro 64MB version overclocked to 300/610 and really rocked, besting the Aeolus 128MB @ 285/545 in all benchmarks, including the one for Unreal 2003. I have said this before: benchmarks are a guideline and absolutely do not tell the whole story when it gets down to actual gameplay at each individual's actual settings.

My daughter's favorite game is Unreal Tournament, so that's where I did most of the testing. If you set up a practice match with 16 bots in the Condemned world, a very different outcome is found. Playing at 1600 x 1200 with no FSAA or anisotropic filtering on an 1800MHz Athlon running 300MHz DDR, the slower-clocked 128MB version was noticeably smoother. The average framerates hovered around 90-95 fps for both cards, with the low framerate staying 10-15 fps higher for the 128MB version when the frame rates dropped to 45 fps or so during intense moments. The 64MB version would hit 30fps and get a slight stutter; not critical, the game was still playable, but it just wasn't as smooth as the 128MB version. Turn off the frame counter and let my daughter frag to her delight and she saw the same thing: the 128MB was smoother, and she was not told which card she was using, so that's a totally unbiased, objective opinion. And remember, the 64MB card was clocked significantly higher than the 128MB one.

This is real-world testing, folks, not some of these so-called "benchmarks" everybody posted links to. In this case they didn't mean squat, not even coming close to showing the reality of a 128MB card having extra gaming power over its 64MB counterpart. It is obvious to me that since the price difference between the 64MB and 128MB versions is only $20, the 128MB version is a valid solution for performance in today's games. It was less than a week ago that I did these tests, but two months ago I tried the same thing with a Radeon 8500 64MB vs the 128MB card, and the results showed an even bigger boost for the 128MB version.