
R580 Beating GTX512 by 25% +

Originally posted by: John Reynolds
Originally posted by: Gstanfor
NV30 didn't seem as obviously forward-looking as R300.
That's a load of crap. nV3x was streets ahead of R300 technology-wise, hence SM2.0+. The only real advantage R300 had was its AA method. nVidia's problems with nV3x were performance-related (a 1-quad design vs a 2-quad design).

3Dmark is not an ideal test case of anything. It's a pretty GFX demo, nothing more.

Revisionist history at its best. And the performance flaw that 3DMark revealed in the NV30 architecture was register usage under stressful DX9 code. Which is why NVIDIA was fiddling with the skybox in the 4th (?) test so much, since it was rendered using more DX9 shaders than any other part of the scene. A lot of DX8 games actually saw the 5800 and 5900 be fairly competitive to R300, but that's hardly forward-looking in nature is it?
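(As a rough illustration of that register-pressure point, the sketch below is a toy model only: the register-file and pixels-in-flight figures are assumed for illustration, not actual NV30 specifications. It just shows why shaders that need more temporaries per pixel leave the pipeline with less latency-hiding work in flight.)

```python
# Toy model of how register pressure throttles a shader pipeline.
# All numbers below are illustrative assumptions, not NV30 specs.

REGISTER_FILE_SLOTS = 32      # assumed temp-register slots available per pipe
PEAK_PIXELS_IN_FLIGHT = 16    # assumed best case the pipe can juggle

def pixels_in_flight(fp32_temps_per_pixel: int) -> int:
    """More temporaries per pixel -> fewer pixels kept in flight to hide latency."""
    return max(1, min(PEAK_PIXELS_IN_FLIGHT,
                      REGISTER_FILE_SLOTS // fp32_temps_per_pixel))

for temps in (2, 4, 8):
    print(f"{temps} FP32 temps/pixel -> {pixels_in_flight(temps)} pixels in flight")
# Light shaders keep the pipe full; heavier DX9 shaders with more temporaries
# leave it partially idle, which is the "register usage" flaw described above.
```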

There is nothing revisionist about that history, it's the truth, pure and simple. R300 had better AA and a speed advantage over nV3x. It certainly wasn't more forward looking though - ATi's own Ruby demo proved that (and it worked quite nicely on my 5900XT too).

Edit: about nVidia's behaviour back then: I have never condoned or supported the three big sins (lying about pipe numbers, forcing brilinear on throughout the 50 series drivers & cheating in 3dmark). nVidia was stupid to do any of them. Fortunately it has realized this and corrected the problems, but it cost itself a lot of credibility in the process.
 
Originally posted by: Gstanfor
There is nothing revisionist about that history, it's the truth, pure and simple. R300 had better AA and a speed advantage over nV3x. It certainly wasn't more forward looking though - ATi's own Ruby demo proved that (and it worked quite nicely on my 5900XT too).

You're right, a graphics part marketed as a cutting-edge DX9 chip that couldn't run DX9 code competitively sure is forward-looking in design. Rarely do you see someone so zealous that they willingly bifurcate their perception of the facts from reality, but you're doing a damn fine job of it in this thread.

Intel's P4 CPUs have a helluva lot of logic and a nice featureset built into them, but you don't see gamers arguing that they're more forward-looking when they can't compete in terms of performance with AMD's A64 parts.
 
Originally posted by: Gstanfor
Forward looking and performance are two, unrelated issues.

Except for when your company touts you as a DX9 beast that would obliterate the competition. Just wait for it. . . .! It's going to kill R300!! But please don't use FP precision levels since that'll cause a register bottleneck. And please overlook the contradiction of not being able to run at FP precision for a DX9 part.

Can we put this silly line you're pursuing to bed yet? I mean, it's almost mind-boggling that I'm sitting here even discussing whether or not NV30 was a better DX9 part than R300.
 
Well then, don't talk about nV30. I don't and most other nVidia supporters don't - it's the fanATics that are obsessed with it.

I talk about nV35 occasionally, and I still game quite happily on it when I use the spare machine.
 
Originally posted by: Gstanfor
NV30 didn't seem as obviously forward-looking as R300.
That's a load of crap. nV3x was streets ahead of R300 technology-wise, hence SM2.0+. The only real advantage R300 had was its AA method. nVidia's problems with nV3x were performance-related (a 1-quad design vs a 2-quad design).
Yeah, that helped them when NV40 turned out more like R300 than NV30 (more, less complex pipes)? Register pressure didn't seem too forward-looking at the time, nor did a focus on FX12 and FP16 when R300 was geared for FP24. I consider performance a pretty big component of forward-looking design, but narrowing it down to mere quad count is a bit facile, especially since NV30 compensated with much higher core speed.

Yeah, my original statement was pretty much a load of crap in terms of featureset, but I was thinking equally of rubber-meets-road. NV30's SM2A specs and double-Z ROPs were obviously more advanced than R300's SM2 (which has been described as PS1.4+), but it just didn't seem to translate to games. While NV30 had a sweet-sounding featureset, it also acted like a souped-up NV25 with FP bolted on. NV35 changed that a bit, but R300 tangibly covered more bases right from the start.

I guess you could say R300 was ATI perfecting R200, and NV40 was NV perfecting NV30. So, true, NV30 was more forward-looking, but R300 was more forward-achieving (talk vs. walk). IMO, of course.

Erm, I thought NV still hasn't released a low-k product?
nV30 was Black Diamond Low-K, nV35 and above were FSG.
This is the first I've heard of this.

B3D: "Exactly what the delays were changes depending on who you may be talking to; some say they were due to reconfiguration of the chip to increase its power in the face of some unexpected competition, while others say that it's all down to the early adoption of TSMC's .13µ process and it not being ready when NVIDIA had expected. It seems that the later talk puts the delays to NVIDIA first designing the part on TSMC's low-k dielectric .13µ process which wasn't ready at the time NVIDIA needed it, forcing them to move from the low-k process to the standard .13µ process."

Penstarsys: "NVIDIA initially designed the NV-30 to utilize a low-k design, but that was dropped very shortly due to the increased risk that the low-k process will not be available."

Tho NV30 might have been attempted with low-k, it ended up without it, no?
 
Originally posted by: Gstanfor
Well then, don't talk about nV30. I don't and most other nVidia supporters don't - it's the fanATics that are obsessed with it.

You might want to read this thread again and see who keeps reaching for NV30 analogies for R520. A silly analogy I think Pete has done a good job debunking.

 
Shader power alone will not save the R580. That was always my point. Shader power is only useful if it can be effectively applied. That was true for nV3x (shader power went up but couldn't be used effectively) and I expect it will be true of R580 because of (a) a lack of pipelines and (b) a lack of shader-heavy games.
 
Originally posted by: Gstanfor
Well then, don't talk about nV30. I don't and most other nVidia supporters don't - it's the fanATics that are obsessed with it.

I talk about nV35 occasionally, and I still game quite happily on it when I use the spare machine.

Dude, you are wasting your time.

FanATIcs whine away about ye olde nV30 every day of the week on this forum, as if anyone still actually has one!

LOL- I bet if you tried to make similar comments about the MAXX, the same asshats would whine, "Waaahh! That's an old card! Why are you talking about THAT?!?!"

(and probably follow it up with some pseudo-zen gobbledygook about the sacred importance of appearing "non-biased"!)

 
Originally posted by: Gstanfor
That was true for nV3x (shader power went up but couldn't be used effectively) and I expect it will be true of R580 because of (a) a lack of pipelines and (b) a lack of shader-heavy games.

If that were the case I think we'd see the # of ROPs increased in both G71 and R580, something I don't think we'll see happen. Waste of die space, which tells you something about where the bottleneck in graphics performance is shifting toward and why talking about pixel pipelines is a waste of time.

Edit: And, Rollo, congrats, that was quite the contributory post. Really added a lot to the conversation.
 
Originally posted by: John Reynolds
Originally posted by: Gstanfor
That was true for nV3x (shader power went up but couldn't be used effectively) and I expect it will be true of R580 because of (a) a lack of pipelines and (b) a lack of shader-heavy games.

If that were the case I think we'd see the # of ROPs increased in both G71 and R580, something I don't think we'll see happen. Waste of die space, which tells you something about where the bottleneck in graphics performance is shifting toward and why talking about pixel pipelines is a waste of time.

Edit: And, Rollo, congrats, that was quite the contributory post. Really added a lot to the conversation.

As I said first up, there will still be a situation where G71 has twice the texturing power of R580 (every G71 pipe has at least one texturing processor). You can talk shaders up all you like, but texturing is what makes the 3D world go round - has done since Voodoo1 (and before) and will do well into the future.
 
Originally posted by: Gstanfor
As I said first up, there will still be a situation where G71 has twice the texturing power of R580 (every G71 pipe has at least one texturing processor). You can talk shaders up all you like, but texturing is what makes the 3D world go round - has done since Voodoo1 (and before) and will do well into the future.

Too bad that with programmable pipes, ALU instructions are becoming a lot more common than texture operations. Texturing was important back when texture lookup tables were used in place of math operations, but now ALU operations are more common and a lot cheaper to perform.

So yes, the G71 has twice the TMUs, but whether they'll actually be used we'll just have to wait and see. R580 is looking like the better part in my opinion.

In what case do you see texturing used more than ALU ops? I can maybe think of a few limited cases but other than that...
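(To make that lookup-table-vs-math point concrete, here is a minimal Python sketch. It is not shader code, and the pow-16 specular-style term is just an assumed example: the same value is produced once via a baked 256-entry "texture" and once by spending ALU instructions on the math directly.)

```python
import math

# The classic trade-off: a precomputed lookup table standing in for math.
# Here the "texture" is a 256-entry table of x**16 for a specular-style term.
TABLE_SIZE = 256
specular_table = [(i / (TABLE_SIZE - 1)) ** 16 for i in range(TABLE_SIZE)]

def specular_via_lookup(n_dot_h: float) -> float:
    """Old-style approach: one 'texture fetch' into a baked table."""
    index = min(TABLE_SIZE - 1, int(n_dot_h * (TABLE_SIZE - 1)))
    return specular_table[index]

def specular_via_alu(n_dot_h: float) -> float:
    """Modern approach: just evaluate the math per pixel on the ALUs."""
    return n_dot_h ** 16

x = 0.87
print(specular_via_lookup(x), specular_via_alu(x))
# Both give roughly the same answer; the difference is whether you pay for a
# texture unit and memory bandwidth, or for a handful of ALU operations.
```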

 
Almost EVERYTHING has to be textured! (unless you are fond of wireframe graphics). Pixelshader effects (usually) get layered over or combined with texturing.
 
Originally posted by: Gstanfor
As I said first up, there will still be a situation where G71 has twice the texturing power of R580 (every G71 pipe has at least one texturing processor). You can talk shaders up all you like, but texturing is what makes the 3D world go round - has done since Voodoo1 (and before) and will do well into the future.

You do realize I'm trying to engage this discussion without predicting which part is going to "win" against the other, don't you? And yet you couldn't be more wrong as to where current and future games tend to bottleneck in graphics chips. Just read a post from Jason Cross of Extremetech (former hardware editor at CGM) in which he wrote that the "math required per pixel is going way up, to the point where if you have even 8 ALUs per ROP you'll be taking several clock cycles to handle even fairly basic shader operations." And by math he wasn't referring to texturing polygons. So the engineers of these companies either know what they're doing with the sharp increase in ALUs in their designs or you, on the other hand, know more than they do. Hmmmm. . . .
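(The arithmetic behind that quote, as a rough sketch: the 8-ALUs-per-ROP ratio is taken from the quote itself, while the 40 ALU ops per pixel is purely an assumed figure for a "fairly basic" modern shader.)

```python
# Rough arithmetic behind the "ALUs per ROP" point: if a shader needs more
# math instructions per pixel than the ALUs feeding each ROP can retire per
# clock, the ROP sits waiting several clocks per pixel written.
alus_per_rop = 8            # ratio mentioned in the quote
alu_ops_per_pixel = 40      # assumed cost of a "fairly basic" modern shader

cycles_per_pixel = alu_ops_per_pixel / alus_per_rop
print(f"~{cycles_per_pixel:.1f} clocks of shading per pixel written")
# At ~5 clocks per pixel the bottleneck is arithmetic, not texturing or ROPs,
# which is why the designs keep adding ALUs rather than ROPs.
```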
 
Originally posted by: John Reynolds
Edit: And, Rollo, congrats, that was quite the contributory post. Really added a lot to the conversation.

Question:
Why is John Reynolds in a thread about R580s and GTX512s talking about nV30s and R300s?

Answer: That's what ATI fans like to do best- wax nostalgic.

Why don't you look at how your taking this thread off topic arguing about meaningless, obsolete hardware is "adding to the conversation"?

What's next? Your appraisal of how the 7500 was more advanced than the GF2? Wow! Can't wait!

:roll:
 
Originally posted by: Rollo
Originally posted by: John Reynolds
Edit: And, Rollo, congrats, that was quite the contributory post. Really added a lot to the conversation.

Question:
Why is John Reynolds in a thread about R580s and GTX512s talking about nV30s and R300s?

Answer: That's what ATI fans like to do best- wax nostalgic.

Why don't you look at how your taking this thread off topic arguing about meaningless, obsolete hardware is "adding to the conversation"?

What's next? Your appraisal of how the 7500 was more advanced than the GF2? Wow! Can't wait!

:roll:

sure beats starting thread after thread of pure FUD . . . like your last one
:thumbsdown:
 
Originally posted by: Rollo
Originally posted by: John Reynolds
Edit: And, Rollo, congrats, that was quite the contributory post. Really added a lot to the conversation.

Question:
Why is John Reynolds in a thread about R580s and GTX512s talking about nV30s and R300s?

Answer: That's what ATI fans like to do best- wax nostalgic.

Why don't you look at how your taking this thread off topic arguing about meaningless, obsolete hardware is "adding to the conversation"?

What's next? Your appraisal of how the 7500 was more advanced than the GF2? Wow! Can't wait!

And your would-be character assassination posts are just sooo much more on topic, right? Threads often--hell, almost always--meander in the topics discussed. It's the natural flow of any conversation, and at least I'm trying to take up some of the technical aspects of a portion of the discussion. You? What are you doing here again? Adding absolutely nothing of worth to the discussion, other than trying to turn it into yet another tired NV vs. ATI pissing match. No thanks.
 
I've never disputed that shader power is on the increase (and given nVidia's history of increasing shader power over time in line with what developers demand, only a fool would argue that G71 will be caught short on shading power). Shading power isn't the be-all and end-all of performance though.

Assuming nothing changes architecturally between G70 & G71 other than doubling pipelines, G71 will be able to texture and filter twice as fast as G70 (give or take a % or two either way depending on ROP loading).

As we all well know texture filtering is a big issue for graphics enthusiasts (way bigger than pixel shading IMO) and has a big impact on final framerate.
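(Taking the post's own doubling assumption at face value, a quick back-of-the-envelope follows. The pipe counts and the 430 MHz clock are placeholders for illustration, not confirmed G71 specs.)

```python
# Back-of-the-envelope texel fillrate under the post's assumption that G71
# simply doubles G70's pipelines at the same clock. Figures are placeholders.

def texel_fillrate_mtexels(pipes: int, tmus_per_pipe: int, clock_mhz: float) -> float:
    return pipes * tmus_per_pipe * clock_mhz

g70_like = texel_fillrate_mtexels(pipes=24, tmus_per_pipe=1, clock_mhz=430)
doubled  = texel_fillrate_mtexels(pipes=48, tmus_per_pipe=1, clock_mhz=430)

print(f"G70-like part: {g70_like:,.0f} MTexels/s")
print(f"Doubled part : {doubled:,.0f} MTexels/s ({doubled / g70_like:.1f}x)")
# Twice the pipes at the same clock gives twice the raw texturing/filtering
# rate - whether games actually bottleneck there is the point under dispute.
```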
 
Originally posted by: Gstanfor
As we all well know texture filtering is a big issue for graphics enthusiasts (way bigger than pixel shading IMO) and has a big impact on final framerate.

And right there, boys and girls, is where I call it quits rather than continuing to beat my head against a brick wall.

I am pretty keen at this point, though, on seeing whether or not either company increases the # and/or functionality of the ROPs in their designs.



 
Originally posted by: John Reynolds
Originally posted by: Gstanfor
As we all well know texture filtering is a big issue for graphics enthusiasts (way bigger than pixel shading IMO) and has a big impact on final framerate.

And right there, boys and girls, is where I call it quits rather than continuing to beat my head against a brick wall.

I am pretty keen at this point, though, on seeing whether or not either company increases the # and/or functionality of the ROPs in their designs.

Bye.
 
Originally posted by: Gstanfor
Almost EVERYTHING has to be textured! (unless you are fond of wireframe graphics). Pixelshader effects (usually) get layered over or combined with with texturing.

Please go do some research before posting about this again; you obviously know very little or, much more likely, nothing at all about it (besides NVIDIA having a bigger number than ATi).

 
Originally posted by: John Reynolds
Originally posted by: Gstanfor
As we all well know texture filtering is a big issue for graphics enthusiasts (way bigger than pixel shading IMO) and has a big impact on final framerate.

And right there, boys and girls, is where I call it quits rather than continuing to beat my head against a brick wall.

I am pretty keen at this point, though, on seeing whether or not either company increases the # and/or functionality of the ROPs in their designs.

I would like to know why Gstanfor thinks that is the case, simply because NVIDIA has a bigger number or does he actually know something about pixel shaders (guessing not)?

 
Originally posted by: apoppin
sure beats starting thread after thread of pure FUD . . . like your last one
:thumbsdown:

Yes, a discussion of three year old video cards no one owns anymore is ALWAYS more relevant than a link to a review of new tech posted that day.

You make as much sense as usual apoppin- perhaps if you had used more emoticons I would have understood you better?
 
Originally posted by: nts
I would like to know why Gstanfor thinks that is the case, simply because NVIDIA has a bigger number or does he actually know something about pixel shaders (guessing not)?

Considering that G71 will obviously be built upon G70's architecture, you have to bear in mind that each "pipe" will have two ALUs that are equivalent in the instructions they support, and that one of those two ALUs handles texture processing chores. Rather than redesign this aspect of the fragment shader pipeline it's simply easier to migrate it over to a 90nm layout using that process' library tools. The real benefit is the additional arithmetic processing G71's added quads, and those ALUs, will give the part, since games are definitely not texture addressing limited these days.
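(A toy Python model of that shared-ALU arrangement, with all pipe counts and texture-fetch fractions assumed purely for illustration: the more often a shader issues texture fetches, the more one of each pipe's two ALUs is tied up doing addressing instead of math, and added quads raise math throughput either way.)

```python
# Toy throughput model of a G70-style pipe as described above: two ALUs per
# pipe, one of which is occupied whenever a texture fetch is being issued.
# All figures are illustrative assumptions, not measured hardware behaviour.

def math_ops_per_clock(pipes: int, tex_fetch_fraction: float) -> float:
    """Effective ALU ops/clock when one of the two ALUs per pipe doubles as
    the texture-address unit for the given fraction of cycles."""
    return pipes * (2 - tex_fetch_fraction)

for pipes in (24, 32):                  # assumed baseline vs. added-quads part
    for tex in (0.5, 0.1):              # texture-heavy vs. math-heavy shader
        print(f"{pipes} pipes, tex fraction {tex}: "
              f"{math_ops_per_clock(pipes, tex):.1f} math ops/clock")
# Fewer texture fetches per shader -> each pipe gets closer to its full two
# ALUs of arithmetic, which is the claimed benefit of the extra quads.
```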

 
Originally posted by: John Reynolds
And your would-be character assassination posts are just sooo much more on topic, right? Threads often--hell, almost always--meander in the topics discussed.
When an ATI fan is involved, you can bet the rent it will "meander" to the nV30 too, can't you Johnny?

It's the natural flow of any conversation, and at least I'm trying to take up some of the technical aspects of a portion of the discussion.
1. No one cares about the tech of the nV30 anymore.
2. Off topic

You? What are you doing here again? Adding absolutely nothing of worth to the discussion, other than trying to turn it into yet another tired NV vs. ATI pissing match. No thanks.
Only if you consider me to be "nV" and yourself to be ATI. Geez- I'm just trying to say we don't need the freaking nV30 examined again in 2006. It BARELY needed to be examined in 2003- 9700s were cheaper, quieter, faster, and 100X more available.

So just who here do you think gives a fat rat's ass about the 5800 in 2006, Reynolds?

 