G70 vs. R520: All things being equal


Keysplayr

Elite Member
Jan 16, 2003
21,211
50
91
Originally posted by: Ackmed
Except not. There are far too many things left unchecked, or not mentioned.

For a much better discussion about this article; http://www.beyond3d.com/forum/showthread.php?t=24465

Read thru that, and ask yourself if you still think the article is worth anything.

I know you're just trolling because you think your sacred ground is being threatened. I won't hold it against you, as I know you don't know any better.

What this thread and article were worth is relative to the person viewing it. I don't need to go and read a thread at Beyond3D now, do I? Because I have already decided what it was worth, to me. Just do me a favor and get that, would you? Now amscray.

 

christoph83

Senior member
Mar 12, 2001
812
0
0
Originally posted by: Ackmed
Except not. There are far too many things left unchecked, or not mentioned.

For a much better discussion about this article; http://www.beyond3d.com/forum/showthread.php?t=24465

Read thru that, and ask yourself if you still think the article is worth anything.


Too many things, yet you didn't bother to list any major points? All that thread was was people thinking out loud, trying to figure out how the results ended up the way they did. Considering how straightforward the comparison was, it's surprising people won't accept the results.
 

Keysplayr

Elite Member
Jan 16, 2003
21,211
50
91
Originally posted by: christoph83
Originally posted by: keysplayr2003
So it's a lot like your post here?

I'm with you on this one, keysplayr2003. I think it's a good article. It doesn't necessarily show which card is better, but more or less which one is running at a better efficiency.

In the end, though, it's like the AMD vs. Intel comparison. If Intel had a 6GHz chip, it wouldn't really matter how slow it is per clock.

Yup. A 6GHz CPU would be vewwy nice. ;)

 

Ackmed

Diamond Member
Oct 1, 2003
8,498
560
126
Originally posted by: christoph83
Originally posted by: Ackmed
Except not. There are far too many things left unchecked, or not mentioned.

For a much better discussion about this article; http://www.beyond3d.com/forum/showthread.php?t=24465

Read thru that, and ask yourself if you still think the article is worth anything.


Too many things that you didn't bother to list any major points? All that thread was, was people thinking outloud trying to figure out how the results ended up the way they did. Considering how straight-forward the comparison was, it's surprising people won't accept the results.


If you actually read the thread, you will see some major points. When did I say that I wouldn't accept the results? There are far too many unknowns, as they don't give any answers or insight as to why they think the tests turned out as they did. It doesn't matter whether the NV card does more per MHz or not. All that matters is the end performance, and features.
 

Keysplayr

Elite Member
Jan 16, 2003
21,211
50
91
Originally posted by: Ackmed
Originally posted by: christoph83
Originally posted by: Ackmed
Except not. There are far too many things left unchecked, or not mentioned.

For a much better discussion about this article; http://www.beyond3d.com/forum/showthread.php?t=24465

Read thru that, and ask yourself if you still think the article is worth anything.


Too many things that you didn't bother to list any major points? All that thread was, was people thinking outloud trying to figure out how the results ended up the way they did. Considering how straight-forward the comparison was, it's surprising people won't accept the results.


If you actaully read the thread, you will see some major points. When did I say that I wouldnt accept the results? There are far too many unknowns, as they dont give any answers or insight as to why they think the tests turned out as they did. It doesnt matter if the NV card does more per Mhz, or if they dont. All that matters, is the end performance, and features.

So why won't you let people get what they want out of the article? Why does it always need to be the "way you see it" or no way at all? You need to let people think freely, man.
There were some major points in this thread as well. Some of them were made by me, and they made sense to some and not to others.

 

DidlySquat

Banned
Jun 30, 2005
903
0
0
Originally posted by: keysplayr2003
Originally posted by: classy

It's not a matter of dealing with it. I am just stating the obvious fact that comparisons like these truly offer no insight. Why? Given how different they are, how does one determine any valid outcome from any tests or comparisons? It's similar to Intel and AMD; the only true valid comparisons are the results produced. That's all I'm saying. And I won't say it's stupid anymore, OK? But it is irrelevant.

No, you are stating an obvious opinion, not an obvious fact. And that's a fact. Leave it at that. I understand the whole Intel/AMD thing. If you ran a Prescott at the same MHz as an Athlon 64 3500 (2GHz), it would be fish paste. (SpongeBob SquarePants.) One of the biggest arguments in the CPU forum is about efficiency, AMD's abundance of it and Intel's lack thereof. My Pentium M notebook (1.73GHz Dothan) blows away my P4 3.0E on many levels.

Some people are insisting that the R520 doesn't have pipes at all, but instead has a sort of hybrid pipeline arch halfway to a unified arch. W....T....F..... What goes through people's minds is legendary.

Anyway, I liked the article and found it interesting. I emailed Mr. Davidson at Driver Heaven and he responded within 30 minutes. I asked if he would be kind enough to add more game benches to his article. He said he was glad we enjoyed the article but was very swamped at the moment, though he would try to squeeze some in if time permits.

So we "might" see some more added.


First of all, can anyone confirm that both cards had the same amount of memory (that is, that the X1800XT only used 256MB)?

Second, I agree that disabling the pipes is a little strange because the architectures are different, so why should the 7800 be penalized just because its own pipeline count is higher?

But we can all agree that ATI uses much higher clocks, which is largely a result of the 90nm fabrication process; that is, it is not a design advantage, and NVIDIA is likely to achieve the same in the future.

So if they hadn't disabled pipes and had compared against a 512MB GTX, the difference would be even higher! We all know what that means, don't we?
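For what it's worth, the pipes-times-clock arithmetic the thread keeps circling can be sketched in a few lines of Python. This is a back-of-the-envelope sketch using the stock specs (7800 GTX: 24 pipes at 430MHz; X1800 XT: 16 pipes at 625MHz); it deliberately ignores memory bandwidth, shader throughput, and drivers, which is much of what's actually in dispute:

```python
# Theoretical peak pixel fill rate: pipes x core clock, in Mpixels/s.
# Illustrative arithmetic only -- not a performance prediction.

def fill_rate_mpix(pipes: int, core_mhz: int) -> int:
    """Peak pixel fill rate in Mpixels/s."""
    return pipes * core_mhz

cards = {
    "7800 GTX (24 pipes @ 430MHz)": fill_rate_mpix(24, 430),
    "7800 GTX cut to 16 pipes":     fill_rate_mpix(16, 430),  # as in the DH test
    "X1800 XT (16 pipes @ 625MHz)": fill_rate_mpix(16, 625),
}

for name, mpix in cards.items():
    print(f"{name}: {mpix} Mpix/s")
```

On paper the stock cards land within a few percent of each other (10,320 vs. 10,000 Mpix/s), which is why cutting the GTX to 16 pipes (6,880 Mpix/s) tilts the raw numbers so far toward ATI.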
 

Wreckage

Banned
Jul 1, 2005
5,529
0
0
Originally posted by: Pabster
Now imagine if G70 was clocked at X1800XT clocks...

We may just see this.

Since NVIDIA likes to release its products and not just announce them, we will have to wait and see.
 

Keysplayr

Elite Member
Jan 16, 2003
21,211
50
91
Well, last gen, the X800XTPE was clocked at 520MHz core / 1120 mem. The 6800Ultra was clocked at 400 core / 1100 mem. A minute difference on the RAM, but an enormous difference on the core. Both cards had 16 pipes and 256MB RAM.

I mentioned that I had an X800XTPE and a 6800GT and compared the two at similar clocks. I downclocked the X800XTPE to my 6800GT's speeds: 370 core and 1000 mem.
The GT performed better than the XTPE did at identical clocks. The XTPE probably performed a little worse than the X800XL does (which is no slouch). So it did pretty well, but not well enough to beat out my GT. Now, I only did this for kicks, and I didn't even make a thread here about it. But I knew that my GT had a more efficient architecture at similar clocks than the R420 does. Of course, crank the X800XTPE back up to where it belongs and it's a killer. If I could have managed to get the 6800GT to 520/1120 I would have, but I didn't even try; I never o/c'ed the GT.
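The kind of per-clock comparison described above boils down to running both cards at the same clocks and normalizing frame rate by core clock. A minimal sketch — the FPS numbers here are made-up placeholders, not keysplayr2003's actual results:

```python
# Per-clock "efficiency" as in the downclocking experiment: set both cards to
# identical core/mem clocks, then compare FPS per MHz of core clock.
# The FPS values below are placeholders for illustration, NOT measurements.

def fps_per_mhz(fps: float, core_mhz: float) -> float:
    """Average frame rate normalized by core clock."""
    return fps / core_mhz

CORE_MHZ = 370  # both cards set to 370 core / 1000 mem

results = {  # card -> placeholder average FPS at identical clocks
    "6800GT":   74.0,
    "X800XTPE": 66.6,
}

for card, fps in results.items():
    print(f"{card}: {fps_per_mhz(fps, CORE_MHZ):.3f} FPS/MHz")
```

Whichever card posts the higher FPS/MHz at matched clocks is the one doing more work per clock, which is all this kind of test can show; it says nothing about which card is faster at its shipping clocks.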
 

DeathReborn

Platinum Member
Oct 11, 2005
2,786
789
136
The only comparison I'd like to see would be a Radeon X1800XT 512MB vs. GeForce 7800GTX 512MB (1.2ns GDDR3) at 16x12 and 20x15, with and without AA/AF. Still, it does show that 24 pipes are not required if you can ramp up the clock speed like ATI has. All credit to ATI for managing the technological improvements over R4xx, but the delays, bad PR, and price still need work.
 

jiffylube1024

Diamond Member
Feb 17, 2002
7,430
0
71
Originally posted by: DeathReborn
The only comparisons i'd like to see would be a Radeon X1800XT 512MB vs GeForce 7800GTX 512MB (1.2ns GDDR3) in 16x12, 20x15 & with and without AA/AF. Still, it does show that 24 pipes is not required if you can ramp up the clock speed like ATI has. All credit to ATI for managing the technological improvements over R4xx but the delays, bad PR & price still need work.

I hope you play at those resolutions.

Personally, I'm looking for 1680x1050 performance in everything, because that's what I play at all of the time on my 2005FPW. The other results are interesting, but I don't lose any sleep over the fact that my X850XT PE falls off at 20xx by 15xx. I go by 1600x1200 as a reference, since it's just a bit more pixels on screen but pretty much the same as what I play at.

I want to know: how many of you who are comparing 20x15 resolutions actually use it? Sure, it's great that one card is faster than the other up in the stratosphere of resolutions, but how many of you actually game at that setting on a daily basis (aside from BenSkywalker in his RTSes ;) )?
 

DeathReborn

Platinum Member
Oct 11, 2005
2,786
789
136
Originally posted by: jiffylube1024
Originally posted by: DeathReborn
The only comparisons i'd like to see would be a Radeon X1800XT 512MB vs GeForce 7800GTX 512MB (1.2ns GDDR3) in 16x12, 20x15 & with and without AA/AF. Still, it does show that 24 pipes is not required if you can ramp up the clock speed like ATI has. All credit to ATI for managing the technological improvements over R4xx but the delays, bad PR & price still need work.

I hope you play at those resolutions.

Personally, I'm looking for 1680X1050 performance in everything because that's what I play at all of the time on my 2005fpw. The other results are interesting but I don't lose any sleep over the fact that my X850XT PE falls off at 20xx by 15xx. I go by 1600X1200 as a reference since it's just a bit more pixels on the screen but pretty much the same as what I play at.

I want to know how many of you who are comparing 20X15 resolutions actually use it? Sure it's great that one card is faster than the other up in the stratosphere of resolutions, but how many of you actually game at that setting on a daily basis (aside from BenSkywalker in his RTSes ;) ).

Mostly I play at 1600x1200 (down to 640x480 in some really old games) on my 7800GT. My monitor (ViewSonic G220fb 21") can handle 2048x1536 but the refresh rate is too low. My brother plays at 2048x1536 @ 60Hz but he doesn't play things like BF2 or EQ2.

I'd consider buying a refreshed 7800GTX with 512MB memory, along with a better monitor to handle the higher resolutions, but I most likely won't actually play at those resolutions often with this generation of cards.
 

Keysplayr

Elite Member
Jan 16, 2003
21,211
50
91
Jiffy: I for one can only go up to 1600x1200 with my current 19" Dell CRT. So those resolutions really do not matter to me either. I do like the fact that we can crank the settings to the max at 1600x1200 however.
 

biostud

Lifer
Feb 27, 2003
19,739
6,817
136
I don't even need 1600x1200 to enjoy a game on my 19" Iiyama CRT. 1280x960 is in most cases fine for me; I'm just looking forward to being able to crank FSAA, AF, and game details up to maximum without stuttering.
 

crazydingo

Golden Member
May 15, 2005
1,134
0
0
Originally posted by: keysplayr2003
Well, last gen, the X800XTPE was clocked at 520MHz core 1120 mem. The 6800Ultra was clocked at 400 core 1100 mem. Minute difference on the RAM, but enormous difference on the core. Both cards had 16 pipes and 256MB RAM.

I mentioned that I had an X800XTPE and a 6800GT and compared the two at similar clocks. I downclocked the X800XTPE to my 6800GT speed in both 370core and 1000mem.
The GT performed better than the XTPE did at identical clocks. The XTPE probably performed a little worse than the X800XL does (which is no slouch). So, it did pretty well but not enough to beat out my GT. Now, I only did this for kicks and I didn't even make a thread here about it. But I knew that my GT had a more efficient architecture at similar clocks that an R420 does. Now of course, crank back up the X800XTPE to where it belongs and its a killer. If I could have managed to get the 6800GT to 520/1120 I would have, but I didn't even try of course. I never o/c'ed the GT.
Bingo !

Unless you've been living under a rock, this isn't new. Comparing a 16-pipe 7800 to a 16-pipe 6800GT at similar clocks would have been more interesting.
 

Keysplayr

Elite Member
Jan 16, 2003
21,211
50
91
Yes, that would have been a good thing, to see architecture improvements from one gen to the next.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
I thought RSX was still 24 pipes, essentially G70 @ 90nm with some mem tweaks to accommodate Cell access?

From everything I have seen, it is supposed to be a 32-ALU part. I have been under the assumption that they would discard the vertex shaders, as with Cell sitting there they are a waste of die space.
 

Pete

Diamond Member
Oct 10, 1999
4,953
0
0
Kevin, not if your kybd lacks a num pad. :D

chris, the point is while the article looks at instructions per clock, it doesn't necessarily measure efficiency per watt or die size or transistor count--namely, the efficiency of the architecture. It also fails to account for the G70 clocking its vertex shaders higher than its pixel shaders (this is mainly for 3DM05), and the games used aren't that favorable to ATI's more forward-looking pixel shaders (D3 is obviously nV's playground, and HL2 scales all the way down to DX7--Lost Coast or DoD:S might have been more interesting). But mainly it doesn't stress that, in general, you get higher clocks by doing less per clock (see P4 vs. Athlon), as you can see from the comments in DH's article thread. Still, no denying G70 has a higher IPC in those tests.

Interesting, Ben. I know (because I participated in it :)) early speculation was that RSX might ditch VS for Cell to handle, but I never thought that was more than just speculation. 32 pixel pipes at 550MHz sounds monstrous. I suppose if IGPs leave VS to the CPU, RSX can do the same, tho I'm not sure if the separate memory spaces (XDR and DDR) matter.
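Pete's point, that instructions per clock is only one of several "efficiency" metrics, can be sketched by normalizing the same score a few different ways. In the sketch below, only the transistor counts are the commonly cited figures (~302M for G70, ~321M for R520); the FPS and wattage numbers are invented placeholders, not measurements:

```python
# Normalize one (placeholder) benchmark score per MHz, per Mtransistor,
# and per watt. Which chip looks "more efficient" depends entirely on
# which denominator you pick -- that is the whole point.

def per(score: float, denom: float) -> float:
    """Score normalized by an arbitrary cost denominator."""
    return score / denom

chips = {
    #        fps (placeholder), core MHz, ~Mtransistors, board watts (placeholder)
    "G70":  dict(fps=70.0, mhz=430, xtors=302, watts=100),
    "R520": dict(fps=70.0, mhz=625, xtors=321, watts=110),
}

for name, c in chips.items():
    print(f"{name}: {per(c['fps'], c['mhz']):.3f} FPS/MHz, "
          f"{per(c['fps'], c['xtors']):.3f} FPS/Mxtor, "
          f"{per(c['fps'], c['watts']):.3f} FPS/W")
```

With equal scores, the lower-clocked part wins per MHz while the smaller die wins per transistor, which is exactly why "G70 is more efficient" and "R520 is more efficient" can both be true depending on the metric.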
 

OatMan

Senior member
Aug 2, 2001
677
0
71
just my 2 cents,

I think this is an interesting "experiment," but I'm not sure what it really says. Perhaps within the context of other comparisons it may give some insight into the differences in the architectural solutions chosen by the manufacturers. While this might have some esoteric interest (I can't even say value), I have to agree with the POINTLESS crowd.

Imagine comparing a Ferrari and a Porsche.

OK, since the Porsche is heavier, we're gonna rip out a seat and two doors.

Now let's detune the Ferrari so that the HP is the same.

Aww nuts, now the torques are different. Oh well...

We'll have to put a big air brake on the Ferrari now to even out the aerodynamics, since the Porsche is missing its doors and the trunk lid.

Now, since one car has a double-wishbone suspension and the other a MacPherson strut type, we'll just bend the strut type into the shape of a wishbone.

All set.

OK, now we have an apples-to-apples test to prove which is the better sports car.
 

Keysplayr

Elite Member
Jan 16, 2003
21,211
50
91
Originally posted by: OatMan
just my 2 cents,

I think this is an interesting "experiment" but I'm not sure what it really says. Perhaps withing the context of other comparisons it may give some insights into the differences in the architecture solutions chosen by the manufacturers. While this might have some essoteric interest (I can't even say value) I have to agree with the POINTLESS crowd.

Imagine comparing a ferarri and a porche.

OK since the porche is heavier we're gonna rip out a seat and two doors.

Now lets detune the ferarri so that the HP is the same.

Aww nuts, now the tourques are different. Oh well...

We'll have to put a big air brake on the ferarri now to even out the airodynamics since the porche is missing its doors and the trunk lid.

Now since one car has a double wishbone suspension and the other a macpherson strut type, we'll just bend the strut type into the shape of a wishbone.

all set.

OK now we have an apples to apples test to prove which is the better sports car.

How about this: the architectural path NVIDIA chose seems to be more efficient than the architectural path ATI chose when compared pipe to pipe, clock to clock, etc.
Sure, the pipes are different from each other, but they're meant to do the same job, yes? Whether one does things differently is irrelevant; it gets to the same end result.

Not too difficult to see. For those that wish to, anyway. ;)

Cheers.

 

heedoyiu

Senior member
Jan 13, 2005
309
0
0
It looks like ATI still gets an ass-kicking; NVIDIA really made a rebound after the 5700 mess.
 

OatMan

Senior member
Aug 2, 2001
677
0
71
Hey I just thought of something...

ATI took the lead right around the time NVIDIA was invested in developing chipsets. ATI has been developing chipsets for a while, but they always seemed kinda back burner until the last year or so.

Perhaps we are seeing reverse déjà vu? While NVIDIA was developing the nForce chipset, ATI was concentrating fully on its newest GPU. NVIDIA purchased 3dfx and came up with some whacked-out hybrid which was great in theory but proved a bit of a fiasco to develop and bring to market.

Perhaps now ATI has been caught from behind for similar reasons while it develops CrossFire and chipsets for Intel and AMD platforms. ATI is dividing its resources more, whereas NVIDIA has a mature and strong chipset brand and is basically just adapting it for Intel, which leaves it better able to focus on GPUs. Is either company significantly larger than the other?

Just some food for thought. I do not really keep up much with this stuff, so I am kinda talkin out me arse.
 

Keysplayr

Elite Member
Jan 16, 2003
21,211
50
91
Originally posted by: OatMan
Hey I just thought of something...

ATI took the lead right around the time NVIDIA was invested in developing chipsets. ATI has been developing chipsets for a while, but they always seemed kinda back burner until the last year or so.

Perhaps we are seeing revearse de ja vous? While NVIDIA was developing the NForce chipset ATI was concentrating fully on its newest GPU. NVIDIA purchased 3dFX and came up with some whaked out hybrid which was great in theory but proved a bit of a fiascoe to develop and bring to market.

Perhaps now ATI has been caught from behind for similar reasons while it develops crossfire and chipsets for Intel and AMD platforms. ATI is dividing its resources more, as NVIDIA has a mature and strong chipset brand and is basically just adapting it for intel which leaves it better able to focus more on GPUs. Is either company significantly larger than the other?

Just some food for trash. I do not really keep up much with this stuff, so I am kinda talkin out me arse.

Not exactly sure which company is "actually" larger (in terms of employees, square footage of real estate, etc.), but I would just take a guess that it may be NVIDIA.

 

KeepItRed

Senior member
Jul 19, 2005
811
0
0
Originally posted by: Cooler
OK, they underclocked that card. :confused: Why not just OC the GTX to ATI's speed? Maybe because they can't without extreme cooling, thus showing the strong point of R520: extreme clock speeds.