G70 vs. R520: All things being equal


Lonyo

Lifer
Aug 10, 2002
21,938
6
81
Originally posted by: jiffylube1024
Originally posted by: Rage187
Originally posted by: Cooler

Thus showing the strong point of the R520: extreme clock speeds.

That's a weakness, genius.

It's about working smarter, not harder; MHz and GHz mean crap. If you have to run at a higher clock rate to keep up, that means your architecture sucks.

No it doesn't, it means you have 8 fewer pipelines ;) . It's called wasting less die space on pipelines you don't need if you can make do with fewer at a higher clock speed, so the number of cores per wafer increases. It's just a different way to do things. Neither approach is 'right' or 'wrong,' and Nvidia's approach certainly isn't 'smarter' if ATI can afford to use less die space for a competing card (provided they can yield more working cores per wafer, which is an entirely separate discussion). This is just to show you that there's a flipside to that coin.
It also means having faster vertex shaders, since both cards have the same number (8), but one runs at higher clock speeds.
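
For the die-space point, here is a quick sketch using the classic dies-per-wafer approximation; the thread doesn't give the real R520/G70 die areas, so the figures below are hypothetical and purely for illustration:

```python
import math

def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    """Classic approximation: usable wafer area minus edge losses."""
    d = wafer_diameter_mm
    return int(math.pi * (d / 2) ** 2 / die_area_mm2
               - math.pi * d / math.sqrt(2 * die_area_mm2))

# Hypothetical die areas on a 300mm wafer, purely for illustration.
print(dies_per_wafer(300, 280))  # smaller die -> more candidate dies (~212)
print(dies_per_wafer(300, 340))  # larger die  -> fewer candidate dies (~171)
```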
 

Matt2

Diamond Member
Jul 28, 2001
4,762
0
0
Originally posted by: LTC8K6
So, what will ATI have if they get 20 or 24 "pixel processors" going considering they are doing pretty well with just 16?

R520 is 90nm while G70 is 110nm. The die shrink is what made the high clock speeds possible, not the lack of pipes. When Nvidia shrinks to 90nm, they will be able to raise clock speeds just like ATI did. Since yields are (supposedly) already bad with 16 pipes, I think ATI might even have to lower the clock speed a little when they put more pipes in, to compensate for the added transistors.
 

crazydingo

Golden Member
May 15, 2005
1,134
0
0
Originally posted by: DLeRium
Originally posted by: LTC8K6
So, what will ATI have if they get 20 or 24 "pixel processors" going considering they are doing pretty well with just 16?

They're not doing that well with 16. NV can just clock higher and then they would be doing pretty well. This bench shows NV has the upper hand because of higher efficiency. Many of us, including me, were fooled into thinking the R520 would be more efficient with only 16 pipes, but in reality performance responds more directly to clock speed, so the R520 doesn't make up ground with more efficient pipes, just with faster-clocked parts....
You are clueless. :laugh:

G70 isn't more efficient. It has more pixel pipelines, hence it will process more.

R520 has the more efficient pipelines. Compare the X1800 XL and the X850 XT PE: same clocks and pipeline count, but better performance.

We can only say G70 is more efficient once G71 (16 pipelines) is released, and only if it turns out faster than a similarly clocked 6800GT.
 

Munky

Diamond Member
Feb 5, 2005
9,372
0
76
Originally posted by: DLeRium
Originally posted by: LTC8K6
So, what will ATI have if they get 20 or 24 "pixel processors" going considering they are doing pretty well with just 16?

They're not doing that well with 16. NV can just clock higher and then they would be doing pretty well. This bench shows NV has the upper hand because of higher efficiency. Many of us, including me, were fooled into thinking the R520 would be more efficient with only 16 pipes, but in reality performance responds more directly to clock speed, so the R520 doesn't make up ground with more efficient pipes, just with faster-clocked parts....

I disagree. If anything, it's the R520 that has the more efficient "pipes". If you multiply the number of pipes by the clock frequency, both cards have roughly 10 gigatexels per second of fillrate, with the GTX having a bit more.

http://www.techreport.com/reviews/2005q4/radeon-x1000/index.x?pg=14
If you look at the above ShaderMark benches, you can see that each card wins some and loses some. But then look at the second chart, showing flow control (aka dynamic branching) performance, and you can definitely see which card is more efficient at running the hyped SM3 feature.
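
For concreteness, a minimal sketch of that pipes-times-clock arithmetic, using the stock specs cited later in the thread (16 pipes at 625MHz for the X1800 XT, 24 pipes at 430MHz for the 7800 GTX):

```python
def texel_fillrate_gt(pipes, core_mhz):
    """Theoretical bilinear texel fillrate in gigatexels per second."""
    return pipes * core_mhz / 1000.0

print(f"X1800 XT: {texel_fillrate_gt(16, 625):.2f} GT/s")  # 10.00
print(f"7800 GTX: {texel_fillrate_gt(24, 430):.2f} GT/s")  # 10.32
```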
 

sandorski

No Lifer
Oct 10, 1999
70,783
6,341
126
Meh.

What if Romans had the Internal Combustion Engine?

What if Martians invaded Earth and had only sticks for Weapons?

Though I suppose there's some value in understanding how ATI/Nvidia accomplish their end results, declaring one method the best hardly matters in the real world.
 

LTC8K6

Lifer
Mar 10, 2004
28,520
1,576
126
I'll rephrase the question.

Theoretically, if ATI could snap its fingers and drop a 20- or 24-"pixel processor" card in stores next week running at current X1800XT clocks, would it kill the G70 series?
 

Munky

Diamond Member
Feb 5, 2005
9,372
0
76
Originally posted by: LTC8K6
I'll rephrase the question.

Theoretically, if ATI could snap its fingers and drop a 20- or 24-"pixel processor" card in stores next week running at current X1800XT clocks, would it kill the G70 series?

That depends on whether Nvidia can snap its fingers the week after and magically get its 24-pipe GTX to run at 625MHz on air.
 

LTC8K6

Lifer
Mar 10, 2004
28,520
1,576
126
Well, I was wondering where ATI would be if they had worked this just a little bit better. So NV's magic wouldn't apply, since their card was already out and finished. :D
 

Matt2

Diamond Member
Jul 28, 2001
4,762
0
0
Originally posted by: munky
Originally posted by: DLeRium
Originally posted by: LTC8K6
So, what will ATI have if they get 20 or 24 "pixel processors" going considering they are doing pretty well with just 16?

They're not doing that well with 16. NV can just clock higher and then they would be doing pretty well. This bench shows NV has the upper hand because of higher efficiency. Many of us, including me, were fooled into thinking the R520 would be more efficient with only 16 pipes, but in reality performance responds more directly to clock speed, so the R520 doesn't make up ground with more efficient pipes, just with faster-clocked parts....

I disagree. If anything, it's the R520 that has the more efficient "pipes". If you multiply the number of pipes by the clock frequency, both cards have roughly 10 gigatexels per second of fillrate, with the GTX having a bit more.

http://www.techreport.com/reviews/2005q4/radeon-x1000/index.x?pg=14
If you look at the above ShaderMark benches, you can see that each card wins some and loses some. But then look at the second chart, showing flow control (aka dynamic branching) performance, and you can definitely see which card is more efficient at running the hyped SM3 feature.

I think you're wrong, unless I am misunderstanding you.

With the help of RivaTuner we were able to ensure the card had 8 pipelines disabled while leaving the 8 vertex units intact.

With the same 450/1000 clocks and BOTH cards using 16 pipes, the G70 clearly beat the R520 in the games that were benched. That's not to say it will happen in all games, but even in CS:VST, an engine that favors ATI, the G70 beat the R520, and beat it pretty soundly; no 1% victory here. Even in 3DMark, where the R520 beats the GTX in every comparison I have seen, it loses to the G70 by almost 500 3DMarks.

It seems pretty clear to me that when pipes and clocks are the same, Nvidia has the faster design.
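
The same "all things being equal" idea can be approximated without physically downclocking anything by normalizing a score per pipe per MHz. A crude sketch; the fps values below are hypothetical placeholders, not the review's numbers:

```python
def per_pipe_per_mhz(fps, pipes, core_mhz):
    """Crude efficiency metric: frames per second per pipeline per MHz."""
    return fps / (pipes * core_mhz)

# Hypothetical fps values purely for illustration, both cards forced to
# 16 pipes at 450 MHz as in the equal-footing test described above.
g70  = per_pipe_per_mhz(fps=60.0, pipes=16, core_mhz=450)
r520 = per_pipe_per_mhz(fps=54.0, pipes=16, core_mhz=450)
print(f"G70:  {g70:.5f} fps per pipe-MHz")
print(f"R520: {r520:.5f} fps per pipe-MHz")
```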
 

CPlusPlusGeek

Banned
Oct 7, 2005
241
0
0
Originally posted by: munky
Originally posted by: LTC8K6
I'll rephrase the question.

Theoretically, if ATI could snap its fingers and drop a 20- or 24-"pixel processor" card in stores next week running at current X1800XT clocks, would it kill the G70 series?

That depends on whether Nvidia can snap its fingers the week after and magically get its 24-pipe GTX to run at 625MHz on air.


lol WORD
 

coldpower27

Golden Member
Jul 18, 2004
1,676
0
76
Yeah, the comparison here is interesting, as it isn't as clear-cut as NetBurst (P7) vs. K8. With those it's obvious who adopted which philosophy.

Though why lower the memory clock so far, to 1.0GHz? The 7800 GTX ships with 1.2GHz memory, so why not bring the X1800 XT's memory down from 1.5GHz to 1.2GHz instead?

 

sbuckler

Senior member
Aug 11, 2004
224
0
0
What's particularly impressive is that the 7800 uses a similar total transistor count to the X1800, so when comparing 16 pipes to 16 pipes the 7800 is effectively using quite a few fewer transistors (it doesn't need the extra 8 pipes' worth), yet it is still faster clock for clock.
 

Genx87

Lifer
Apr 8, 2002
41,091
513
126
Originally posted by: CPlusPlusGeek
Apparently the X1800 idles in the upper 80s :Q

I don't believe ATI has the power-saving features Nvidia does. They can't set separate 2D and 3D clocks the way Nvidia has been able to for 34 months now.

 

LTC8K6

Lifer
Mar 10, 2004
28,520
1,576
126
So is ATI making the mistake of favoring clock speed over pipe count, like with the 9600XT?
 

coldpower27

Golden Member
Jul 18, 2004
1,676
0
76
Originally posted by: munky
Originally posted by: DLeRium
Originally posted by: LTC8K6
So, what will ATI have if they get 20 or 24 "pixel processors" going considering they are doing pretty well with just 16?

They're not doing that well with 16. NV can just clock higher and then they would be doing pretty well. This bench shows NV has the upper hand because of higher efficiency. Many of us, including me, were fooled into thinking the R520 would be more efficient with only 16 pipes, but in reality performance responds more directly to clock speed, so the R520 doesn't make up ground with more efficient pipes, just with faster-clocked parts....

I disagree. If anything, it's the R520 that has the more efficient "pipes". If you multiply the number of pipes by the clock frequency, both cards have roughly 10 gigatexels per second of fillrate, with the GTX having a bit more.

http://www.techreport.com/reviews/2005q4/radeon-x1000/index.x?pg=14
If you look at the above ShaderMark benches, you can see that each card wins some and loses some. But then look at the second chart, showing flow control (aka dynamic branching) performance, and you can definitely see which card is more efficient at running the hyped SM3 feature.

While the bilinear texel fillrates of the two cards are close, 10.0 GT/s for the X1800 XT vs. 10.32 GT/s for the 7800 GTX, their output pixel fillrates are not: 10.0 GP/s for the X1800 XT vs. 6.88 GP/s for the 7800 GTX.

Though from the information gathered so far, I would agree that ATI's R520 dynamic flow control is a better implementation than the one in G70.

R580 vs. G72 will be interesting.

If we assume R580 is a 24-pipe part, what method will Nvidia use to increase G72's performance: move to 32 pipes with mildly higher clocks, or keep 24 pipes and raise clocks considerably?
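
Those figures line up with the unit counts: the 7800 GTX textures with 24 pipes but writes pixels through only 16 ROPs, while the X1800 XT has 16 of each, so at 625MHz vs. 430MHz the pixel-output gap opens up. A small sketch, assuming those unit counts:

```python
def fillrate(units, core_mhz):
    """Fillrate in giga-operations per second for `units` at `core_mhz`."""
    return units * core_mhz / 1000.0

# X1800 XT: 16 pixel pipes and 16 ROPs at 625 MHz
print(f"X1800 XT texel: {fillrate(16, 625):.2f} GT/s")  # 10.00
print(f"X1800 XT pixel: {fillrate(16, 625):.2f} GP/s")  # 10.00

# 7800 GTX: 24 pixel pipes but only 16 ROPs at 430 MHz
print(f"7800 GTX texel: {fillrate(24, 430):.2f} GT/s")  # 10.32
print(f"7800 GTX pixel: {fillrate(16, 430):.2f} GP/s")  #  6.88
```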
 

KoolDrew

Lifer
Jun 30, 2004
10,226
7
81
Originally posted by: Gamingphreek
While it's an interesting test, it really has no bearing. Different architectures favor different things; setting all things equal is not the way to do it. Who on earth would buy a card and underclock it for that purpose? Comparisons should be done at stock settings; when you mess with those, you don't get an accurate representation.

-Kevin

 

Munky

Diamond Member
Feb 5, 2005
9,372
0
76
Originally posted by: Matt2
Originally posted by: munky
Originally posted by: DLeRium
Originally posted by: LTC8K6
So, what will ATI have if they get 20 or 24 "pixel processors" going considering they are doing pretty well with just 16?

They're not doing that well with 16. NV can just clock higher and then they would be doing pretty well. This bench shows NV has the upper hand because of higher efficiency. Many of us, including me, were fooled into thinking the R520 would be more efficient with only 16 pipes, but in reality performance responds more directly to clock speed, so the R520 doesn't make up ground with more efficient pipes, just with faster-clocked parts....

I disagree. If anything, it's the R520 that has the more efficient "pipes". If you multiply the number of pipes by the clock frequency, both cards have roughly 10 gigatexels per second of fillrate, with the GTX having a bit more.

http://www.techreport.com/reviews/2005q4/radeon-x1000/index.x?pg=14
If you look at the above ShaderMark benches, you can see that each card wins some and loses some. But then look at the second chart, showing flow control (aka dynamic branching) performance, and you can definitely see which card is more efficient at running the hyped SM3 feature.

I think you're wrong, unless I am misunderstanding you.

With the help of RivaTuner we were able to ensure the card had 8 pipelines disabled while leaving the 8 vertex units intact.

With the same 450/1000 clocks and BOTH cards using 16 pipes, the G70 clearly beat the R520 in the games that were benched. That's not to say it will happen in all games, but even in CS:VST, an engine that favors ATI, the G70 beat the R520, and beat it pretty soundly; no 1% victory here. Even in 3DMark, where the R520 beats the GTX in every comparison I have seen, it loses to the G70 by almost 500 3DMarks.

It seems pretty clear to me that when pipes and clocks are the same, Nvidia has the faster design.

I meant comparing the performance and efficiency of the cards as they are, without downclocking or disabling pipes. The whole topic of video card efficiency is pretty pointless anyway, since you're comparing different games coded in different ways, running on different hardware that favors some methods over others. What I was referring to is the specific case of running shaders with flow control.
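
On why flow-control performance differs so much between the two designs: a commonly cited factor is branch granularity, i.e. the GPU shades pixels in batches, and a batch only skips a branch if every pixel in it agrees; smaller batches waste less work. A toy cost model, where the batch sizes and costs are illustrative assumptions rather than official vendor figures:

```python
import random

def shading_cost(num_pixels, batch_size, p_taken):
    """Toy model: average per-pixel shading cost with dynamic branching.

    A batch skips a branch only if every pixel in it takes the same path;
    a divergent batch pays for both paths (cost 1.0 each, for simplicity).
    """
    total = 0.0
    for _ in range(num_pixels // batch_size):
        taken = [random.random() < p_taken for _ in range(batch_size)]
        if all(taken) or not any(taken):
            total += batch_size * 1.0   # coherent batch: one path only
        else:
            total += batch_size * 2.0   # divergent batch: both paths
    return total / num_pixels

random.seed(0)
# Say 95% of pixels could skip an expensive path (e.g. unshadowed pixels).
print(shading_cost(1 << 16, batch_size=16,   p_taken=0.95))  # ~1.6: often skips
print(shading_cost(1 << 16, batch_size=1024, p_taken=0.95))  # ~2.0: rarely skips
```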
 

Matt2

Diamond Member
Jul 28, 2001
4,762
0
0
Originally posted by: munky
Originally posted by: Matt2
Originally posted by: munky
Originally posted by: DLeRium
Originally posted by: LTC8K6
So, what will ATI have if they get 20 or 24 "pixel processors" going considering they are doing pretty well with just 16?

They're not doing that well with 16. NV can just clock higher and then they would be doing pretty well. This bench shows NV has the upper hand because of higher efficiency. Many of us, including me, were fooled into thinking the R520 would be more efficient with only 16 pipes, but in reality performance responds more directly to clock speed, so the R520 doesn't make up ground with more efficient pipes, just with faster-clocked parts....

I disagree. If anything, it's the R520 that has the more efficient "pipes". If you multiply the number of pipes by the clock frequency, both cards have roughly 10 gigatexels per second of fillrate, with the GTX having a bit more.

http://www.techreport.com/reviews/2005q4/radeon-x1000/index.x?pg=14
If you look at the above ShaderMark benches, you can see that each card wins some and loses some. But then look at the second chart, showing flow control (aka dynamic branching) performance, and you can definitely see which card is more efficient at running the hyped SM3 feature.

I think you're wrong, unless I am misunderstanding you.

With the help of RivaTuner we were able to ensure the card had 8 pipelines disabled while leaving the 8 vertex units intact.

With the same 450/1000 clocks and BOTH cards using 16 pipes, the G70 clearly beat the R520 in the games that were benched. That's not to say it will happen in all games, but even in CS:VST, an engine that favors ATI, the G70 beat the R520, and beat it pretty soundly; no 1% victory here. Even in 3DMark, where the R520 beats the GTX in every comparison I have seen, it loses to the G70 by almost 500 3DMarks.

It seems pretty clear to me that when pipes and clocks are the same, Nvidia has the faster design.

I meant comparing the performance and efficiency of the cards as they are, without downclocking or disabling pipes. The whole topic of video card efficiency is pretty pointless anyway, since you're comparing different games coded in different ways, running on different hardware that favors some methods over others. What I was referring to is the specific case of running shaders with flow control.


Hey, with that system of yours, why even bother with this thread? Shouldn't you be helping John Carmack code Doom 20???
 

Ronin

Diamond Member
Mar 3, 2001
4,563
1
0
server.counter-strike.net
Originally posted by: KoolDrew
Originally posted by: Gamingphreek
While it's an interesting test, it really has no bearing. Different architectures favor different things; setting all things equal is not the way to do it. Who on earth would buy a card and underclock it for that purpose? Comparisons should be done at stock settings; when you mess with those, you don't get an accurate representation.

-Kevin

Perhaps an objective point of view, to see where each company's strengths lie? Stop looking at this from a competitive point of view and look at it from a technological one.
 

Gamingphreek

Lifer
Mar 31, 2003
11,679
0
81
Originally posted by: Ronin
Originally posted by: KoolDrew
Originally posted by: Gamingphreek
While it's an interesting test, it really has no bearing. Different architectures favor different things; setting all things equal is not the way to do it. Who on earth would buy a card and underclock it for that purpose? Comparisons should be done at stock settings; when you mess with those, you don't get an accurate representation.

-Kevin

Perhaps an objective point of view, to see where each company's strengths lie? Stop looking at this from a competitive point of view and look at it from a technological one.

Very good point. Objective, rather than competitive.

-Kevin
 

milomnderbnder21

Junior Member
May 13, 2005
9
0
0
How is it an error when, at the moment, it seems to give generally better performance, even if it is only a moderate improvement?
 

jiffylube1024

Diamond Member
Feb 17, 2002
7,430
0
71
Originally posted by: LTC8K6
I'll rephrase the question.

Theoretically, if ATI could snap its fingers and drop a 20- or 24-"pixel processor" card in stores next week running at current X1800XT clocks, would it kill the G70 series?

How can you not know the answer to this? Yes! If the X1800XT is already about 10% faster (a rough average across the benchmarks) than the 24-pipe 7800GTX, how do you think it would do with 20 or 24 pipelines? It would destroy the G70!

If Nvidia shrank to 90nm tomorrow and got 625MHz out of their cores, their 24-pipe 7800GTX LOLWTFBBQ would absolutely demolish the X1800XT.

But what's the point of these what-ifs?
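
For what the hypothetical is worth, here is the naive linear-scaling arithmetic that argument implies; real performance never scales this cleanly with pipe count, so treat it as an optimistic upper bound:

```python
def naive_pipe_scale(base_perf, base_pipes, new_pipes):
    """Assume performance scales linearly with pipe count (optimistic)."""
    return base_perf * new_pipes / base_pipes

x1800xt = 1.10  # "about 10% faster" than a stock 7800 GTX, per the post
print(f"20 pipes: {naive_pipe_scale(x1800xt, 16, 20):.2f}x a stock GTX")  # 1.38x
print(f"24 pipes: {naive_pipe_scale(x1800xt, 16, 24):.2f}x a stock GTX")  # 1.65x
```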
 

Hacp

Lifer
Jun 8, 2005
13,923
2
81
Hmm, funny to see people arguing when the result is clear. I see no gaming benchmarks where ATI won.