hans030390
Diamond Member
- Feb 3, 2005
- 7,326
- 2
- 76
I thought the article was very interesting. Not exactly sure what it all means, but it's interesting.
How is it an error when at the moment it seems to give generally better performance? Even if it is only a moderate improvement?
Originally posted by: jam3
How is it an error when at the moment it seems to give generally better performance? Even if it is only a moderate improvement?
Power usage, plain and simple. The X1800XT draws approximately 50 more watts than the 7800GTX, and if you put them both on the same coolers I'd bet there would be a more noticeable gap in heat as well (I'd like to see some cooling tests). This is very reminiscent of AMD vs. Intel: Nvidia, like AMD, runs at lower MHz with more pipes, less power, and less heat, while ATI, like Intel, runs fewer pipes at higher MHz with more power and more heat.
Originally posted by: Matt2
Originally posted by: munky
Originally posted by: Matt2
Originally posted by: munky
Originally posted by: DLeRium
Originally posted by: LTC8K6
So, what will ATI have if they get 20 or 24 "pixel processors" going considering they are doing pretty well with just 16?
They're not doing pretty well with 16. NV can just clock higher and then they would be doing pretty well. This bench shows NV has the upper hand because of higher efficiency. Many of us, including me, were fooled into thinking the R520 would be more efficient with only 16 pipes, but in reality its performance tracks clock speed, so the R520 doesn't make up the difference with more efficient pipes; it's just clocked faster.
I disagree. If anything, it's the r520 that has more efficient "pipes". If you multiply the number of pipes times the clock frequency, both cards have roughly 10 gigatexels per second fillrate, with the gtx having a bit more.
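For reference, the pipes-times-clock math works out like this. Just a back-of-the-envelope sketch, using the commonly cited stock clocks and pipe counts for these two cards, not anything measured:

```python
# Theoretical texel fillrate: pipes x core clock (1 texel per pipe per clock).
# Clocks and pipe counts below are the usual stock specs, assumed, not measured.
def fillrate_gtexels(pipes, core_mhz):
    """Theoretical fillrate in gigatexels per second."""
    return pipes * core_mhz / 1000.0

r520 = fillrate_gtexels(16, 625)   # X1800XT: 16 pipes @ 625 MHz
g70  = fillrate_gtexels(24, 430)   # 7800GTX: 24 pipes @ 430 MHz

print(f"X1800XT: {r520:.2f} Gtexels/s")  # 10.00
print(f"7800GTX: {g70:.2f} Gtexels/s")   # 10.32
```

Both land right around 10 gigatexels/s, with the GTX slightly ahead, which is exactly why the raw-fillrate argument alone can't settle the efficiency debate.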
http://www.techreport.com/reviews/2005q4/radeon-x1000/index.x?pg=14
If you look at the above shadermark benches, you can see that each card wins some and loses some. But then look at the second chart, showing flow control (aka dynamic branching) performance, and you can definitely see which card is more efficient running the hyped SM3 feature.
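The flow-control gap makes sense if you think about branch granularity: these GPUs decide branches per batch of pixels, so if any pixel in a batch takes the expensive path, the whole batch pays for it. The R520 is reported to use small pixel batches while the G70's are much larger. Here's a toy simulation of that effect; the batch sizes, 5% branch rate, and 1-vs-10 op shader costs are purely illustrative numbers, not the real hardware figures:

```python
import random

def shaded_cost(batch_size, frac_expensive, n_pixels=1_000_000, seed=0):
    """Total shader ops when a whole batch must take the expensive path
    if ANY pixel in it does. cheap path = 1 op, expensive = 10 ops (made up)."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_pixels // batch_size):
        # Does any pixel in this batch branch to the expensive shader path?
        expensive = any(rng.random() < frac_expensive for _ in range(batch_size))
        total += batch_size * (10 if expensive else 1)
    return total

small = shaded_cost(16, 0.05)    # fine-grained batches (R520-like)
large = shaded_cost(1024, 0.05)  # coarse batches (G70-like)
print(small < large)  # small batches skip the expensive path far more often
```

With coarse batches, almost every batch contains at least one expensive pixel, so dynamic branching buys you almost nothing; fine-grained batches actually get to skip work.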
I think you're wrong, unless I am misunderstanding you.
With the help of rivatuner we were able to ensure the card had 8 pipelines disabled while leaving the 8 vertex units intact.
With the same 450/1000 clocks and BOTH cards using 16 pipes, the G70 clearly beat the R520 in the games that were benched. That's not to say it will happen in all games, but even in CS:VST, an engine that favors ATI, the G70 beat the R520, and beat it soundly; no 1% victory here. Even in 3DMark, where the R520 beats the GTX in every comparison I have seen, it loses to the G70 by almost 500 3DMarks.
It seems pretty clear to me that when pipes and clocks are the same, Nvidia does have the faster design.
I meant comparing the performance and efficiency of the cards as they are, without downclocking or disabling pipes. The whole topic of video card efficiency is pretty pointless since you're comparing different games, coded in different ways, running on different hardware that favors some methods over others. What I was referring to is the specific case of running shaders with flow control.
Hey, with that system of yours, why even bother with this thread? Shouldn't you be helping John Carmack code Doom 20???
Originally posted by: Cooler
OK, they underclocked that card. Why not just OC the GTX to ATI speeds instead? Maybe because they can't without extreme cooling. That shows the strong point of the R520: extreme clock speeds.
Originally posted by: crazySOB297
I find the overclocking potential of these cards insane though... anyone see what Macci did with them? He beat a 7800GTX SLI rig with a single X1800XT using his custom cooling. Some people are rumoring over 10k in 3DMark on air with the XTs... that's nuts.
Originally posted by: jiffylube1024
I don't think Doom3 shows which has a more efficient core, since Doom3 is such a wash for Nvidia cards. Sure, it shows how good NV cards are at heavy stencil shadow loads, but in terms of efficiency, those numbers are in line with full-clockspeed comparisons as well.
Originally posted by: IeraseU
The test doesn't even show AA or AF filtering. IMO this is where ATI cards excel; they seem to take less of a hit under those conditions.
Originally posted by: keysplayr2003
Originally posted by: jiffylube1024
I don't think Doom3 shows which has a more efficient core, since Doom3 is such a wash for Nvidia cards. Sure, it shows how good NV cards are at heavy stencil shadow loads, but in terms of efficiency, those numbers are in line with full-clockspeed comparisons as well.
Well, I did mention that more game testing was needed. But for the sake of your comment, remove Doom3 completely from the test. And I'm not quite sure what you mean by "those numbers are in line with full clockspeed comparisons as well." What exactly do you mean?
Originally posted by: jiffylube1024
Originally posted by: keysplayr2003
Originally posted by: jiffylube1024
I don't think Doom3 shows which has a more efficient core, since Doom3 is such a wash for Nvidia cards. Sure, it shows how good NV cards are at heavy stencil shadow loads, but in terms of efficiency, those numbers are in line with full-clockspeed comparisons as well.
Well, I did mention that more game testing was needed. But for the sake of your comment, remove Doom3 completely from the test. And I'm not quite sure what you mean by "those numbers are in line with full clockspeed comparisons as well." What exactly do you mean?
I just meant that the difference in framerate between the two cards in Doom3 was pretty similar to the X1800XT running at 625 MHz and the 7800GTX running @ 24 pipes.
I do agree that in order to make this comparison make any sense, a lot more tests would need to be run.
Originally posted by: IeraseU
The test doesn't even show AA or AF filtering. IMO this is where ATI cards excel; they seem to take less of a hit under those conditions.
^ This would no longer be the case, as the ATI card has been stripped of a third of its memory bandwidth (running at 1 GHz vs 1.5 GHz on a stock X1800XT), while the NVidia card has lost only about 17% of its memory bandwidth, dropping from 1.2 GHz to 1 GHz.
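Quick sanity check on those bandwidth numbers, using the effective (DDR) memory clocks mentioned in the thread; since both cards have the same bus width, the fraction lost depends only on the clocks:

```python
# Fraction of memory bandwidth each card gives up in the clock-for-clock test.
# Effective memory clocks are the stock figures cited in this thread.
def bandwidth_loss(stock_mhz, test_mhz):
    """Fraction of bandwidth lost by downclocking (bus width cancels out)."""
    return (stock_mhz - test_mhz) / stock_mhz

ati = bandwidth_loss(1500, 1000)  # X1800XT: 1.5 GHz -> 1.0 GHz
nv  = bandwidth_loss(1200, 1000)  # 7800GTX: 1.2 GHz -> 1.0 GHz
print(f"ATI loses {ati:.0%}, NVidia loses {nv:.0%}")  # ATI loses 33%, NVidia loses 17%
```

So the downclock hits the ATI card roughly twice as hard on the memory side, which matters most exactly where ATI usually shines: with AA/AF enabled.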
So, while I do admit that a full gamut of tests at these speeds would be interesting for speculation, aside from an e-penis comparison within an e-penis comparison ("my 7800GTX is faster than your X1800XT, plus it's more efficient clock-per-clock!"), I think the test is, on the whole, trivial. Interesting - yes. But trivial nonetheless.
Unless of course NVIDIA does come out with a higher clocked card. The 512 GTX is already rumored to have better memory, so this comparison could give us a hint as to how well it will perform.
Originally posted by: jiffylube1024
So, while I do admit that a full gamut of tests at these speeds would be interesting for speculation, aside from an e-penis comparison within an e-penis comparison ("my 7800GTX is faster than your X1800XT, plus it's more efficient clock-per-clock!"), I think the test is, on the whole, trivial. Interesting - yes. But trivial nonetheless.
Originally posted by: Wreckage
Unless of course NVIDIA does come out with a higher clocked card. The 512 GTX is already rumored to have better memory, so this comparison could give us a hint as to how well it will perform.
Originally posted by: jiffylube1024
So, while I do admit that a full gamut of tests at these speeds would be interesting for speculation, aside from an e-penis comparison within an e-penis comparison ("my 7800GTX is faster than your X1800XT, plus it's more efficient clock-per-clock!"), I think the test is, on the whole, trivial. Interesting - yes. But trivial nonetheless.
Originally posted by: XabanakFanatik
How stupid can I be, with my first post on this forum?
(post meant for 7800GTX vs. X1800XT)
Is there any way to delete a post?
Crazy.