*edit* FEAR benchmarks, more X1900 details and some G71 details


Gstanfor

Banned
Oct 19, 1999
3,307
0
0
Originally posted by: 1Dark1Sharigan1
Originally posted by: TecHNooB

Are you sure... you're Chinese, right?

Yes and yes . . .

Back on topic . . . I'm still finding it hard to believe nVidia would need 700 MHz with 32 pipes. I mean, sure, if they're getting amazing yields with 90nm then great, but still . . .

700 MHz is pretty much par for the course (very easily achieved) for 90nm processes, so if nVidia can easily hit 700 MHz, why wouldn't they take advantage of it?
 

1Dark1Sharigan1

Golden Member
Oct 5, 2005
1,466
0
0
Originally posted by: Gstanfor
700 MHz is pretty much par for the course (very easily achieved) for 90nm processes, so if nVidia can easily hit 700 MHz, why wouldn't they take advantage of it?

I suppose. Usually both nVidia and ATI only clock as high as they need to, so it seems to me that nVidia must think it needs 700 MHz in order to stay competitive/beat the R580 . . .
 

Pete

Diamond Member
Oct 10, 1999
4,953
0
0
Originally posted by: Gstanfor
It seems to me that a ROP increase would go well with a rumored reworking of G71's AA capabilities.

It's possible it may be the FP blending units that have increased - I'm just not sure how much of a priority this is at the moment - seems to me that would be a next-gen improvement (probably bringing HDR AA along with it).
Yeah, but I wonder if 24 ROPs is getting to be too much for the memory bandwidth, especially with 32 pixel pipes/texturing units competing, too.

It's possible RSX is G71 with a quad disabled, for yields (just like C1 is supposed to have a 16-ALU "pipe" disabled). 24 ROPs sounds absurd for 128-bit memory, though. Can they disable ROPs? If not, then RSX is probably a separate chip, but perhaps like G71 in its total quads.
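A rough back-of-envelope sketch of the bandwidth concern raised here. All figures are illustrative assumptions (32-bit color writes, a 256-bit bus at 1700 MT/s effective, the rumored 700 MHz core), not confirmed G71/RSX specs:

# Can 24 ROPs out-run a 256-bit GDDR3 bus? (illustrative numbers only)
rops = 24
core_clock_hz = 700e6          # rumored core clock (assumption)
bytes_per_pixel = 4            # 32-bit color write; Z/AA traffic ignored

rop_demand_gbs = rops * core_clock_hz * bytes_per_pixel / 1e9    # ~67.2 GB/s

bus_width_bytes = 256 / 8      # 256-bit memory bus (assumption)
mem_rate = 1700e6              # effective transfer rate in T/s (assumption)
available_gbs = bus_width_bytes * mem_rate / 1e9                 # ~54.4 GB/s

print(f"ROP color-write demand: {rop_demand_gbs:.1f} GB/s")
print(f"Memory bandwidth:       {available_gbs:.1f} GB/s")

Even ignoring texture and Z traffic, the color writes alone could exceed the bus under these assumptions, which is exactly the imbalance being questioned.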
 

Gstanfor

Banned
Oct 19, 1999
3,307
0
0
Originally posted by: Pete
Originally posted by: Gstanfor
It seems to me that a ROP increase would go well with a rumored reworking of G71's AA capabilities.

It's possible it may be the FP blending units that have increased - I'm just not sure how much of a priority this is at the moment - seems to me that would be a next-gen improvement (probably bringing HDR AA along with it).
Yeah, but I wonder if 24 ROPs is getting to be too much for the memory bandwidth, especially with 32 pixel pipes/texturing units competing, too.

It's possible RSX is G71 with a quad disabled, for yields (just like C1 is supposed to have a 16-ALU "pipe" disabled). 24 ROPs sounds absurd for 128-bit memory, though. Can they disable ROPs? If not, then RSX is probably a separate chip, but perhaps like G71 in its total quads.

I'm sure they would be capable of disabling ROPs - I'm pretty sure things have moved on from just the relatively crude shutting down of an entire quad. I'm sure there is more to nVidia's memory controller than what they let on (wouldn't surprise me if the various units in the pipelines were crossbarred for load balancing). NV40 had no problem with 16 ROPs - a 1:1 ratio between them and the texturing/shading units - so if the chip architecture is properly balanced and the memory fast enough, there shouldn't be a problem.
 

5150Joker

Diamond Member
Feb 6, 2002
5,549
0
71
www.techinferno.com
I highly doubt a 700-750 MHz 32 pipe/24 ROP card is going to be as easy for nVidia to make as Gstanfor claims. If attaining those speeds was so easy at 90nm, then ATi would have already broken the 700 MHz barrier with the X1800XT. Yeah, different architectures, but the process is still the same. nVidia might be able to crack 700 MHz, but only for a special edition (e.g. press edition 512 GTX style) card available in very limited quantities - mark my words.
 

Gstanfor

Banned
Oct 19, 1999
3,307
0
0
Originally posted by: 5150Joker
I highly doubt a 700-750 MHz 32 pipe/24 ROP card is going to be as easy for nVidia to make as Gstanfor claims. If attaining those speeds was so easy at 90nm, then ATi would have already broken the 700 MHz barrier with the X1800XT. Yeah, different architectures, but the process is still the same. nVidia might be able to crack 700 MHz, but only for a special edition (e.g. press edition 512 GTX style) card available in very limited quantities - mark my words.

I think you will find nVidia being more conservative with the process than ATi.

Also bear in mind the G71 can be thought of as "wide, shallow & relatively simple" with regard to its pipelines, whereas ATi's pipelines have significantly more complexity to them.

I thought ATi was trumpeting about how card manufacturers could increase the default clocks on R520 if they wanted. Up until R520, ATi was making quite a point of their high 90nm clock speeds (much parroted by the fanATics), so why do you claim differently?
 

5150Joker

Diamond Member
Feb 6, 2002
5,549
0
71
www.techinferno.com
Originally posted by: Gstanfor
Originally posted by: 5150Joker
I highly doubt a 700-750 MHz 32 pipe/24 ROP card is going to be as easy for nVidia to make as Gstanfor claims. If attaining those speeds was so easy at 90nm, then ATi would have already broken the 700 MHz barrier with the X1800XT. Yeah, different architectures, but the process is still the same. nVidia might be able to crack 700 MHz, but only for a special edition (e.g. press edition 512 GTX style) card available in very limited quantities - mark my words.

I think you will find nVidia being more conservative with the process than ATi.

Also bear in mind the G71 can be thought of as "wide, shallow & relatively simple" with regard to its pipelines, whereas ATi's pipelines have significantly more complexity to them.

I thought ATi was trumpeting about how card manufacturers could increase the default clocks on R520 if they wanted. Up until R520, ATi was making quite a point of their high 90nm clock speeds, so why do you claim differently?


ATi simply stated they would allow 3rd party vendors to produce OC'd versions but not support them with a warranty; how again is that "trumpeting"? Like I said, if ATi could not reach 700 MHz stock for a 16 pipe/16 ROP card, good luck to nVidia hitting 700-750 with a 32 pipe/24 ROP card. If nVidia even produces a 32 pipe card (all speculation at this point), I'm betting the stock speed won't be above 500 MHz.
 

TSS

Senior member
Nov 14, 2005
227
0
0
The way this is going, everybody will be let down by the time it gets out, since it's being hyped more and more, and clocks seem to increase with every new rumor... and even if it's as powerful as the rumors say it will be, it frightens me to think what kind of monstrous beast the G80/R600 will be then.
 

DeathReborn

Platinum Member
Oct 11, 2005
2,786
789
136
Originally posted by: 5150Joker
Originally posted by: Gstanfor
Originally posted by: 5150Joker
I highly doubt a 700-750 MHz 32 pipe/24 ROP card is going to be as easy for nVidia to make as Gstanfor claims. If attaining those speeds was so easy at 90nm, then ATi would have already broken the 700 MHz barrier with the X1800XT. Yeah, different architectures, but the process is still the same. nVidia might be able to crack 700 MHz, but only for a special edition (e.g. press edition 512 GTX style) card available in very limited quantities - mark my words.

I think you will find nVidia being more conservative with the process than ATi.

Also bear in mind the G71 can be thought of as "wide, shallow & relatively simple" with regard to its pipelines, whereas ATi's pipelines have significantly more complexity to them.

I thought ATi was trumpeting about how card manufacturers could increase the default clocks on R520 if they wanted. Up until R520, ATi was making quite a point of their high 90nm clock speeds, so why do you claim differently?


ATi simply stated they would allow 3rd party vendors to produce OC'd versions but not support them with a warranty; how again is that "trumpeting"? Like I said, if ATi could not reach 700 MHz stock for a 16 pipe/16 ROP card, good luck to nVidia hitting 700-750 with a 32 pipe/24 ROP card. If nVidia even produces a 32 pipe card (all speculation at this point), I'm betting the stock speed won't be above 500 MHz.

R520 is a first-gen 90nm chip. G71 and R580 come out using a more mature 90nm process, which does allow more headroom. ATI has those "extreme pipes" or whatever they call them now, which is probably why they still haven't struck for 700-800 MHz. But then, they might with an XTX PE, lol.

R580 needs to get mature drivers before it shows the promise we all expected of it, and G71 needs to at least match it in price & performance. I think nVidia can manage that and work on G80 using an 80nm process. I'll only be getting a G71/R580 if the games of 2006 that I want (OFP2, Oblivion & TA2 (names are different on 1 & 3)) provide a noticeable improvement over SLI 7800GTs.
 
Jun 14, 2003
10,442
0
0
Originally posted by: Steelski
Originally posted by: CP5670
I'm guessing the 32 in the G71 column is the number of "pipes", but what's the other thing that has been increased to 24?

It's a mystery.


ROPs maybe?

G70 has 24 pipes and 16 ROPs, so maybe.

I honestly can't wait... to see what the next two big hitters can produce.
 

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
One thing for sure is NV claims that this G71 is as fast as a 7800 GTX 512 when clocked at 430 MHz.

550x24=13200 MT/s
430x32=13760 MT/s

Theoretically on paper, G71 really could be a 32 pipe card according to this math.
Unless it's a 24 pipe card, of course.
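A quick sketch of that fill-rate math (texel rate = core clock x pipes, using the clocks and pipe counts quoted above):

# Texel fill rate in MT/s = core clock (MHz) x number of pixel pipes
def fillrate(clock_mhz, pipes):
    return clock_mhz * pipes

gtx512 = fillrate(550, 24)   # 7800 GTX 512: 13200 MT/s
g71    = fillrate(430, 32)   # rumored 32-pipe G71: 13760 MT/s
print(gtx512, g71)

A 32-pipe part at 430 MHz slightly edges out the 550 MHz GTX 512, which is consistent with NV's claim.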
 

hcforde

Junior Member
Dec 22, 2004
11
0
0
On the subject of price, I would like to know what is under the hood of these XTXs. Is it possible that there are better (faster) memory chips, which would give it more headroom for overclocking? Is there some other feature we have to wait until the 24th to find out about? A $100 difference is a little steep if that is all there is.

Any other thought on what is not so obvious?
 

Cooler

Diamond Member
Mar 31, 2005
3,835
0
0
So is it a 24 ROP card with 32 pipes?
Or is it a 16 ROP card with 32 pipes?
Any way you look at it, it will cost more than the X1900.
 

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
X1900XT with FX-60

16x12 no AA no SS
=71 fps

16x12 SS
=33 fps

16x12 4xAA 16x HQ AF no SS
=48 fps min 24 fps

16x12 4xAA 8xAF
=49 fps

X1800XT with A64 4000, from xbitlabs

16x12 no AA no SS
=51 fps

16x12 4xAA 16xAF
=39 fps min 23 fps

First thing to notice is that this card takes quite a hit using SS.
Secondly, the AA hit is a lot bigger than on its R520 counterpart.

X1800XT takes a 23.5% hit.
X1900XT takes a 32.4% hit.
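Those hit percentages follow directly from the averages quoted above (no AA vs 4xAA):

# AA performance hit = (no-AA fps - AA fps) / no-AA fps
def aa_hit(no_aa, with_aa):
    return (no_aa - with_aa) / no_aa * 100

print(f"X1800XT: {aa_hit(51, 39):.1f}%")   # ~23.5%
print(f"X1900XT: {aa_hit(71, 48):.1f}%")   # ~32.4%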


 

deadseasquirrel

Golden Member
Nov 20, 2001
1,736
0
0
Originally posted by: Steelski
I have edited this thread to include a link to some new FEAR benchmarks. Here is the link again (same as in OP). They are in 1600x1200, with and without AA. Very hefty.
http://www.vrforums.com/showthread.php?t=53164&page=8

I hope those aren't correct because it doesn't seem as much of an increase compared to the other high-end cards out there right now: (from here)

1600x1200 0xAA 0xAF (min / avg / max fps)
X1800XT...........38......67......122
256GTX............45......79......154
512GTX............49......92......187
X1900XT...........38......71......161

1600x1200 4xAA 8xAF (min / avg / max fps)
X1800XT...........21......41......89
256GTX............24......41......94
512GTX............31......51......118
X1900XT...........24......49......116

Then again, this isn't the XTX, and, while this is a standard benchmark, there could have been a setting or two that wasn't the same across both tests. I'd never judge a card as a whole based off a single game benchmark, but I still expected it to be 25% better than the 512GTX (and it could still be there at other resolutions and settings, for all I know).
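For what it's worth, running the numbers on the averages in the quoted table shows how far short of "25% better" the X1900XT lands here:

# Relative lead of the 512GTX over the X1900XT (avg fps from the table above)
def pct_lead(a, b):
    return (a - b) / b * 100

print(f"no AA: {pct_lead(92, 71):.0f}% 512GTX lead")   # ~30%
print(f"4xAA:  {pct_lead(51, 49):.0f}% 512GTX lead")   # ~4%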
 

xtknight

Elite Member
Oct 15, 2004
12,974
0
71
Was something supposed to happen tomorrow (in terms of R580/G71 rumors, etc.) or am I dreaming? I can't remember what it was.

Edit: was it Catalyst 6.1s?
 

SolMiester

Diamond Member
Dec 19, 2004
5,330
17
76
Deadseasquirrel, are those benchies right? I thought the X1900 was 25% faster than the 512 GTX - at least that's what the fanATItics have been waffling!
 

allies

Platinum Member
Jun 18, 2002
2,572
0
71
If G71 is gonna be 32 pipes/24 ROPs, wth is G80 gonna be? I can't wait until I upgrade my comp later this year, it's gonna be sweet :)

 

moonboy403

Golden Member
Aug 18, 2004
1,828
0
76
X1900 XT's FEAR benchmark looks disappointing to me.

I thought it was gonna be much better.

Now I wanna see what G71 can bring to the table.
 

Beef Taco

Senior member
Jul 26, 2005
328
0
0
Originally posted by: xtknight
Was something supposed to happen tomorrow (in terms of R580/G71 rumors, etc.) or am I dreaming? I can't remember what it was.

Edit: was it Catalyst 6.1s?

No, the Catalyst 6.1s are already out, just check the ATI website. As far as I know, I think the X1900XT is going to be launched tomorrow.
 

Steelski

Senior member
Feb 16, 2005
700
0
0
Originally posted by: deadseasquirrel
Originally posted by: Steelski
I have edited this thread to include a link to some new FEAR benchmarks. Here is the link again (same as in OP). They are in 1600x1200, with and without AA. Very hefty.
http://www.vrforums.com/showthread.php?t=53164&page=8

I hope those aren't correct because it doesn't seem as much of an increase compared to the other high-end cards out there right now: (from here)

1600x1200 0xAA 0xAF (min / avg / max fps)
X1800XT...........38......67......122
256GTX............45......79......154
512GTX............49......92......187
X1900XT...........38......71......161

1600x1200 4xAA 8xAF (min / avg / max fps)
X1800XT...........21......41......89
256GTX............24......41......94
512GTX............31......51......118
X1900XT...........24......49......116

Then again, this isn't the XTX, and, while this is a standard benchmark, there could have been a setting or two that wasn't the same across both tests. I'd never judge a card as a whole based off a single game benchmark, but I still expected it to be 25% better than the 512GTX (and it could still be there at other resolutions and settings, for all I know).

I hate to be a really annoying guy, but the article that you posted really does not agree with what other sites show.
http://www.anandtech.com/video/showdoc.aspx?i=2607&p=7 shows the GTX 512 benchmark a bit differently, and so does
http://www.xbitlabs.com/articles/video/display/games-2005_12.html
and the site that shows these benchmarks also has a very different story:
http://www.vr-zone.com/index.php?i=3043&s=9
Care to elaborate why the site that you have linked shows a magical GTX? Also, the XT in that article was using the 5.11 drivers, and there is no mention of the FEAR rename game fix.
If we take the numbers from that site and compare them, then we see a very substantial difference in FPS:
what I see is 49 fps average for the XT and 34 for the GTX at 1600x1200 AA+AF.
I don't see how the site you linked can come to those results.

Not to mention this is with a beta driver!
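Putting a number on that gap, using the vr-zone averages as quoted in this post:

# vr-zone averages at 1600x1200 AA+AF, as quoted above
xt_avg, gtx_avg = 49, 34
print(f"X1900XT lead: {(xt_avg - gtx_avg) / gtx_avg * 100:.0f}%")   # ~44%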
 

schtuga

Member
Dec 22, 2005
106
0
0
This guy claims to have one.
http://www.hardforum.com/showpost.php?p=1028888555&postcount=1
My ATI X1900 XT and nVidia GeForce 7800 GTX 512 benchmarks (and usability thoughts)

--------------------------------------------------------------------------------

Note: The nVidia card does not recognize 1280 x 1024 as a valid resolution for my Dell 30" monitor. The tests are run at 1280 x 800 because of this.

3DMark refuses to allow me to publish my benchmark because it doesn't recognize the X1900 XT as a valid video card (it shows up as generic VGA in the project viewer). Possibly because the ATI card isn't officially released until Tuesday.

My scores for 3DMark 2006 are as follows:

GeForce 7800 GTX 512 - 6042
ATI X1900 XT - 5986

As for pros and cons (for me personally) of the ATI card:

Pros:
1) Dual-link DVI on both DVI ports. The 7800 GTX 512 only has it on one port.

2) 2560 x 1600 support is MUCH better on the ATI X1900. The 7800 GTX 512 is finicky, and it takes trying to find the right driver to get it to work at that resolution.

3) At 2560 x 1600 the ATI card display is rock solid. On the nVidia card I would occasionally get a random garbage screen every 10 minutes or so for a split second.

4) The ATI card supports ALL resolutions on my 30" LCD. The nVidia card only supports 4 resolutions.

5) In shader-heavy games (like EQ2) the ATI card is noticeably faster than the nVidia card. (EQ2 is still a CPU-bound game, however, especially with shadows.)

6) Much cheaper than the nVidia 7800 GTX 512

Cons:
1) Noticeably louder than the nVidia 7800 GTX 512 (the fan can have a high-pitched whine to it at times).

2) The GeForce 7800 GTX 512 has an advantage in non-shader-heavy games.

He's got pics.