8800GTX to be 30% faster than ATI's X1950XTX. GTS to be about equal to it.


ronnn

Diamond Member
May 22, 2003
3,918
0
71
This is just normal PR crap. NV was losing out to the X1900/X1950 series, so they promoted the G80 big time. Now they're just lowering expectations. On release day they want it compared to these small gains that gullible ATI fans will latch onto, and everyone (especially NVIDIA fans) will be pleasantly impressed when it comes in much quicker. Many will run out to buy a monitor that can showcase these gains when using two.
 

CP5670

Diamond Member
Jun 24, 2004
5,668
768
126
A 30% increase over the X1950 is pretty lame if it's true. A pair of overclocked 7900GTOs/GTXs would be able to get quite close to it in that case.
 

josh6079

Diamond Member
Mar 17, 2006
3,261
0
0
Now is that 30% before or after AA/AF? Better yet, is that 30% faster with or without their rumored playable 16xAA?
 

Nightmare225

Golden Member
May 20, 2006
1,661
0
0
They're just feeding false information to get everyone worked up, and then they'll surprise us when benchmarks are released. ;) I would have thought that NVIDIA would have learned from NV30...
 

coldpower27

Golden Member
Jul 18, 2004
1,676
0
76
Originally posted by: Cooler
To even make the 8800GTX a good buy, it should be faster than two 7900 GTXs in SLI. It doesn't look that way, so basically for close to the same price you can get the same performance today. Since ATI's X1950XTX is close to the 7950GX2, I would expect ATI to take the speed crown, especially if they use 1GB of GDDR4 and a 512-bit memory controller.

Even as fast as two 7900 GTXs in SLI would be pretty damn good considering all its added DirectX 10 functionality, and considering it is a single card as well.

30% faster than the X1950 XTX is pretty decent, as that would still place it around 10% faster than the 7950GX2, but about 20% slower than 7900 GTX SLI and about 8% slower than 7900 GTO SLI.
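Taking the rumored figures in this post at face value, here is a rough sketch of how those percentages chain together, using only the numbers quoted above and normalizing everything to the X1950 XTX = 1.00:

```latex
% Quick check of how the quoted percentages chain together,
% everything expressed relative to the X1950 XTX = 1.00.
\begin{align*}
\text{8800 GTX}     &\approx 1.30                     &&\text{(the rumored +30\%)}\\
\text{7950 GX2}     &\approx 1.30 / 1.10 \approx 1.18 &&\text{(if the GTX is about 10\% ahead of it)}\\
\text{7900 GTO SLI} &\approx 1.30 / 0.92 \approx 1.41 &&\text{(if the GTX is about 8\% behind it)}\\
\text{7900 GTX SLI} &\approx 1.30 / 0.80 \approx 1.63 &&\text{(if the GTX is about 20\% behind it)}
\end{align*}
```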

I guess it can't be helped this generation; the DX9.0c to DX10 jump is just too heavy on the functionality side to allow much of an increase in performance if you're using the same process. ATI may have better luck since they will be using the 80nm process, which allows some wiggle room.

Two 7900 GTXs in SLI are around 800 USD, are they not? That is above the projected 649 USD for the 8800 GTX, which also gives you the additional features of DX10 as well as HDR + AA. On top of that, there is no need to worry about SLI scaling.

I would expect ATI to have the speed crown when they launch their R600, as they are launching later and they do have the 80nm advantage. We will have to see.

That is, of course, assuming these G80 performance rumors are accurate. Now that SLI is maturing, this kind of overlap can't always be helped.
 

josh6079

Diamond Member
Mar 17, 2006
3,261
0
0
Why do I get the feeling that the G80 will just turn out to be a "meh" instead of a "wow"? If two 7900GTOs can be had for less and give better frames, what's the point? Crysis will hit but will have EA slapped all over it, and other good-looking DX9 games are still coming out...

I want to see some frames for popular titles. Enough with the leaked 3DMarks powered by quad-core Kentsfields that no one has.
 

SunnyD

Belgian Waffler
Jan 2, 2001
32,675
146
106
www.neftastic.com
Originally posted by: Crusader
30% sounds good to me.

VCAA + DX10/SM4 + 128bit HDR + Quantum Physics Engine + faster than any other card ever released? = teh win.

Now if the extra RAM on top of the base 512 is for performance-free IQ options... I'd say I'm definitely more excited about this launch than any in the past, from my Diamond Monster 3D till today!
I do care about upcoming DX10 performance because I'm tired of doing constant card swaps, and I'm not swapping out again for another DX9 card. If I pull out my current card, nothing less than a DX10 card is taking its place.
Tired of dinking around with slow and ancient GF7/X1900 era stuff.

But we aren't far from the launch party on the 8th... I'll wait for some data that can be confirmed.

You do have a good point - there are a few new features, but really, is it REALLY worth the premium price? I find it hard to swallow $400 for a top-end video card at this point. Enthusiast market aside, for the price of an 8800GTX you can put together a decent rig that can play most current games quite adequately.
 

coldpower27

Golden Member
Jul 18, 2004
1,676
0
76
Originally posted by: josh6079
Why do I get the feeling that the G80 will just turn out to be a "meh" instead of a "wow"? If two 7900GTOs can be had for less and give better frames, what's the point? Crysis will hit but will have EA slapped all over it, and other good-looking DX9 games are still coming out...

I want to see some frames for popular titles. Enough with the leaked 3DMarks powered by quad-core Kentsfields that no one has.

The only things I can think of at this point are OpenEXR HDR + AA support for Nvidia users, as well as the new features this GPU has to offer.
 

SpeedZealot369

Platinum Member
Feb 5, 2006
2,778
1
81
You know what, guys, I think it's pointless to make performance claims until real benchmarks come out...

Personally, I think these claims are bogus. I might be wrong, but there's no way the G80 is only 30% faster than an X1950XTX. No way. Have you guys seen those specs?

Either way, let's wait till the benchmarks come out. K? Deal.

-SZ
 

coldpower27

Golden Member
Jul 18, 2004
1,676
0
76
Originally posted by: Dethfrumbelo
3DMark may not tell us too much at this point. I'm much more interested in real games running high AA/AF and HDR. But if the performance in games is only on par with or slightly faster than an X1950XT/7950GX2, then I'll just go with the X1900XT, which should be even cheaper in three weeks. 700 million+ transistors (nearly double the X1950XT) and only a 20% gain? Hmmm...

Didn't this exact same thing occur with the X1900 XTX vs. the 7900 GTX? The R580 had about 40% more transistors than G71 and was only barely faster in some situations.
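For what it's worth, that ratio roughly checks out against the commonly quoted transistor counts (R580 at about 384 million and G71 at about 278 million; neither figure appears in this thread):

```latex
\[
\frac{384\ \text{M (R580)}}{278\ \text{M (G71)}} \approx 1.38
\qquad\text{i.e. roughly 40\% more transistors}
\]
```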

The same thing occurred with the GeForce 2 Ultra to GeForce 3 transition. The GeForce 3 had more than twice the transistors of NV16 and was hardly twice as fast.

When you're making a generational leap in which a massive amount of functionality is added, you shouldn't expect transistor efficiency to go up. A lot of transistors need to be dedicated to functionality rather than to performance itself.

What we're hearing is 30% faster than the X1950 XTX, not 20%.
 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
Originally posted by: Nightmare225
They're just feeding false information to get everyone worked up, and then they'll surprise us when benchmarks are released. ;) I would have thought that NVIDIA would have learned from NV30...

nov 8 isn't far off and i don't think anything would be gained by downplaying the g80's specs.

in fact . . . "+30%" is usually pre-release marketing hype that often turns out to be 20% . . . or less.

i'm sorry but i am STILL in SHOCK

there is no emoticon to describe it

i know it's "next gen" and 'all that' . . . but +30% . . .

that's all

seriously, i was expecting a solid +50% . . . and i thought i was being conservative . . .

i am waiting to see what the real nvidia fans say now
[you know who] ;)



 

Creig

Diamond Member
Oct 9, 1999
5,170
13
81
Originally posted by: nismotigerwvu
an 800 gram cooling solution.... that's like... 1 and 3/4 pounds... just in the cooling.... yipes

I think it's actually 800g for the entire card, not just the HSF.


A 30% increase in performance over the X1950XTX? Something doesn't sound right.
 

ShadowOfMyself

Diamond Member
Jun 22, 2006
4,227
2
0
Originally posted by: coldpower27
Originally posted by: Dethfrumbelo
3DMark may not tell us too much at this point. I'm much more interested in real games running high AA/AF and HDR. But if the performance in games is only on par with or slightly faster than an X1950XT/7950GX2, then I'll just go with the X1900XT, which should be even cheaper in three weeks. 700 million+ transistors (nearly double the X1950XT) and only a 20% gain? Hmmm...

Didn't this exact same thing occur with the X1900 XTX vs. the 7900 GTX? The R580 had about 40% more transistors than G71 and was only barely faster in some situations.

The same thing occurred with the GeForce 2 Ultra to GeForce 3 transition. The GeForce 3 had more than twice the transistors of NV16 and was hardly twice as fast.

When you're making a generational leap in which a massive amount of functionality is added, you shouldn't expect transistor efficiency to go up. A lot of transistors need to be dedicated to functionality rather than to performance itself.

What we're hearing is 30% faster than the X1950 XTX, not 20%.

Except that the 9700 PRO brought full DX9 functionality and still kicked the crap out of anything on the market... This has got to be BS. If the card turns out to be that "slow", many will definitely wait for R600.
 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
Originally posted by: ShadowOfMyself
Originally posted by: coldpower27
Originally posted by: Dethfrumbelo
3DMark may not tell us too much at this point. I'm much more interested in real games running high AA/AF and HDR. But if the performance in games is only on par with or slightly faster than an X1950XT/7950GX2, then I'll just go with the X1900XT, which should be even cheaper in three weeks. 700 million+ transistors (nearly double the X1950XT) and only a 20% gain? Hmmm...

Didn't this exact same thing occur with the X1900 XTX vs. the 7900 GTX? The R580 had about 40% more transistors than G71 and was only barely faster in some situations.

The same thing occurred with the GeForce 2 Ultra to GeForce 3 transition. The GeForce 3 had more than twice the transistors of NV16 and was hardly twice as fast.

When you're making a generational leap in which a massive amount of functionality is added, you shouldn't expect transistor efficiency to go up. A lot of transistors need to be dedicated to functionality rather than to performance itself.

What we're hearing is 30% faster than the X1950 XTX, not 20%.

Except that the 9700 PRO brought full DX9 functionality and still kicked the crap out of anything on the market... This has got to be BS. If the card turns out to be that "slow", many will definitely wait for R600.

of course NV30 [dustBuster] was first with Dx9 functionality . . .

i still don't believe it's only +30% faster :p
[that would make it about +20% faster than the current GX2 and slower than sli'd GTXes
. . . for $650!]
:Q

no way
 

Zebo

Elite Member
Jul 29, 2001
39,398
19
81
Originally posted by: hardwareking
The 7950 GX2 is faster than the X1950 XTX by quite a bit (not sure in % terms), so we might see 7950 GX2 price drops once the 8800 GTX arrives.
And if ATI drops the X1950 XTX price enough, they'll have a good temporary answer to the 8800 GTS.

With nothing turned on, sure, the GX2 is faster. But then go read the HardOCP or Xbit Labs reviews... they're equals.


If this site is right, this is a nightmare for Nvidia. 30% faster ain't squat for a new generation. I was expecting 100% faster.

A very expensive, huge chip (or chips) sucking lots of power for only a 30% performance gain? I seriously doubt it.
 

coldpower27

Golden Member
Jul 18, 2004
1,676
0
76
Originally posted by: ShadowOfMyself
Except that the 9700 PRO brought full DX9 functionality and still kicked the crap out of anything on the market... This has got to be BS. If the card turns out to be that "slow", many will definitely wait for R600.

To be fair, the Radeon 9700 Pro is a card from the ATI side, and we haven't seen that kind of leap be the norm since. Expecting R300 to happen every time, i.e. having your cake and eating it too, is not realistic, but that is just me.

Going from the Radeon X850 XT PE to the Radeon X1800 XT was about twice as many transistors, and I don't remember ATI's card being two times as fast as the Radeon X850 XT PE.

An 83% increase in transistors over ATI for a 30% increase in performance does seem like diminished returns. You also have to keep in mind that G80 is a new architecture, so performance will be very raw. There is also the issue of Nvidia and ATI not using the same yardstick when counting transistors, as ATI has shown they are consistently less transistor-dense than Nvidia on a given process. If G80 is so heavy in transistor count, I would expect G81 to focus on increasing performance without increasing transistor count, or even trimming it down a la G71.
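The 83% figure appears to come from the rumored ~700 million transistors for G80 set against the roughly 384 million commonly quoted for R580/R580+ (a rough check; the R580 number is not from this thread):

```latex
\[
\frac{700\ \text{M (G80, rumored)}}{384\ \text{M (R580)}} \approx 1.82
\qquad\Rightarrow\qquad
\approx 82\text{--}83\%\ \text{more transistors for a rumored}\ \approx 30\%\ \text{performance gain}
\]
```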

G71 is very transistor-frugal and was lagging on functionality, so I guess Nvidia put the focus much more heavily on functionality rather than performance this generation. There are also rumors of a full physics processing unit on the G80 die itself.

Outside of the DX8.1 to DX9.0 transition, the number of transistors needed to add functionality has generally been on the rise. DX9.0c seemed quite expensive for what it added.

If R600 is not delayed, then I would urge people to consider looking at both SKUs before coming to a conclusion.

For ATI, it looks like they aren't increasing their transistor count by much; I've heard rumors of 500 million or so, so it's likely going to be ATI's turn to be more transistor-frugal than Nvidia. From that, they could possibly be faster, as they can clock it higher thanks to a cooler 80nm process.

 

Zebo

Elite Member
Jul 29, 2001
39,398
19
81
Looks like I'll be happy with my almost year-old $420AR 7900GTs in SLI, overclocked to GTX levels, for another year. I said it back then and I'll say it again: the 7900GT was the best video card ever from a price/performance perspective. If this 30% holds true, then GTOs at $500 are the NVIDIA cards to get, not 8800s.
 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
Originally posted by: Nightmare225
Smells ultra-fishy. Isn't this the architecture that Nvidia's spent millions on?

hundreds of millions :p

it was tens of millions for nv30

again . . . i don't believe it is only "30% faster" than the xtx ...
. . . or else it has a LOT of 'features'.
:roll:
 

Zebo

Elite Member
Jul 29, 2001
39,398
19
81
Originally posted by: coldpower27
Originally posted by: ShadowOfMyself
Except that the 9700 PRO brought full DX9 functionality and still kicked the crap out of anything on the market... This has got to be BS. If the card turns out to be that "slow", many will definitely wait for R600.

To be fair, the Radeon 9700 Pro is a card from the ATI side, and we haven't seen that kind of leap be the norm since. Expecting R300 to happen every time, i.e. having your cake and eating it too, is not realistic, but that is just me.

Going from the Radeon X850 XT PE to the Radeon X1800 XT was about twice as many transistors, and I don't remember ATI's card being two times as fast as the Radeon X850 XT PE.

An 83% increase in transistors over ATI for a 30% increase in performance does seem like diminished returns. You also have to keep in mind that G80 is a new architecture, so performance will be very raw. There is also the issue of Nvidia and ATI not using the same yardstick when counting transistors, as ATI has shown they are consistently less transistor-dense than Nvidia on a given process. If G80 is so heavy in transistor count, I would expect G81 to focus on increasing performance without increasing transistor count, or even trimming it down a la G71.

G71 is very transistor-frugal and was lagging on functionality, so I guess Nvidia put the focus much more heavily on functionality rather than performance this generation. There are also rumors of a full physics processing unit on the G80 die itself.

Outside of the DX8.1 to DX9.0 transition, the number of transistors needed to add functionality has generally been on the rise. DX9.0c seemed quite expensive for what it added.

If R600 is not delayed, then I would urge people to consider looking at both SKUs before coming to a conclusion.

For ATI, it looks like they aren't increasing their transistor count by much; I've heard rumors of 500 million or so, so it's likely going to be ATI's turn to be more transistor-frugal than Nvidia. From that, they could possibly be faster, as they can clock it higher thanks to a cooler 80nm process.

Nvidia is using two cores, that's why it's huge. Wait and see. I say this because of the resistor packaging on photos of the card, and, like with the GX2, they need two to beat ATI. Also, 700 million transistors is almost impossible to make in a single chip without large numbers of defects making it cost-prohibitive... They have simply continued this dual-core trend with the 8800s and added some shader units and DX10.

 

coldpower27

Golden Member
Jul 18, 2004
1,676
0
76
Originally posted by: Zebo
Nvidia is using two cores, that's why it's huge. Wait and see. I say this because of the resistor packaging on photos of the card, and, like with the GX2, they need two to beat ATI. They have simply continued this trend with the 8800s and added some shader units and DX10.

Maybe. If it is indeed 700 million transistors on a 90nm process, the total die size will be around 500mm²; whether or not this will be split into 2 dice is currently uncertain, and it would certainly be interesting if there were 2 dice in 1 package.
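As a rough sanity check of that ~500mm² estimate, assuming G80 sits near G71's transistor density on the same 90nm process (G71 is about 278 million transistors on a roughly 196mm² die; those figures are not from this thread):

```latex
\[
\frac{278\ \text{M}}{196\ \text{mm}^2} \approx 1.42\ \text{M transistors/mm}^2,
\qquad
\frac{700\ \text{M}}{1.42\ \text{M/mm}^2} \approx 490\ \text{mm}^2
\]
```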

I have my doubts about how 2 dice would work with a single 384-bit wide memory interface.

And anyway "wait and see" I always do that but I am having fun speculating in the meantime.