8800GTX to be 30% faster than ATI's X1950XTX. GTS to be about equal to it.


Zebo

Elite Member
Jul 29, 2001
39,398
19
81
I'm no EE, but I remember Intel talking about this (on much smaller chips than G80 - Smithfield): it was much more cost effective for them to have two separate dies, because making them on the same die like the A64s would have quadrupled their defect rate. Defect rate climbs almost exponentially as you increase area, dual core or not. Only at 65nm could Intel see a benefit in making them the same die (Conroe)... So 500mm2 is just not happening at 90nm IMO, combined with other evidence like the dual resistor package on the back of the card and SLI's maturity.

500mm2 is roughly 22mm x 22mm - nearly an inch on a side. Ever seen a chip that size? No, and for good reason. G80 is dual GPU.
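
To put rough numbers on the yield argument, here is a sketch using the classic Poisson yield model (Y = exp(-A*D)); the defect density below is a made-up illustrative value, not anything published by a fab:

```python
import math

def poisson_yield(die_area_mm2, defects_per_cm2):
    """Classic Poisson yield model: Y = exp(-A * D)."""
    area_cm2 = die_area_mm2 / 100.0
    return math.exp(-area_cm2 * defects_per_cm2)

D = 0.5  # hypothetical defect density in defects/cm^2 (illustrative only)

print(f"500 mm^2 monolithic die: {poisson_yield(500, D):.1%} yield")  # ~8.2%
print(f"250 mm^2 die:            {poisson_yield(250, D):.1%} yield")  # ~28.7%
```

Under this toy model two 250 mm2 dies yield far more good chips than one 500 mm2 die, which is the gist of the "defect rate is almost exponential with area" point.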
 

extra

Golden Member
Dec 18, 1999
1,947
7
81
Look at it this way:

If it really isn't that much of an improvement over what you already have, then there will be no rush to go out and buy one! In which case you can save your money for the refresh, when DX10 games will ACTUALLY BE OUT!

Personally I'll be keeping my 7800gt until spring, it plays everything just fine.

Maybe by that time the 8800 (or the ATI equiv) will be $200 and I can grab one~
 
otispunkmeyer

Jun 14, 2003
10,442
0
0
Originally posted by: nismotigerwvu
an 800 gram cooling solution....thats like...1 and 3/4th pounds...just in the cooling....yipes


That's like hanging a Zalman CNPS7000-Cu off your video card, lol! (it was like 850-900 grams, I think)
PCB reinforcements, anyone?
 

JAG87

Diamond Member
Jan 3, 2006
3,921
3
76
ROFL 30%??

Please, when was the last time an X1950 XTX scored 12K / 1.3 = ~9200 in 06...

LMAO, it's more like 6.5K... and that's with good parts, like a Core 2 Duo and good RAM. Under the same setup an 8800 GTX hits just under 12K.
Let's see if I still know my math:
12K / 7K = 1.71

Hmm, I would say that at worst we are looking at a 65% increase, and in ideal situations (optimized drivers, bla bla bla) close to 75%. If that kind of increase is not enough for you, then heck, don't buy it.

/thread for real
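
For what it's worth, a quick sanity check of the arithmetic above; the scores are the rough figures quoted in this thread, not measured results:

```python
# Rough 3DMark06 figures from this thread, not benchmarks.
g80_score      = 12000   # "just under 12K" claimed for the 8800 GTX
x1950xtx_score = 7000    # ~6.5-7K claimed for an X1950 XTX on a Core 2 setup

print(f"Implied 8800 GTX advantage: {g80_score / x1950xtx_score - 1:.0%}")  # ~71%
print(f"X1950 XTX score a 30% gap would imply: {g80_score / 1.3:.0f}")      # ~9231
```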
 

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
People are jumping to conclusions way too fast on this German "preview" of the 8800 series. Most of the information there is pretty much exactly the same as DailyTech's.

How about the features of this card? Matured SLI + SLI issues solved (like the vsync issue), HDCP, better AA/AF, HDR plus AA, enhanced PureVideo, SoundStorm!?!? :D

Let's wait for the benchmarks. However, for people who think 700 million transistors on 90nm is impossible, how about the pre-NV40 days? 222 million transistors on 130nm!! I think it's possible, given that the layout of this GPU must be MUCH more complex than any GPU made before.
 

jiffylube1024

Diamond Member
Feb 17, 2002
7,430
0
71
Originally posted by: Dethfrumbelo
3DMark may not tell us too much at this point. I'm much more interested in real games running high AA/AF and HDR. But if the performance in games is only on par/slightly faster than a X1950XT/7950GX2, then I'll just go with the X1900XT, which should be even cheaper in 3 weeks. 700 million+ transistors (nearly double the X1950XT) and only a 20% gain? Hmmm...

Exactly, nobody is taking these results in context. According to these rumours the 8800 GTX is 30% faster than the X1950 XTX in 3DMark06 or 05. That would presumably be at stock settings: 1280x1024 with, I believe, 2x AA / 8x AF (someone correct me on these settings if I'm wrong). In the other G80 thread, by the way, the advantage is more like 60%.

That's basically a test of pure shader power at low/medium res.

----------

That benchmark does not take into account the G80's other significant advantage: memory bandwidth.

Once the resolution starts going up (1600X1200 and above), and the AA really starts going, I'd expect G80 to pull away. I would predict a pretty sizeable (~50%) gap between 8800GTX and X1950XTX at 1920X1200 with 4X AA.

Not to mention the fact that, aside from the X1950XTX (which currently has way more memory bandwidth than any card on the market), I'd expect G80 to really kick butt at high res vs every other card: X1900XTX and 7900GTX.

1024X768/1280X1024 isn't exactly going to allow this architecture to stretch its legs and show what it's capable of...
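
To put a rough number on that scaling argument, here is a back-of-envelope sketch of how framebuffer sample counts (and hence bandwidth pressure) grow with resolution and MSAA; it ignores texture traffic and colour/Z compression, which shift the absolute numbers but not the trend:

```python
# Framebuffer samples per frame = width * height * MSAA samples.
# Only meant to show how quickly bandwidth demand scales with res and AA.
def fb_samples(width, height, aa):
    return width * height * aa

base = fb_samples(1280, 1024, 1)   # roughly a stock 3DMark-style run
for w, h, aa in [(1280, 1024, 1), (1600, 1200, 4), (1920, 1200, 4)]:
    print(f"{w}x{h} with {aa}x AA -> {fb_samples(w, h, aa) / base:.1f}x the samples")
```

That is roughly a 6-7x jump in per-frame framebuffer traffic going from a stock run to 1920x1200 with 4x AA, which is where the extra memory bandwidth should show.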

Not to mention the fact that new and faster drivers invariably will come out, seeing as G80 is a brand new architecture. Nvidia milked good rewards out of the 6800 series with driver revisions (less so the 7800's because they were pretty much just beefed up 6800 cards), while ATI tweaked the X1xxx series constantly - mainly due to the new memory ring bus.
 
otispunkmeyer

Jun 14, 2003
10,442
0
0
Originally posted by: Cookie Monster
People are jumpingto conclusions way to fast on this German "preview" of the 8800 series. Most of the infomation there is pretty much the exactly the same as dailytechs.

How about features of this card? Matured SLi + SLi issues solved like vsync issue?, HDCP, better AA/AF, HDR plus AA, enhanced pure video, soundstorm!?!? :D

Lets wait for the benchmarks. However, for people who think 700million transistors on 90nm is impossible, how about the pre NV40 days? 222million transistors on 130nm!! I think its possible given the assumption the layout of the GPU much be MUCH more complex than any other GPU made before.


Maybe they are building layers on their die (or do they do this already?), like a 3D die? So the die isn't physically large in a surface-area way... maybe just a little deeper?
 

Acanthus

Lifer
Aug 28, 2001
19,915
2
76
ostif.org
Originally posted by: otispunkmeyer
Originally posted by: Cookie Monster
People are jumpingto conclusions way to fast on this German "preview" of the 8800 series. Most of the infomation there is pretty much the exactly the same as dailytechs.

How about features of this card? Matured SLi + SLi issues solved like vsync issue?, HDCP, better AA/AF, HDR plus AA, enhanced pure video, soundstorm!?!? :D

Lets wait for the benchmarks. However, for people who think 700million transistors on 90nm is impossible, how about the pre NV40 days? 222million transistors on 130nm!! I think its possible given the assumption the layout of the GPU much be MUCH more complex than any other GPU made before.


maybe they are building layers on their die (or do they do this already?) like 3D die? so the die isnt physically large in a surface area way... maybe just a little deeper?

There are layers now, but they are mainly used for the "wiring" and interconnects between the different parts of the cpu.

It is my understanding that as of now, no one has devised a way to have more than one layer of transistors in a chip.
 

imported_Crusader

Senior member
Feb 12, 2006
899
0
0
Originally posted by: apoppin
Originally posted by: Nightmare225
They're just feeding false information to get everyone worked up, and then they'll surprise us when benchmarks are released. ;) I would have thought that NVIDIA would have learned from NV30...

nov 8 isn't far off and i don't think anything would be gained by downplaying the g80's specs.

in fact . . . "+30%" is usually a pre-release marketing hype that often turns out to be 20% . . . or less.

i'm sorry but i am STILL in SHOCK

there is no emoticon to describe it

i know it's "next gen" and 'all that' . . . but +30% . . .

that's all

seriously, i was expecting a solid +50% . . . and i thought i was being conservative . . .

i am waiting to see what the real nvidia fans say now
[you know who] ;)

Err, well, I'd have to say that most everyone is attempting to make 30% look like a small gain.. most likely because ATI can't handle this chip, as they won't have an answer for 6 months.
By that time, a guy might as well wait for the G80 refresh.. dominating ATI yet again.
The bottom line is that this is the fastest, most advanced card ever produced.. and now that's somehow not enough? Nice try. :disgust:

That's what I'm doing: buying this G80, and then the refresh to urinate in ATI's face all over again when R600 is out.

Whats wrong with that plan?
 

jim1976

Platinum Member
Aug 7, 2003
2,704
6
81
Originally posted by: jiffylube1024
Originally posted by: Dethfrumbelo
3DMark may not tell us too much at this point. I'm much more interested in real games running high AA/AF and HDR. But if the performance in games is only on par/slightly faster than a X1950XT/7950GX2, then I'll just go with the X1900XT, which should be even cheaper in 3 weeks. 700 million+ transistors (nearly double the X1950XT) and only a 20% gain? Hmmm...

Exactly, nobody is taking these results in context. According to these rumours the 8800GTX is 30% faster than the X1950XTX in 3dmark06 or 05. That would be at stock settings presumably: 1280X1024 with, I believe 2X AA/ 8x AF (someone correct me about these settings if I'm wrong). In the other G80 thread, by the way, the advantage is more like 60%

That's basically a test of pure shader power at low/medium res.

----------

That benchmark does not take into account the G80's other significant advantage: memory bandwidth.

Once the resolution starts going up (1600X1200 and above), and the AA really starts going, I'd expect G80 to pull away. I would predict a pretty sizeable (~50%) gap between 8800GTX and X1950XTX at 1920X1200 with 4X AA.

Not to mention the fact that aside from the X1950XTX (which currently has way more memory bandwith than any card on the market), I'd expect G80 to really kick butt at high res vs every other card: X1900XTX and 7900GTX.

1024X768/1280X1024 isn't exactly going to allow this architecture to stretch its legs and show what it's capable of...

Not to mention the fact that new and faster drivers invariably will come out, seeing as G80 is a brand new architecture. Nvidia milked good rewards out of the 6800 series with driver revisions (less so the 7800's because they were pretty much just beefed up 6800 cards), while ATI tweaked the X1xxx series constantly - mainly due to the new memory ring bus.


It's a very logical post, Jiffy.
I just want to add, beyond this, that the main bet for Nvidia and ATI right now IMO isn't pure power in D3D9 applications, but the best combination of D3D9 and D3D10. What I mean by this is that we are talking about a whole new architecture that has no previous point of reference to be compared against. I see it as a dangerous trap for both companies, because we are talking about a key point in GPU history. Of course they should keep the D3D9 "behavior" of the GPU in mind, but IMO they are much more concerned about D3D10, since it could prove a real trap for their performance.
 

redbox

Golden Member
Nov 12, 2005
1,021
0
0
Originally posted by: coldpower27
Originally posted by: Dethfrumbelo
3DMark may not tell us too much at this point. I'm much more interested in real games running high AA/AF and HDR. But if the performance in games is only on par/slightly faster than a X1950XT/7950GX2, then I'll just go with the X1900XT, which should be even cheaper in 3 weeks. 700 million+ transistors (nearly double the X1950XT) and only a 20% gain? Hmmm...

Didn't this exact same thing occur with the X1900 XTX vs 7900 GTX, the R580 had about 40% more transistors then G71, and was only barely faster in some situations.

This same thing occured with the Geforce 2 Ultra to Geforce 3 Transistion. The Geforce 3 had more then 2 times the transistors as NV16 and was hardly twice as fast.

When your doing a generational leap when a massive amount of functionality is added, you shouldn't expect transistor efficiency to go up. Alot of transistors need to be dedicated to functionality rather then peformance itself.

It's 30% faster then then the X1950 XTX is what were hearing not 20%.

I wouldn't think that transistor count alone would be a good way of judging performance. The R580 had a different structure to its GPU than past GPUs, but it wasn't as different as what we are looking at with the G80. Sure, a lot of transistors will need to be added just for functionality, but isn't the whole aim of DX10 to take some of the burden off those transistors and make better use of the ones the card has? I know we aren't in the DX10 era just yet, but I was just wondering: if DX10 is supposed to be much more efficient, why does the first DX10 card have such a high transistor count?
 

coldpower27

Golden Member
Jul 18, 2004
1,676
0
76
Originally posted by: redbox
I wouldn't think that transistor count alone would be a good way of judging performance. The R580 had different structure to it's gpu than past gpu's but it wasn't as different as what we are looking at with the G80. Sure alot of transistors will need to be added just to have functionality, but isn't the whole aim of DX10 to take some of the burden off of those transistors and to better use the ones the card has. I know we aren't in a DX10 era right yet, but I was just wondering if DX10 is supposed to be much more effecient why does the first DX10 card have such a high transistor count?

That is not exactly what I am saying. I am not saying you judge performance by transistor count alone at all; you misunderstand. I am saying that you shouldn't expect massive increases in performance every generation. We have been sustaining it so far because die sizes have continued to grow bigger and bigger. You need to account for factors such as what the transistor budget will be spent on. You can't predict performance from transistor count alone. I am also considering the rumor information I have read and trying to make sense of how it all fits together.

DX10 is more versatile from what I can see, not necessarily more efficient.

There was a slide earlier that said the GeForce 7900 series GPUs are 20x the baseline G965 IGP part, while the 8800 series will be 27x, so that is roughly 30-35%.
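
As a trivial check of that slide math (the 20x and 27x multipliers are the rumoured figures, not anything verified):

```python
gf7900_vs_igp = 20.0   # rumoured slide figure: GeForce 7900 vs. G965 IGP
gf8800_vs_igp = 27.0   # rumoured slide figure: GeForce 8800 vs. G965 IGP
print(f"Implied 8800-over-7900 gain: {gf8800_vs_igp / gf7900_vs_igp - 1:.0%}")  # 35%
```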

It wouldn't be the first time we have had a marginal performance improvement with the focus being on functionality rather than on performance. As well, 30% is probably an average; there will likely be places where you see higher improvements, in the situations where you need it the most.

You also have to keep in mind that Nvidia is going to be adding more than just DX10 functionality with the 700 million transistor budget. We are hearing about a new AA mode called VCAA (I assume that would require something), not to mention 128-bit HDR as well as the ability to do AA with it. The physics processing functions would also take some of the budget, and who knows what else is on the G80's list of features that we don't know about.
 

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
Originally posted by: jiffylube1024
Originally posted by: Dethfrumbelo
3DMark may not tell us too much at this point. I'm much more interested in real games running high AA/AF and HDR. But if the performance in games is only on par/slightly faster than a X1950XT/7950GX2, then I'll just go with the X1900XT, which should be even cheaper in 3 weeks. 700 million+ transistors (nearly double the X1950XT) and only a 20% gain? Hmmm...

Exactly, nobody is taking these results in context. According to these rumours the 8800GTX is 30% faster than the X1950XTX in 3dmark06 or 05. That would be at stock settings presumably: 1280X1024 with, I believe 2X AA/ 8x AF (someone correct me about these settings if I'm wrong). In the other G80 thread, by the way, the advantage is more like 60%

That's basically a test of pure shader power at low/medium res.

----------

That benchmark does not take into account the G80's other significant advantage: memory bandwidth.

Once the resolution starts going up (1600X1200 and above), and the AA really starts going, I'd expect G80 to pull away. I would predict a pretty sizeable (~50%) gap between 8800GTX and X1950XTX at 1920X1200 with 4X AA.

Not to mention the fact that aside from the X1950XTX (which currently has way more memory bandwith than any card on the market), I'd expect G80 to really kick butt at high res vs every other card: X1900XTX and 7900GTX.

1024X768/1280X1024 isn't exactly going to allow this architecture to stretch its legs and show what it's capable of...

Not to mention the fact that new and faster drivers invariably will come out, seeing as G80 is a brand new architecture. Nvidia milked good rewards out of the 6800 series with driver revisions (less so the 7800's because they were pretty much just beefed up 6800 cards), while ATI tweaked the X1xxx series constantly - mainly due to the new memory ring bus.

Yeah, what you say is true: you could see the X850 XT PE performing almost the same as the X1800 XT at resolutions like 1024x768 and 1280x1024, but once you cranked up the resolution and the AA, you could see the X850 XT PE beaten by between 20 and 50%. But considering the huge die size of the G80, I bet they put more functionality (and a dual-core GPU) in there than performance. I guess that since the debut of NV40 and R4XX, not much can be done to increase performance at all. After all, the 7900 GTX and the X1950 XTX are not two times faster than those under normal circumstances, except at very, very high resolutions with anti-aliasing. There's a performance stall somewhere, because the only way you can double up the performance from that generation is by using Crossfire or SLI. Pretty much that's what is happening now, eh?
 

thilanliyan

Lifer
Jun 21, 2005
12,062
2,275
126
Originally posted by: apoppin
Originally posted by: Nightmare225
Smells ultra-fishy. Isn't this the architecture that Nvidia's spent millions on?

hundreds of millions :p

it was tens of millions for nv30

again . . . i don't believe it is only "30% faster" than the xtx ...
. . . or else it has a LOT of 'features'.
:roll:

Uh oh apoppin... you better not mention NV30 again, lest the NV fanboys burn you at the stake. :)
 

coldpower27

Golden Member
Jul 18, 2004
1,676
0
76
Originally posted by: evolucion8
Yeah, what you says is true, you was able to see the X850XT PE performing almost the same as the X1800XT in resolutions like 1024x768 and 1280x1024, but once you cranked the resolution and the AA, you was able to see the X850XT PE beaten between 20 and 50%. But considering the huge die size of the G80, I be that they put there more functionality and Dual Core GPU than performance. I guess that since the debut of the NV40 and R4XX, not much can be done to increase performance at all. After all, the 7900GTX and the X1950XTX is not two times faster than both under normal circunstances except very very high resolutions with Anti Aliasing. There's some performance stalls somewhere, cause the only way you can double up the performance fron that generation is using Crossfire or SLI. Pretty much that is the thing that is happening now eh?

That really depends on what you consider "normal circumstances" nowadays.

I would say that finally, with the 7900 GTX and X1950 XTX, we have GPUs that are overall about 2 times as fast as the original 6800 Ultra and the X850 XT PE, considering that a 7600 GT is faster on the whole than the 6800 Ultra and that a 7900 GTX does beat a 7600 GT SLI setup.

Considering that we now also have much faster CPUs to play with, a la the Core 2 Extreme X6800, to help alleviate CPU bottlenecks, performance has increased - but no more 2x-every-generation, OMG! performance levels.

30% on average is actually quite nice considering transistors have to be spent on DX10 and additional functionality.
 

lopri

Elite Member
Jul 27, 2002
13,314
690
126
Originally posted by: Crusader
Err well, I'd have to say that most everyone attempting to make 30% look like a small gain.. most likely because ATI cant handle this chip as they wont have an answer for 6 months.
By that time, a guy might as well wait for the G80 refresh.. dominating ATI yet again.
The bottom line is that this is the fastest, most advanced card to ever be produced in the world.. and now thats somehow not enough? Nice try. :disgust:
See, this is exactly what I hate to see. And it makes me hate ATI. :frown:

1. With the market in its hands (fastest GPU until the R600 debut / first GPU to support DX10), NV will charge whatever they feel they can get away with. $650 for a GTX? God.. I really hope AMD/ATI won't abandon the high-end GPU market.

2. Has anyone noticed the G80 spec indicates something along the lines of 8 quads? (8 TCPs or something like that.) Then, upon close inspection, the 8800 GTX spec says it's a 7 TCP part, and the 8800 GT looks to be a 6 TCP part. My guess? The good 8 TCP part is being secretly stockpiled until R600 launches, and NV will launch it with 1GB of on-board RAM. The price of that part is anyone's guess, but let's say I have a very good idea.

If the performance increase is 30% above the 1950 XTX (therefore 35~40% above the 7900 GTX), it's such a disappointment for me. Especially knowing that they're probably collecting parts that'll do 40% above the 1950 XTX for a later time. And especially knowing what we've heard so far about G80 (heat, power, price). It's been reported that the quoted 30% figure was achieved with a Kentsfield, so even that number could have been skewed.

At least here's hoping that NV has vastly improved the nagging IQ issues that pop up endlessly and gives us a new level of IQ that we haven't seen. And I think they'll get that right this time. DX10 is, IMO, a non-issue at this time other than for marketing purposes.
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,003
126
I have to say 30% sounds too low. Either the test was flawed, the drivers are holding things back, or both.
 

Dethfrumbelo

Golden Member
Nov 16, 2004
1,499
0
0
Originally posted by: lopri
Originally posted by: Crusader
Err well, I'd have to say that most everyone attempting to make 30% look like a small gain.. most likely because ATI cant handle this chip as they wont have an answer for 6 months.
By that time, a guy might as well wait for the G80 refresh.. dominating ATI yet again.
The bottom line is that this is the fastest, most advanced card to ever be produced in the world.. and now thats somehow not enough? Nice try. :disgust:
See, this is exactly what I hate to see. And makes me hate ATI. :frown:

1. With the market at its hand (fastest GPU untill R600 debut / first GPU to support DX10), NV will charge whatever they feel like they can get away with. $650 for a GTX? God.. I really hope AMD/ATI won't abandon the high-end GPU market.

2. Anyone noticed the G80 spec indicates something along the line of 8-quad? (8 TCP or something like that) Then, upon close inspection the 8800 GTX spec says it's a 7 TCP part, and 8800 GT looks to be a 6 TCP part. My guess? The good 8 TCP part is being secretely stockpiled until R600 launches, and NV will launch it w/ 1GB of on-board RAM. The price of that part is anyone's guess, but let's say I have a very good idea.

If the performance increase is 30% above 1950 XTX (therefore 35~40% above 7900 GTX), it's such a disappointment for me. Especially knowing that they're probably collecting parts that'll do 40% above 1950 XTX for later time. And especially knowing what we've heard so far about G80. (Heat, Power, Price) It's been reported that the quoted 30% figure was achieved with a Kentsfield, so even that number could have been skewed.

At least here is a hoping that NV has vastly improved the nagging IQ issues that pops up endlessly, and give us a new level of IQ that we haven't seen. And I think they'll get that right this time. DX10 is, IMO, a non-issue at this time other than marketing purposes.

Agreed. DX10 is totally irrelevant at this point - DX9 performance with these cards is all that matters right now. Maybe I'll start caring about DX10 in 6 months, maybe a year - and I'd need to move to Vista first, not too eager to do that either. Improved IQ would be nice, but not at that price if the performance isn't there.

The X1950 XT is averaging 29 fps in Oblivion at 1280x1024 / no AA (AT's charts); 30% more than that is still <40 fps - that's not good if you just spent $650 on a video card. All the added features will be worthless if the performance can't support them and keep games playable.
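
Putting numbers on that projection (29 fps is the X1950 XT Oblivion figure quoted above; the 30% uplift is the rumour, so this is only an estimate):

```python
x1950xt_fps = 29            # quoted Oblivion average at 1280x1024, no AA
rumoured_uplift = 0.30      # the 30% figure from this thread
print(f"Projected 8800 GTX: {x1950xt_fps * (1 + rumoured_uplift):.1f} fps")  # ~37.7 fps
```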

Even though I'm not a big fan of SLI/Crossfire, that would be a better way to go if indeed the best we see is a 30% improvement in most games.
 

zephyrprime

Diamond Member
Feb 18, 2001
7,512
2
81

Originally posted by: Cookie Monster
People are jumpingto conclusions way to fast on this German "preview" of the 8800 series. Most of the infomation there is pretty much the exactly the same as dailytechs.

How about features of this card? Matured SLi + SLi issues solved like vsync issue?, HDCP, better AA/AF, HDR plus AA, enhanced pure video, soundstorm!?!? :D

Lets wait for the benchmarks. However, for people who think 700million transistors on 90nm is impossible, how about the pre NV40 days? 222million transistors on 130nm!! I think its possible given the assumption the layout of the GPU much be MUCH more complex than any other GPU made before.
Uhhh, the 90nm process only has about 2x the transistor density of the 130nm process. 700M is significantly more than 2x 222M.
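
A quick back-of-envelope version of that point (ideal density scales with the square of the linear shrink; real layouts differ, so treat this as a rough estimate):

```python
density_gain    = (130 / 90) ** 2      # ideal 130nm -> 90nm density improvement
transistor_gain = 700e6 / 222e6        # rumoured G80 count vs. NV40's 222M

print(f"Ideal density gain:        {density_gain:.2f}x")                    # ~2.09x
print(f"Transistor count gain:     {transistor_gain:.2f}x")                 # ~3.15x
print(f"Implied relative die area: {transistor_gain / density_gain:.2f}x")  # ~1.51x
```

So even with a full node's density gain, a 700M-transistor part would need roughly 1.5x NV40's die area, which is why the die-size estimates in this thread are so large.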
 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
Originally posted by: thilan29
Originally posted by: apoppin
Originally posted by: Nightmare225
Smells ultra-fishy. Isn't this the architecture that Nvidia's spent millions on?

hundreds of millions :p

it was tens of millions for nv30

again . . . i don't believe it is only "30% faster" than the xtx ...
. . . or else it has a LOT of 'features'.
:roll:

Uh oh Appopin...you better not mention NV30 again, lest the NV fanboys burn you at the stake.:)

at the time it seemed appropriate

i still say it's off . . . +30% is way too slow :p
 

Zebo

Elite Member
Jul 29, 2001
39,398
19
81
Originally posted by: Cookie Monster
People are jumpingto conclusions way to fast on this German "preview" of the 8800 series. Most of the infomation there is pretty much the exactly the same as dailytechs.

How about features of this card? Matured SLi + SLi issues solved like vsync issue?, HDCP, better AA/AF, HDR plus AA, enhanced pure video, soundstorm!?!? :D

Lets wait for the benchmarks. However, for people who think 700million transistors on 90nm is impossible, how about the pre NV40 days? 222million transistors on 130nm!! I think its possible given the assumption the layout of the GPU much be MUCH more complex than any other GPU made before.


No one is jumping to anything. Most everyone has said emphatically that they doubt these numbers, but if they're true, it sux.
 

GTaudiophile

Lifer
Oct 24, 2000
29,767
33
81
I usually don't upgrade to a new OS until SP1 of that OS. So no Vista in 2007 for me I bet.

Question is, how well will G80/R600 run Crysis? I mean, I think/hope that game WILL be worth $1000 in upgrades but not if those upgrades pump out only 30 FPS with FSAA at 1600x1200.