8800GTX to be 30% faster than ATI's X1950XTX. GTS to be about equal to it.


redbox

Golden Member
Nov 12, 2005
1,021
0
0
Originally posted by: coldpower27

That is not exactly what I am saying. I am not saying you judge performance by transistor count alone at all; you misunderstand. I am saying that you shouldn't expect massive increases in performance every generation. We have been sustaining it so far because die sizes have continued to grow bigger and bigger. You need to account for factors such as what the transistor budget will be spent on. You can't predict performance from transistor count alone. I am also considering the rumored information I have read and trying to make sense of how it all fits together.

DX10 is more versatile from what I can see, not necessarily more efficient.

There was a slide before that said the GeForce 7900 series GPUs are 20x the baseline G965 IGP part, while the 8800 series will be 27x, so that works out to roughly 30-35%.

It wouldn't be the first time we have had a marginal performance improvement with the focus being on functionality rather than on raw performance. As well, 30% is probably an average; there are likely to be places where you see higher improvements, in the situations where you need it the most.

You also have to keep in mind that Nvidia is going to be adding more than just DX10 functionality with the 700 million transistor budget. We are hearing about a new AA mode called VCAA, which I assume would require something, not to mention 128-bit HDR as well as the ability to do AA with it. The physics processing functions would take some of the budget as well, and who knows what else is on the G80's feature list that we don't know about.

I was wanting it to read like I was agreeing with you. Oh well. To clarify, I do agree that we can't just look at the transistor count and try to judge performance. Also, expecting double the performance between generations is a little bit outlandish in my view. It isn't until recently, with the advancement of SLI and Crossfire, that consumers have almost demanded that the new GPU be 2x better than the last generation. I would be quite happy with a 30% advancement. The question is: where is that advancement?

I saw those slides too, but I can't recall if they said how they got those numbers. Were they just theoretical numbers? Or were they based on a synthetic application? I would take those slides released by Nvidia with a grain of salt. Furthermore, those numbers would have to be in a DX9 environment, would they not?

The main problem with not only G80 but also R600 is that they are going to play dual roles. They need to improve performance for DX9 while at the same time not being a slouch in DX10, all while bringing new features like VCAA and higher-precision HDR. Like you said, a lot of the transistors would have to be used just for these features, which leads me to my next question: with all of these features, many of which won't be used until DX10, just how is Nvidia going to increase performance by 30% in a DX9 application, if we are even talking about those increases being in DX9?
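For what it's worth, the arithmetic behind those slide multipliers is easy enough to check. A minimal sketch, assuming both the 20x and 27x figures are measured against the same G965 IGP baseline as the slide implies:

    # Rough check of the rumored Nvidia slide figures (numbers taken from the rumor, not measured).
    g7900_vs_igp = 20.0   # GeForce 7900 series vs. G965 IGP baseline, per the slide
    g8800_vs_igp = 27.0   # GeForce 8800 series vs. the same baseline, per the slide

    improvement = g8800_vs_igp / g7900_vs_igp - 1.0
    print(f"Implied 8800-over-7900 gain: {improvement:.0%}")  # prints 35%

Taken at face value that is closer to 35% than 30%, though the slide doesn't say what workload those multipliers were derived from.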
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,003
126
I predict (and would hope) the 8800GTX will be able to rival 7900GT SLI in terms of performance.
 

m21s

Senior member
Dec 6, 2004
775
0
71
Originally posted by: BFG10K
I predict (and would hope) the 8800GTX will be able to rival 7900GT SLI in terms of performance.

Uhm I sure as hell hope NOT!

I already do that now with a 7950GX2!

If this only gets a 30% increase in performance I'm gonna be pissed. :|
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,003
126
Yeah but the 7950 GX2 is two cards in SLI. Also beating a 7900 GTX SLI might be a tough ask from a single 8800 GTX.
 

coldpower27

Golden Member
Jul 18, 2004
1,676
0
76
Originally posted by: redbox
I was wanting it to read like I was agreeing with you. Oh well. To clarify, I do agree that we can't just look at the transistor count and try to judge performance. Also, expecting double the performance between generations is a little bit outlandish in my view. It isn't until recently, with the advancement of SLI and Crossfire, that consumers have almost demanded that the new GPU be 2x better than the last generation. I would be quite happy with a 30% advancement. The question is: where is that advancement?

I saw those slides too, but I can't recall if they said how they got those numbers. Were they just theoretical numbers? Or were they based on a synthetic application? I would take those slides released by Nvidia with a grain of salt. Furthermore, those numbers would have to be in a DX9 environment, would they not?

The main problem with not only G80 but also R600 is that they are going to play dual roles. They need to improve performance for DX9 while at the same time not being a slouch in DX10, all while bringing new features like VCAA and higher-precision HDR. Like you said, a lot of the transistors would have to be used just for these features, which leads me to my next question: with all of these features, many of which won't be used until DX10, just how is Nvidia going to increase performance by 30% in a DX9 application, if we are even talking about those increases being in DX9?

Ok, thanks for the clarification. I hope I don't come off as too condescending or anything; you don't deserve that if you were simply agreeing with me. However, in the future if you were to agree with me, simply putting "I agree with what you said here....." or something a bit more right-smack-in-the-face would work well with me.

As well, regarding the SLI/Crossfire comment: yeah, this compounds the problem, as the technology improves and the platform scales better, which puts more pressure on the next-generation card to be faster. Just as long as 8800 GTX SLI is untouchable by the prior generation, everything will be rosy. 30% is "alright", though not something I would go and purchase if I already owned 7900 GTX SLI.

That said, the transistor budget is over 2.5x that of G71, so there should be enough transistors left over for "some" performance improvement despite Nvidia throwing everything but the kitchen sink onto the die in terms of feature enrichment.

I personally doubt these cards will be playing DX10 games "well" by the time those games arrive; "sufficient" would be more like it.

I would guess the advancement is probably going to be in DX9 performance. There isn't even a DX10 3DMark we can use for gauging its DX10 capabilities. What's important for the vast majority of the lifetime of this SKU is likely going to be Shader Model 2.0/3.0 performance.

Let's hope both ATI and NV can pull off an "R300" and have exemplary DX9 and DX10 performance, though I am not expecting this. By "R300" I mean good performance in games based on either API the SKU was designed for.

R600 is a more interesting beast. With a rumored transistor count of 500 million, I wonder if it's going to be more conservative on the feature enrichment, since it did a lot of that already in the R5xx generation.

 

Noema

Platinum Member
Feb 15, 2005
2,974
0
0
Originally posted by: Dethfrumbelo


The X1950XT is averaging 29 fps in Oblivion at 1280x1024/no AA (AT's charts); 30% more than that is still under 40 fps. That's not good if you just spent $650 on a video card. All the added features will be worthless if the performance can't support them and keep the games playable.
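A quick sanity check of that projection, using only the 29 fps baseline from AT's chart and the rumored 30% gain:

    # Project the rumored 30% 8800 GTX advantage onto the quoted Oblivion result.
    x1950xt_fps = 29.0    # X1950XT average at 1280x1024, no AA (AT's chart)
    rumored_gain = 0.30   # rumored 8800 GTX advantage over the X1950 XTX

    projected = x1950xt_fps * (1.0 + rumored_gain)
    print(f"Projected 8800 GTX average: {projected:.1f} fps")  # prints 37.7 fps

So a 30% bump on that baseline does indeed stay under 40 fps, as quoted.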

A little off topic:

Do you mean this chart ?

I am not sure about this; hopefully someone can clear that up for me: I believe those are for 1600x1200 and they mistakenly wrote 1280x1024.

First, because those seem like incredibly low numbers for a relatively low res like 1280x1024; secondly because the text on top reads as follows:


"Our goal was to get acceptable performance levels under the current generation of cards at 1600x1200. This was fairly easy with the range of cards we tested here. These settings are amazing and very enjoyable. While more is better in this game, no current computer will give you everything at high res. Only the best multi-GPU solutions and a great CPU are going to give you settings like the ones we have at high resolutions, but who cares about grass distance, right?"

Not a flame, but just wondering... I sure hope those are 1600x1200, because even with my lowly 7800GS I get 20+ fps with similar settings at 1280x1024, and my 7800GS is quite inferior even to the X1950 Pro :) And I'm planning on getting an X1950XTX by the end of the year when I update my rig to a C2D box.

Can anyone confirm or disprove my suspicion?
 

coldpower27

Golden Member
Jul 18, 2004
1,676
0
76
Originally posted by: BFG10K
Yeah but the 7950 GX2 is two cards in SLI. Also beating a 7900 GTX SLI might be a tough ask from a single 8800 GTX.

Somewhere around the level of 7950 GT SLI would be about right for a single 8800 GTX: more than a single 7950 GX2, but less than 7900 GTX SLI and less than 7900 GTO SLI.
 

coldpower27

Golden Member
Jul 18, 2004
1,676
0
76
Originally posted by: Noema
Originally posted by: Dethfrumbelo


The X1950XT is averaging 29 fps in Oblivion at 1280x1024/no AA (AT's charts); 30% more than that is still under 40 fps. That's not good if you just spent $650 on a video card. All the added features will be worthless if the performance can't support them and keep the games playable.

A little off topic:

Do you mean this chart ?

I am not sure about this; hopefully someone can clear that up for me: I believe those are for 1600x1200 and they mistakenly wrote 1280x1024.

First, because those seem like incredibly low numbers for a relatively low res like 1280x1024; secondly because the text on top reads as follows:


"Our goal was to get acceptable performance levels under the current generation of cards at 1600x1200. This was fairly easy with the range of cards we tested here. These settings are amazing and very enjoyable. While more is better in this game, no current computer will give you everything at high res. Only the best multi-GPU solutions and a great CPU are going to give you settings like the ones we have at high resolutions, but who cares about grass distance, right?"

Not a flame, but just wondering... I sure hope those are 1600x1200, because even with my lowly 7800GS I get 20+ fps with similar settings at 1280x1024, and my 7800GS is quite inferior even to the X1950 Pro :) And I'm planning on getting an X1950XTX by the end of the year when I update my rig to a C2D box.

Can anyone confirm or disprove my suspicion?


Are you doing exactly the same thing as they are, something like running through the forest with fireballs being thrown at you? How about the settings, are they exactly the same?

I think what Anandtech is doing is presenting the worst-case scenario...

 

josh6079

Diamond Member
Mar 17, 2006
3,261
0
0
That's what I'm doing, buying this G80, and then the refresh, to urinate in ATI's face all over again when R600 is out.

What's wrong with that plan?
Nothing, so long as you don't forget to lift your cheerleader skirt and squat to keep it from going down your leg.

Seriously dude, chill, no one is doubting that the G80 is going to be awesome. If you haven't noticed, a lot of people don't believe this rumored 30% improvement. As far as the R600 vs. G80 crap goes, let's save that for when the cards are actually out.

I don't see a problem with Nvidia's current framerates; it's their poor AF, lack of HDR+AA, no F@H support, and crappy physics implementation that I think the G80 will turn around. If it gets close to the same frames as some powerful 79-series cores do in SLI and adds a different feature set, one that won't cripple performance when you add multiple enhancements, it'd be a great card.
 

Noema

Platinum Member
Feb 15, 2005
2,974
0
0
Originally posted by: coldpower27


Are you doing exactly the same thing as they are, something like running through the forest with fireballs being thrown at you? How about the settings, are they exactly the same?

I think what Anandtech is doing is presenting the worst-case scenario...

You are absolutely right.

I hadn't noticed the chart below that includes 1600x1200; I had just glanced at the benchmark. It seems I was wrong :eek:

The numbers still seem low, but then again... Oblivion + fireballs + Oblivion gates = pain.

And yes, that's probably a worst case scenario.



 

lopri

Elite Member
Jul 27, 2002
13,314
690
126
Originally posted by: josh6079
I don't see a problem with Nvidia's current framerates; it's their poor AF, lack of HDR+AA, no F@H support, and crappy physics implementation that I think the G80 will turn around.
Do you really want to leave your GPU @100% all the time?
 

Munky

Diamond Member
Feb 5, 2005
9,372
0
76
In case this hasn't been mentioned yet... those numbers seem too low :p
Later NV will release more mature drivers, and maybe make a few IQ "optimizations" along the way to boost those scores, but even so, it seems too low. This is either a fake report (remember Sander Sassen's underwhelming R520 benches?), or NV is really gonna take advantage of the delayed competition and pull another 7800 GTX launch, charging a premium price for hardware with conservative specs. I just hope they don't pull another 7800 GTX 512 when the R600 finally does launch... :laugh:
 

josh6079

Diamond Member
Mar 17, 2006
3,261
0
0
Originally posted by: lopri
Originally posted by: josh6079
I don't see a problem with Nvidia's current framerates; it's their poor AF, lack of HDR+AA, no F@H support, and crappy physics implementation that I think the G80 will turn around.
Do you really want to leave your GPU @100% all the time?
It is nice to have the option available for whichever card I want. It isn't an "issue" by any means, just something that current Nvidia GPUs lack. I think G80 will change that, though. By the sounds of its FP16 HDR + AA ability, I'd expect its dynamic branching and floating point calculations to surpass the current X19k line.

Just pointing out the short-comings of the 79 series that I think Nvidia will address with their 88 series, that's all.
 

nitromullet

Diamond Member
Jan 7, 2004
9,031
36
91
no F@H support

wtf..? Is that a checkbox feature now... So, G71 isn't as good as R580+ at something that neither card was designed to do...

on topic: I'm gonna wait and see... In line with what Crusader said, even if it is 'only' 30% faster than the X1950XTX, it would still be the fastest single GPU out. Whether it'll be worth the cost, that might be debatable...
 

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
Originally posted by: zephyrprime

Originally posted by: Cookie Monster
People are jumping to conclusions way too fast on this German "preview" of the 8800 series. Most of the information there is pretty much exactly the same as DailyTech's.

How about features of this card? Matured SLI + SLI issues solved, like the vsync issue? HDCP, better AA/AF, HDR plus AA, enhanced PureVideo, SoundStorm!?!? :D

Let's wait for the benchmarks. However, for people who think 700 million transistors on 90nm is impossible, how about the pre-NV40 days? 222 million transistors on 130nm!! I think it's possible, given the assumption that the layout of the GPU must be MUCH more complex than any other GPU made before.
Uhhh, the 90nm process only has about 2x the transistor density of the 130nm process. 700M is significantly more than 2x of 222M.

Obviously the die size will be bigger. Currently G71 is around 198 mm^2 (I could be wrong here), but I'm guessing somewhere around ~400 mm^2 for G80.
 

josh6079

Diamond Member
Mar 17, 2006
3,261
0
0
wtf..? Is that a checkbox feature now... So, G71 isn't as good as R580+ at something that neither card was designed to do...
Not at all, that's why I said it isn't an issue by any means. I liked hearing how I could contribute since I already had a card that could do it, that's all. I never even knew about F@H until Anandtech did the GPU article on it. I know full well that neither card was meant for protein folding analysis. I was just saying that that is one of the short-comings of the G71 that I think Nvidia will correct with the G80. I'm sorry if I made it sound as if it were a feature in the DX9 requirement that Nvidia didn't work with.
 

coldpower27

Golden Member
Jul 18, 2004
1,676
0
76
Originally posted by: Cookie Monster
Originally posted by: zephyrprime

Originally posted by: Cookie Monster
People are jumping to conclusions way too fast on this German "preview" of the 8800 series. Most of the information there is pretty much exactly the same as DailyTech's.

How about features of this card? Matured SLI + SLI issues solved, like the vsync issue? HDCP, better AA/AF, HDR plus AA, enhanced PureVideo, SoundStorm!?!? :D

Let's wait for the benchmarks. However, for people who think 700 million transistors on 90nm is impossible, how about the pre-NV40 days? 222 million transistors on 130nm!! I think it's possible, given the assumption that the layout of the GPU must be MUCH more complex than any other GPU made before.
Uhhh, the 90nm process only has about 2x the transistor density of the 130nm process. 700M is significantly more than 2x of 222M.

Obviously the die size will be bigger. Currently G71 is around 198 mm^2 (I could be wrong here), but I'm guessing somewhere around ~400 mm^2 for G80.

Try 500 mm^2, hence why people don't think it's possible, or likely, that it's a single die. Remember, we're talking about 700 million transistors; G71 is only 278 million.
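To see where the 400-500 mm^2 guesses come from, here's a back-of-the-envelope sketch. It simply assumes G80 keeps roughly G71's transistor density, since both are 90nm parts, and uses the approximate figures quoted in this thread:

    # Naive G80 die-size estimate, scaling G71's 90nm density up to 700M transistors.
    g71_area_mm2 = 198.0      # approximate G71 die size quoted above
    g71_transistors = 278e6   # G71 transistor count
    g80_transistors = 700e6   # rumored G80 transistor count

    density = g71_transistors / g71_area_mm2   # transistors per mm^2 on 90nm
    g80_area_mm2 = g80_transistors / density
    print(f"Estimated G80 die size: {g80_area_mm2:.0f} mm^2")  # prints ~499 mm^2

Which is roughly where the "500 mm^2 or it isn't a single die" argument comes from; a denser layout, trimmed logic, or a two-die package would obviously change that number.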
 

josh6079

Diamond Member
Mar 17, 2006
3,261
0
0
700 million is pretty steep for that process node. Why, again, does the memory not match up with the rumor of a dual-core design?
 

nitromullet

Diamond Member
Jan 7, 2004
9,031
36
91
Originally posted by: josh6079
wtf..? Is that a checkbox feature now... So, G71 isn't as good as R580+ at something that neither card was designed to do...
Not at all, that's why I said it isn't an issue by any means. I liked hearing how I could contribute since I already had a card that could do it, that's all. I never even knew about F@H until Anandtech did the GPU article on it. I know full well that neither card was meant for protein folding analysis. I was just saying that that is one of the short-comings of the G71 that I think Nvidia will correct with the G80. I'm sorry if I made it sound as if it were a feature in the DX9 requirement that Nvidia didn't work with.

Maybe it's your use of the term "short-comings"...
 

Zebo

Elite Member
Jul 29, 2001
39,398
19
81
Originally posted by: josh6079
That's what I'm doing, buying this G80, and then the refresh, to urinate in ATI's face all over again when R600 is out.

What's wrong with that plan?
Nothing, so long as you don't forget to lift your cheerleader skirt and squat to keep it from going down your leg.

Seriously dude, chill, no one is doubting that the G80 is going to be awesome. If you haven't noticed, a lot of people don't believe this rumored 30% improvement. As far as the R600 vs. G80 crap goes, let's save that for when the cards are actually out.

I don't see a problem with Nvidia's current framerates; it's their poor AF, lack of HDR+AA, no F@H support, and crappy physics implementation that I think the G80 will turn around. If it gets close to the same frames as some powerful 79-series cores do in SLI and adds a different feature set, one that won't cripple performance when you add multiple enhancements, it'd be a great card.

LOL........ excellent post BTW, keeping perspective.. If they do those things and stop 'shimmering' so much it will be a huge success, even at 30%. I can't remember where I read it, but NV is eager to get back to IQ this round.
 

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
Originally posted by: Zebo
Originally posted by: josh6079
That's what I'm doing, buying this G80, and then the refresh, to urinate in ATI's face all over again when R600 is out.

What's wrong with that plan?
Nothing, so long as you don't forget to lift your cheerleader skirt and squat to keep it from going down your leg.

Seriously dude, chill, no one is doubting that the G80 is going to be awesome. If you haven't noticed, a lot of people don't believe this rumored 30% improvement. As far as the R600 vs. G80 crap goes, let's save that for when the cards are actually out.

I don't see a problem with Nvidia's current framerates; it's their poor AF, lack of HDR+AA, no F@H support, and crappy physics implementation that I think the G80 will turn around. If it gets close to the same frames as some powerful 79-series cores do in SLI and adds a different feature set, one that won't cripple performance when you add multiple enhancements, it'd be a great card.

LOL........ excellent post BTW, keeping perspective.. If they do those things and stop 'shimmering' so much it will be a huge success, even at 30%. I can't remember where I read it, but NV is eager to get back to IQ this round.

Would be strange if suddenly nVIDIA had the IQ crown and ATi had the speed crown next gen. Aww The irony.. :)
 

josh6079

Diamond Member
Mar 17, 2006
3,261
0
0
Originally posted by: Cookie Monster
Would be strange if suddenly nVIDIA had the IQ crown and ATi had the speed crown next gen. Aww The irony.. :)
Yeah, that would be sweet. Then I'd have an excuse to use EVGA again. I do miss certain things about Nvidia and hope that the G80 doesn't disappoint. If the IQ is in their favor and goes unanswered by the competition, I'll gladly buy when the demand is high.
 

Zebo

Elite Member
Jul 29, 2001
39,398
19
81
Originally posted by: josh6079
700 million is pretty steep for that process node. Why, again, does the memory not match up with the rumor of a dual-core design?

Cores might not be identical. It might not be an SLI implementation at all, but shaders off on another core? I don't know, but it looks like two cores on the package, and logically one inch of silicon makes no economic sense.

http://www.anandtech.com/showdoc.aspx?i=2610&p=8


 

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
Originally posted by: josh6079
Originally posted by: Cookie Monster
Would be strange if suddenly nVIDIA had the IQ crown and ATi had the speed crown next gen. Aww The irony.. :)
Yeah, that would be sweet. Then I'd have an excuse to use EVGA again. I do miss certain things about Nvidia and hope that the G80 doesn't disappoint. If the IQ is in their favor and goes unanswered by the competition, I'll gladly buy when the demand is high.

Finally I could update my FX5700. :D Since G80 could potentially bring both substantial performance and IQ, something that both nVIDIA and ATi lacked last gen (in the sense that they failed to bring that "whole" package: if you bought nVIDIA you had to sacrifice this and that, while with ATi you had to sacrifice other things, etc.).

8800GTS looks to be my next card.