Originally posted by: coldpower27
That is not exactly what I am saying. I am not saying you judge performance by transistor count alone at all; you misunderstand. I am saying that you shouldn't expect massive increases in performance every generation. We have sustained it so far because die sizes have continued to grow bigger and bigger. You need to account for factors such as what the transistor budget will be spent on. You can't predict performance from transistor count alone. I am also considering the rumored information I have read and trying to make sense of how it all fits together.
DX10 is more versatile from what I can see, not necessarily more efficient.
There was a slide earlier that put the GeForce 7900 Series GPUs at 20x the baseline G965 IGP part, while the 8800 Series will be 27x, so that works out to roughly 35% (27/20 = 1.35).
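A minimal sketch of the arithmetic behind that figure, assuming both multipliers on the slide are measured against the same G965 IGP baseline (the variable names here are just illustrative):

```python
# Quick sanity check of the slide's numbers: both speedups are
# quoted against the same Intel G965 IGP baseline, so the implied
# generational gain is simply their ratio.
g7900_vs_g965 = 20.0  # GeForce 7900 speedup claimed on the slide
g8800_vs_g965 = 27.0  # GeForce 8800 speedup claimed on the slide

gain = g8800_vs_g965 / g7900_vs_g965 - 1.0
print(f"Implied 8800-over-7900 gain: {gain:.0%}")  # -> 35%
```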
It wouldn't be the first time we have had a marginal performance improvement with the focus being on functionality rather than raw performance. As well, that 35% is probably an average; there are likely to be places where you see higher improvements, in the situations where you need it the most.
You also have to keep in mind that Nvidia is going to be adding more than just DX10 functionality with the 700 million transistor budget. We are hearing about a new AA mode called VCAA, which I assume would require some of that budget, not to mention 128-bit HDR and the ability to do AA with it. The physics processing functions would also take some of the budget, and who knows what else is on the G80's feature list that we don't know about.
I wanted it to read like I was agreeing with you. Oh well. To clarify, I do agree that we can't just look at the transistor count and try to judge performance. Also, expecting double the performance between generations is a little outlandish in my view. It isn't until recently, with the advancement of SLI and CrossFire, that consumers have almost demanded that the new GPU be 2x better than the last generation. I would be quite happy with a 35% advancement. The question is: where is that advancement?
I saw those slides too, but I can't recall if they said how they got those numbers. Were they just theoretical numbers, or were they based on a synthetic application? I would take those slides released by Nvidia with a grain of salt. Furthermore, those numbers would have to be from a DX9 environment, would they not?
The main problem with not only G80 but also R600 is that they have to play dual roles. They need to improve DX9 performance while not being a slouch in DX10, all while bringing new features such as VCAA and higher-precision HDR. Like you said, a lot of the transistors would have to be spent just on these features, which leads me to my next question: with all of these features, many of which won't be used until DX10, just how is Nvidia going to increase performance by 35% in a DX9 application, if we are even talking about those increases being in DX9?
