Originally posted by: lopri
My personal bet = Rosie O'Donnell
That's a lot of geometry, I don't know if G80 can handle it.
Originally posted by: Acanthus
And to the people talking about how it "might not" be dual core.
The leaked info says that it isn't "traditional" dual core.
Which means it isn't a pair of twin GPUs that are identical.
There are 2 cores, but they are not the same.
Originally posted by: Dethfrumbelo
Originally posted by: Acanthus
And to the people talking about how it "might not" be dual core.
The leaked info says that it isn't "traditional" dual core.
Which means it isn't a pair of twin GPUs that are identical.
There are 2 cores, but they are not the same.
I guess both the GTX and GTS are using a base 256-bit bus, with an additional 128-bit and 64-bit bus, respectively, dedicated to AA/HDR/post-processing.
Maybe the primary core works off the 256-bit bus, doing all the geometry/texture/shader work, while a smaller secondary GPU handles the AA/HDR/etc. (which may help explain the "free" AA claim).
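For what it's worth, here's a quick back-of-the-envelope sketch of what those rumored bus widths would mean for peak bandwidth. The 900 MHz GDDR3 memory clock is purely an assumption for illustration, not a leaked spec:

# Rough peak-bandwidth math for the rumored G80 bus widths.
# The 900 MHz GDDR3 memory clock is an assumed figure, not a leaked spec.

def bandwidth_gb_s(bus_bits, mem_clock_mhz):
    # bytes per transfer * effective (double data rate) transfers per second
    return (bus_bits / 8) * (mem_clock_mhz * 2e6) / 1e9

for name, base, extra in [("GTX", 256, 128), ("GTS", 256, 64)]:
    total = base + extra
    print(f"{name}: {base}+{extra} = {total}-bit -> "
          f"{bandwidth_gb_s(total, 900):.1f} GB/s")

That works out to roughly 86.4 GB/s for a 384-bit bus and 72.0 GB/s for a 320-bit bus, if the assumed clock is anywhere near right.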
Originally posted by: TheSlamma
I hardly saw a difference when I went from my P4 1.8GHz (400MHz bus) to my P4 3.0GHz (800MHz bus), but I saw a massive difference when I went from my 5900 to my 6800.
Originally posted by: apoppin
The New Graphics - A Tale of Direct X 10
From what we hear, the new [nvidia] card might not adopt the Microsoft graphics standard of a true Direct 3D 10 engine with a unified shader architecture. Unified shader units can be changed via a command change from a vertex to geometry or pixel shader as the need arises. This allows the graphics processor to put more horsepower where it needs it. We would not put it past Nvidia engineers to keep a fixed pipeline structure. Why not? They have kept the traditional pattern for all of their cards. It was ATI that deviated and fractured the "pipeline" view of rendering; the advent of the Radeon X1000 introduced the threaded view of instructions and higher concentrations of highly programmable pixel shaders, to accomplish tasks beyond the "traditional" approach to image rendering.
One thing is for sure; ATI is keeping the concept of the fragmented pipeline and should have unified and highly programmable shaders. We have heard about large cards - like ones 12" long that will require new system chassis designs to hold them - and massive power requirements to make them run.
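If it helps, here's a toy model of the difference the article is describing - fixed pools of vertex/pixel units versus one unified pool that gets reassigned on demand. The unit counts and workload numbers are invented for illustration and don't reflect any actual GPU:

# Toy model: fixed vertex/pixel shader pools vs. one unified pool.
# All unit counts and workload sizes here are made up for illustration.

def fixed_pipeline(work):
    units = {"vertex": 8, "pixel": 24}  # dedicated units per stage
    # Each unit type can only service its own queue, so spare vertex
    # units sit idle even while pixel work piles up.
    return {kind: min(units[kind], amount) for kind, amount in work.items()}

def unified_pipeline(work):
    pool = 32  # one pool of identical units, reassigned per demand
    busy = {}
    for kind, amount in sorted(work.items(), key=lambda kv: kv[1]):
        busy[kind] = min(pool, amount)  # fill small queues, then the big one
        pool -= busy[kind]
    return busy

frame = {"vertex": 2, "pixel": 40}  # a pixel-heavy frame
print("fixed  :", fixed_pipeline(frame))   # {'vertex': 2, 'pixel': 24} - 6 vertex units idle
print("unified:", unified_pipeline(frame)) # {'vertex': 2, 'pixel': 30} - nothing idle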
Originally posted by: Regs
How are these things going to fit inside an ATX case? My current 7800GT is about 1/4" away from an HDD Molex connector.
Originally posted by: Acanthus
Originally posted by: Regs
How are these things going to fit inside an ATX case? My current 7800GT is about 1/4" away from an HDD Molex connector.
The quoted sizes are for engineering samples; retail cards are always smaller.
Originally posted by: Acanthus
Quads don't really exist in the latest architectural designs.
At least not in the traditional sense.
When people say "dual-core GPUs" they are referring to 2 GPUs on one package, not dual GPUs in a single core (at least the people who know wtf they are talking about, anyway).
Originally posted by: Gstanfor
Originally posted by: Acanthus
Quads don't really exist in the latest architectural designs.
At least not in the traditional sense.
When people say "dual-core GPUs" they are referring to 2 GPUs on one package, not dual GPUs in a single core (at least the people who know wtf they are talking about, anyway).
I find that rather hard to believe.
It's a large part of how GPUs achieve the parallelism they do - Single Instruction, Multiple Data (in a quad's case, 1 instruction affects 4 pixels at once).
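Conceptually, something like this - one instruction broadcast across a 2x2 quad in lockstep (the pixel values and the operation are made-up examples):

# A 2x2 pixel quad: one "instruction" applied to all four pixels at once.
# Pixel values and the brighten op are invented, just to show the SIMD idea.
quad = [
    [(10, 20, 30), (12, 22, 32)],
    [(14, 24, 34), (16, 26, 36)],
]

def brighten(rgb, amount=50):  # the single instruction
    return tuple(min(255, c + amount) for c in rgb)

# One logical op, executed across the whole quad in lockstep:
quad = [[brighten(px) for px in row] for row in quad]
print(quad)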
Originally posted by: Acanthus
Quads don't really exist in the latest architectural designs.
At least not in the traditional sense.
When people say "dual-core GPUs" they are referring to 2 GPUs on one package, not dual GPUs in a single core (at least the people who know wtf they are talking about, anyway).
Nvidia bans overclocking for its G80
Originally posted by: lopri
Those numbers make sense to me. Should I update the first post with the info?
Originally posted by: Gstanfor
That's only vendor overclocking being banned, if I'm reading that correctly (is there a correct way to read an INQ article?). I imagine end users will still be able to overclock.
