SpeedZealot369
Platinum Member
Originally posted by: jiffylube1024
Originally posted by: SpeedZealot369
WTF does "free 4xAA" mean?
It means close to zero performance hit. Like how some people say there is "free" 2X AA on current gen cards.
NICE
Originally posted by: josh6079
Originally posted by: apoppin
are we absolutely certain that G80 shaders are actually "unified"?
😕
i mean really unified like ATi's? this will be 2nd gen 'unified shaders' for ati and first for nvidia.
From what I hear, yes. It's not exactly like ATI's, but from what I gather it has a unified architecture nonetheless.
Originally posted by: apoppin
and yes, i DO read theInq and take it for what it is.... there are some things that are 'scooped' here first and not all of it belongs with the used kitty litter. 😛
I can agree with that. The speculation must start somewhere. However, I see it as a forum's job to come out with the uneducated guesses and a site's job to uncover more confidential, and therefore better supported, info.
Originally posted by: apoppin
i don't link to theinq 'hidden' . . . and even took your advice to note the source in the Topic's summary and provide an alternate source --whenever possible. What more could you ask?
:Q
Thank you for doing that, by the way. I guess I'm just so sick of hearing their BS that I don't understand why anyone links it at all. I'm not attempting to change your posting habits, just attempting to tell people that the only way BS sites like theirs will cease to exist is to ignore them.
Originally posted by: apoppin
...just don't ask me to stop posting what i find interesting . . . whatever the source . . . please ignore what you are not interested in . . . and if it turns out to be crap as many of my threads do . . . it will fall to the bottom quickly.
Fair enough. I can respect that.
Originally posted by: apoppin
...there is nothing worse than bringing more attention to crap by calling it crap
Unless it keeps people from making the same mistake twice. Still, if you want to read the Inquirer, that's not my business. However, if you read the Inquirer and then complain about them, that's a different story.
Originally posted by: apoppin
... and i created 'that' thread because theInq brought up something i was interested in . . . it is a valid reason and certainly better than any flamethread.
While I agree with making topics about things that hold your interest, I can't get past the fact that the source instigating that interest is 99% false. Making threads that concern an interest is one thing; making threads that the Inquirer knew you would have an interest in is another.
Originally posted by: jiffylube1024
Originally posted by: apoppin
are we absolutely certain that G80 shaders are actually "unified"?
😕
i mean really unified like ATi's? this will be 2nd gen 'unified shaders' for ati and first for nvidia.
Let's get down to brass tacks here - it really doesn't matter if Nvidia's shaders are "really really" unified or "sorta" unified if the card performs well.
Especially if you're fighting on the "DX10 won't be a factor for at least another year" side of the fence 😉 .
Originally posted by: josh6079
Originally posted by: Elfear
On a side note about the Inquirer's trustworthiness, check this thread out, post #88 and #90. Kinda funny actually.
That link goes to a thread with only 6 posts.
Originally posted by: Elfear
Back on topic- I don't want to get my hopes up for either G80 or R600 but dang, the rumored specs look sweet.
Which means the refreshes will be even sweeter!
Originally posted by: Elfear
Originally posted by: josh6079
Originally posted by: Elfear
On a side note about the Inquirer's trustworthiness, check this thread out, post #88 and #90. Kinda funny actually.
That link goes to a thread with only 6 posts.
Originally posted by: Elfear
Back on topic- I don't want to get my hopes up for either G80 or R600 but dang, the rumored specs look sweet.
Which means the refreshes will be even sweeter!
Whoops. 😱 Link is fixed now and here it is again (posts #88 and #90). Summary for those who don't care to click through: when ATI engineers get bored, they call the Inq and make up stuff just to see if Fuad will post it.
From what we hear, the new [nvidia] card might not adopt the Microsoft graphics standard of a true Direct 3D 10 engine with a unified shader architecture. Unified shader units can be changed via a command change from a vertex to geometry or pixel shader as the need arises. This allows the graphics processor to put more horsepower where it needs it. We would not put it past Nvidia engineers to keep a fixed pipeline structure. Why not? They have kept the traditional pattern for all of their cards. It was ATI that deviated and fractured the "pipeline" view of rendering; the advent of the Radeon X1000 introduced the threaded view of instructions and higher concentrations of highly programmable pixel shaders, to accomplish tasks beyond the "traditional" approach to image rendering.
One thing is for sure: ATI is keeping the concept of the fragmented pipeline and should have unified and highly programmable shaders. We have heard about large cards - like ones 12" long that will require new system chassis designs to hold them - and massive power requirements to make them run.
Why shouldn't the new cards with ATI's R600 require 200-250 watts apiece and the equivalent of a 500 W power supply to run in CrossFire or G80 in SLI? We are talking about adding more hardware to handle even greater tasks and functions, like geometry shaders that can take existing data and reuse it for subsequent frames. More instructions and functions mean more demand for dedicated silicon. We should assume that there will need to be a whole set of linkages and caches designed to hold the data from previous frames, as well as other memory interfaces. Why wouldn't we assume that this would require more silicon?
Although we are not in favour of pushing more power to our graphics processors and the massive amounts of memory on the card, we are excited to think about the prospects of more in our games. Currently you can run Folding@Home on your graphics processor, and soon we will be able to do effects physics on it too.
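The unified-vs-fixed distinction the quoted article describes can be sketched with a toy model in Python. The unit counts and workload numbers below are invented purely for illustration; they are not actual G80 or R600 figures.

```python
# Toy model: a fixed pipeline dedicates units to one shader type,
# while a unified design reassigns all units to whatever work is
# queued. All numbers here are made up for illustration only.

def fixed_pipeline(vertex_work, pixel_work, vertex_units=8, pixel_units=24):
    """Each pool only runs its own kind of work; the busiest pool
    limits the frame while the other pool sits partly idle."""
    return max(vertex_work / vertex_units, pixel_work / pixel_units)

def unified(vertex_work, pixel_work, total_units=32):
    """Any unit can run vertex, geometry, or pixel work as needed."""
    return (vertex_work + pixel_work) / total_units

# A vertex-heavy frame starves a fixed 8/24 split but not a unified pool:
print(fixed_pipeline(160, 120))  # 20.0 time units
print(unified(160, 120))         # 8.75 time units
```

The same total of 32 units finishes the lopsided frame faster when it can be rebalanced on demand, which is the appeal of the unified approach the article attributes to ATI.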
Originally posted by: josh6079
A chassis designed to help hold the card in place?
:shocked:
I wonder if that chassis will be compatible with a lot of full tower cases.
the new [nvidia] card might not adopt the Microsoft graphics standard of a true Direct 3D 10 engine with a unified shader architecture. Unified shader units can be changed via a command change from a vertex to geometry or pixel shader as the need arises. This allows the graphics processor to put more horsepower where it needs it. We would not put it past Nvidia engineers to keep a fixed pipeline structure.
Originally posted by: apoppin
Originally posted by: josh6079
A chassis designed to help hold the card in place?
:shocked:
I wonder if that chassis will be compatible with a lot of full tower cases.
i guess i bolded the wrong part . . . it looks like G80 is not "unified" 😛
the new [nvidia] card might not adopt the Microsoft graphics standard of a true Direct 3D 10 engine with a unified shader architecture. Unified shader units can be changed via a command change from a vertex to geometry or pixel shader as the need arises. This allows the graphics processor to put more horsepower where it needs it. We would not put it past Nvidia engineers to keep a fixed pipeline structure.