Originally posted by: Drayvn
They've even said that it's easier to code for the PC than for the console, because the console requires masses upon masses of optimisation.
Because it doesn't appear to have more storage space than 1920x1080i (AKA 1920x540) and I can't imagine a Z cache being bigger than the frame buffer.
Next- what on Earth gives you the impression that they would remove the cache from the R500's design?
Huh? What does that have to do with the list of effects you're missing out on with SM 1.x?
Active camo is totally hosed under 2.0. Bungie can say what they want- check it out yourself.
I agree that the X-Box will beat a PC with identical specs. The problem is that it's not very hard to come up with a PC that will beat the X-Box.
Quite clearly you haven't seen the PC running in the lowest quality settings-
That depends on how it was ported. Tying shaders to maps is poor software design because it discourages software reuse and modular design.
If the sky started falling and a decent racing game such as that were ported to the PC, would that be another example of poor coding?
Uh...what? How can a developer be cheating by adjusting their own code? Such a concept is simply nonsensical.
I find that statement highly amusing given your rabid anti shader swap stance- no matter whether it affects IQ or not- that you held not that long ago.
Because it doesn't appear to have more storage space than 1920x1080i (AKA 1920x540) and I can't imagine a Z cache being bigger than the frame buffer.
Huh? What does that have to do with the list of effects you're missing out on with SM 1.x?
The problem is that it's not very hard to come up with a PC that will beat the X-Box.
Tying shaders to maps is poor software design because it discourages software reuse and modular design.
How can a developer be cheating by adjusting their own code?
I was referring to the IHVs doing it. Imagine if they are substituting Halo's shaders already and Gearbox comes out with new ones. One of two things can happen:
(1) Performance drops right back down where it was before.
(2) The application falls over.
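To make those two failure modes concrete, here's a minimal sketch (my own illustration in Python, with made-up names and hashes; it's only meant to show the shape of the problem, not how a real driver is written) of hash-keyed shader substitution, which is how this kind of driver-side swap is usually described:

```python
import hashlib

# Hypothetical driver-side substitution table, keyed on a hash of the shader
# source the IHV profiled against (illustrative keys/values, not real driver data).
HAND_TUNED = {
    "a-profiled-shader-hash": "hand-tuned replacement blob",
}

def compile_shader(source: str) -> str:
    key = hashlib.sha1(source.encode()).hexdigest()
    if key in HAND_TUNED:
        # Fast path: the driver silently swaps in its own code.
        return HAND_TUNED[key]
    # The developer patches the shader -> the hash no longer matches and the
    # substitution evaporates: case (1), performance drops right back down.
    # A looser match (e.g. keying on the executable name alone) would instead
    # apply a stale replacement to changed inputs: case (2), the app falls over.
    return "generically compiled shader"
```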
But there's not enough room for 4xAA at the higher resolutions.
The eDRAM is used to tile out framebuffer data and it is designed for 4x AA at all times.
What does the Camo effect have to do with the degraded IQ that the lower SM paths provide? 2.0 is better than 1.4 which in turn is better than 1.1 which is what the X-Box runs. In addition to better IQ it also appears that fewer passes may be required too.
All I can really say is to check it out for yourself.
Either you're really trying to be difficult or you've completely missed the point of my comments about this issue in the past.
Comparing the performance of one code base with another.
But there's not enough room for 4xAA at the higher resolutions.
Let's try this another way: we've established 1920x540 @ 60 FPS or 1920x1080 @ 30 FPS, right? What about over 100 FPS at 2048x1536 with 4xAA??
Those consoles can't even run at that resolution, much less at playable speeds.
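For a rough sense of scale, here's the raw pixel-throughput arithmetic behind those figures (my own back-of-envelope numbers, ignoring AA samples and overdraw):

```python
# Back-of-envelope shaded-pixel throughput for the modes quoted above.
modes = {
    "1920x540 @ 60 Hz":   1920 * 540 * 60,
    "1920x1080 @ 30 Hz":  1920 * 1080 * 30,
    "2048x1536 @ 100 Hz": 2048 * 1536 * 100,
}
for name, pps in modes.items():
    print(f"{name}: {pps / 1e6:.1f} Mpixels/s")
# 1920x540 @ 60 Hz:   62.2 Mpixels/s
# 1920x1080 @ 30 Hz:  62.2 Mpixels/s
# 2048x1536 @ 100 Hz: 314.6 Mpixels/s
```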
What does the Camo effect have to do with the degraded IQ that the lower SM paths provide?
2.0 is better than 1.4 which in turn is better than 1.1 which is what the X-Box runs. In addition to better IQ it also appears that fewer passes may be required too.
Either you're really trying to be difficult or you've completely missed the point of my comments about this issue in the past.
I'm sorry? 1280*720 with 2xAA has already exceeded the 10 MB cache. If you're planning to allocate to the VRAM (which you should have said before instead of pretending it's all about the cache) then you're at a disadvantage to a single 7800 GTX, much less two of them.
It is tiling the data- there is plenty of room for 4096x3072 w/8x AA if you increase the number of tiles.
How is it hosed? And why are you focusing on this one point while ignoring the other benefits of the higher shader level paths? The SM 2.0 path is superior to the X-Box path no matter how hung up you appear to be on the camo.
Active camo is hosed under 2.0- it only displays properly using a lower level shader path.
Sure they are, but that's because they're the developers and they develop the code. The IHV shouldn't be replacing the code unless the optimizations are generic enough to work without falling over at the slightest changes (see Futuremark's, Carmack's and Newell's comments about making innocuous code changes which caused nVidia's performance to plummet while having no effect on ATi's parts).
What you seem to be missing is that developers are free to mimic the same shader optimizations the IHVs have and emulate their performance increases on their own.
Doom 3 already doesn't apply AF to all surfaces but that's neither here nor there in the grand context of the discussion. The person at fault isn't the developer, it's the IHV for producing fragile and illegitimate optimizations.
If id adopted ATi's 'cheating' AF from D3 into the engine, would you consider that bad?
I'm sorry? 1280*720 with 2xAA has already exceeded the 10 MB cache. If you're planning to allocate to the VRAM (which you should have said before instead of pretending it's all about the cache) then you're at a disadvantage to a single 7800 GTX, much less two of them.
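A quick footprint check backs that up. Assuming the usual 4 bytes of colour plus 4 bytes of Z/stencil per sample (an assumption for illustration, not an official spec), the 10 MB boundary falls like this:

```python
# Rough per-frame eDRAM footprint: (colour + Z/stencil) x AA samples, assuming
# 4 bytes of colour and 4 bytes of depth/stencil per sample.
EDRAM_MB = 10

def footprint_mb(width, height, aa_samples, bytes_per_sample=8):
    return width * height * aa_samples * bytes_per_sample / (1024 ** 2)

for w, h, aa in [(1280, 720, 1), (1280, 720, 2), (1280, 720, 4),
                 (1920, 540, 1), (1920, 1080, 1), (1920, 1080, 4)]:
    mb = footprint_mb(w, h, aa)
    status = "fits" if mb <= EDRAM_MB else f"needs {-(-mb // EDRAM_MB):.0f} tiles"
    print(f"{w}x{h} {aa}xAA: {mb:5.1f} MB -> {status}")
```

By this rough count 720p with 2xAA is indeed past 10 MB, 720p with 4xAA needs the three tiles quoted further down, and 1080p with 4xAA would need seven.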
ATI and Microsoft decided to take advantage of the Z only rendering pass which is the expected performance path independent of tiling. They found a way to use this Z only pass to assist with tiling the screen to optimise the eDRAM utilisation. During the Z only rendering pass the max extents within the screen space of each object are calculated and saved in order to alleviate the necessity for calculating the geometry multiple times. Each command is tagged with a header of which screen tile(s) it will affect.

After the Z only rendering pass the Hierarchical Z Buffer is fully populated for the entire screen which results in the render order not being an issue. When rendering a particular tile the command fetching processor looks at the header that was applied in the Z only rendering pass to see whether its resultant data will fall into the tile it is currently processing; if so it will queue it, if not it will discard it until the next tile is ready to render. This process is repeated for each tile that requires rendering.

Once the first tile has been fully rendered the tile can be resolved (FSAA down-sample) and that tile of the back-buffer data can be written to system RAM; the next tile can begin rendering whilst the first is still being resolved. In essence this process has similarities with tile based deferred rendering, except that it is not deferring for a frame and that the "tile" it is operating on is orders of magnitude larger than most other tilers have utilised before.
There is going to be an increase in cost here as the resultant data of some objects in the command queue may intersect multiple tiles, in which case the geometry will be processed for each tile (note that once it is transformed and set up the pixels that fall outside of the current rendering tile can be clipped and no further processing is required), however with the very large size of the tiles this will, for the most part, reduce the number of commands that span multiple tiles and need to be processed more than once.

Bear in mind that going from one FSAA depth to the next one up in the same resolution shouldn't affect Xenos too much in terms of sample processing as the ROPs and bandwidth are designed to operate with 4x FSAA all the time, so there is no extra cost in terms of sub sample read / write / blends, although there is a small cost in the shaders where extra colour samples will need to be calculated for pixels that cover geometry edges.

So in terms of supporting FSAA the developers really only need to care about whether they wish to utilise this tiling solution or not when deciding what depth of FSAA to use (with consideration to the depth of the buffers they require as well). ATI have been quoted as suggesting that 720p resolutions with 4x FSAA, which would require three tiles, have about 95% of the performance of 2x FSAA.
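A toy sketch of that flow, as I read the description above (the data structures and helper names are purely illustrative, not ATI's implementation):

```python
# Toy paraphrase of predicated tiling: a Z-only prepass tags each draw command
# with the screen tiles its extents touch, then each tile is rendered and
# resolved to system RAM in turn.
from dataclasses import dataclass, field

@dataclass
class DrawCall:
    bounds: tuple                     # (x0, y0, x1, y1) max screen-space extents
    tile_mask: set = field(default_factory=set)

def make_tiles(width, height, band_height):
    """Split the screen into horizontal bands small enough for the eDRAM."""
    return [(0, y, width, min(y + band_height, height))
            for y in range(0, height, band_height)]

def overlaps(a, b):
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def z_prepass(commands, tiles):
    # Z-only pass: hierarchical Z is populated here (omitted), and each command
    # gets a header saying which tiles its extents will fall into.
    for cmd in commands:
        cmd.tile_mask = {i for i, t in enumerate(tiles) if overlaps(cmd.bounds, t)}

def render(commands, tiles):
    for i, tile in enumerate(tiles):
        for cmd in commands:
            if i in cmd.tile_mask:    # header check: does this command hit the tile?
                pass                  # transform/setup, clip to the tile, shade
            # otherwise the command is skipped until a tile it touches comes up
        # resolve(): FSAA down-sample this tile and write it out to system RAM
        # while the next tile starts rendering

calls = [DrawCall((100, 100, 600, 500)), DrawCall((0, 600, 1280, 700))]
tiles = make_tiles(1280, 720, 256)    # three bands, roughly the 720p/4xAA case
z_prepass(calls, tiles)
render(calls, tiles)
print([c.tile_mask for c in calls])   # {0, 1} and {2}: the first call spans two
                                      # tiles, so its geometry is processed twice
```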
How is it hosed?
And why are you focusing on this one point while ignoring the other benefits of the higher shader level paths?
Sure they are, but that's because they're the developers and they develop the code.
The IHV shouldn't be replacing the code unless the optimizations are generic enough to work without falling over at the slightest changes (see Futuremark's, Carmack's and Newell's comments about making innocuous code changes which caused nVidia's performance to plummet while having no effect on ATi's parts).
Doom 3 already doesn't apply AF to all surfaces but that's neither here nor there in the grand context of the discussion.
The person at fault isn't the developer, it's the IHV for producing fragile and illegitimate optimizations.
I saw it. My question is how it's relevant to 1080p?
I know you understand everything that is written here so my only assumption can be that you didn't bother to read the entire page- instead getting to the part where it said the buffer wasn't large enough and then you linked it.
Ah, now I remember this discussion. The ATi part displays a cloaking effect while nVidia displays nothing (i.e. it's invisible). Is that it?
Active camo hoses gameplay- the rest is extremely trivial.
Is detecting a splash screen and inserting static clip planes superior programming? What happens to that "superior programming" when the splash screen is removed and/or another path is followed in the benchmark? Well, you know what happens. Just look at the benchmarks of this "superior code" when it's subjected to actual test conditions instead of a lab rat scenario.
So superior performing code is bad, why?
That's quite true, now.
The bolded part is a lie- ATi is using shader replacement also and taking a sizeable hit when it is removed, with D3 as an example.
In that case so is nVidia given they do exactly the same AF as ATi does. In any case that has nothing to do with shader replacement though.
ATi is introducing artifacts and using a less accurate implementation on top of D3's less than ideal AF.
I can see how that may be the case if you've never performed optimizations on medium/large code projects. Trickery like that just isn't feasible unless you plan on creating a software engineering nightmare. Why do you think nVidia is so proud their 7800 doesn't (allegedly) need to perform any shader substitution?
My stance hasn't changed: if IQ isn't affected I don't care.
I saw it. My question is how it's relevant to 1080p?
Firstly, I'm not sure how nothing equates to better IQ.
Secondly IIRC last time Snowman posted some X-Box screenshots that showed the camo path being identical to the Radeon and in fact it was the nVidia card rendering it wrongly.
Thirdly, I find it highly amusing you think you know better than Bungie's own design team given their own screenshot from their own website (picture 8) shows an active camo effect.
If your opinion is that a camo should be invisible that's fine but you can't use that to claim a shader path is broken just because it doesn't look like what you think it should look like.
Is detecting a splash screen and inserting static clip planes superior programming?
In that case so is nVidia given they do exactly the same AF as ATi does. In any case that has nothing to do with shader replacement though.
Trickery like that just isn't feasible unless you plan on creating a software engineering nightmare.
Why do you think nVidia is so proud their 7800 doesn't (allegedly) need to perform any shader substitution?
Originally posted by: HDTVMan
Would have hoped this thread would be dead by now.
Don't judge until it's done and available. Doom 3 plays on the Xbox1 and it looks good.
Developers in pursuit of cash will make games work on any console.
If enough money could be made they would make doom 3 work on the Atari 2600.
Of course but it's already way outside of the scope of the 10 MB cache which means it's no match for the effective ~70 GB/sec a pair of 7800 GPUs give.
If they were running 1080p all they would have to do is increase the number of tiles they were using.
Your opinion of gameplay is being used to claim a shader path is hosed and that's an invalid inference. Your opinion of what a cloak should look like has nothing to do with the technical merits of the shader path.
It isn't nothing, just very hard to see, and second, as I mentioned, this is about gameplay, not a technology demo.
Yes he did. He even posted a receipt of his Halo rental to prove he rented it. Also multiple people in that thread agreed the ATi card looks like the X-Box.
Snowman didn't post XB shots,
And what happens when patches break it (e.g. Doom 3 1.3 which causes a performance drop in timedemo1 on nVidia cards)?
If an IHV provides a codepath that makes a GAME run faster without reducing IQ I'm all for it.
If you don't like it then turn it off. Where is the option to turn off nVidia's detection?
They are attempting to use shader hardware to mimic look up tables to save themselves bandwidth. It doesn't work that great.
ATi hasn't been doing it for "years". They started with CCC and included the option to turn it off at the same time.
Show us some examples of this 'engineering nightmare'- it must be readily apparent as both ATi and nVidia have been doing it for years.
Of course but it's already way outside of the scope of the 10 MB cache which means it's no match for the effective ~70 GB/sec a pair of 7800 GPUs give.
Yes he did. He even posted a receipt of his Halo rental to prove he rented it. Also multiple people in that thread agreed the ATi card looks like the X-Box.
And what happens when patches break it (e.g. Doom 3 1.3 which causes a performance drop in timedemo1 on nVidia cards)?
If you don't like it then turn it off.
Where is the option to turn off nVidia's detection?
ATi hasn't been doing it for "years".
Originally posted by: blckgrffn
For years? Before 2000? With what? A RAGEPROFURYMAXII or some other BS? Nice point there :roll: Who's the fool there? That has to be the very first "you can eat your foot now" statement that you have made so far...
Did you know that it takes time to check two caches instead of one? Just ponder that for a second...
WTF is up with the Halo discussion. It was a sh!tty port that ran like butt with shader model 2, thank you Microsoft for even including it... Has anybody looked at the game with the most recent driver releases? Since you are all about letting the IHV sub in code, they could look totally different at this point, if nVidia or ATi cared enough to change it. How cool is that? Now you grab active camo, and you are hunter orange! It's ok though, the deer still can't see you!
Getting tired of this lazy coder nonsense as well. The reason that console programmers have to work so much harder is because they are much more limited by power. Most PC games still scale pretty well, HL2, D3, and Guild Wars all come to mind. You could easily play them on pretty much anything, especially if you ran them at 640*480 and turned off any DX9 features. Actually, I bet that you could get a 733 mhz pc with a GF3 to run D3 at 640*480, granted you would need more ram, due to the aforementioned snazzy streaming data access that a console can achieve in its controlled environment. The point is, most PC games run on a remarkable range of hardware, whether you want to believe it or not.
I will continue to stand by my statement that graphics on PC's will rival those of the consoles right when they are released. I can see no reason that it would be any other way. There won't be a magical leap forward by either GPU that will transcend what a SLI'd set of 6800+ could do when backed by a high-end processor, whether that is an fx-57 or even a lowly 3200+. Cost might be a factor, you argue... well, it sounds like to really use the full features of the PS3, I would have to buy a new freakin' HDTV w/HDMI. Wow, that is just so much cheaper :roll:
I played it side by side and I posted pictures for those who couldn't, and the commonly accepted conclusion was that you are wrong. BFG linked the thread, feel free to review it yourself. I can always get together some more pics too if it has to come to that.
Originally posted by: BenSkywalker
Yes he did. He even posted a receipt of his Halo rental to prove he rented it. Also multiple people in that thread agreed the ATi card looks like the X-Box.
None of them played it side by side- the XBox certainly doesn't look like the SM 2.0 path.
I played it side by side and I posted pictures for those who couldn't, and the commonly accepted conclusion was that you are wrong.
For years? Before 2000? With what? A RAGEPROFURYMAXII or some other BS?
Did you know that it takes time to check two caches instead of one? Just ponder that for a second...
The reason that console programmers have to work so much harder is because they are much more limited by power.
I will continue to stand by my statement that graphics on PC's will rival those of the consoles right when they are released.
I can see no reason that it would be any other way. There won't be a magical leap forward by either GPU that will transcend what a SLI'd set of 6800+ could do when backed by a high-end processor, whether that is an fx-57 or even a lowly 3200+.
Cost might be a factor, you argue... well, it sounds like to really use the full features of the PS3, I would have to buy a new freakin' HDTV w/HDMI. Wow, that is just so much cheaper
Originally posted by: BenSkywalker
The reason that console programmers have to work so much harder is because they are much more limited by power.
GT4 on the PS2 looks vastly superior to almost every racer on the PC by a wide margin running on a MIPS 333MHZ processor and a souped up Voodoo1 level rasterizer (actually, it doesn't have as many features as a Voodoo1 even).
I will continue to stand by my statement that graphics on PC's will rival those of the consoles right when they are released.
Keep telling yourself that, by the sounds of it even when the reality is staring you in the face you won't be able to see it.
Cost might be a factor, you argue... well, it sounds like to really use the full features of the PS3, I would have to buy a new freakin' HDTV w/HDMI. Wow, that is just so much cheaper
You can buy an HDTV at Wal-Mart for a few hundred or so- unless you are on welfare I'm sure you can afford one easily and even then - don't you have a monitor for your computer? I bet you can get a significantly larger HDTV than whatever monitor you have for considerably less than you paid for your monitor- forget the rest of your system.
You can buy an HDTV at Wal-Mart for a few hundred or so- unless you are on welfare I'm sure you can afford one easily and even then - don't you have a monitor for your computer? I bet you can get a significantly larger HDTV than whatever monitor you have for considerably less than you paid for your monitor- forget the rest of your system.
Keep telling yourself that, by the sounds of it even when the reality is staring you in the face you won't be able to see it.
Do you understand what tiling is? Do you understand the difference between having a dedicated tile for the backbuffer and having texture RAM stored in a separate place?
Link
This daughter die is connected to the GPU proper via a 32GB/sec interconnect.
ATi was cheating with their very first 3D part- the RagePro (and badly, they were using a technique to detect certain benches and then simply skip every third frame).
GT4 on the PS2 looks vastly superior to almost every racer on the PC by a wide margin running on a MIPS 333MHZ processor and a souped up Voodoo1 level rasterizer (actually, it doesn't have as many features as a Voodoo1 even).
The UE3 engine is getting used on the Xbox360 and PS3 and if you actually read their official site, they have to actually reduce texture size so that the games can run at reasonable frame rates.
And about the HDTV. If it's just a couple of hundred, then it's pretty much absolutely crap quality and it will be absolutely poo. Remember our PCs cost a lot because we get the best money can buy. So you have to choose an HDTV with HDMI which is very good quality and fairly large also.
Well, since I just checked, I would have to get a $700-800 TV from wal-mart
I have a 17" Planar LCD
Ever hooked up a PC via DVI to an HDTV screen?
Look at the last gen of consoles - none of them eclipsed the PC when launched,
They don't have superior hardware, what is the magic bullet going to be?
LOL Quake2 on my PC that I owned in 99 looked better than any shooter made in the first couple years of each platform
I was playing NFS: Porsche Unleashed (the last good racing game made for the PC IMHO) when you were playing GT:2, and there was and is no comparison.
I have Colin McRae Rally for my PC, and it looks awesome at 1600*1200 4xAA/8xAF, easily besting GT4 and rivaling Project Gotham judging by the screen shots.
You are the expert and claimed that there was so much bandwidth to this cache, silly me. You mean to say that it really only has, what, 20GB per second bandwidth to texture memory? How, then, pray tell, is that vastly superior to the 34.4 GB per second on my lowly oc'd x800xl?
Damn, that is less than what my GPU has to its main memory. Man, doesn't that blow? So that is really like, 50 GB per second for combined bandwidth... hmmm... read any GPU articles lately? Memory bandwidth isn't the end-all now as most applications are still very much GPU bound. Why else is a 9800 Pro slower than a vanilla 6600 in most applications?
What, in 1998? Years and years before BFG joined the forums, surely!
What real racing games for the PC's aren't ports?
And if you are going to reply, do me the favor of not straw-manning my entire argument based on one thing.
Wow, it is only 22.4 GB/sec. That's even worse than the 32 GB/sec I thought it was. :shocked:
while it also has the 20GB/sec+ mem bandwidth for texture/shader/vertex reads
Miniscule? A single 7800 has over 38 GB/sec bandwidth while two of them have almost 80 GB/sec effective bandwidth. I'm no math professor but even I can see ~22 < ~38 < ~80.
Why would you be under the impression that it is faster than the miniscule bandwidth of the 7800GTX?
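For reference, these figures come straight from effective transfer rate times bus width; a quick check using the commonly quoted clocks (taken as given here, not independently verified):

```python
# Bandwidth = effective transfer rate x bus width, using the commonly quoted clocks.
def gb_per_sec(mem_clock_mhz, ddr_multiplier, bus_bits):
    return mem_clock_mhz * 1e6 * ddr_multiplier * bus_bits / 8 / 1e9

print(gb_per_sec(700, 2, 128))      # Xbox 360 GDDR3 system RAM: ~22.4 GB/s
print(gb_per_sec(600, 2, 256))      # 7800 GTX: ~38.4 GB/s
print(2 * gb_per_sec(600, 2, 256))  # two GTXs, aggregate: ~76.8 GB/s
```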
If they were running 1080p all they would have to do is increase the number of tiles they were using
At 1920x1080 with 4xAA you are going to be writing a lot out to system RAM. I don't see why you keep pretending this 10 MB cache is a magic rubber band that can stretch to fit anything it likes in there. It can't, and when that happens it goes out to system RAM where the console is inferior to one 7800, much less two of them.
Your major mem access is going straight to the eDRAM which then has to write a small amount to system level RAM 3 times per frame.
Exactly. You need to ponder on the implications of this for a moment.
Then the IHV does it again- where is the major problem? Your game may not run quite as fast for a week...?
If it's that important to you then use a third party tweaker or edit the registry yourself. Editing a single key really isn't that difficult.
I'd rather not blue screen all day, and despite your utterly illogical beliefs about what causes it I am neither running a third party utility nor am I overclocking.
You mean like when nVidia were caught rendering AA on only every alternating frame?
ATi was cheating with their very first 3D part- the RagePro (and badly, they were using a technique to detect certain benches and then simply skip every third frame).
If you're going to compare a game with three animated characters on the screen at once (like a typical console) you need to be looking at something like Dawn, Nalu or Dusk. What console can match those?
DOA- still looks better than most current PC games.