Poor console CPU performance, claim game devs


Hacp

Lifer
Jun 8, 2005
13,923
2
81
Yeah, I know that smaller specialized CPUs are in Intel's roadmap, but I was saying that the IBM Cell processor wouldn't replace the x86 ones.
 

Hacp

Lifer
Jun 8, 2005
13,923
2
81
Originally posted by: Drayvn

They've even said that it's easier to code for the PC than for the console, because the console requires masses upon masses of optimisation.

I think it's a given fact that consoles are harder to code for.
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,003
126
Next- what on Earth gives you the impression that they would remove the cache from the R500's design?
Because it doesn't appear to have more storage space than 1920x1080i (AKA 1920x540) and I can't imagine a Z cache being bigger than the frame buffer.

Active camo is totally hosed under 2.0. Bungie can say what they want- check it out yourself.
Huh? What does that have to do with the list of effects you're missing out on with SM 1.x?

Quite clearly you haven't seen the PC running in the lowest quality settings-
I agree that the X-Box will beat a PC with identical specs. The problem is that it's not very hard to come up with a PC that will beat the X-Box.

if the sky started falling and a decent racing game such as that were ported to the PC would that be another example of poor coding?
That depends on how it was ported. Tying shaders to maps is poor software design because it discourages software reuse and modular design.
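To illustrate the design point (a hypothetical sketch in Python, not anything from Halo's or Gearbox's codebase): if maps only reference shared materials by name, shader setup can be reused across maps instead of being bound to each one.

```python
# Hypothetical sketch (not Gearbox's actual code) of the design point above:
# a shared material registry keeps shader setup out of individual maps,
# so shaders stay reusable instead of being tied to each map.

MATERIALS = {}

def register_material(name, vertex_shader, pixel_shader):
    """Register a reusable shader pair once, independently of any map."""
    MATERIALS[name] = (vertex_shader, pixel_shader)

def load_map(map_data):
    """Maps only reference materials by name; no shader code lives in the map."""
    return [MATERIALS[surface["material"]] for surface in map_data["surfaces"]]

# One registration serves every map that references "wet_rock".
register_material("wet_rock", "wet_rock.vs", "wet_rock.ps")
level = load_map({"surfaces": [{"material": "wet_rock"}]})
print(level)
```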

I find that statement highly amusing given the rabid anti-shader-swap stance (no matter whether it affects IQ or not) that you held not that long ago.
Uh...what? How can a developer be cheating by adjusting their own code? Such a concept is simply nonsensical.

I was referring to the IHVs doing it. Imagine if they are substituting Halo's shaders already and Gearbox comes out with new ones. One of two things can happen:

(1) Performance drops right back down where it was before.
(2) The application falls over.

In either case you'd need a driver update from the IHV to fix it. OTOH if the IHV was performing generic optimizations they would be unaffected.
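A rough sketch of case (1), under the assumption that such substitutions are keyed on a hash or signature of the shipped shader (a hypothetical illustration, not any IHV's documented mechanism): the moment the developer ships modified shaders, the detection misses and the speed-up silently disappears.

```python
# Hypothetical sketch of why app-specific shader substitution is fragile --
# assumed driver behaviour for illustration, not any vendor's documented code.
import hashlib

# Hand-tuned replacements keyed on a hash of the shipped shader bytecode.
REPLACEMENTS = {
    hashlib.sha1(b"original halo shader bytecode").hexdigest(): "hand_tuned_replacement",
}

def generic_compile(bytecode):
    return f"generic compile of {len(bytecode)} bytes"

def compile_shader(bytecode):
    digest = hashlib.sha1(bytecode).hexdigest()
    if digest in REPLACEMENTS:
        return REPLACEMENTS[digest]   # known shader: swap in the tuned version
    # A patched shader no longer matches, so the substitution silently stops
    # applying and performance falls back to the generic path.
    return generic_compile(bytecode)

print(compile_shader(b"original halo shader bytecode"))  # tuned path
print(compile_shader(b"patched halo shader bytecode"))   # generic path
```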
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
Because it doesn't appear to have more storage space than 1920x1080i (AKA 1920x540) and I can't imagine a Z cache being bigger than the frame buffer.

Why can't you imagine the Z cache being larger than the framebuffer storage? The eDRAM is used to tile out framebuffer data and it is designed for 4x AA at all times- I would imagine they would have more space for Z than framebuffer space.

Huh? What does that have to do with the list of effects you're missing out on with SM 1.x?

All I can really say is to check it out for yourself. The 'added' effects of SM2.0 are rarely visible, while active camo is something you are constantly dealing with. Mind you, I can take screenshots that demonstrate quite nicely what SM 2.0 gains you- but using my flashlight inside a level that is already well lit and aiming it at the floor in front of me doesn't quite qualify as a major visual benefit (the game would require HDR to show off the shaders properly without requiring an awkward set of game circumstances).

The problem is that it's not very hard to come up with a PC that will beat the X-Box.

I would hope not, the XBox is in the process of being discontinued :)

Tying shaders to maps is poor software design because it discourages software reuse and modular design.

In the context of a fixed platform with UMA it makes perfect sense though. Modularity simplifies things when you have resources to spare- purpose-built code will almost always run faster (although obviously at increased development cost).

How can a developer be cheating by adjusting their own code?

Comparing the performance of one code base with another.

I was referring to the IHVs doing it. Imagine if they are substituting Halo's shaders already and Gearbox comes out with new ones. One of two things can happen:

(1) Performance drops right back down where it was before.
(2) The application falls over.

(3) The developer implements the same simplified/faster code the IHVs were using for the same end results.
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,003
126
The eDRAM is used to tile out framebuffer data and it is designed for 4x AA at all times
But there's not enough room for 4xAA at the higher resolutions.

Let's try this another way: we've established 1920x540 @ 60 FPS or 1920x1080 @ 30 FPS, right? What about over 100 FPS at 2048x1536 with 4xAA??

Those consoles can't even run at that resolution, much less at playable speeds.

All I can really say is to check it out for yourself.
What does the Camo effect have to do with the degraded IQ that the lower SM paths provide? 2.0 is better than 1.4 which in turn is better than 1.1 which is what the X-Box runs. In addition to better IQ it also appears that less passes may be required too.

Comparing the performance of one code base with another.
Either you're really trying to be difficult or you've completely missed the point of my comments about this issue in the past.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
But there's not enough room for 4xAA at the higher resolutions.

It is tiling the data- there is plenty of room for 4096x3072 w/8x AA if you increase the number of tiles.

Let's try this another way: we've established 1920x540 @ 60 FPS or 1920x1080 @ 30 FPS, right? What about over 100 FPS at 2048x1536 with 4xAA??

Those consoles can't even run at that resolution, much less at playable speeds.

The PS3 can handle an effective 3840x1080 (1920x1080 for two displays simultaneously). As far as framerates go, that is a limitation of TVs, much as there is no consumer monitor that can hit 100Hz running 2048x1536 (I know, I own the fastest there is @ 86Hz), nor are there any consumer graphics cards that can push that high of a refresh rate at that resolution (85Hz max). TVs are limited to 1080p 30 and 1080i 60- the consoles simply deal within those limits.

What does the Camo effect have to do with the degraded IQ that the lower SM paths provide?

Active camo is hosed under 2.0- it only displays properly using a lower level shader path.

2.0 is better than 1.4 which in turn is better than 1.1 which is what the X-Box runs. In addition to better IQ it also appears that less passes may be required too.

The XB runs 1.3 code- but anyway, the additional features they added with the more complex shader routines are minuscule in terms of how noticeable they are, particularly when compared to how badly the active camo is screwed up running that code path.

Either you're really trying to be difficult or you've completely missed the point of my comments about this issue in the past.

I know what your points have been- what you seem to be missing is that developers are free to mimic the same shader optimizations the IHVs have and emulate their performance increases on their own. An IHV has a much larger interest in seeing large titles perform optimally on their hardware so they can spend the R&D funds to produce optimized code that the publishers are unlikely to authorize for an already shipping and stable product. If id adopted ATi's 'cheating' AF from D3 into the engine would you consider that bad?
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,003
126
It is tiling the data- there is plenty of room for 4096x3072 w/8x AA if you increase the number of tiles.
I'm sorry? 1280x720 with 2xAA already exceeds the 10 MB cache. If you're planning to allocate to VRAM (which you should have said before instead of pretending it's all about the cache) then you're at a disadvantage to a single 7800 GTX, much less two of them.
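For reference, the arithmetic behind that claim (a back-of-the-envelope sketch, assuming 4 bytes of colour plus 4 bytes of Z/stencil per sample):

```python
# Back-of-the-envelope eDRAM budget, assuming 4 bytes of colour and 4 bytes of
# Z/stencil per sample -- an illustrative assumption, not an official breakdown.
import math

EDRAM_BYTES = 10 * 1024 * 1024  # the 10 MB daughter-die buffer

def backbuffer_bytes(width, height, aa_samples):
    return width * height * aa_samples * (4 + 4)

def tiles_needed(width, height, aa_samples):
    return math.ceil(backbuffer_bytes(width, height, aa_samples) / EDRAM_BYTES)

for w, h, aa in [(1920, 540, 1), (1280, 720, 2), (1280, 720, 4), (1920, 1080, 4)]:
    mb = backbuffer_bytes(w, h, aa) / 2**20
    print(f"{w}x{h} @ {aa}xAA: {mb:5.1f} MB -> {tiles_needed(w, h, aa)} tile(s)")
```

On those assumptions 720p with 2xAA needs roughly 14 MB (more than one tile) and 720p with 4xAA about 28 MB (three tiles).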

Active camo is hosed under 2.0- it only displays properly using a lower level shader path.
How is it hosed? And why are you focusing on this one point while ignoring the other benefits of the higher shader level paths? The SM 2.0 path is superior to the X-Box path no matter how hung up you appear to be with the camo.

what you seem to be missing is that developers are free to mimic the same shader optimizations the IHVs have and emulate their performance increases on their own.
Sure they are, but that's because they're the developers and they develop the code. The IHV shouldn't be replacing the code unless the optimizations are generic enough to work without falling over at the slightest changes (see Futuremark's, Carmack's and Newell's comments about making innocuous code changes which caused nVidia's performance to plummet while having no effect on ATi's parts).

If id adopted ATi's 'cheating' AF from D3 into the engine would you consider that bad?
Doom 3 already doesn't apply AF to all surfaces but that's neither here nor there in the grand context of the discussion. The person at fault isn't the developer, it's the IHV for producing fragile and illegitimate optimizations.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
I'm sorry? 1280x720 with 2xAA already exceeds the 10 MB cache. If you're planning to allocate to VRAM (which you should have said before instead of pretending it's all about the cache) then you're at a disadvantage to a single 7800 GTX, much less two of them.

Try reading your links-

ATI and Microsoft decided to take advantage of the Z only rendering pass which is the expected performance path independent of tiling. They found a way to use this Z only pass to assist with tiling the screen to optimise the eDRAM utilisation. During the Z only rendering pass the max extents within the screen space of each object is calculated and saved in order to alleviate the necessity for calculation of the geometry multiple times. Each command is tagged with a header of which screen tile(s) it will affect. After the Z only rendering pass the Hierarchical Z Buffer is fully populated for the entire screen which results in the render order not being an issue. When rendering a particular tile the command fetching processor looks at the header that was applied in the Z only rendering pass to see whether its resultant data will fall into the tile it is currently processing and if so it will queue it, if not it will discard it until the next tile is ready to render. This process is repeated for each tile that requires rendering. Once the first tile has been fully rendered the tile can be resolved (FSAA down-sample) and that tile of the back-buffer data can be written to system RAM; the next tile can begin rendering whilst the first is still being resolved. In essence this process has similarities with tile based deferred rendering, except that it is not deferring for a frame and that the "tile" it is operating on is order of magnitudes larger than most other tilers have utilised before.

There is going to be an increase in cost here as the resultant data of some objects in the command queue may intersect multiple tiles, in which case the geometry will be processed for each tile (note that once it is transformed and setup the pixels that fall outside of the current rendering tile can be clipped and no further processing is required), however with the very large size of the tiles this will, for the most part, reduce the number of commands that span multiple tiles and need to be processed more than once. Bear in mind that going from one FSAA depth to the next one up in the same resolution shouldn't affect Xenos too much in terms of sample processing as the ROP's and bandwidth are designed to operate with 4x FSAA all the time, so there is no extra cost in terms of sub sample read / write / blends, although there is a small cost in the shaders where extra colour samples will need to be calculated for pixels that cover geometry edges. So in terms of supporting FSAA the developers really only need to care about whether they wish to utilise this tiling solution or not when deciding what depth of FSAA to use (with consideration to the depth of the buffers they require as well). ATI have been quoted as suggesting that 720p resolutions with 4x FSAA, which would require three tiles, has about 95% of the performance of 2x FSAA.
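Expressed as a rough sketch (a toy illustration of the flow the article describes, not real Xenos or XDK code), the predicated-tiling idea looks something like this:

```python
# Toy illustration of the predicated-tiling flow described above -- not real
# Xenos/XDK code; geometry is reduced to screen-space rectangles for brevity.

# Three 10 MB tiles cover 1280x720 with 4xAA, per the figures above.
TILES = [(0, 0, 1280, 240), (0, 240, 1280, 480), (0, 480, 1280, 720)]

def overlaps(extents, tile):
    x0, y0, x1, y1 = extents
    tx0, ty0, tx1, ty1 = tile
    return x0 < tx1 and x1 > tx0 and y0 < ty1 and y1 > ty0

def render_frame(commands):
    # 1. Z-only prepass: compute each object's screen extents once and tag the
    #    command with the tiles it touches (the "header" mentioned above).
    for cmd in commands:
        cmd["tile_mask"] = {i for i, t in enumerate(TILES)
                            if overlaps(cmd["extents"], t)}
    # 2. Per-tile pass: replay only the commands whose mask hits this tile,
    #    then resolve (FSAA down-sample) the finished tile out to system RAM.
    resolved = []
    for i, tile in enumerate(TILES):
        drawn = [c["name"] for c in commands if i in c["tile_mask"]]
        resolved.append((tile, drawn))  # stands in for the resolve-to-RAM step
    return resolved

# An object spanning two tiles is processed twice, as the article notes.
cmds = [{"name": "skybox", "extents": (0, 0, 1280, 720)},
        {"name": "crate", "extents": (100, 200, 300, 460)}]
for tile, drawn in render_frame(cmds):
    print(tile, drawn)
```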

I know you understand everything that is written here so my only assumption can be that you didn't bother to read the entire page- instead getting to the part where it said the buffer wasn't large enough and then you linked it.

How is it hosed?

It has the effect of highlighting the person using it instead of cloaking them.

And why are you focusing on this one point while ignoring the other benefits of the higher shader level paths?

I play games, not technology demos. Active camo hoses gameplay- the rest is extremely trivial.

Sure they are, but that's because they're the developers and they develop the code.

So superior performing code is bad, why?

The IHV shouldn't be replacing the code unless the optimizations are generic enough to work without falling over at the slightest changes (see Futuremark's, Carmack's and Newell's comments about making innocuous code changes which caused nVidia's performance to plummet while having no effect on ATi's parts).

The bolded part is a lie- ATi is using shader replacement also and taking a sizeable hit when it is removed running D3 as an example.

Doom 3 already doesn't apply AF to all surfaces but that's neither here nor there in the grand context of the discussion.

ATi is introducing artifacts and using a less accurate implementation on top of D3's less than ideal AF.

The person at fault isn't the developer, it's the IHV for producing fragile and illegitimate optimizations.

You mean it's ATi's fault for 'cheating'. This is simply applying your definition of the word equally to nV and ATi. My stance hasn't changed, if IQ isn't affected I don't care. ATi's hacks reduce IQ for improved performance- that I have an issue with (the same as I did when nV's hacks reduced IQ).
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,003
126
I know you understand everything that is written here so my only assumption can be that you didn't bother to read the entire page- instead getting to the part where it said the buffer wasn't large enough and then you linked it.
I saw it. My question is how it's relevant to 1080p?

Active camo hoses gameplay- the rest is extremely trivial
Ah, now I remember this discussion. The ATi part displays a cloaking effect while nVidia displays nothing (i.e. it's invisible). Is that it?

Firstly, I'm not sure how nothing equates to better IQ.

Secondly IIRC last time Snowman posted some X-Box screenshots that showed the camo path being identical to the Radeon and in fact it was the nVidia card rendering it wrongly. You said you were going to provide your own X-Box snaps to disprove him and then the thread ended there.

Thirdly, I find it highly amusing you think you know better than Bungie's own design team given their own screenshot from their own website (picture 8) shows an active camo effect.

If your opinion is that a camo should be invisible that's fine but you can't use that to claim a shader path is broken just because it doesn't look like what you think it should look like.

So superior performing code is bad, why?
Is detecting a splash screen and inserting static clip planes superior programming? What happens to that "superior programming" when the splash screen is removed and/or another path is followed in the benchmark? Well, you know what happens. Just look at the benchmarks of this "superior code" when it's subjected to actual test conditions instead of a lab rat scenario.

The bolded part is a lie- ATi is using shader replacement also and taking a sizeable hit when it is removed running D3 as an example.
That's quite true, now.

ATi is introducing artifacts and using a less accurate implementation on top of D3's less than ideal AF.
In that case so is nVidia given they do exactly the same AF as ATi does. In any case that has nothing to do with shader replacement though.

My stance hasn't changed, if IQ isn't affected I don't care
I can see how that may be the case if you've never performed optimizations on medium/large code projects. Trickery like that just isn't feasible unless you plan on creating a software engineering nightmare. Why do you think nVidia is so proud their 7800 doesn't (allegedly) need to perform any shader substitution?
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
I saw it. My question is how it's relevant to 1080p?

If they were running 1080p all they would have to do is increase the number of tiles they were using- but the XB360 only supports 720p 60 and 1080i 60, as 1080p sets are still very rare (short-sighted on MS's part of course, but that is how it is).

Firstly, I'm not sure how nothing equates to better IQ.

It isn't nothing, just very hard to see, and second, as I mentioned, this is about gameplay, not a technology demo.

Secondly IIRC last time Snowman posted some X-Box screenshots that showed the camo path being identical to the Radeon and in fact it was the nVidia card rendering it wrongly.

Snowman didn't post XB shots, and I don't have a vid capture card (though I suppose I could use my digital camera to do it)- there were some vid clips posted by someone else that showed the differing rendering paths on the PC, though (not the XB).

Thirdly, I find it highly amusing you think you know better than Bungie's own design team given their own screenshot from their own website (picture 8) shows an active camo effect.

The render? That isn't a screenshot.

If your opinion is that a camo should be invisible that's fine but you can't use that to claim a shader path is broken just because it doesn't look like what you think it should look like.

It's not invisible- just hard to see versus the extreme highlighting of the SM 2.0 path.

Is detecting a splash screen and inserting static clip planes superior programming?

You still talking about 3DM? You'll never find me defending anything they did with that- and feel free to look. I'm talking about GAMES not technology demos. If an IHV provides a codepath that makes a GAME run faster without reducing IQ I'm all for it.

In that case so is nVidia given they do exactly the same AF as ATi does. In any case that has nothing to do with shader replacement though.

They are attempting to use shader hardware to mimic look up tables to save themselves bandwidth. It doesn't work that great.

Trickery like that just isn't feasible unless you plan on creating a software engineering nightmare.

Show us some examples of this 'engineering nightmare'- it must be readily apparent as both ATi and nVidia have been doing it for years.

Why do you think nVidia is so proud their 7800 doesn't (allegedly) need to perform any shader subsitution?

It's the first part they have with shader performance that is merely very slow (versus disgustingly inadequate).
 

HDTVMan

Banned
Apr 28, 2005
1,534
0
0
I would have hoped this thread would be dead by now.

Don't judge until it's done and available. Doom 3 plays on the Xbox1 and it looks good.

Developers in pursuit of cash will make games work on any console.

If enough money could be made they would make Doom 3 work on the Atari 2600.
 

Avalon

Diamond Member
Jul 16, 2001
7,571
178
106
Originally posted by: HDTVMan
I would have hoped this thread would be dead by now.

Don't judge until it's done and available. Doom 3 plays on the Xbox1 and it looks good.

Developers in pursuit of cash will make games work on any console.

If enough money could be made they would make Doom 3 work on the Atari 2600.


I'd buy it :D
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,003
126
If they were running 1080p all they would have to do is increase the number of tiles they were using
Of course but it's already way outside of the scope of the 10 MB cache which means it's no match for the effective ~70 GB/sec a pair of 7800 GPUs give.

It isn't nothing, just very hard to see, and second, as I mentioned, this is about gameplay, not a technology demo.
Your opinion of gameplay is being used to claim a shader path is hosed and that's an invalid inference. Your opinion of what a cloak should look like has nothing to do with the technical merits of the shader path.

Snowman didn't post XB shots,
Yes he did. He even posted a receipt of his Halo rental to prove he rented it. Also multiple people in that thread agreed the ATi card looks like the X-Box.

If an IHV provides a codepath that makes a GAME run faster without reducing IQ I'm all for it.
And what happens when patches break it (e.g. Doom 3 1.3 which causes a performance drop in timedemo1 on nVidia cards)?

They are attempting to use shader hardware to mimic look up tables to save themselves bandwidth. It doesn't work that great.
If you don't like it then turn it off. Where is the option to turn off nVidia's detection?

Show us some examples of this 'engineering nightmare'- it must be readily apparent as both ATi and nVidia have been doing it for years.
ATi hasn't been doing it for "years". They started with CCC and included the option to turn it off at the same time.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
Of course but it's already way outside of the scope of the 10 MB cache which means it's no match for the effective ~70 GB/sec a pair of 7800 GPUs give.

You joking? It is tiling the data to 250GB/sec RAM while it also has 20GB/sec+ of memory bandwidth for texture/shader/vertex reads- why would you be under the impression that the minuscule bandwidth of the 7800 GTX is faster? Your major memory access is going straight to the eDRAM, which then has to write a small amount to system-level RAM 3 times per frame.... I don't see how you can honestly think that GTXs running in SLI, OC'd heavily, would have close to the effective bandwidth of the R500.

Yes he did. He even posted a receipt of his Halo rental to prove he rented it. Also multiple people in that thread agreed the ATi card looks like the X-Box.

None of them played it side by side- the XBox certainly doesn't look like the SM 2.0 path.

And what happens when patches break it (e.g. Doom 3 1.3 which causes a performance drop in timedemo1 on nVidia cards)?

Then the IHV does it again- where is the major problem? Your game may not run quite as fast for a week...?

If you don't like it then turn it off.

I'd rather not blue screen all day, and despite your utterly illogical beliefs about what causes it, I am neither running a third-party utility nor am I overclocking.

Where is the option to turn off nVidia's detection?

Which games are showing problems....? I couldn't care less if ATi subs every line of code if it doesn't impact IQ. Sorry, I'm not a tech demo whore- I want my games to run as well as possible.

ATi hasn't been doing it for "years".

ATi was doing it years before you registered for these forums. You are an absolute fool if you think otherwise.
 

blckgrffn

Diamond Member
May 1, 2003
9,686
4,345
136
www.teamjuchems.com
For years? Before 2000? With what? A RAGEPROFURYMAXII or some other BS? Nice point there :roll: Who's the fool there? That has to be the very first "you can eat your foot now" statement that you have made so far...

Did you know that it takes time to check two caches instead of one? Just ponder that for a second...

WTF is up with the Halo discussion? It was a sh!tty port that ran like butt with Shader Model 2; thank you, Microsoft, for even including it... Has anybody looked at the game with the most recent driver releases? Since you are all about letting the IHV sub in code, it could look totally different at this point, if nVidia or ATi cared enough to change it. How cool is that? Now you grab active camo, and you are hunter orange! It's ok though, the deer still can't see you! :p

Getting tired of this lazy coder nonsense as well. The reason that console programmers have to work so much harder is because they are much more limited by power. Most PC games still scale pretty well; HL2, D3, and Guild Wars all come to mind. You could easily play them on pretty much anything, especially if you ran them at 640x480 and turned off any DX9 features. Actually, I bet that you could get a 733 MHz PC with a GF3 to run D3 at 640x480, granted you would need more RAM, due to the aforementioned snazzy streaming data access that a console can achieve in its controlled environment. The point is, most PC games run on a remarkable range of hardware, whether you want to believe it or not.

I will continue to stand by my statement that graphics on PC's will rival those of the consoles right when they are released. I can see no reason that it would be any other way. There won't be a magical leap forward by either GPU that will transcend what a SLI'd set of 6800+ could do when backed by a high-end processor, whether that is an FX-57 or even a lowly 3200+. Cost might be a factor, you argue... well, it sounds like to really use the full features of the PS3, I would have to buy a new freakin' HDTV w/HDMI. Wow, that is just so much cheaper :roll:
 

Drayvn

Golden Member
Jun 23, 2004
1,008
0
0
Originally posted by: blckgrffn
Getting tired of this lazy coder nonsense as well. The reason that console programmers have to work so much harder is because they are much more limited by power. Most PC games still scale pretty well; HL2, D3, and Guild Wars all come to mind. You could easily play them on pretty much anything, especially if you ran them at 640x480 and turned off any DX9 features. Actually, I bet that you could get a 733 MHz PC with a GF3 to run D3 at 640x480, granted you would need more RAM, due to the aforementioned snazzy streaming data access that a console can achieve in its controlled environment. The point is, most PC games run on a remarkable range of hardware, whether you want to believe it or not.

Mate, they were able to run Doom 3 on SLI'd Voodoo 5500s (I think it was those?)

Granted there was no lighting, pretty much nothing in the way of textures and all that, but it still played reasonably.

 

kylebisme

Diamond Member
Mar 25, 2000
9,396
0
0
Originally posted by: BenSkywalker
Yes he did. He even posted a receipt of his Halo rental to prove he rented it. Also multiple people in that thread agreed the ATi card looks like the X-Box.

None of them played it side by side- the XBox certainly doesn't look like the SM 2.0 path.
I played it side by side and I posted pictures for those who couldn't, and the commonly accepted conclusion was that you are wrong. BFG linked the thread; feel free to review it yourself. I can always get together some more pics too if it has to come to that.

Oh, and BFG, if you look back in that thread you will see it wasn't just the receipt for the Halo rental but rather the original receipt for my Xbox and the games I bought at launch, as Ben called me a liar for simply mentioning that I bought the game at launch.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
I played it side by side and I posted pictures for those who couldn't, and the commonly accepted conclusion was that you are wrong.

Here is GameSpot's XBox screenshot with active camo on. It doesn't look anything at all like the day-glow effect you claim your "XBox screenshots" show. I figure linking to a neutral 3rd party eliminates dishonesty and doctoring of images.

For years? Before 2000? With what? A RAGEPROFURYMAXII or some other BS?

ATi was cheating with their very first 3D part- the RagePro (and badly, they were using a technique to detect certain benches and then simply skip every third frame).

Did you know that it takes time to check two caches instead of one? Just ponder that for a second...

Do you understand what tiling is? Do you understand the difference between having a dedicated tile for the backbuffer and having texture RAM stored in a separate place?

The reason that console programmers have to work so much harder is because they are much more limited by power.

GT4 on the PS2 looks vastly superior to almost every racer on the PC, running on a 333MHz MIPS processor and a souped-up Voodoo1-level rasterizer (actually, it doesn't even have as many features as a Voodoo1).

I will continue to stand by my statement that graphics on PC's will rival those of the consoles right when they are released.

Keep telling yourself that, by the sounds of it even when the reality is staring you in the face you won't be able to see it.

I can see no reason that it would be any other way. There won't be a magical leap forward by either GPU that will transcend what a SLI'd set of 6800+ could do when backed by a high-end processor, whether that is an FX-57 or even a lowly 3200+.

You can't grasp LCD (lowest common denominator) then.

Cost might be a factor, you argue... well, it sounds like to really use the full features of the PS3, I would have to buy a new freakin' HDTV w/HDMI. Wow, that is just so much cheaper

You can buy an HDTV at Wal-Mart for a few hundred or so- unless you are on welfare I'm sure you can afford one easily, and even then - don't you have a monitor for your computer? I bet you can get a significantly larger HDTV than whatever monitor you have for considerably less than you paid for your monitor- forget the rest of your system.
 

Drayvn

Golden Member
Jun 23, 2004
1,008
0
0
Originally posted by: BenSkywalker

The reason that console programmers have to work so much harder is because they are much more limited by power.

GT4 on the PS2 looks vastly superior to almost every racer on the PC, running on a 333MHz MIPS processor and a souped-up Voodoo1-level rasterizer (actually, it doesn't even have as many features as a Voodoo1).

I will continue to stand by my statement that graphics on PC's will rival those of the consoles right when they are released.

Keep telling yourself that, by the sounds of it even when the reality is staring you in the face you won't be able to see it.

Cost might be a factor, you argue... well, it sounds like to really use the full features of the PS3, I would have to buy a new freakin' HDTV w/HDMI. Wow, that is just so much cheaper

You can buy an HDTV at Wal-Mart for a few hundred or so- unless you are on welfare I'm sure you can afford one easily, and even then - don't you have a monitor for your computer? I bet you can get a significantly larger HDTV than whatever monitor you have for considerably less than you paid for your monitor- forget the rest of your system.

So Ben, I suppose PC game devs have 5 or 6 years on the same exact engine to work with? Because that's how long it's taken the devs to make GT4 look that good. I doubt you see many racing devs on the PC take that long to make one game, and not many of the devs for PCs bring out 4 of them for minor improvements; that's what modding, patching and expansions are for.

So you think the consoles' graphics will be far superior to PCs' when they get released. So tell me this then: the UE3 engine is getting used on the Xbox360 and PS3, and if you actually read their official site, they have to actually reduce texture size so that the games can run at reasonable frame rates, while on the PC it can run at its fullest. So, er, how, if that is happening, would console games look far superior?

And about the HDTV: if it's just a couple of hundred, then it's pretty much absolutely crap quality and it will be absolutely poo. Remember, our PCs cost a lot because we get the best money can buy. So you have to choose an HDTV with HDMI which is very good quality and fairly large also.

 

blckgrffn

Diamond Member
May 1, 2003
9,686
4,345
136
www.teamjuchems.com
Thanks for that link, BFG! I had no idea that this quibbling went so far back, so many entrenched people discussing... :)

You can buy an HDTV at Wal-Mart for a few hundred or so- unless you are on welfare I'm sure you can afford one easily, and even then - don't you have a monitor for your computer? I bet you can get a significantly larger HDTV than whatever monitor you have for considerably less than you paid for your monitor- forget the rest of your system.

Well, since I just checked, I would have to get a $700-800 TV from wal-mart that has no HDMI connection to get a bigger monitor than what I have. Not to mention how horrible the dot pitch is on those, etc. etc. I have a 17" Planar LCD and a 21" Sony Trinitron Professional. So yeah, if you want a big TV, you pay big bucks, and they look crappy. Ever hooked up a PC via DVI to an HDTV screen? If not, you don't get to talk about that anymore. It doesn't look great, in other words, even with nVidia's newest drivers having great overscan flexibility. Will these games look pretty good? Probably. We don't know yet. Painkiller looked good but fuzzy on a 52" TV, that much is certain.


Keep telling yourself that, by the sounds of it even when the reality is staring you in the face you won't be able to see it.

Look at the last gen of consoles - none of them eclipsed the PC when launched, why is this going to be different? They don't have superior hardware, what is the magic bullet going to be? SMP? LOL Quake2 on my PC that I owned in 99 looked better than any shooter made in the first couple years of each platform, and ran better than anything made in the second half of their lives. I was playing NFS: Porsche Unleashed (the last good racing game made for the PC IMHO) when you were playing GT2, and there was and is no comparison. I have Colin McRae Rally for my PC, and it looks awesome at 1600*1200 4xAA/8xAF, easily besting GT4 and rivaling Project Gotham, going by the screenshots. Especially as it is running at more FPS than 20-30. Too bad it doesn't detect the throttle correctly on my Saitek, or I might actually play it more :)

Do you understand what tiling is? Do you understand the difference between having a dedicated tile for the backbuffer and having texture RAM stored in a separate place?

Oh, oops, my bad. You are the expert and claimed that there was so much bandwidth to this cache, silly me. You mean to say that it really only has, what, 20GB per second bandwidth to texture memory? How, then, pray tell, is that vastly superior to the 34.4 GBs per second on my lowly oc'd x800xl? I don't understand? You mean not everything the GPU does goes over the mammoth 250 GB per second bandwidth that they are advertising for the GPU? Oh, shucks, I must have been drawn in by their marketing or something to believe that it is such a huge deal.... They had better be damn sure that is the golden bullet, I can only imagine how much that silicon costs them. Oh, what is this, the GPU doesn't really have 250GB/s of bandwidth to the cache?

This daughter die is connected to the GPU proper via a 32GB/sec interconnect.
Link

Damn, that is less than what my GPU has to its main memory. Man, doesn't that blow? So that is really like, 50 GB's per second for combined bandwidth... hmmm... read any GPU articles lately? Memory bandwidth isn't the end all now as most applications are still very much GPU bound. Why else is a 9800 Pro slower than a vanilla 6600 in most applications?

ATi was cheating with their very first 3D part- the RagePro (and badly, they were using a technique to detect certain benches and then simply skip every third frame).

What, in 1998? Years and years before BFG joined the forums, surely!

GT4 on the PS2 looks vastly superior to almost every racer on the PC, running on a 333MHz MIPS processor and a souped-up Voodoo1-level rasterizer (actually, it doesn't even have as many features as a Voodoo1).

What are you saying here? What real racing games for the PC's aren't ports? Not many buy racing games for the PC, pure and simple. Now, if they had any decent-looking FPS, that would be something different entirely. I played Halo 2 in HD on the mentioned 52" screen, and it looked like crap. On a 27", the same. Saying that a console racing game looks better than its non-existent competition on the PC doesn't do much for me. Just think how freaking awesome it would look on the PC! No market, no product... I would have bought it, but one copy doesn't make it worthwhile...

I think that you have effectively proven to me through your arguments that the next gen of consoles holds little appeal for me. Sad... Maybe the Revolution, but not the PS3 or Xbox 360; then again, they held no power over me last gen either, basically because I already had a better PC. Again, this looks to be the case.


And if you are going to reply, do me the favor of not straw-manning my entire argument based on one thing. It's annoying, but then again, that is probably what you are going for at this point.


Nat


 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
The UE3 engine is getting used on the Xbox360 and PS3, and if you actually read their official site, they have to actually reduce texture size so that the games can run at reasonable frame rates.

Or you could check out a title like PGR3, where they are packing several GBs' worth of texture data- of course we are comparing an inept console developer (Epic) to one that has proven they are at least reasonable at writing decent code (certainly no Polyphony).

And about the HDTV: if it's just a couple of hundred, then it's pretty much absolutely crap quality and it will be absolutely poo. Remember, our PCs cost a lot because we get the best money can buy. So you have to choose an HDTV with HDMI which is very good quality and fairly large also.

Get real. Look how many people on these forums are running LCDs which aren't worth wiping your @ss with when it comes to gaming- people not only don't care if they are getting the best money can buy, they will spend a lot to buy absolute junk and then boast about it.

Well, since I just checked, I would have to get a $700-800 TV from wal-mart

$647

I have a 17" Planar LCD

And you are criticizing the WalMart TV? Is that where you got your LCD?

Ever hooked up a PC via DVI to an HDTV screen?

Yes, most PCs have extremely poor TV out- nVidia boards in particular (ATi is better, Matrox considerably better than either of them).

Look at the last gen of consoles - none of them eclipsed the PC when launched,

Hard to believe you are that ignorant.

They don't have superior hardware, what is the magic bullet going to be?

Fixed platform.

LOL Quake2 on my PC that I owned in 99 looked better than any shooter made in the first couple years of each platform

DOA- still looks better than most current PC games. Quake2 is PS1/N64 level visuals.

I was playing NFS: Porsche Unleashed (the last good racing game made for the PC IMHO) when you were playing GT2, and there was and is no comparison.

NFS: Porsche Unleashed was a port to the PC from the consoles.

I have Colin McRae Rally for my PC, and it looks awesome at 1600*1200 4xAA/8xAF, easily besting GT4 and rivaling Project Gotham, going by the screenshots.

CMR is also a port to the PC from the consoles (are you figuring this out yet?). GT4 is easily its superior in visuals and it runs @ 60FPS (your ignorance is all over this post of yours).

You are the expert and claimed that there was so much bandwidth to this cache, silly me. You mean to say that it really only has, what, 20GB per second bandwidth to texture memory? How, then, praytell, is that vastly superior to the 34.4 GBs per second on my lowly oc'd x800xl?

The overwhelming majority of memory access that a GPU has is for the back buffer- anyone with a minuscule knowledge of vid cards knows this. Your 34.4GB/s is largely consumed by FB memory writes (and reads, depending on the rendering technique).
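To put a rough number on that (an illustrative estimate; the overdraw, AA level and frame rate are assumptions for the example, and compression is ignored):

```python
# Rough illustration of how framebuffer traffic adds up -- overdraw, AA level
# and frame rate here are assumptions for the example; compression is ignored.

def fb_traffic_gb_per_s(width, height, aa_samples, overdraw, fps,
                        bytes_color=4, bytes_z=4):
    # Per covered sample: a Z read, a Z write and a colour write.
    per_sample = 2 * bytes_z + bytes_color
    samples = width * height * aa_samples * overdraw
    return samples * per_sample * fps / 1e9

# e.g. 1600x1200, 4xAA, overdraw of 3, 60 fps:
print(f"{fb_traffic_gb_per_s(1600, 1200, 4, 3, 60):.1f} GB/s of raw FB traffic")
```

With those made-up numbers the frame buffer alone accounts for a large share of a ~34 GB/s budget, which is the point being argued here.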

Damn, that is less than what my GPU has to its main memory. Man, doesn't that blow? So that is really like, 50 GB's per second for combined bandwidth... hmmm... read any GPU articles lately? Memory bandwidth isn't the end all now as most applications are still very much GPU bound. Why else is a 9800 Pro slower than a vanilla 6600 in most applications?

You really should start at the basics and learn a bit more about GPU architecture. The daughter die isn't just the eDRAM- it has additional logic- the FB writes aren't handled until after the daughter die handles additional calculations.

What, in 1998? Years and years before BFG joined the forums, surely!

RagePro was released in 1997- BFG joined these forums in 2000- that would be three years. Is English not your first language?

What real racing games for the PC's aren't ports?

GTR I listened to the fanboys run off at the mouth about- GPL is another.

And if you are going to reply, do me the favor of not straw-manning my entire argument based on one thing.

Nothing you said in your post is accurate, logical or very sensible- you didn't leave much worth responding to as you don't have a valid point in the entirety of your post.
 

Hacp

Lifer
Jun 8, 2005
13,923
2
81
OMFG where are the mods? This should be locked or moved to highly technical!!!!!!!
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,003
126
while it also has 20GB/sec+ of memory bandwidth for texture/shader/vertex reads
Wow, it is only 22.4 GB/sec. That's even worse than the 32 GB/sec I thought it was. :shocked:

why would you be under the impression that the minuscule bandwidth of the 7800 GTX is faster?
Minuscule? A single 7800 has over 38 GB/sec of bandwidth while two of them have almost 80 GB/sec effective bandwidth. I'm no math professor but even I can see ~22 < ~38 < ~80.

If they were running 1080p all they would have to do is increase the number of tiles they were using
Your major memory access is going straight to the eDRAM, which then has to write a small amount to system-level RAM 3 times per frame
At 1920x1080 with 4xAA you are going to be writing a lot out to system RAM. I don't see why you keep pretending this 10 MB cache is a magic rubber band that can stretch to fit anything it likes in there. It can't, and when that happens it goes out to system RAM, where the console is inferior to one 7800, much less two of them.

Then the IHV does it again- where is the major problem? Your game may not run quite as fast for a week...?
Exactly. You need to ponder on the implications of this for a moment.

Hypothetical example: would you like a miniOGL that needs to be updated with game-specific code every time a new game comes along or a full OpenGL ICD that is generic and robust enough to run anything that's thrown at it?

I certainly don't recall you ever singing praises about 3dfx, so why the double standard when nVidia's past actions were effectively creating miniGL and miniD3D drivers?

I'd rather not blue screen all day, and despite your utterly illogical beliefs about what causes it, I am neither running a third-party utility nor am I overclocking.
If it's that important to you then use a third party tweaker or edit the registry yourself. Editing a single key really isn't that difficult.

ATi was cheating with their very first 3D part- the RagePro (and badly, they were using a technique to detect certain benches and then simply skip every third frame).
You mean like when nVidia were caught rendering AA on only every alternating frame?

DOA- still looks better than most current PC games
If you're going to compare a game with three animated characters on the screen at once (like a typical console) you need to be looking at something like Dawn, Nalu or Dusk. What console can match those?