Originally posted by: BenSkywalker
Of course all should tremble before an intellectual giant such as yourself.
I never claimed any such thing; I just said I find it amusing.
Given the basic nature of the XBox you have one of two target framerates, 30Hz or 60Hz. With the explicit and basic understanding of why it is improper and impractical to disable VSync on a console, developers are left with two choices: they either go for a higher-performing engine with lower levels of visual quality, or they go for higher levels of visual quality and a lower framerate. Splinter Cell fell under the latter. Since you are dealing with a fixed platform, you have a very clearly defined code path that you optimize to get the best performance given the visual load the machine can handle (or the CPU load, depending on where the bottleneck is in the particular instance).
Correct. And how does this lead to the conclusion that the FX5800U should ultimately be slower than the R9800P?
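(To spell out the VSync point for anyone following along: with VSync on and double buffering, a frame that misses a refresh slot has to wait for the next vblank, so framerates snap to integer divisors of the refresh rate. A toy sketch of that quantization, assuming a 60Hz display -- the code and numbers are mine for illustration, not from any actual engine:)

[code]
/* vsync_quantize.c -- toy model: with VSync + double buffering, a frame
 * that misses one ~16.7ms refresh slot waits for the next vblank, so the
 * effective framerate snaps to 60, 30, 20, 15... fps on a 60Hz display. */
#include <math.h>
#include <stdio.h>

int main(void) {
    const double refresh_hz = 60.0;
    const double slot_ms = 1000.0 / refresh_hz;        /* ~16.7ms per slot */
    const double render_ms[] = { 10.0, 16.0, 17.0, 25.0, 34.0 };

    for (int i = 0; i < 5; i++) {
        /* number of whole refresh slots the frame occupies */
        double slots = ceil(render_ms[i] / slot_ms);
        printf("%5.1f ms render -> %4.1f fps\n",
               render_ms[i], refresh_hz / slots);
    }
    return 0;
}
[/code]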
Since none of the above is remotely beyond a typical thirteen-year-old with a vague familiarity with the console gaming market, what should be readily apparent is that the only way you would see the type of optimizations that were utilized on the XBox would be to use the closest code base to what ran on the lead dev platform. Given that everyone is aware that the lead development platform runs with features enabled that are beyond the grasp of the R3X0 parts you mention in your post, it is obvious that such benchmarks would be lacking in their relation back to the XBox.
Possibly, but given the same hardware you could emulate such things, albeit perhaps not as reliably as with a straight PC-to-PC part comparison. That being said, and all other things being equal, it's odd that a card with said features 'beyond the grasp of the R3X0 parts' can't actually beat out the very R3X0 parts that supposedly carry a performance disadvantage.
Since you clearly indicated that you put faith in H's statements involving testing methodology in regards to SC, why is it that you would expect nV PC hardware to show an improvement due to its XBox roots when it is running different rendering settings in the first place?
Because it wouldn't be running meaningfully different settings, unless the developers specifically made a proper port impossible -- and I doubt they did.
How is it that you come to the conclusion that an alternate code path from a different platform will aid another platform's code path performance? If they had compared the performance of both boards at their respective maximum settings, then it would obviously have tilted the field heavily in nV's favor in terms of image quality, but what would the performance impact be? Given that this is where the XBox relation becomes relevant, that is where it should be considered.
You've obviously not done a lot of large-project, enterprise-level development, especially the kind that needs to be ported across different architectures. In such cases the core libraries are built to keep things as easily portable as possible (look at how id and the Q3 engine work, or Valve and HL). Your lovely surface-level "apples to oranges" comparison has a couple of flaws: first, the XBox essentially runs on top of x86 hardware anyway, and second, you assume totally different code paths for different platforms. In large-scale development that is the absolute worst way you can do things, and I feel safe in saying it's not how it's done in the PC gaming world, generally speaking.
In Splinter Cell's case specifically, you have a game written right on the main development path: Intel CPUs and an nVidia graphics core inside an XBox. The devs would not have to change much more than their low-level I/O interfacing to port it; if it were Dreamcast -> PC, then maybe there'd be a lot more difficulty in it. So now you've got Platform B, the PC, which has a fundamentally identical architecture to the lead development platform, with some window-dressing changes to make the game run on top of it. The NV25 optimizations would still be in place -- they sit above the low-level layer that gets rewritten in the port (see the sketch below). This should let any NV25 part excel at running SC on a PC. Now scale that upward to the newer NV30 part: is it not reasonable to expect the NV25 optimizations to give the NV30 a significant advantage over a graphics architecture that doesn't have these NV25 capabilities?
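To make the layering concrete, here's a minimal sketch of the kind of platform layer I mean -- entirely hypothetical, not Ubi's actual code: the shared renderer (where any NV2x-specific tuning would live) only ever calls a Sys_* function, and a port swaps the implementation behind that call, never the renderer itself:

[code]
/* platform_layer.c -- hypothetical sketch, not Splinter Cell's real code.
 * Only the Sys_SwapBuffers() implementation changes per platform; the
 * renderer above it compiles unchanged for XBox and PC builds alike. */
#include <stdio.h>

#if defined(BUILD_XBOX)
static void Sys_SwapBuffers(void) { puts("XBox: present frame via XDK"); }
#elif defined(_WIN32)
static void Sys_SwapBuffers(void) { puts("Win32: present frame via DirectX"); }
#else
static void Sys_SwapBuffers(void) { puts("generic: platform buffer flip"); }
#endif

/* Shared renderer code: identical source on every platform, so any
 * NV2x-specific fast paths in here survive the port untouched. */
static void R_DrawFrame(void) {
    /* ...issue draw calls, run the engine's shadow pass, etc... */
    Sys_SwapBuffers();
}

int main(void) {
    R_DrawFrame();
    return 0;
}
[/code]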
I did not say that the NV30 should run it at 300fps, just that I'm surprised it DIDN'T beat out the R9x00 cards for the reasons above (and below).
[...]That would have framed your conversation much better. If, as you implied, you place faith in H's testing methodology, then either you are ignoring the impact that code optimizations have -- although that would be odd, as that was the basis for your comment -- or you are giving the impression that you think code optimizations will hold up when you are running different code, which doesn't make too much sense either.
As I said before, the level of optimization would remain very close across the two platforms, especially if these advanced shadowing techniques are such a core feature of SC's engine -- which, coincidentally, wouldn't change across the two platforms. You seem to have trouble grasping the concept that this is, in essence, still just x86 code running on a PC; the differences lie in the OS and the UI. A similar comparison might be drawn between versions of games ported between Linux and Windows. You seem to believe vast amounts of the rendering engine would change because of a simple port, when in fact it's possible, and done now, to simply recompile for your architecture. Look at Linux/Windows games that originated on Linux: the same source code base is used to compile both the Linux version and the Windows version (check BZFlag as an example).
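As an illustration of the "one source tree, two OSes" point, here's the sort of thing such a code base does internally -- a hypothetical snippet of my own, not actual BZFlag code: one timer function that compiles unchanged on Windows and Linux, with the preprocessor picking the OS-specific call:

[code]
/* timer.c -- one source file, compiled as-is on Windows and Linux. */
#include <stdio.h>

#ifdef _WIN32
#include <windows.h>
static double Sys_Seconds(void) {
    /* Windows high-resolution counter */
    LARGE_INTEGER freq, now;
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&now);
    return (double)now.QuadPart / (double)freq.QuadPart;
}
#else
#include <sys/time.h>
static double Sys_Seconds(void) {
    /* POSIX wall-clock time in seconds */
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec + tv.tv_usec / 1e6;
}
#endif

int main(void) {
    printf("timer sample: %f s\n", Sys_Seconds());
    return 0;
}
[/code]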
What it basically comes down to is that without a developer who worked on Splinter Cell's rendering engine here to comment, there's no concrete way to know how they did things. From looking at it, I don't quite get why the FX5800U doesn't perform better, although some more interesting numbers might come from the Ti4x00 series, since they are the closer parts. It might be good to see whether nVidia crippled some particular functionality that could slow the FX down. Maybe a Ti4x00 would outperform an FX5800U; I'd like to see numbers on both to be sure. Since you appear to own an FX5800U, an R300, and Splinter Cell, is it possible you could post some numbers? With a little help from someone with a Ti4600, we could figure out whether that's the case. We could at least narrow it down to either the 9800Pro simply being faster, or SC's developers changing some low-level functionality due to limitations in Windows that don't exist in whatever basic OS the XBox uses (Win2K Embedded?). Given the closeness of the architectures, and of the R350 and NV30 parts in non-AA/AF situations, I'm not sure which is the more obvious answer.