GF FX 5800 Ultra Vs. Radeon 9700 Pro

Page 8

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
Very familiar, but then, if you don't know, and aren't interested in finding out, why open your mouth (or put your fingers to use as the case may be) at all?

You stated you were surprised by nV not having an advantage due to the XBox lead dev platform status for SC. Why did you bother to open your mouth? ATi boards are lacking one of the best visual features of SC, the nV advantage due to the XBox is extremely apparent. You seriously have to be damn near blind to miss what ATi can't do in those screenshots.

Rogo-

In what way? The R300s at 16x quality are using trilinear 128-tap AF, while the NV30 and NV20 are using 8x trilinear 64-tap AF.

Click on the links. You cannot miss it (the cause is ATi's inability to render to depth textures). The R300 core boards are also missing W-buffer support. None of the boards are totally feature complete, though many of the die-hard loyalists from either camp may try to lead you to believe otherwise.
 

chsh1ca

Golden Member
Feb 17, 2003
1,179
0
0
Originally posted by: BenSkywalker
You stated you were surprised by nV not having an advantage due to the XBox lead dev platform status for SC. Why did you bother to open your mouth? ATi boards are lacking one of the best visual features of SC, the nV advantage due to the XBox is extremely apparent. You seriously have to be damn near blind to miss what ATi can't do in those screenshots.
Amusing twist of what I was talking about. I love how people around here only partially quote what they're referring to to make themselves look intelligent. Let me remind you I was surprised by NV not having an advantage with Splinter Cell on their FX line of cards -- specifically a speed advantage.
SC is supposed to be a highly nVidia-friendly benchmark for that very reason, yet the apparently uber-performing GFFX5800U couldn't outperform the R9800Pro in the benchmarks, and AA/AF weren't a factor.

Again, what I said:
...that used the Splinter Cell demo as a comparison, and the R9x00s performed about 5-10fps better on average, which surprised me, given Splinter Cell's history with the XBox.
 

NYHoustonman

Platinum Member
Dec 8, 2002
2,642
0
0
You want an explanation?

From the SC 1.2 patch readme...surprised nobody brought this up...



"Class 2 Graphic Adaptors:
NV2x/NV3x chips
Dynamic Lighting system = Shadow Buffer
Vertex position modifiers = Yes
Light beams stopped by depth texturing = Yes
Pixel Shader effects/filters/water = Yes
Reflection/Details texturing/Specular = Yes

Class 1 Graphic Adaptors:
R2xx/R3xx/Parhelia/Xabre 200/Xabre 400/Xabre 600 chips/Creative P9
Dynamic Lighting system = Shadow Projector
Vertex position modifiers = No
Light beams stopped by depth texturing = No
Pixel Shader effects/filters/water = Yes
Reflection/Details texturing/Specular = Yes

Class 0 Graphic Adaptors:
R1xx/NV1x chips
Dynamic Lighting system = Shadow Projector
Vertex position modifiers = No
Light beams stopped by depth texturing = No
Pixel Shader effects/filters/water = No
Reflection/Details texturing/Specular = No

Class 2 adaptors can run as Class 2, Class 1 or Class 0 adaptors while Class 1 adaptors can run as Class 1 or Class 0 adaptors. Class 0 adaptors are only able to run Splinter Cell as Class 0 adaptors.
You can force a class 1 or class 2 adaptor to run as a different class by editing the splintercell.ini file in the \system directory. Uncomment "ForceShadowMode = 0" to force the card to run as a class 1 adaptor (if able to) or change "EmulateGF2Mode=0" to "EmulateGF2Mode=1" to run as a class 0 adaptor.

Why does Splinter Cell have a special mode for NV2x/NV3x graphic chips?

Splinter Cell was originally developed on XBOX™. Features only available on NV2x chips were used and it was decided to port them to the PC version even if these chips would be the only ones able to support them. Considering the lighting system of XBOX™ was well validated, it was easy to keep that system intact.

Splinter Cell Dynamic lighting system

Splinter Cell's shadow system is a major part of the game. On NV2x/NV3x hardware, it runs using a technique called Shadow Buffers. This technique renders the scene from every shadow-casting light and stores a depth buffer that represents each pixel viewed by this light source. Each pixel has an X, Y, Z coordinate in the light system and these coordinates can be transformed, per pixel, into the viewer coordinate system. It's then easy to compare with the actual depth stored in the Z buffer to figure out if the pixel viewed by the camera is the same or is occluded by the pixel viewed by the light. If they are the same, it means the pixel is lighted; if the light pixel is in front of the viewer pixel, it means the pixel is in the shadow.

On all other current hardware, the game is using another technique called projected shadows (shadow projectors). The technique is somewhat similar: we render the scene from the light point of view, but instead of storing the depth, we are storing the color intensity in a texture. That texture is then mapped per vertex on each object that is going to receive the shadow. To be able to have objects casting shadows on other objects that are themselves casting shadows, Splinter Cell is using a 3-depth-level shadow casting algorithm. In general, the first level is used to compute the shadow to be used on the dynamic actors like Sam. The second level is used to compute the shadow used by the static meshes like a table or boxes. The final level is used for the projection on the BSP. This system allows Sam to receive the shadow of a gate on him, then Sam and the gate can cast on a box, and finally all three objects can cast on the BSP (ground). This system also has a distance check algorithm to determine if Sam's shadow should be projected on a static mesh (like a box) or if it shouldn't, based on their relative position. Both systems have their own strengths/weaknesses.

The main advantage of the Shadow Buffer algorithm is how easy it is to work with. Shadow Projectors are tricky and difficult to use."
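To make the difference concrete: the shadow buffer test described above boils down to a per-pixel depth comparison. Here's a rough sketch (the names and the bias value are made up for illustration, this isn't Ubisoft's actual code, which runs per pixel on the GPU):

```python
# Sketch of the shadow buffer test from the readme above.
# Hypothetical names; real implementations do this per pixel in hardware.

def shadow_buffer_test(pixel_depth_in_light_space, stored_light_depth, bias=0.001):
    """Return True if the pixel is lit, False if it is shadowed.

    pixel_depth_in_light_space: depth of the camera-visible pixel after
        transforming its position into the light's coordinate system.
    stored_light_depth: depth recorded when the scene was rendered from
        the light's point of view (the "shadow buffer").
    """
    # If the light recorded something nearer at this location, an occluder
    # sits between the light and this pixel -> the pixel is in shadow.
    return pixel_depth_in_light_space <= stored_light_depth + bias

print(shadow_buffer_test(0.50, 0.50))  # lit: light and camera see the same surface
print(shadow_buffer_test(0.50, 0.30))  # shadowed: a closer occluder was recorded
```

The projected-shadow path the other cards use skips this depth comparison entirely and instead modulates lighting with a color texture rendered from the light, which is why it can't do the "light beams stopped by depth texturing" effect.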




Maybe that has some impact? It's clear that this game was made with the GeForce cards in mind and should, I would guess, perform AT LEAST as well at the same settings.
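For anyone wanting to try it, the splintercell.ini edit described in the readme would look roughly like this (the section header is a guess, search your own file for the keys; only the two key names come from the readme):

```ini
; splintercell.ini (in the \system directory)
[D3DDrv.D3DRenderDevice]  ; section name is a guess -- look for where the keys live
; Uncomment the next line to force a Class 2 card to run as Class 1:
;ForceShadowMode=0
; Change 0 to 1 to force Class 0 (GF2-level) rendering:
EmulateGF2Mode=0
```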


And whoever said SC is a bad game...Wow...It was one of the best I have ever played, great graphics, gameplay, and story = one hell of a game.

And Rogozhin, were you talking about how those shadows slow the game down a lot? They do for me, at least... I can run any level at 1280x1024 with max settings except for that one, so I run it at 1024x768 with no problems.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
Amusing twist of what I was talking about. I love how people around here only partially quote what they're referring to to make themselves look intelligent.

Of course all should tremble in front of an intellectual giant such as yourself :)

Again, what I said:

I also saw a review (HardOCP's Visiontek XTasy9800Pro IIRC) that used the Splinter Cell demo as a comparison, and the R9x00s performed about 5-10fps better on average, which surprised me, given Splinter Cell's history with the XBox.

This is the actual quote, not a selectively sliced-up version of my comments.

Given the basic nature of the XBox you have one of two target framerates, 30Hz or 60Hz. With the explicit and basic understanding of why it is improper and impractical to disable VSync on a console, developers are left with two choices: either they go for a higher-performing engine with lower levels of visual quality, or they go for higher levels of visual quality and a lower framerate. Splinter Cell fell under the latter. Since you are dealing with a fixed platform, you have a very clearly defined code path that you optimize to allow you the best performance given the visual load that the machine can handle (or CPU load, depending on where the bottleneck is in the particular instance).

Since none of the above is remotely beyond a typical thirteen-year-old with a vague familiarity with the console gaming market, what should be readily apparent is that the only way you would see the type of optimizations that were utilized on the XBox would be to utilize the closest code base to what was used on the lead dev platform. Given that everyone is aware that the lead development platform runs with features enabled that are beyond the grasp of the R3X0 parts which you mention in your post, it is obvious that such benchmarks would be lacking in their relation back to the XBox.

Since you clearly indicated that you put faith into H's statements involving testing methodology in regards to SC, why is it that you would expect nV PC hardware to show an improvement due to its XBox roots when it is running different rendering settings in the first place? How is it that you come to the conclusion that an alternate code path from a different platform will aid another platform's code path performance? If they had compared the performance of both boards at their respective maximum settings then it would have obviously heavily tilted the field in nV's favor in terms of image quality, but what would the performance impact be? Given that that is where the XBox relation becomes relevant, that is where it should be considered.

Now, if you had posted

"I'm surprised the FX 5800 Ultra doesn't have a better showing because the same game is available on the XBox using different features which causes it to run a different code path that looks superior to what they are testing but they both use graphics chips made by nVidia so none of that should matter"

That would have framed your conversation much better. If, as you implied, you place faith in H's testing methodology then either you are ignoring the impact that code optimizations have, although that would be odd as that was the basis for your comment, or you are giving the impression that you think code optimizations will hold up when you are running different code which doesn't make too much sense either.
 

Rogozhin

Senior member
Mar 25, 2003
483
0
0
Houstonman

I wasn't involved in this SC debate, just asked what Ben sees in the nv hardware that the R300 supposedly isn't rendering.

I play it at 1024x768 with all in game options on highest possible, including shadows, but it does get a little bogged down sometimes ;)

Thanks though for pointing out the readme!

I never looked at and it does hold some valuable info!

Rogo
 

chsh1ca

Golden Member
Feb 17, 2003
1,179
0
0
Originally posted by: BenSkywalker
Of course all should tremble in front of an intellectual giant such as yourself :)
Never claimed any such thing, just that I find it amusing.

Given the basic nature of the XBox you have one of two target framerates, 30Hz or 60Hz. With the explicit and basic understanding of why it is improper and impractical to disable VSync on a console, developers are left with two choices: either they go for a higher-performing engine with lower levels of visual quality, or they go for higher levels of visual quality and a lower framerate. Splinter Cell fell under the latter. Since you are dealing with a fixed platform, you have a very clearly defined code path that you optimize to allow you the best performance given the visual load that the machine can handle (or CPU load, depending on where the bottleneck is in the particular instance).
Correct. And how does this equate to the FX5800U should ultimately be slower than the R9800P?

Since none of the above is remotely beyond a typical thirteen-year-old with a vague familiarity with the console gaming market, what should be readily apparent is that the only way you would see the type of optimizations that were utilized on the XBox would be to utilize the closest code base to what was used on the lead dev platform. Given that everyone is aware that the lead development platform runs with features enabled that are beyond the grasp of the R3X0 parts which you mention in your post, it is obvious that such benchmarks would be lacking in their relation back to the XBox.
Possibly, but given the same hardware, you could emulate such things, albeit perhaps not as reliably as with a straight PC-to-PC part comparison. That being said, and all other things equal, a card with said features which are 'beyond the grasp of the R3X0 parts' somehow can't beat out these R3X0 parts that supposedly have a performance disadvantage.

Since you clearly indicated that you put faith into H's statements involving testing methodology in regards to SC, why is it that you would expect nV PC hardware to show an improvement due to its XBox roots when it is running different rendering settings in the first place?
Because it wouldn't be, unless it was made specifically impossible to properly port it by the developers -- and I doubt it was.

How is it that you come to the conclusion that an alternate code path from a different platform will aid another platform's code path performance? If they had compared the performance of both boards at their respective maximum settings then it would have obviously heavily tilted the field in nV's favor in terms of image quality, but what would the performance impact be? Given that that is where the XBox relation becomes relevant, that is where it should be considered.
You've obviously not done a lot of large-project enterprise-level development, especially the kind that needs to be ported to various different architectures. In such cases, the core libraries are built to ensure things are as easily portable as possible (look at how id and the Q3 engine work, or Valve and HL). Your lovely surface comparison of saying "it's comparing apples to oranges" has a couple of flaws, namely that the XBOX essentially runs on top of x86 hardware anyway, but also in that you assume totally different code paths for different platforms. In large-scale, difficult development, that is the absolute worst way you can do things, and I feel safe in saying it's not how it's done in the PC gaming world, generally speaking.
In Splinter Cell's case specifically, you have a game that was written off the main development path: an Intel chip and an nVidia graphics core in an XBOX. Now, the game devs would not have to change much more than their low-level I/O interfacing to port it. If it were Dreamcast -> PC, then maybe there'd be a lot more difficulty in it. So now, you've got Platform B, the PC, which has a fundamentally identical architecture to the lead development platform, with some window dressing changes to make the game run on top of it. The NV25 optimizations would still be in place; that would be part of what's re-written in their low-level porting. This should cause any NV25 part to excel at running SC on a PC. Now, scale that upwards to the newer NV30 part. Is it not reasonable to expect that the NV25 optimizations should give the NV30 part a significant advantage over a graphics architecture that doesn't have these NV25 capabilities?

I did not say that the NV30 should run it at 300fps, just that I'm surprised it DIDN'T beat out the R9x00 cards for the reasons above (and below).

[...]That would have framed your conversation much better. If, as you implied, you place faith in H's testing methodology then either you are ignoring the impact that code optimizations have, although that would be odd as that was the basis for your comment, or you are giving the impression that you think code optimizations will hold up when you are running different code which doesn't make too much sense either.
As I said before, the level of optimization would remain very close across the two platforms, especially if these advanced shadowing techniques are such a core feature of SC's engine, which, coincidentally, wouldn't change across the two platforms. You seem to have trouble grasping the concept that this is in essence still just x86 code running on a PC. The difference lies in the OS and the UI. A similar comparison might be drawn between versions of games ported between Linux and Windows. It seems to me like you believe vast amounts of the rendering engine would change because of a simple port, when in fact, it's possible and done now in a way that you just recompile for your architecture. Look at Linux/Windows games that originated on Linux. The same source code base is used to compile both the Linux version and the Windows version of the games (check BZFlag as an example).

What it basically comes down to is that without a developer who worked on Splinter Cell's rendering engine here to comment, there's really no concrete way to know one way or the other how they did things. From looking at it, I don't quite get why the FX5800U doesn't perform better, although some more interesting numbers might be how the Ti4x00 series performs, since they are closer parts. It might be good to see if nVidia did cripple some particular functionality that could slow it down. Maybe a Ti4x00 would outperform an FX5800U; I'd like to see numbers on them both to be sure. Since you appear to own an FX5800U, an R300, and Splinter Cell, is it possible you could post some numbers? Maybe with a little help from someone with a Ti4600, we could figure out if that's the case or not. We could at least narrow it down to the 9800Pro being faster or SC's developers changing some of the low-level functionality due to limitations in Windows that don't exist in whatever basic OS they use on the XBOX (Win2K Embedded?). Given the closeness of the architectures, and of both the R350 parts and the NV30 parts in non-AA/AF situations, I'm not sure which would be the more obvious answer.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
It seems to me like you believe vast amounts of the rendering engine would change because of a simple port, when in fact, it's possible and done now in a way that you just recompile for your architecture.

You are completely ignoring the fact that some of the core rendering paths H was using are totally different from what was run on the XBox. The most intensive rasterization feature utilized by Splinter Cell is the shadowing implementation, in particular the shadow buffer technique utilized by the NV2X/3X parts. If you are spending the longest amount of rasterization time on a technique for one platform, what leads you to believe that running a different code path would benefit from optimizations to it? We are talking about vid-card-limited performance here, and anyone with SC can easily see for themselves that dropping shadows down to the lowest settings speeds the game up in a significant fashion.

To the broader point of not understanding the porting process: one of your assertions is that at its core the XBox is PC hardware and should hence offer a straightforward port. In relation to the PS2 or Cube that is a valid line of thought, but it is nowhere near as straightforward as you are implying. First off is the entirely different memory architecture. As anyone who has handled ports should know, there is no simple way to move code from a UMA to a segmented memory architecture, particularly not any code that has any sort of system-level optimizations involved.

Since the XBox utilizes UMA, including sharing its system memory for vid-based storage and the frame buffer too, the PC port should offer comparable memory requirements, allowing some overhead for the OS of course, to its XBox counterpart. Looking at the system requirements, a 32MB vid card is required with 64MB recommended. We'll stick with the 32MB option for the purpose of this discussion. Given the XBox has a mere 64MB total, that would seem to indicate that you should only need another 32MB of system RAM for proper operation. Instead of doing that, we will assume that you only need 17MB, and that system memory requirements would require an additional 47MB. Even if we factor in 128MB of overhead for the OS, the game would only need 175MB of system RAM to operate, and yet the requirements are 256MB (even under Win9X).
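Spelled out, that arithmetic is simply the following (the 17MB and 128MB figures are assumptions as stated above, not measurements):

```python
# Reconstructing the memory estimate above; all inputs are assumptions
# from the post, not measured values.

xbox_total_ram = 64        # MB: XBox unified memory, shared with the GPU
assumed_vid_overlap = 17   # MB: generous guess at the video-memory portion
os_overhead = 128          # MB: assumed Windows footprint

extra_system_ram = xbox_total_ram - assumed_vid_overlap  # 47 MB
estimated_requirement = extra_system_ram + os_overhead   # 175 MB

print(estimated_requirement)  # 175 -- yet the stated requirement is 256MB
```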

Between the alternate rendering path utilized by H, the obvious huge disparity in system-level requirements, and the gap in end visuals using the settings H used in their tests, on what are you basing your assumption that code optimizations should hold up across platforms?

Fixed architecture vs. open architecture is another issue too. The ability to write tighter code for a fixed platform can have a significant impact on overall performance; Carmack stated around a 100% difference in one of his posts on DooM3 (comparing the XBox to a comparable PC). Any low-level optimizations for the rendering would exist in their primary code path for the XBox, but anything outside of the core rendering path would have to be altered to allow it to run on non-nV hardware for the PC.

What it basically comes down to is that without a developer who worked on Splinter Cell's rendering engine here to comment, there's really no concrete way to know one way or the other how they did things.

Check out the post NYHoustonman made above. There are comments stating some of the changes that were done involving the differing rendering techniques utilized by NV2X and up hardware and non nV hardware, and that is from UbiSoft.
 

OpStar

Member
Apr 26, 2003
75
0
0
wow. Interesting stuff.

On the other hand, yes, sure, if you wanna play SC and bench against me, that is fine. (Whether I win or lose ;) )

I play it at 10x7 everything highest setting 4x aa and 8x ansio
 

Genx87

Lifer
Apr 8, 2002
41,091
513
126
Quite a few developers on B3D have said the "Meant to Be Played" logo is added by the game's producer, not the developer, so it doesn't necessarily reflect dev platforms. In terms of optimizing performance: UT2K3 has that logo, and ATi cards spank nV ones pretty handily. IMO, you're confusing driver quality with developer support in the first half of the quoted passage. They're both part of the end-user experience, of course, and that's really what counts; but if we're debating the specifics of game bugs, I'm sure quite a few R300 problems are due to it being a new architecture, as you said in the second half of the quoted passage. I'm guessing ATi's current dominance will lead to many more developers taking ATi's cards into primary (rather than secondary) consideration during QA.

Try that on DAOC. ATi cards have bugs up the wazooo.

 

nRollo

Banned
Jan 11, 2002
10,460
0
0
On the other hand, yes, sure, if you wanna play SC and bench against me, that is fine. (Whether I win or lose

If you lose, will you admit to all the GF FX suxorz!?! Will you tell us how you can't stand the unbearable noise?!?!?

Please, please validate all of our decisions to buy 9700, 9800s by telling us how you hate your GF FX!

It's as plain as the nose on your face! The 9700/9800 win some benchmarks! They MUST be superior! The benchmarks FXs win don't matter to any sane man!

Aaaiieiieeeeeee!


LOL- enjoy the FX
 

chsh1ca

Golden Member
Feb 17, 2003
1,179
0
0
Originally posted by: BenSkywalker
You are completely ignoring the fact that some of the core rendering paths H was using are totally different from what was run on the XBox. The most intensive rasterization feature utilized by Splinter Cell is the shadowing implementation, in particular the shadow buffer technique utilized by the NV2X/3X parts. If you are spending the longest amount of rasterization time on a technique for one platform, what leads you to believe that running a different code path would benefit from optimizations to it? We are talking about vid-card-limited performance here, and anyone with SC can easily see for themselves that dropping shadows down to the lowest settings speeds the game up in a significant fashion.

To the broader point of not understanding the porting process: one of your assertions is that at its core the XBox is PC hardware and should hence offer a straightforward port. In relation to the PS2 or Cube that is a valid line of thought, but it is nowhere near as straightforward as you are implying. First off is the entirely different memory architecture. As anyone who has handled ports should know, there is no simple way to move code from a UMA to a segmented memory architecture, particularly not any code that has any sort of system-level optimizations involved.
I understand the differences here, and that is why I was questioning whether they would change a tonne of code. I didn't intend to come off like it would be as simple as recompiling it, that was an extreme example to put my point across -- and that point was that the XBOX and PC versions of Splinter Cell would be a LOT closer to one another than the GameCube or PS2 versions would be, and as such, I would expect that their nV-optimizations would remain consistent across platforms. I was stating that that obviously isn't the case, and presented reasons why. In a roundabout way, you are really just agreeing with me -- we're both just presenting things in a different way. :) Your contention is that I shouldn't be surprised, whereas mine is that I'm surprised the optimizations DIDN'T come across, or at least, not in any measurable way on the NV30 FX5800U. My problem with what you say is that in the end, it's still the same graphics card architecture powering both systems. None of this detracts from game play in any noticeable fashion (It looks sweet on my Tbird 1.2GHz with my GF3Ti200).
As for the memory architecture, I had thought Microsoft would have an API in their XBOX OS for handling memory instructions and such. If that's the case, then all memory handling could (and I would think should) be done in a homogeneous fashion across both UMA and SMA architectures. Making it simpler to access various different hardware (and software) functions is the whole point of having an API, right?

Since the XBox utilizes UMA, including sharing its system memory for vid-based storage and the frame buffer too, the PC port should offer comparable memory requirements, allowing some overhead for the OS of course, to its XBox counterpart. Looking at the system requirements, a 32MB vid card is required with 64MB recommended. We'll stick with the 32MB option for the purpose of this discussion. Given the XBox has a mere 64MB total, that would seem to indicate that you should only need another 32MB of system RAM for proper operation. Instead of doing that, we will assume that you only need 17MB, and that system memory requirements would require an additional 47MB. Even if we factor in 128MB of overhead for the OS, the game would only need 175MB of system RAM to operate, and yet the requirements are 256MB (even under Win9X).
You're being rather generous with your idea of how large the game is. :) Even still, I don't disagree that they could have written things differently to take advantage of the UMA they had to work with; however, the OS overhead is the likely culprit for the difference between the two. That being said, if the game detects an nVidia card, it should still be capable of enabling the nVidia optimizations they had in place for it, or at least utilizing the DirectX functionality (which may also indicate another area where more RAM may be taken up).

Between the alternate rendering path utilized by H, the obvious huge disparity in system-level requirements, and the gap in end visuals using the settings H used in their tests, on what are you basing your assumption that code optimizations should hold up across platforms?
Because those things noted shouldn't give either card a rendering advantage. The difference in configuration explains why the test was weighted towards the R350 series; however, I would like to see how an FX5800U does on SC when it's configured identically to the R350. It's too bad we'll never get to see that. :) I don't know rendering well enough though, is shadowing a very ultra intensive thing to do?

Fixed architecture vs. open architecture is another issue too. The ability to write tighter code for a fixed platform can have a significant impact on overall performance; Carmack stated around a 100% difference in one of his posts on DooM3 (comparing the XBox to a comparable PC). Any low-level optimizations for the rendering would exist in their primary code path for the XBox, but anything outside of the core rendering path would have to be altered to allow it to run on non-nV hardware for the PC.
Yeah, I agree with that. My comments were not saying that the only reason it should be faster is nVidia's ties with the XBOX; my comments were partially that, but also partially the fact that the FX5800U is, generally speaking, faster than the 9800Pro in AA/AF-disabled situations. If the FX5800U were on the whole slower than the 9800Pro when it comes to non-AA/AF, then I wouldn't have bothered suggesting it should be otherwise. But here's a card that appears to be about 5-10% faster than the R9800Pro, generally speaking, without AA/AF enabled, performing about 5-10% slower on a game that is designed to use the features on the card. I don't particularly care about anything > 30fps in the game; it's quite nice at that speed. And FYI, as far as I'm concerned, NYHoustonman's post of the config differences explains the benchmark differences, and IMHO HardOCP should include a correction in the review noting that, or simply stop using it as a benchmark. :)

Check out the post NYHoustonman made above. There are comments stating some of the changes that were done involving the differing rendering techniques utilized by NV2X and up hardware and non nV hardware, and that is from UbiSoft.
Yeah, I got that. I'm tempted to get my R9700P owning friend to install SC and compare the two side by side, to see how my GF3Ti200 renders the scene vs how his R9700P renders it, because from the screenshots I've seen, it's only some slight shadowing that's different (as was pointed out earlier). I don't know that that should necessarily equate to the kind of performance difference we've seen though.
 

Jeff7181

Lifer
Aug 21, 2002
18,368
11
81
Originally posted by: TheSnowman
You are obviously using one of the driver sets that lowers the floating-point precision if your FX is beating a 9700 Pro. Also, it seems you are getting a rather low score for your 9700 Pro, as I get 4836 with an XP@1636.

No, that's normal for a P4 ;)
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
I think the majority of our disagreement can be cleared up fairly simply :)

I don't know rendering well enough though, is shadowing a very ultra intensive thing to do?

As done in Splinter Cell, extremely. It isn't just an issue of the shadow itself; the calculations involved with the lighting and how it interacts with the environment are by far the most intensive single portion of the rendering engine. Outside of the shadowing implemented in SC, it's just a typical U2-powered game, the same engine that 9800s are running triple-digit framerates on with AA/AF at some of the settings H tested.

I'm trying to think off the top of my head, and I would expect your system to show between a 30%-50% performance difference between the highest shadow levels and the lowest (40% would be my rough estimate), although you can check that for yourself (I could be way off, but it should be in that range). There is a demo you can download over at B3D to test it out, just a sec... B3D's benchmark file (with instructions).

They recorded a demo with the intention of making it as vid-card intensive as possible; however, it isn't quite the best representation of the game overall (due to lacking the calculations required for shadowing NPCs).
 

chsh1ca

Golden Member
Feb 17, 2003
1,179
0
0
Originally posted by: BenSkywalker
There is a demo you can download over at B3D to test it out, just a sec... B3D's benchmark file (with instructions).

They recorded a demo with the intention of making it as vid-card intensive as possible; however, it isn't quite the best representation of the game overall (due to lacking the calculations required for shadowing NPCs).
Alright, as soon as I get my PC back into one piece I'll download the benchmark and give it a shot, although my guess is that my system will be CPU limited more than GPU limited (although maybe both).
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
although my guess is that my system will be CPU limited

The XBox is only running a souped-up Celery 733, so it shouldn't be too bad (1.2GHz paired with a GF3 seems like a nice balance to me).

BTW- Off topic, but do you know if the Transformers Movie is out on DVD, and if so, where could I buy it? I know, out-of-the-blue question, but my parents picked me up one of the new edition Optimus Primes and my kids were confused why I was getting a toy, so I was trying to explain to them how the Transformers were the shiznit back in my younger days. They didn't quite get it though ;)
 

chsh1ca

Golden Member
Feb 17, 2003
1,179
0
0
Originally posted by: BenSkywalker
although my guess is that my system will be CPU limited

The XBox is only running a souped-up Celery 733, so it shouldn't be too bad (1.2GHz paired with a GF3 seems like a nice balance to me).
Yeah, but like you said, higher system requirements from a different codebase... :D

BTW- Off topic, but do you know if the Transformers Movie is out on DVD, and if so, where could I buy it? I know, out-of-the-blue question, but my parents picked me up one of the new edition Optimus Primes and my kids were confused why I was getting a toy, so I was trying to explain to them how the Transformers were the shiznit back in my younger days. They didn't quite get it though ;)
Hah, I wish I knew. Transformers were the shiznit... I'm guessing we're about the same age. :)

Uhm, I think I'm gunna cry. They're selling the entire seasons online too.
Transformers: The Movie on DVD
The first season on DVD
The second season on DVD
Hmmm, maybe my upgrade will have to wait. I can't believe I didn't google that sooner!
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
Uhm, I think I'm gunna cry. They're selling the entire seasons online too.

$105 for the movie and season 1 and 2 :D :D

I can't believe I didn't google that sooner!

I have Googled it in the past but never came up with actual DVDs (always VCDs). I think the wife might belittle me a bit if I buy it without checking first (the movie would be NP, but that bundle deal looks soooo tempting ;) ), though I think it could be worth it :)

Thank you very much for digging that up, it is greatly appreciated :D :D
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Hey guys, Transformers the Movie (awesome btw :D; Judd Nelson, Orson Welles, that song from Boogie Nights ;)) has been on DVD for at least 2 years... I picked it up some time ago. You can get it at Best Buy for $10.99 or somewhere around there. They also have the boxed seasons 1 and 2, but I'm not sure how much they go for... season 2 is the one to get if you only get one right now. Check the anime section. There was some controversy over the DVD version versus the theatrical version, but the difference escapes me...

Chiz
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
Hey Chiz,

I've never seen it on DVD (the nearest BB is about an hour's drive from my house), though I've been keeping an eye out (I've seen the VHS at Wal-Mart). Is season 2 the one with the return of Optimus? That and the movie are the two I remember most vividly (and the two I'd like to get the most). I have Rhino's site open in another window and can find the return of Optimus on VHS for $10; the only problem is I would have to buy a VCR to watch it :p

Does the DVD you have offer widescreen, and do you have the regular or collectors edition? I'll gladly pay extra for widescreen when available (I know the seasons are fullscreen natively, so no go there). I'm currently debating which course of acquisition I should take: order it online and get it some time next week, drive up to the nearest 'civilization' to where I live (no BB, but they have numerous video specialty shops there), or bite the bullet and spend the two hours in a car to BB and back. Hmmmmm.....
 

chsh1ca

Golden Member
Feb 17, 2003
1,179
0
0
Originally posted by: Mem
BenSkywalker, you tried here? :)
Aww man... Thanks to you damned people I'm going to be homeless and broke! Well, at least I'll have a PC with Splinter Cell and Transformers DVDs... :D
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
After making my last post, I decided to head over to the nearest civilization and see what I could come up with. I ended up going to five different stores and was about to walk out of the last place when I happened to ask the guy at the counter (extremely friendly and helpful, btw :) ) if they had any copies of the Transformers Movie on DVD, and they had several, although they were in the anime section (I didn't even think to look there). They also had the box sets, but the episode I really want is from season three, and they didn't have any of those; I didn't feel like spending $200 (the price for the box sets they had plus the movie) and coming home with only half of the episodes I really wanted. I got the collectors edition, but unfortunately it is still full screen; they didn't have any widescreen versions in (looking around, I don't think they exist).

I checked eBay, and they have the non-collectors edition of season 1 going in a Buy It Now auction for $29.99 (8 copies available), with the collectors editions in the $40 range. Thanks a lot to all those who offered up information. My middle son absolutely loved the movie; he sat down and stayed put throughout the entire thing and then wanted to watch it again (quite an accomplishment for a four-and-a-half-year-old, as any parent will tell you; not even the Harry Potter movies or Pokemon could keep him sitting still for the entire length of a movie).