
Second opinion on physx

lavaheadache

Diamond Member
Initially I was quite underwhelmed by the whole PhysX mumbo jumbo, but after playing Cryostasis with a backup PhysX card I am impressed. I still don't think it is a deal breaker when it comes to which GPU company gets my money next round, but I will definitely be keeping a keen eye on upcoming titles which may make use of the tech.

I feel like the implementation in Cryostasis was good enough that it made a significant impact on the game; it would have been inferior on an AMD card. However, this is one game. I wish AMD would do something to showcase their product in a special way. They have a very good lineup, with the HD 4xxx cards showing very good performance and the price to match. I know firsthand because I had a 4870 for 8 months. On the other hand, Nvidia has also got a nice lineup and a neat little advantage with PhysX.

Who knows, maybe Cryostasis was a fluke and other games will not be hindered as much when played without PhysX.

Just my $ .02

 
The problem with physx is that a modern cpu can easily handle it. With newer games you have 3 cpu threads - integer, floating point, and physics (primary reason the xbox 360 has 3 cores). If you want even fancier physics you could add a second cpu core to balance the work.

Software physics makes far more sense because people with physx cards are rare, whereas quad cores are far more common. A cpu is a powerful general purpose processor and you basically have TWO free cores to run physics. Taking a GPU to do that work is a huge waste of resources since most of the rendering functionality is not being used. A single cpu core (much less two) probably has more horsepower pushing physics than a GPU can manage.
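The argument above about spare cores, and the GPU-parallelism counterargument that follows in this thread, both rest on the same property: a physics step is largely data-parallel, since each particle or body updates independently. A minimal illustrative sketch (purely hypothetical, nothing to do with PhysX's actual code):

```python
# Minimal sketch of why a physics step is so parallel-friendly: every
# particle's update is independent of the others, so the same loop can be
# split across spare CPU cores or mapped to one GPU thread per particle.
# (Illustrative only; this is not how PhysX itself is implemented.)

GRAVITY = (0.0, -9.81, 0.0)

def euler_step(particles, dt):
    """Advance every (position, velocity) pair one timestep with explicit Euler."""
    out = []
    for (px, py, pz), (vx, vy, vz) in particles:
        # Identical, independent arithmetic per particle: ideal SIMD/GPU work.
        vx, vy, vz = vx + GRAVITY[0] * dt, vy + GRAVITY[1] * dt, vz + GRAVITY[2] * dt
        px, py, pz = px + vx * dt, py + vy * dt, pz + vz * dt
        out.append(((px, py, pz), (vx, vy, vz)))
    return out

# 100,000 particles at rest; after one 10 ms step they are all falling.
particles = [((0.0, 0.0, 0.0), (0.0, 0.0, 0.0))] * 100_000
particles = euler_step(particles, dt=0.01)
```

Whether a couple of CPU cores or a GPU with hundreds of stream processors chews through this faster is exactly the disagreement in the posts that follow.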

Of course, it's not like nvidia doesn't know this. They are desperately trying to hold onto everything they can because they are a GPU company, and their competition of Intel and AMD both have a wider portfolio that can marginalize nvidia over time.
 
@Astrallite: While a CPU can do physics, it sucks at it, even with 8 threads. It's really no match for the massive parallelism of a GPU, which plows through physics a lot quicker. And it's also relative to how much physics is going on in a game. You could easily get to the point of so many physics calculations going on that even an i7 Extreme wouldn't be able to handle it, while an 8800 still could, lol.

ATI needs to swallow its pride for once, and just accept Nvidia's offer to help implement CUDA on their hardware, along with PhysX.
 
Originally posted by: Astrallite
The problem with physx is that a modern cpu can easily handle it. With newer games you have 3 cpu threads - integer, floating point, and physics (primary reason the xbox 360 has 3 cores). If you want even fancier physics you could add a second cpu core to balance the work.

Software physics makes far more sense because people with physx cards are rare, whereas quad cores are far more common. A cpu is a powerful general purpose processor and you basically have TWO free cores to run physics. Taking a GPU to do that work is a huge waste of resources since most of the rendering functionality is not being used. A single cpu core (much less two) probably has more horsepower pushing physics than a GPU can manage.

Of course, it's not like nvidia doesn't know this. They are desperately trying to hold onto everything they can because they are a GPU company, and their competition of Intel and AMD both have a wider portfolio that can marginalize nvidia over time.

Firstly, if CPUs were so great at physics, then why have we never seen effects like in Mirror's Edge and Cryostasis in any other games? Now before you argue that PhysX runs poorly on CPUs because nVidia has no interest in it, let me point out that none other than Intel owns Havok. Yet, have we seen any Havok games that perform better than PhysX games? Have we seen these fluid or cloth effects in any Havok game?
And why is Intel developing Larrabee anyway? If CPUs are so good at physics, why does Intel always mention how Larrabee will accelerate physics? Like this:
http://isdlibrary.intel-dispat...csOnLarrabee_paper.pdf
It sounds a whole lot like what nVidia is already doing.

Secondly, GPUs handle things just fine. They're a much better solution than CPUs, because it's just a much better trade-off between extra physics effects and framerate. A CPU is actually a much greater waste of resources, because when you run intensive physics on your CPU, your game will become incredibly CPU-limited, and the framerate will tank into single-digit numbers. I really don't see why people use this argument; it makes no sense whatsoever.
 
Originally posted by: Kakkoii
@Astrallite: While a CPU can do physics, it sucks at it, even with 8 threads. It's really no match for the massive parallelism of a GPU, which plows through physics a lot quicker. And it's also relative to how much physics is going on in a game. You could easily get to the point of so many physics calculations going on that even an i7 Extreme wouldn't be able to handle it, while an 8800 still could, lol.

ATI needs to swallow its pride for once, and just accept Nvidia's offer to help implement CUDA on their hardware, along with PhysX.

It's not a matter of pride, it's a matter of adopting an open standard versus adopting a very niche proprietary solution that happens to be owned by the company they are competing with. nVidia could arbitrarily make it run better on their cards or worse on ATI's, for instance. (Nobody would put this past them either; they've done some pretty sleazy things in the past to cut out competition; the ULI chipset and SLI issue comes to mind.)

It's also because, to date, PhysX hasn't shown itself to be worth investing in yet. At the very least, if a proprietary standard is going to be used, it would be MUCH smarter to use one made by a company that isn't ATI or nVidia, such as Microsoft. I fully expect PhysX to slowly be replaced by other physics solutions with time. PhysX has done too little too late while being given a large amount of time to prove its worth.
 
Yet, have we seen any Havok games that perform better than PhysX games? Have we seen these fluid or cloth effects in any Havok game?

You may be interested in AMD's GPU-accelerated Havok cloth on OpenCL that was demoed at this year's GDC.

I agree CPUs have no place in physics; it is clear through recent investigations on this subject done all round the internet that even a Core i7 sucks at physics (going by PhysX demonstrations), and a GPU is simply better, end of story. I applaud Nvidia for giving me a reason to keep an old GPU for PhysX operations, as it is something I have wanted from AMD/NV for a while, but at the moment I'm not sure if PhysX is the way: with the games not up to standard and OpenCL on the way, it's difficult to say. It should also be said that Nvidia offers the PhysX SDK for free to devs as a complete physics engine, which is IMO a good move on their part; things like this can make PhysX adoption quicker and further the realism in games that would otherwise have to license a 3rd-party engine or spend time and money developing their own.
 
Thanks OP, I'm picking up the game today. I've heard the name but never investigated. Looks interesting and the graphics do look good. Will check it out after I finish Crysis.
 
Setting up a new PC layout now.

Gigabyte MA790GP-UD4H
PhenomII 730BE
4GB DDR2 1066
GTX295 Primary Renderer
8800GTS 512 for PhysX
Vista 32 SP1
2x 500GB Seagate RAID0
Thermaltake Armor case (Modded all fan connectors for 5v instead of 12. Nice and quiet with very good airflow. 4 120mm and 1 220mm fans)

Latest Addition: 3DVision
22" 120Hz Samsung 2233RZ Monitor
3DVision shutter glasses.

Games I'm currently installing:

Crysis
Crysis Warhead
Far Cry 2
Fallout 3
Cryostasis (GPU PhysX)
Mirror's Edge (GPU PhysX)
Unreal Tournament 3 (GPU PhysX)
GRAW2 (GPU PhysX)
BioShock
Half Life 2
Age of Conan
Call of Duty 2
Call of Duty 4
Call of Duty W@W
Painkiller
Painkiller BOOH expansion
Stalker
Assassins Creed

Cryostasis is a pretty hefty game. Depending on the capabilities of one's primary GPU, a second GPU for PhysX can certainly help bring up framerates.

I remember using a 9800GTX+ for the primary and an 8600GT for PhysX. It definitely helps.
 
PhysX is a cool technology but adoption of GPU acceleration of games has just been too slow. If consoles could do GPU acceleration we'd probably see many more games that truly benefit from nVidia/Ageia cards but that isn't the case.
 
Originally posted by: Sylvanas
Yet, have we seen any Havok games that perform better than PhysX games? Have we seen these fluid or cloth effects in any Havok game?

You may be interested in AMD's GPU-accelerated Havok cloth on OpenCL that was demoed at this year's GDC.

Yea, to which I have two things to say:
1) It's also running on a GPU, not on a CPU.
2) It's just a demo. We don't have OpenCL, we don't have a physics API that runs on OpenCL, let alone that we have any games using these effects. In fact, that demo wasn't even released to the public, so even if you have an AMD GPU, you can't actually run this demo.

Conclusion: Even AMD agrees that GPU physics are the way to go. However, at this point, PhysX is the only GPU-accelerated option available to end-users. Still no word on when an actual GPU-accelerated version of Havok will be available to end-users, let alone any actual games making use of it, to provide the same type of effects that PhysX now offers.

So to answer my own question:
No, we haven't seen any games with Havok, that offer the same effects that PhysX does. And we apparently won't see them until there's a GPU solution for Havok, whether that be AMD GPUs or Intel's Larrabee.
That was my point basically: If CPUs are so good at physics, then why have we never seen these effects before?

Originally posted by: Sylvanas
I applaud Nvidia for giving me a reason to keep an old GPU for PhysX operations, as it is something I have wanted from AMD/NV for a while, but at the moment I'm not sure if PhysX is the way: with the games not up to standard and OpenCL on the way, it's difficult to say.

PhysX isn't necessarily tied to CUDA. nVidia could port the codebase to OpenCL or DirectX Compute shaders if they wanted. That is not going to be the issue. Obviously there is currently no incentive to port PhysX over, since there is no competition. However, when Havok and other hardware-agnostic physics APIs start competing, nVidia might find it necessary to go hardware-agnostic as well. They've done it before, with Cg for example.
 
Originally posted by: Keysplayr
Vista 32 SP1

Excuse me for going offtopic... but erm, wow?
Why do you still use a 32-bit OS? I've been using x64 OSes for years now, and thought that pretty much every enthusiast would have switched now, considering the obvious limitations with the 4 GB memory barrier, not even getting into performance advantages with games offering native 64-bit binaries, such as Crysis.
Assuming you have a retail version of Vista, you can legally use the 64-bit version with the same key. You just need to get a 64-bit DVD, either from MS, or just borrow or download one.

And why don't you go for SP2? It's been out for a few weeks already.
 
Originally posted by: dguy6789
It's not a matter of pride, it's a matter of adopting an open standard versus adopting a very niche proprietary solution that happens to be owned by the company they are competing with.

While OpenCL may be an open standard, Havok most certainly is not.
Havok is owned by Intel, that other company they are competing with.

Originally posted by: dguy6789
nVidia could arbitrarily make it run better on their cards or worse on ATI's, for instance. (Nobody would put this past them either; they've done some pretty sleazy things in the past to cut out competition; the ULI chipset and SLI issue comes to mind.)

The exact same could be said about Intel... In fact, Intel has actually been convicted of blocking AMD's business in various countries.

Originally posted by: dguy6789
It's also because, to date, PhysX hasn't shown itself to be worth investing in yet. At the very least, if a proprietary standard is going to be used, it would be MUCH smarter to use one made by a company that isn't ATI or nVidia, such as Microsoft.

But Microsoft has not shown any interest in developing a physics API whatsoever.
I think we can all agree that Intel is not the right company to promote a physics standard any more than nVidia is.
So what is your point?
 
Originally posted by: thilan29
PhysX is a cool technology but adoption of GPU acceleration of games has just been too slow. If consoles could do GPU acceleration we'd probably see many more games that truly benefit from nVidia/Ageia cards but that isn't the case.

Thilan, you could just give all the devs that signed on a chance to develop their new games with PhysX. We already have a small taste, as lavaheadache describes, he will be keeping a closer eye on PhysX after experiencing Cryostasis. This is exactly how I saw things happening. The more experience people have with PhysX, the more interested in it they become.

 
Originally posted by: dguy6789
Originally posted by: Kakkoii
@Astrallite: While a CPU can do physics, it sucks at it, even with 8 threads. It's really no match for the massive parallelism of a GPU, which plows through physics a lot quicker. And it's also relative to how much physics is going on in a game. You could easily get to the point of so many physics calculations going on that even an i7 Extreme wouldn't be able to handle it, while an 8800 still could, lol.

ATI needs to swallow its pride for once, and just accept Nvidia's offer to help implement CUDA on their hardware, along with PhysX.

It's not a matter of pride, it's a matter of adopting an open standard versus adopting a very niche proprietary solution that happens to be owned by the company they are competing with. nVidia could arbitrarily make it run better on their cards or worse on ATI's, for instance. (Nobody would put this past them either; they've done some pretty sleazy things in the past to cut out competition; the ULI chipset and SLI issue comes to mind.)

It's also because, to date, PhysX hasn't shown itself to be worth investing in yet. At the very least, if a proprietary standard is going to be used, it would be MUCH smarter to use one made by a company that isn't ATI or nVidia, such as Microsoft. I fully expect PhysX to slowly be replaced by other physics solutions with time. PhysX has done too little too late while being given a large amount of time to prove its worth.

Niche proprietary solution?

1. It has been demonstrated MANY times that PhysX is not proprietary in the sense that nobody else can use it. It's only proprietary in the sense that Nvidia is the only one using it right now. Not that others can't. They can but won't.

2. Niche market? With an over-100-million PhysX-capable GPU install base (which would equate to about 1/3 the US population, but spread out worldwide), I'd hardly call PhysX a niche market. It's much broader and bigger than anything else out there right now. And growing. We are further along than we were 6 months ago, and will be much further along 6 months from now. It's not stopping, as some predicted. Devs are writing/coding as we type away here.

3. Microsoft: Speaking of MS, Windows 7 should prove very interesting with DirectX Compute built directly into the OS. So, in a way, what you want is already scheduled for release in Windows 7. Nvidia fully supports and is pushing ahead with OpenCL.

4. I don't actually understand what people have against Nvidia inventing any sort of standard. It's not like the technology is bad. Far from it. It is insane. You guys should be welcoming this sort of innovation. No other company on Earth is pushing technology as hard as Nvidia is, and always has. CUDA is an awesome architecture and has been proven many times over. PhysX is just making its debut with some full games. Young, but on its way. I really cannot get my mind around people being against Nvidia creating any sort of standard. It's not like they are a Tier 5 company. PhysX is now emerging as an interesting technology that fewer and fewer gamers are dismissing as a "checkbox" feature, and more are seeing as a true gaming boon.

So, it really doesn't matter whose "proprietary" technology eventually becomes a "standard" technology. The dude with the best tech wins. That's how it should be.
If Larrabee comes out and wastes everything in its path, then that's who should win the standards war. Everything was proprietary in its infancy. IP like this, if proven, goes on to become a standard.

 
Originally posted by: Keysplayr
Nvidia fully supports and is pushing ahead with OpenCL.

In fact, a few days ago, it was announced that nVidia's OpenCL drivers have passed certification:
http://insidehpc.com/2009/06/1...certification-process/

"NVIDIA Corporation has released the world's first OpenCL 1.0 conformant drivers for Windows XP and LINUX. These drivers are now available to all NVIDIA GPU Computing registered developers."

Not much from the AMD camp yet, sadly.
 
Originally posted by: Scali
Originally posted by: Keysplayr
Vista 32 SP1

Excuse me for going offtopic... but erm, wow?
Why do you still use a 32-bit OS? I've been using x64 OSes for years now, and thought that pretty much every enthusiast would have switched now, considering the obvious limitations with the 4 GB memory barrier, not even getting into performance advantages with games offering native 64-bit binaries, such as Crysis.
Assuming you have a retail version of Vista, you can legally use the 64-bit version with the same key. You just need to get a 64-bit DVD, either from MS, or just borrow or download one.

And why don't you go for SP2? It's been out for a few weeks already.

To be honest, I really do not see a need, at present, to use 64 bit Vista. For me. 4GB limitation seems to be fine. I can't really see needing anything more at this point. Maybe for Windows 7 I will consider it. And with games like Crysis, my performance is actually pretty good with a GTX295. Probably the reason I don't feel compelled to go 64 bit, is I don't see any shortcomings with the 32 bit version. Go Figure.

And I'm funny with Service Packs. I won't use them until they are about 6 months out to the public. In other words, well proven not to be a nightmare. It took me 6 months to install SP2 for XP. Again, it really didn't stop me from doing what I needed to do. Everything worked well and I was happy with it. If it ain't broke, don't fix it type of situation.

And besides, I only have 4GB of RAM for this rig anyway. yeah, RAM is cheap now, but it's even cheaper not to buy any. hehe.

Can you just tell me real quick what advantages/fixes SP2 for Vista will grant me?

 
Originally posted by: Scali
Originally posted by: Keysplayr
Nvidia fully supports and is pushing ahead with OpenCL.

In fact, a few days ago, it was announced that nVidia's OpenCL drivers have passed certification:
http://insidehpc.com/2009/06/1...certification-process/

"NVIDIA Corporation has released the world's first OpenCL 1.0 conformant drivers for Windows XP and LINUX. These drivers are now available to all NVIDIA GPU Computing registered developers."

Not much from the AMD camp yet, sadly.

I would be surprised to see AMD run Notepad without rendering errors at this point. What they have done in terms of GPGPU stuff is pathetic so far.
 
Originally posted by: Keysplayr
To be honest, I really do not see a need, at present, to use 64 bit Vista. For me. 4GB limitation seems to be fine.

4 GB is the 'hard limit', and this includes all memory-mapped I/O and such as well. In the very best case you only have about 3.2 GB of memory actually available to the system.
Considering you're basically putting 3 videocards into your system, with a total of over 2 GB of memory, there will probably be less than 2 GB of memory actually available to Vista.
Now I personally don't find Vista to be very snappy with less than 2 GB of memory. 2 GB really seems to be the sweet spot.

Another thing is that 32-bit versions of Windows also limit each application to a maximum of 2 GB. In a 64-bit version of Windows, those same applications (when built large-address-aware) can each get the full 4 GB.
I've personally found that games like Crysis use so many textures and other resources at very high settings, that you are running into the 2 GB limit. I found that running the 64-bit version reduced my loading time considerably, not only when loading a new game, but also during the game. In 32-bit, there were places where just turning around would cause the game to drop to single-digit frames while loading a new part of the level... This flushed the other part, so turning back would again cause slowdowns while it loaded the other part of the level again.
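The arithmetic behind the "less than 2 GB visible" estimate can be sketched out; the aperture sizes below are illustrative assumptions, not measured values for this exact board, so whether the result actually lands under 2 GB depends on the real hardware:

```python
# Back-of-envelope check: on a 32-bit OS the 4 GiB physical address space
# is shared with memory-mapped I/O, so every device aperture is subtracted
# from the RAM the OS can actually see. Aperture sizes are assumptions.
GiB = 1024 ** 3
MiB = 1024 ** 2

address_space = 4 * GiB
mmio_apertures = {
    "GTX295 (dual GPU, assumed)": 1 * GiB,
    "8800GTS 512 (assumed)": 512 * MiB,
    "chipset / other PCI devices (assumed)": 256 * MiB,
}
usable_ram = address_space - sum(mmio_apertures.values())
print(f"RAM visible to a 32-bit OS here: {usable_ram / GiB:.2f} GiB")
```

With these assumed apertures the system would see about 2.25 GiB of its 4 GiB of RAM; larger apertures push it below 2 GiB, as described above.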

Originally posted by: Keysplayr
Probably the reason I don't feel compelled to go 64 bit, is I don't see any shortcomings with the 32 bit version. Go Figure.

Well, you generally don't miss what you never had. I suggest you give it a try, you might not want to go back.
It's very easy to give it a try now, because you can use the Windows 7 x64 RC for free. All you need is some harddisk space and a bit of spare time.

Originally posted by: Keysplayr
And I'm funny with Service Packs. I won't use them until they are about 6 months out to the public. In other words, well proven not to be a nightmare. It took me 6 months to install SP2 for XP. Again, it really didn't stop me from doing what I needed to do. Everything worked well and I was happy with it. If it ain't broke, don't fix it type of situation.

Well, that's up to you. I find that Microsoft has generally tested their service packs very well, and by the time they are offered to the general public, there is little risk. Vista had quite a few problems originally, specifically with performance and memory usage. So keeping your system up to date is a good way to ensure maximum performance. Having said that, SP2 in itself isn't that compelling. But in general, I like to stay up to date with everything... also including BIOS versions and chipset/videocard drivers and all that.
 
Originally posted by: Scali
Originally posted by: Keysplayr
To be honest, I really do not see a need, at present, to use 64 bit Vista. For me. 4GB limitation seems to be fine.

4 GB is the 'hard limit', and this includes all memory-mapped I/O and such as well. In the very best case you only have about 3.2 GB of memory actually available to the system.
Considering you're basically putting 3 videocards into your system, with a total of over 2 GB of memory, there will probably be less than 2 GB of memory actually available to Vista.
Now I personally don't find Vista to be very snappy with less than 2 GB of memory. 2 GB really seems to be the sweet spot.

Another thing is that 32-bit versions of Windows also limit each application to a maximum of 2 GB. In a 64-bit version of Windows, those same applications (when built large-address-aware) can each get the full 4 GB.
I've personally found that games like Crysis use so many textures and other resources at very high settings, that you are running into the 2 GB limit. I found that running the 64-bit version reduced my loading time considerably, not only when loading a new game, but also during the game. In 32-bit, there were places where just turning around would cause the game to drop to single-digit frames while loading a new part of the level... This flushed the other part, so turning back would again cause slowdowns while it loaded the other part of the level again.

Originally posted by: Keysplayr
Probably the reason I don't feel compelled to go 64 bit, is I don't see any shortcomings with the 32 bit version. Go Figure.

Well, you generally don't miss what you never had. I suggest you give it a try, you might not want to go back.
It's very easy to give it a try now, because you can use the Windows 7 x64 RC for free. All you need is some harddisk space and a bit of spare time.

Originally posted by: Keysplayr
And I'm funny with Service Packs. I won't use them until they are about 6 months out to the public. In other words, well proven not to be a nightmare. It took me 6 months to install SP2 for XP. Again, it really didn't stop me from doing what I needed to do. Everything worked well and I was happy with it. If it ain't broke, don't fix it type of situation.

Well, that's up to you. I find that Microsoft has generally tested their service packs very well, and by the time they are offered to the general public, there is little risk. Vista had quite a few problems originally, specifically with performance and memory usage. So keeping your system up to date is a good way to ensure maximum performance. Having said that, SP2 in itself isn't that compelling. But in general, I like to stay up to date with everything... also including BIOS versions and chipset/videocard drivers and all that.

LOL Scali!! Just for the hell of it, and because this Vista Install is so new, and I haven't installed any games as of yet, I downloaded and tried to install SP2 for Vista.
And not to my surprise, I received a BSOD and memory dump. So, I tried. System is configured perfectly with all up to date drivers from respective manufacturers.

I've rebooted. I'll try the install again. What the heck right?
I'll let you know what happens. But this is precisely why I wait so long after SPs are released. I haven't had much luck with them, and also because I used to manage a roughly 2600-node, 40+ server Windows network. I was always SUPER cautious when it came to updates and service packs across the network. I guess that carried over to my personal use.

Sorry for this OT. Back to PhysX. Sorry gang. 😉
 
Originally posted by: Scali
Originally posted by: Keysplayr
Vista 32 SP1

Excuse me for going offtopic... but erm, wow?
Why do you still use a 32-bit OS? I've been using x64 OSes for years now, and thought that pretty much every enthusiast would have switched now, considering the obvious limitations with the 4 GB memory barrier, not even getting into performance advantages with games offering native 64-bit binaries, such as Crysis.
Assuming you have a retail version of Vista, you can legally use the 64-bit version with the same key. You just need to get a 64-bit DVD, either from MS, or just borrow or download one.

And why don't you go for SP2? It's been out for a few weeks already.

*Practically* there is no difference in gaming whatsoever, frameratewise; you gain a little in the load times with a 64-bit OS over 32 is all; MS gave the devs a hotfix to address the 2GB limitation and there are SO few games that are actually optimized for the 64-bit pathway

in fact Vista32 is slightly faster for most games

If you have an OEM version of Vista32 you are stuck with it; no upgrade to the 64 bit version without paying - and if you have the upgrade one, it costs about $15 for MS to send you the 64-bit disk

i am still on SP1 with all the hotfixes because i need consistency through all of my benching
- i am also building an AMD PC this week

a "budget" high-performance gamer built around an unlocked Phenom II X2 550 which i want to be able to take on my current Intel PC - Q9550S at 4.0GHz
New AMD build - *budget* high performance gamer for 19x12

it will have SP2 😛
- and i will match it in my current PC

 
Originally posted by: Keysplayr
Setting up a new PC layout now.

Gigabyte MA790GP-UD4H
PhenomII 730BE
4GB DDR2 1066
GTX295 Primary Renderer
8800GTS 512 for PhysX
Vista 32 SP1
2x 500GB Seagate RAID0
Thermaltake Armor case (Modded all fan connectors for 5v instead of 12. Nice and quiet with very good airflow. 4 120mm and 1 220mm fans)

Latest Addition: 3DVision
22" 120Hz Samsung 2233RZ Monitor
3DVision shutter glasses.

Games I'm currently installing:

Crysis
Crysis Warhead
Far Cry 2
Fallout 3
Cryostasis (GPU PhysX)
Mirror's Edge (GPU PhysX)
Unreal Tournament 3 (GPU PhysX)
GRAW2 (GPU PhysX)
BioShock
Half Life 2
Age of Conan
Call of Duty 2
Call of Duty 4
Call of Duty W@W
Painkiller
Painkiller BOOH expansion
Stalker
Assassins Creed

Cryostasis is a pretty hefty game. Depending on the capabilities of ones primary GPU, a second GPU for PhysX may surely help bring up framerates.

I remember using a 9800GTX+ for the primary and a 8600GT for PhysX. It definitely helps.

I am truly enjoying Cryostasis, using my 8800 GT SC to fully experience the PhysX effects. The game is pretty slick and quite creepy :shocked:
 
Originally posted by: Keysplayr

4. I don't actually understand what people have against Nvidia inventing any sort of standard. It's not like the technology is bad. Far from it. It is insane. You guys should be welcoming this sort of innovation. No other company on Earth is pushing technology as hard as Nvidia is, and always has. CUDA is an awesome architecture and has been proven many times over. PhysX is just making its debut with some full games. Young, but on its way. I really cannot get my mind around people being against Nvidia creating any sort of standard. It's not like they are a Tier 5 company. PhysX is now emerging as an interesting technology that fewer and fewer gamers are dismissing as a "checkbox" feature, and more are seeing as a true gaming boon.

So, it really doesn't matter whose "proprietary" technology eventually becomes a "standard" technology. The dude with the best tech wins. That's how it should be.
If Larrabee comes out and wastes everything in its path, then that's who should win the standards war. Everything was proprietary in its infancy. IP like this, if proven, goes on to become a standard.

This is a good point. I don't understand it either. Or rather, I think I do. People sometimes like to see the perceived Goliath fail. It's as simple as that. It's the "root for the underdog" mentality. Nv haters like to see AMD and Intel kick NV's ass. But these same people are/were probably bashing Intel too and rooting for AMD on the CPU front. And the same people who always bash Microsoft Windows, due to its complete and utter dominance in the desktop OS market, will espouse Linux... but we know in reality Linux is a hobbyist desktop OS, and most desktop PCs are still "standardized" on an extremely proprietary Windows OS. "Open source" software and intellectual property are a noble concept, and open source does have its place, but 95% of the time, intellectual property is not free, and usually those who invented it will want to control it. I wouldn't want to invent something and then have some slacker come along and ask me to share it with him, no strings attached. I remember people arguing/bashing against the MS DirectX API when it threatened OpenGL some 10+ years ago. And here we are today: DirectX has pretty much become the de facto gaming API for Windows.

AMD and Intel will do their best to derail and sabotage PhysX. Nv pretty much bends over backwards to help AMD get along with PhysX, but AMD is acting like a prima donna.
 
Originally posted by: apoppin
*Practically* there is no difference in gaming whatsoever, frameratewise; you gain a little in the load times with a 64-bit OS over 32 is all; MS gave the devs a hotfix to address the 2GB limitation and there are SO few games that are actually optimized for the 64-bit pathway

What hotfix are you talking about?
As far as I know, the 2 GB is by design (the OS needs some address space for itself, for things like the kernel and drivers and such).
You can use the /3GB boot switch (for executables linked with /LARGEADDRESSAWARE) to extend it to 3 GB, but that doesn't always work 100%. Quite a few drivers don't like it when you try to eat into the address space normally reserved for the kernel.
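To summarize the per-process side of this discussion, the user-mode address space a 32-bit application gets under each configuration can be tabulated (figures reflect standard Windows behavior of the era, per Microsoft's documented memory limits):

```python
# User-mode virtual address space for a 32-bit process under the
# configurations discussed above: the default 2/2 GiB split, the /3GB boot
# switch combined with a /LARGEADDRESSAWARE-linked executable, and the same
# large-address-aware executable running on 64-bit Windows.
GiB = 1024 ** 3

user_address_space = {
    "32-bit Windows, default split": 2 * GiB,
    "32-bit Windows, /3GB + /LARGEADDRESSAWARE exe": 3 * GiB,
    "64-bit Windows, /LARGEADDRESSAWARE 32-bit exe": 4 * GiB,
}
for config, space in user_address_space.items():
    print(f"{config}: {space // GiB} GiB of user address space")
```

This is why a 64-bit OS helps even purely 32-bit games like Crysis: the game binary doesn't change, but the 2 GiB ceiling it runs into is lifted.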
 
Originally posted by: shangshang
I remember pepole were arguing/bashing against MS DirectX API when it threaten OpenGL too some 10+ years ago. And here we are today, DirectX has pretty much become the de facto gaming API for Windows.

Not only that, but Direct3D is actually starting to get a foot in the door in the more professional markets as well. For example, Autodesk, maker of industry-standard packages like 3ds Max (via its Discreet division) and AutoCAD, started supporting Direct3D as an alternative to OpenGL a few years ago, and is now phasing out OpenGL altogether, because Direct3D gives them much more consistent results across a wide range of hardware, thanks to more mature drivers and better overall standardization and validation of drivers and hardware.
See here for example: http://usa.autodesk.com/adsk/s...eID=123112&id=10676494

"Autodesk is committed to Direct3D because it provides the best technology platform for 3D CAD and for the creation of digital prototypes. While the alternative open, multiplatform OpenGL graphics standard has served the industry for many years, it is no longer the dominant player. It has been eclipsed by a combination of the advances in cost performance at the low end of the graphics market and Microsoft's adoption of Direct3D as the standard graphics API on the Windows platform.

Direct3D offers many advantages. Hardware manufacturers must conform strictly to the Direct3D specification when they implement standard graphics features. Compliance with this specification is enforced through the Windows Hardware Quality Labs (WHQL) driver certification process. The rigorous test specification requires that graphics hardware and drivers demonstrate 100 percent implementation of the Direct3D specification. As a result, Direct3D-compliant graphics cards ensure a high-quality graphics experience, without the need for applications to include special profiles or case code. You thus receive increased stability and reliability.

For Autodesk Inventor customers, Direct3D lowers the cost of entry, so more users can adopt the Inventor application for their design, engineering, and manufacturing workflows, and allows existing users to leverage their previous investment in graphics cards.

With support for Direct3D introduced in Inventor 11, Autodesk had a full year of product experience with Direct3D before the release of Windows Vista. We have found that by leveraging Direct3D, Inventor implementation on Windows Vista shows some small but significant performance gains compared with Windows XP.

Finally, Autodesk has worked closely with the Microsoft Direct3D team to make sure that Direct3D is well designed for CAD applications. Direct3D 10, the latest release of Direct3D, is included with Windows Vista and offers a wide variety of powerful new features, many of which were driven by the needs of high-end CAD and visualization applications. Leveraging the Direct3D 10 next-generation platform, Autodesk will continue to build user interface enhancements that deliver accurate and compelling visual results."

Sadly some people are STILL in denial, and still think OpenGL (which is so "open" that the open-source implementation for Linux and *BSD had to be called MesaGL because of trademark issues and all that) is a superior standard. Even the professional market is now moving away from it.
 
Originally posted by: Scali
Originally posted by: apoppin
*Practically* there is no difference in gaming whatsoever, framerate-wise; you gain a little in load times with a 64-bit OS over 32-bit, is all. MS gave the devs a hotfix to address the 2 GB limitation, and there are SO few games that are actually optimized for the 64-bit pathway

What hotfix are you talking about?
As far as I know, the 2 GB limit is by design (the OS needs some address space for itself, for things like the kernel and drivers and such).
You can use the /LARGEADDRESSAWARE switch to extend it to 3 GB, but that doesn't always work 100%. Quite a few drivers don't like it when you try to get into the 2 GB of reserved address space.

Microsoft released one - i dunno - 18 months ago, and it made Vista32 work very well with the newer games of the time - including The Witcher

There were several articles on it at AT - i would have to *dig* for them and i am under a deadline for my own review 😛

i tested back then on 2 identical 4 GB PCs and found that - except for Hellgate: London and Far Cry - the majority of games ran slightly faster on Vista32, with no issues except perhaps slightly slower loading overall [which was not consistently measurable due to the way Vista handles memory].

Well, i updated my testing a month ago and published it, this time with more games - and found the same thing. Most games run slightly faster in Vista32. And there are a handful more well-optimized-for-64-bit games - including Far Cry 2 - but not Crysis :Q
 