AMD's Richard Huddy on DirectX 11, Eyefinity, and the competition


Seero

Golden Member
Nov 4, 2009
1,456
0
0
So wouldn't it be great if PhysX could be better optimized for multithreading? Then you could use the CPU to 100% instead of having the cores sitting idle. But Nvidia doesn't want that; they want you to buy a more powerful video card.
You assume too much, or simply hate Nvidia too much. Why won't Microsoft do it if they can? Who developed DirectX? Why isn't there a physics API in DX11? Because of Nvidia? I don't think so. There would be a physics API if physics actually ran better on the CPU than on the GPU, but it doesn't.

In theory, all cores can work together. In practice, they can't. They can work independently, but they still share the same cache, so not only can they not truly work together, there is a delay just to pass data from one core to another. The idea is to have one core figure out how to distribute the work, then send it to the rest of the cores. This works; although it doesn't fully utilize all cores, it is much better than before. But how many CPU cores are there? Eight? Sorry, there are 128 shaders in an 8800 GTS. That is why. The technology doesn't stop here, and the Fermi design further reduces the idle time on each shader, but that is beyond the scope of this discussion.
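To make the "one core distributes the work" idea concrete, here is a minimal sketch (all names are made up for illustration, not taken from any real engine):

```cpp
// Minimal sketch (illustrative names only): one "dispatcher" thread splits
// the physics work, and each remaining core integrates its own slice.
#include <cstddef>
#include <functional>
#include <thread>
#include <vector>

struct Body { float pos, vel; };

// Each worker owns a contiguous slice, so cores never contend for the
// same data; the only costs are the hand-off and the final join.
void integrate(std::vector<Body>& bodies, std::size_t begin, std::size_t end, float dt) {
    for (std::size_t i = begin; i < end; ++i)
        bodies[i].pos += bodies[i].vel * dt;
}

void step(std::vector<Body>& bodies, float dt) {
    unsigned workers = std::thread::hardware_concurrency(); // e.g. 4 or 8
    if (workers == 0) workers = 1;                          // unknown -> serial
    std::size_t chunk = bodies.size() / workers;
    std::vector<std::thread> pool;
    for (unsigned w = 0; w < workers; ++w) {
        std::size_t begin = w * chunk;
        std::size_t end = (w + 1 == workers) ? bodies.size() : begin + chunk;
        pool.emplace_back(integrate, std::ref(bodies), begin, end, dt);
    }
    for (auto& t : pool) t.join(); // dispatch and join are the overhead
}
```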

If Microsoft can create DirectX and force every video card vendor to pay and follow, why not the other way around? In fact, it should be the other way around. ATI/Nvidia know more about video cards than Microsoft, yet Microsoft dictates which functions to support. I don't like that, but I can't change it either. I too would like ATI/Nvidia to join hands and create a super OpenCL/GL engine for developers, but it isn't happening. Anyways.
 

Seero

Golden Member
Nov 4, 2009
1,456
0
0
Then why are most titles using CPU PhysX?

And according to 100 senior developers from various developer houses, well, PhysX is the most popular.

http://bulletphysics.org/wordpress/?p=88

Why?
Because ATI doesn't support PhysX. PhysX is treated as an add-on, while some games require physics to function. You can't create a game that only works with Nvidia cards, right?

Besides, passing data through PCI-E is very expensive. It is not smart to offload all physics computation to the GPU, even if it works on all GPUs. DirectCompute can lead to a new physics engine that works on all GPUs.
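To put rough numbers on that cost, here is a hypothetical back-of-envelope sketch (the buffer size and bandwidth are assumptions, not measurements) that times one PCI-E round trip with CUDA events:

```cpp
// Back-of-envelope sketch (hypothetical 32 MB of physics state): time one
// PCI-E round trip with CUDA events. At ~8 GB/s (PCIe 2.0 x16) this is
// roughly 4 ms each way -- half a 60 fps frame gone before any physics runs.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    const size_t bytes = 32u << 20;                 // 32 MB
    float *host = nullptr, *dev = nullptr;
    cudaMallocHost((void**)&host, bytes);           // pinned, for full bandwidth
    cudaMalloc((void**)&dev, bytes);

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    cudaMemcpy(dev, host, bytes, cudaMemcpyHostToDevice);  // upload state
    cudaMemcpy(host, dev, bytes, cudaMemcpyDeviceToHost);  // read results back
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    printf("round trip: %.2f ms (frame budget at 60 fps: 16.7 ms)\n", ms);
    return 0;
}
```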
 

waffleironhead

Diamond Member
Aug 10, 2005
7,063
570
136
Because ATI doesn't support PhysX. PhysX is treated as an add-on, while some games require physics to function. You can't create a game that only works with Nvidia cards, right?

Nice theory. I'm more inclined to think that developers do not want to spend time working on an exclusionary niche product. With CPU physics, everyone playing benefits. With PhysX you lock out some of your potential user base. Now don't get me wrong, I like what I see from PhysX. I just think it is going to go away with the addition of even more CPU cores that are not being utilized.
 

Seero

Golden Member
Nov 4, 2009
1,456
0
0
Nice theory. I'm more inclined to think that developers do not want to spend time working on an exclusionary niche product. With CPU physics, everyone playing benefits. With PhysX you lock out some of your potential user base. Now don't get me wrong, I like what I see from PhysX. I just think it is going to go away with the addition of even more CPU cores that are not being utilized.
PhysX will fade out, just like old video cards; the question is when. Sooner or later, a physics engine that does load balancing will arise, killing all existing physics engines. Again, the question is when?

As of now, PhysX is a vendor-specific physics engine that is widely used. More and more pre-developed code/APIs will appear, which attracts more developers to use it. These APIs will then become the standard once they are widely used. Will Microsoft jump in and create a new standard that does what PhysX does? I don't know; probably. It is a fight between Nvidia and Intel/Microsoft.
 

T2k

Golden Member
Feb 24, 2004
1,665
5
81
Because ATI doesn't support PhysX. PhysX is treated as an add-on, while some games require physics to function. You can't create a game that only works with Nvidia cards, right?

Excuse me, but I have to call that a load of horsecrap... Nvidia REFUSES TO LICENSE IT TO ATI. Read the interview: Huddy clearly stated that Nvidia publicly claims to be open but gave them the middle finger when they reached out.

Besides, passing data through PCI-E is very expensive. It is not smart to offload all physics computation to the GPU, even if it works on all GPUs. DirectCompute can lead to a new physics engine that works on all GPUs.
To be honest, once Intel and AMD release their respective CPU/GPU fusion products (supposedly in 2-3 years), it becomes irrelevant, and PhysX will go tits-up immediately unless Nvidia suddenly changes its attitude.
 

KIAman

Diamond Member
Mar 7, 2001
3,342
23
81
Totally off topic, but I believe Nvidia's marketing powers are strong enough that they could justify a new GPU architecture that folds laundry, and people would buy it thinking it's all the rage.

When it comes to proprietary versus standardized design, developers will flock to the standard simply because of market visibility and consistency.
 

SlowSpyder

Lifer
Jan 12, 2005
17,305
1,002
126
It's simply not true.

PhysX might claim a higher total number of games, but a lot of 'em are low-budget junk.
Havok is clearly doing better among big-budget, AAA titles - there's a reason for that...

Yea, one thing I did notice while looking at the lists of games is that Havok has a LOT of the big-name games.
 

waffleironhead

Diamond Member
Aug 10, 2005
7,063
570
136
Because ATI doesn't support PhysX. PhysX is treated as an add-on, while some games require physics to function. You can't create a game that only works with Nvidia cards, right?

Excuse me, but I have to call that a load of horsecrap... Nvidia REFUSES TO LICENSE IT TO ATI. Read the interview: Huddy clearly stated that Nvidia publicly claims to be open but gave them the middle finger when they reached out.

To be honest, once Intel and AMD release their respective CPU/GPU fusion products (supposedly in 2-3 years), it becomes irrelevant, and PhysX will go tits-up immediately unless Nvidia suddenly changes its attitude.

Nah, when that happens Nvidia will suddenly pull the vendor check on the code and VOILA! Suddenly PhysX works for everyone...:D
Then Nvidia marketing will step in and say, "We are pleased that the other hardware vendors have finally caught up to us and now have products that can properly work with our solution. Would anyone like to buy it?"
 

Seero

Golden Member
Nov 4, 2009
1,456
0
0
Excuse me, but I have to call that a load of horsecrap... Nvidia REFUSES TO LICENSE IT TO ATI. Read the interview: Huddy clearly stated that Nvidia publicly claims to be open but gave them the middle finger when they reached out.
Again, you are taking what an AMD PR manager says as the truth, while Nvidia's PR claims otherwise.

To be honest, once Intel and AMD release their respective CPU/GPU fusion products (supposedly in 2-3 years), it becomes irrelevant, and PhysX will go tits-up immediately unless Nvidia suddenly changes its attitude.
2-3 years is a long time, my friend. Any company becomes obsolete if it isn't improving. That is precisely why Nvidia pushes so hard on GPGPU and Tegra.
 

Outrage

Senior member
Oct 9, 1999
217
1
0
You assume too much, or simply hate Nvidia too much. Why won't Microsoft do it if they can? Who developed DirectX? Why isn't there a physics API in DX11? Because of Nvidia? I don't think so. There would be a physics API if physics actually ran better on the CPU than on the GPU, but it doesn't.

In theory, all cores can work together. In practice, they can't. They can work independently, but they still share the same cache, so not only can they not truly work together, there is a delay just to pass data from one core to another. The idea is to have one core figure out how to distribute the work, then send it to the rest of the cores. This works; although it doesn't fully utilize all cores, it is much better than before. But how many CPU cores are there? Eight? Sorry, there are 128 shaders in an 8800 GTS. That is why. The technology doesn't stop here, and the Fermi design further reduces the idle time on each shader, but that is beyond the scope of this discussion.

If Microsoft can create DirectX and force every video card vendor to pay and follow, why not the other way around? In fact, it should be the other way around. ATI/Nvidia know more about video cards than Microsoft, yet Microsoft dictates which functions to support. I don't like that, but I can't change it either. I too would like ATI/Nvidia to join hands and create a super OpenCL/GL engine for developers, but it isn't happening. Anyways.

So because Microsoft has problems multithreading DirectX, according to you, Nvidia can't multithread PhysX...

So a CPU with 4 cores has problems figuring out how to dispatch to 4 cores, but a GPU has 128 shaders and therefore doesn't have a problem with it...

OpenCL works with the GPU/CPU and a lot of other things, so why won't Nvidia port PhysX to OpenCL? Could it be that they want to sell you graphics cards?
 

MrK6

Diamond Member
Aug 9, 2004
4,458
4
81
If NVIDIA keeps getting manhandled in the sales department, and AMD gets its act together in its marketing department, you'll see a change of focus (hopefully sooner rather than later). Ideally, you'd want to see better multi-core support in games, with physics scaling pumped to more cores (for example, "High" physics settings in a game can only be enabled if you have a quad core). Meanwhile, you use the computational power of graphics cards for tessellation and the like. You want to supplement CPU physics processing with graphics horsepower (if enough is available), not divert it completely, which is what's being done now.

That's why PhysX on GPUs runs like shit, and I actually think it's mud on NVIDIA's face. NVIDIA promotes GPU-based PhysX like crazy, but when you turn it on, performance still sucks (especially on single-GPU setups). Guaranteed, if you scaled it to a modern quad core, you could get similar IQ with almost zippo performance hit. The setup is right for any developer with enough brains and marketing finesse to rip the rug right out from underneath NVIDIA using a highly scalable, multi-core CPU-based physics engine; why no one's done it is anyone's guess (or maybe there are barriers I'm not aware of).
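That kind of gating is trivial to write, by the way; a sketch with hypothetical names and thresholds, not any shipping engine's API:

```cpp
// Sketch of gating physics detail by core count (hypothetical enum and
// thresholds, purely illustrative).
#include <thread>

enum class PhysicsDetail { Low, Medium, High };

PhysicsDetail pickPhysicsDetail() {
    unsigned cores = std::thread::hardware_concurrency();
    if (cores >= 4) return PhysicsDetail::High;    // quad core or better
    if (cores >= 2) return PhysicsDetail::Medium;  // dual core
    return PhysicsDetail::Low;                     // single core, or unknown (0)
}
```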
 

T2k

Golden Member
Feb 24, 2004
1,665
5
81
Again, you are taking what an AMD PR manager says as the truth, while Nvidia's PR claims otherwise.

You are greatly confused: Huddy is the head of European Developer Relations; he has nothing to do with PR or marketing. It's Nvidia's standard practice to only let Dan Vivoli's PR dogs out.

2-3 years is a long time, my friend. Any company becomes obsolete if it isn't improving. That is precisely why Nvidia pushes so hard on GPGPU and Tegra.
Exactly: lacking an x86 license, Nvidia has no choice.
 

Seero

Golden Member
Nov 4, 2009
1,456
0
0
So because Microsoft has problems multithreading DirectX, according to you, Nvidia can't multithread PhysX...

So a CPU with 4 cores has problems figuring out how to dispatch to 4 cores, but a GPU has 128 shaders and therefore doesn't have a problem with it...

OpenCL works with the GPU/CPU and a lot of other things, so why won't Nvidia port PhysX to OpenCL? Could it be that they want to sell you graphics cards?
Nvidia has problems; that is why they redesigned GT200. On GT200, all shaders must wait until the last shader has finished before the next iteration begins. If a task uses only one shader, then all the other shaders have to wait until it is done. That is how I understand what "serial kernel execution" means. The new architecture allows "parallel kernel execution", which utilizes the shaders better.
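In code, the difference looks roughly like this (a simplified sketch; the kernel names are made up): GT200-class hardware runs the two kernels below back to back even though they are in separate streams, while Fermi can overlap them.

```cpp
// Simplified sketch, made-up kernel names. Both kernels are independent and
// sit in separate streams; pre-Fermi hardware still executes them one after
// the other ("serial kernel execution"), while Fermi can run them
// concurrently ("parallel kernel execution"), so a tiny kernel no longer
// stalls the rest of the chip.
#include <cuda_runtime.h>

__global__ void updateCloth(float* v, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) v[i] *= 0.99f;   // stand-in for real cloth work
}

__global__ void updateParticles(float* p, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) p[i] += 0.1f;    // stand-in for real particle work
}

void stepPhysics(float* cloth, int nc, float* parts, int np) {
    cudaStream_t s1, s2;
    cudaStreamCreate(&s1);
    cudaStreamCreate(&s2);
    updateCloth<<<(nc + 255) / 256, 256, 0, s1>>>(cloth, nc);      // stream 1
    updateParticles<<<(np + 255) / 256, 256, 0, s2>>>(parts, np);  // stream 2
    cudaStreamSynchronize(s1);
    cudaStreamSynchronize(s2);
    cudaStreamDestroy(s1);
    cudaStreamDestroy(s2);
}
```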
 

Seero

Golden Member
Nov 4, 2009
1,456
0
0
You are greatly confused: Huddy is the head of European Developer Relations; he has nothing to do with PR or marketing. It's Nvidia's standard practice to only let Dan Vivoli's PR dogs out.

Exactly: lacking an x86 license, Nvidia has no choice.
So Nvidia's employees are dogs who do nothing but lie, and AMD's employees are gods who do nothing but tell the truth? I think I really should stop replying to you.
 

Rezist

Senior member
Jun 20, 2009
726
0
71
Obviously, load balancing between CPU and GPU physics is the answer. I'm no programmer, but during games it seems the GPUs are already maxed out anyway while the CPU is left idle. Looking at that alone makes the CPU the number 1 choice, since it's not being utilized as well. The biggest disappointment of PhysX is the performance hit and the poor CPU utilization. If nVidia can work that out, they might have something good there.
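Something like this decision rule, presumably (a purely hypothetical sketch; the threshold and names are invented):

```cpp
// Hypothetical load-balancing decision (threshold and names made up):
// keep small batches on the otherwise-idle CPU, and only pay the PCI-E
// trip for batches big enough to be worth the GPU's time.
#include <cstddef>

enum class Device { CPU, GPU };

Device pickPhysicsDevice(std::size_t bodies, float gpuLoad /* 0..1 */) {
    const std::size_t kGpuWorthIt = 50000;          // made-up cutoff
    if (gpuLoad > 0.9f) return Device::CPU;         // GPU already maxed rendering
    if (bodies < kGpuWorthIt) return Device::CPU;   // transfer cost would dominate
    return Device::GPU;
}
```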
 

T2k

Golden Member
Feb 24, 2004
1,665
5
81
So Nvidia's employees are dogs and AMD's employees are gods? I think I really should stop replying to you.

Replying to your own strawman, getting angry then ignoring me as a result? :D
 

T2k

Golden Member
Feb 24, 2004
1,665
5
81
Obviously, load balancing between CPU and GPU physics is the answer. I'm no programmer, but during games it seems the GPUs are already maxed out anyway while the CPU is left idle. Looking at that alone makes the CPU the number 1 choice, since it's not being utilized as well. The biggest disappointment of PhysX is the performance hit and the poor CPU utilization. If nVidia can work that out, they might have something good there.

You don't understand: it WAS working fine on all cores until Nvidia DISABLED it unless you run one of their graphics cards - in which case it'd run faster on the GPU anyway, rendering the entire CPU mode moot.
 

SirPauly

Diamond Member
Apr 28, 2009
5,187
1
0
It's simply not true.

PhysX might claim a higher total number of games, but a lot of 'em are low-budget junk.
Havok is clearly doing better among big-budget, AAA titles - there's a reason for that...

Amazing how you read it that way, but let me help you clear this up: why are most PhysX titles CPU PhysX?
 

Seero

Golden Member
Nov 4, 2009
1,456
0
0
Obviously, load balancing between CPU and GPU physics is the answer. I'm no programmer, but during games it seems the GPUs are already maxed out anyway while the CPU is left idle. Looking at that alone makes the CPU the number 1 choice, since it's not being utilized as well. The biggest disappointment of PhysX is the performance hit and the poor CPU utilization. If nVidia can work that out, they might have something good there.
Poor CPU utilization has nothing to do with PhysX; it's DirectX 9 itself. Turning off PhysX doesn't magically max out my CPU. As for the drop in performance, I agree.
 

MrK6

Diamond Member
Aug 9, 2004
4,458
4
81
Amazing how you read it that way, but let me help you clear this up: why are most PhysX titles CPU PhysX?
How does that change his argument? Games that use CPU PhysX only are the ones for which NVIDIA hasn't developed GPU PhysX. Again, this is another area where proprietary solutions fail. PhysX is up for bid like any other engine, and my guess is it's cheaper to license than Havok (correct me if I'm wrong). Like any development, there's probably a lot of politics and budgeting that plays a role in this.
 

happy medium

Lifer
Jun 8, 2003
14,387
480
126
I read through some of the thread and there's one thing I don't understand.

If PhysX doesn't have any great "A" titles AND sucks either this way or that way,

Why does anyone care if ATI gets to use it, and why would AMD want it?
Why are all those people I read about pissed because Nvidia (the company that pays for the R&D) doesn't want to give the competition THEIR MONEY?

This thread is a pointless battle of Wreckage / T2K trollage.
 

SirPauly

Diamond Member
Apr 28, 2009
5,187
1
0
How does that change his argument? Games that use CPU PhysX only are the ones for which NVIDIA hasn't developed GPU PhysX. Again, this is another area where proprietary solutions fail. PhysX is up for bid like any other engine, and my guess is it's cheaper to license than Havok (correct me if I'm wrong). Like any development, there's probably a lot of politics and budgeting that plays a role in this.

Really? So if CPU PhysX is an afterthought, then why is PhysX so popular with senior developers from various developer houses? Or maybe it actually has great tools and abilities for the CPU, GPU, PC, Xbox, Sony, Nintendo, etc.?
 

Lonyo

Lifer
Aug 10, 2002
21,938
6
81
How does that change his argument? Games that use CPU PhysX only are the ones for which NVIDIA hasn't developed GPU PhysX. Again, this is another area where proprietary solutions fail. PhysX is up for bid like any other engine, and my guess is it's cheaper to license than Havok (correct me if I'm wrong). Like any development, there's probably a lot of politics and budgeting that plays a role in this.

PhysX is free for the binaries, $50k for the source.
For support etc. you have to pay extra ($8k), but to use just the physics engine it's free.

Then you have to consider that PhysX is multiplatform. Why use hardware features that are PC-only when it's easier to just make it basic and software-only, something that will run on all platforms?
So you've got cheap developers getting it for free, and you've got bigger players using the basic version because it's cross-platform and (presumably) fairly easy to port.
 

MrK6

Diamond Member
Aug 9, 2004
4,458
4
81
I read through some of the thread and there's one thing I don't understand.

If PhysX doesn't have any great "A" titles AND sucks either this way or that way,

Why does anyone care if ATI gets to use it, and why would AMD want it?
Why are all those people I read about pissed because Nvidia (the company that pays for the R&D) doesn't want to give the competition THEIR MONEY?
Hell if I know, and that was my point. In its current form, PhysX is nothing special, and there are much better physics engines out there that, for whatever reason, aren't being used. Like I said, if some company delivered a decent, practical product, they could sweep the market.