Answers from Nvidia for the top 5 questions this week.

Keysplayr

Elite Member
Jan 16, 2003
21,209
50
91
Week 1

1. Is NVIDIA moving away from gaming and focusing more on GPGPU? We have heard a lot about Fermi's compute capability, but nothing of how good it is for gamers.

Jason Paul, GeForce Product Manager: Absolutely not. We are all gamers here! But, like G80 and G200 before, Fermi has two personalities: graphics and compute. We chose to introduce Fermi's compute capability at our GTC conference, which was very compute-focused and attended by developers, researchers, and companies using our GPUs and CUDA for compute-intensive applications. Such attendees require fairly long lead times for evaluating new technologies, so we felt it was the right time to unveil Fermi's compute architecture. Fermi has a very innovative graphics architecture that we have yet to unveil.

Also, it's important to note that our reason for focusing on compute isn't all about HPC. We believe next generation games will exploit compute as heavily as graphics. For example:


Physical simulation: whether using PhysX, Bullet, or DirectCompute, GPU computing can add incredible dynamic realism to games through physical simulation of the environment.
Advanced graphical effects: compute shaders can be used to speed up advanced post-processing effects such as blurs, soft shadows, and depth of field, helping games look more realistic.
Artificial intelligence: compute shaders can be used for artificial intelligence algorithms in games.
Ray tracing: this is a little more forward-looking, but we believe ray tracing will eventually be used in games for incredibly photo-realistic graphics. NVIDIA's ray tracing engine uses CUDA.

Compute is important for all of the above. That's why Fermi is built the way it is, with a strong emphasis on compute features and performance.
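For a concrete sense of what the physical simulation item above means in practice, here is a minimal CUDA sketch. It is illustrative only, not NVIDIA's or any engine's actual code; the kernel, array names, and time step are all assumptions. It shows the kind of per-particle integration a GPU physics engine performs, with one thread per particle:

#include <cuda_runtime.h>

// One thread per particle: apply gravity, take an explicit Euler step,
// and bounce off a ground plane at y = 0.
__global__ void integrate(float3 *pos, float3 *vel, int n, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    vel[i].y -= 9.81f * dt;        // gravity
    pos[i].x += vel[i].x * dt;
    pos[i].y += vel[i].y * dt;
    pos[i].z += vel[i].z * dt;
    if (pos[i].y < 0.0f) {         // crude ground-plane collision
        pos[i].y = 0.0f;
        vel[i].y *= -0.5f;         // lose half the energy on the bounce
    }
}

int main()
{
    const int n = 1 << 20;         // a million particles
    float3 *pos, *vel;
    cudaMalloc(&pos, n * sizeof(float3));
    cudaMalloc(&vel, n * sizeof(float3));
    cudaMemset(pos, 0, n * sizeof(float3));
    cudaMemset(vel, 0, n * sizeof(float3));
    integrate<<<(n + 255) / 256, 256>>>(pos, vel, n, 1.0f / 60.0f);  // one 60 fps frame step
    cudaDeviceSynchronize();
    cudaFree(pos);
    cudaFree(vel);
    return 0;
}

Because each particle is independent, a kernel like this scales to millions of bodies per frame, which is exactly the point being made about compute in games.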

In addition, we wouldn't be investing so heavily in gaming technologies if we were really moving away from gaming. Here are a few of the substantial investments NVIDIA is currently making in PC gaming:


PhysX and 3D Vision technologies

The Way It's Meant to Be Played program, including technical support, game compatibility testing, developer tools, antialiasing profiles, ambient occlusion profiles, etc.

LAN parties and gaming events (including PAX, PDX LAN, Fragapalooza, Million Man LAN, BlizzCon, and QuakeCon to name a few recent ones)

2. Why has NVIDIA continued to refresh the G92? Why didn't NVIDIA create an entry-level GT200 piece of hardware? The constant G92 renames and reuse of this aging part have caused a lot of discontent amongst the 3D enthusiast community.

Jason Paul, GeForce Product Manager: We hear you. We realize we are behind with GT200 derivative parts, and we are doing our best to get them out the door as soon as possible. We invested our engineering resources in transitioning our G9x-class products from 65nm to 55nm manufacturing technology, as well as adding several new video and display features to GT 220/210, which pushed these GT200-derivative products later than usual. Also, 40nm capacity has been limited, which has made the transition more difficult.

Since its introduction, G92 has remained a strong price/performance product in our line-up. So why did we rebrand it? While hardware enthusiasts often look at GPUs in terms of the silicon core (i.e. G92) and architecture (i.e. GT2xx), many of our less techie customers instead think about GPUs simply in terms of performance, price, and feature set, summarized via the product name. The product name is an easy way to communicate how products with the same base feature set (i.e. DirectX 10 support) compare to each other in terms of price and performance. Let's take an example: what is the higher-performance product, an 8800 GT or a 9600 GT? The average Joe looking at an OEM web configurator or Best Buy retail shelf probably won't know the answer. But if they saw a 9800 GT and a 9600 GT, they would know that a 9800 GT would provide better performance. By keeping G92 branding current with the rest of our DirectX 10 product line-up, we were able to more effectively communicate to customers where the product fit in terms of price and performance. At the same time, we tried to make it clear to technical press that these new brands were based on the G92 core so enthusiasts would know this information up front.


3. Is it true that NVIDIA has offered to open up PhysX to ATi without stipulation so long as ATi offers its own support and codes its own driver, or is ATi correct in asserting that NVIDIA has stated that NV will never allow PhysX on ATi GPUs? What is NVIDIA's official stance on allowing ATi to create a driver at no cost for PhysX to run on their GPUs via OpenCL?

Jason Paul, GeForce Product Manager: We are open to licensing PhysX, and have done so on a variety of platforms (PS3, Xbox, Nintendo Wii, and iPhone to name a few). We would be willing to work with AMD, if they approached us. We can't really give PhysX away for "free" for the same reason why a Havok license or x86 license isn't free: the technology is very costly to develop and support. In short, we are open to licensing PhysX to any company who approaches us with a serious proposal.

4. Is NVIDIA fully committed to supporting 3D Vision for the foreseeable future with consistent driver updates, or will we see a decrease in support, as appears to be the current trend to many 3D Vision users? For example, a lot of games have major issues with shadows while running 3D Vision. Can profiles fix these issues, or are we going to have to rely on developers to implement 3D Vision-compatible shadows? What role do developers play in having a good 3D Vision experience at launch?

Andrew Fear, 3D Vision Product Manager: NVIDIA is fully committed to 3D Vision. In the past four driver releases, we have added more than 50 game profiles to our driver and we have seeded over 150 3D Vision test setups to developers worldwide. Our devrel team works hard to evangelize the technology to game developers, and you will see more developers ensuring their games work great with 3D Vision. Like any new technology, it takes time, and not every developer is able to fit 3D Vision changes into their development and release cycles. In the specific example of shadows, sometimes these effects are rendered with techniques that need to be modified to be compatible with stereoscopic 3D, which means we have to recommend users disable them. Some developers are making the necessary updates, and some are waiting to fix it in their next games.

In the past few months we have seen our developer relations team work with developers to make Batman: Arkham Asylum and Resident Evil 5 look incredible in 3D. And we are excited to see the new titles that are coming, such as Borderlands, BioShock 2, and Avatar, which should all look incredible in 3D.

Game profiles can help configure many games, but game developers spending time to optimize for 3D Vision will make the experience better. To help facilitate that, we have provided new SDKs for our core 3D Vision driver architecture that lets developers have almost complete control over how their game is rendered in 3D. We believe these changes, combined with tremendous interest from developers, will result in a large growth of 3D Vision-Ready titles in the coming months and years.

In addition to making gaming better, we are also working on expanding our ecosystem to support better picture, movie, and Web experiences in 3D. A great example is our support for the Fujifilm FinePix REAL 3D W1 camera. We were the first 3D technology provider to recognize the new 3D picture file format taken by the camera and provide software for our users. In upcoming drivers, you will also see even more enhancements for a 3D Web experience.

5. Could Favre really lead the Vikings to a Super Bowl?

Ujesh Desai, Vice President of GeForce GPU Business: We are glad that the community looks to us to tackle the tough questions, so we put our GPU computing horsepower to work on this one! After simulating the entire 2009-2010 NFL football season using a Tesla supercomputing cluster running a CUDA simulation program, we determined there is a 23.468% chance of Favre leading the Vikings to a Super Bowl this season.* But Tesla supercomputers aside, anyone with half a brain knows the Eagles are gonna finally win it all this year! :)

*Disclaimer: NVIDIA is not liable for any gambling debts incurred based on this data.
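For the curious, a toy Monte Carlo version of that joke fits in a few dozen lines of CUDA. This is a hypothetical sketch, not NVIDIA's actual simulator; the per-game win probability, season length, and playoff model are all made up. Each thread plays out one random season and the championship runs are tallied:

#include <cstdio>
#include <cuda_runtime.h>
#include <curand_kernel.h>

// Each thread simulates one random season: 16 regular-season games with a
// fixed per-game win probability, then 4 straight playoff wins for a title.
__global__ void simulate(unsigned long long seed, float pWin, int *titles, int nSims)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= nSims) return;
    curandState rng;
    curand_init(seed, i, 0, &rng);
    int wins = 0;
    for (int g = 0; g < 16; ++g)
        if (curand_uniform(&rng) < pWin) ++wins;
    bool champs = wins >= 12;                  // make the playoffs, crudely
    for (int g = 0; champs && g < 4; ++g)      // then win out
        champs = curand_uniform(&rng) < pWin;
    if (champs) atomicAdd(titles, 1);
}

int main()
{
    const int nSims = 1 << 20;                 // about a million seasons
    int *dTitles, hTitles = 0;
    cudaMalloc(&dTitles, sizeof(int));
    cudaMemset(dTitles, 0, sizeof(int));
    simulate<<<(nSims + 255) / 256, 256>>>(2009ULL, 0.65f, dTitles, nSims);
    cudaMemcpy(&hTitles, dTitles, sizeof(int), cudaMemcpyDeviceToHost);
    printf("Super Bowl probability: %.3f%%\n", 100.0 * hTitles / nSims);
    cudaFree(dTitles);
    return 0;
}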

Previously answered questions:

1. Does NVIDIA have any plans to reconsider their stance on allowing PhysX to function on an NVIDIA graphics card alongside a primary GPU from another manufacturer under Windows 7? If not, does NVIDIA have any plans to offer a capable discrete PhysX-only (PPU) card as a standalone product that will function with any video card, or is PhysX forever tied to the graphics card?

This is Nvidia's official stance on the issue. Nvidia does not officially support it for performance, QA, development, and business reasons.

http://physxinfo.com/news/330/...-nvi...configurations/

So it seems that at this time, Nvidia is not planning any reconsideration regarding PhysX. The Ageia PPU is supported under Windows XP and Windows Vista and support will continue. This is not true for Windows 7.

2. Does Nvidia have good coffee around the office? Is it a good whole bean roast ground fresh for each pot?

"We actually have a variety of coffee options.
I don't drink coffee, but I see a lot of people in line at our Starbucks kiosk in our cafeteria. We also have a frequent coffee buyer club.
We also have a variety of Flavia machines with assorted flavors including options like Milky Way mix-ins.
My favorite machine, however, is an automatic Peet's machine that dispenses lattes, cappuccinos, and a tasty frothy hot chocolate mixed with real milk!"

--------------------------------------------------------------------------------------

New questions are being selected today for next week's answers.


 
Scholzpdx

Apr 20, 2008
10,161
984
126
"Since its introduction, G92 has remained a strong price/performance product in our line-up. So why did we rebrand it? While hardware enthusiasts often look at GPUs in terms of the silicon core (i.e. G92) and architecture (i.e. GT2xx), many of our less techie customers instead think about GPUs simply in terms of performance, price, and feature set, summarized via the product name. The product name is an easy way to communicate how products with the same base feature set (i.e. DirectX 10 support) compare to each other in terms of price and performance. Let?s take an example ? what is the higher performance product, a 8800 GT or a 9600 GT? The average joe looking at an OEM web configurator or Best Buy retail shelf probably won?t know the answer. But if they saw a 9800 GT and a 9600 GT, they would know that a 9800 GT would provide better performance. By keeping G92 branding current with the rest of our DirectX 10 product line-up, we were able to more effectively communicate to customers where the product fit in terms of price and performance. At the same time, we tried to make it clear to technical press that these new brands were based on the G92 core so enthusiasts would know this information up front. "

Honestly, he didn't actually answer the question. 40nm is irrelevant as the GT200 is still 55nm, correct?

Also, the 8800 GT/9600 GT example doesn't make sense. The 9800 GT/8800 GT rebranding made a LOT of uninformed consumers pretty pissed when they found out that they didn't upgrade at all. Remember those threads? The 9xxx series shouldn't even exist, personally. Each one could have been renamed to an 8700 GT (9600 GT), an 8850 (9800 GT), and so forth. The rebranding ordeal really sucked for nVidia. It sounds little and petty now, but a ton of people were pissed.

I could complain about ATI and their 3xxx series essentially being the same as their 2xxx series, but the 3xxx series didn't carry a price premium as compared to the G92 series.
 

crisium

Platinum Member
Aug 19, 2001
2,643
615
136
I heard once that G92 was gonna be the 9800 series right off the bat, but they decided against it, fearing it would cannibalize G80 sales. Obviously the GTS 640 was dead as soon as the GT came out, since it cost more and performed less, but I guess they figured fewer people would buy an 8800 GTX or Ultra if a new series had a 9800 name. But it didn't seem to matter, as people went with the 8800 GT in droves anyway, some even dumping their GTXs. If G92 can still compete against the 4770, 4850, 5750, etc., then there's no need to get rid of it though.
 

Blazer7

Golden Member
Jun 26, 2007
1,099
5
81
3. Is it true that NVIDIA has offered to open up PhysX to ATi without stipulation so long as ATi offers its own support and codes its own driver, or is ATi correct in asserting that NVIDIA has stated that NV will never allow PhysX on ATi GPUs? What is NVIDIA's official stance on allowing ATi to create a driver at no cost for PhysX to run on their GPUs via OpenCL?

Jason Paul, GeForce Product Manager: We are open to licensing PhysX, and have done so on a variety of platforms (PS3, Xbox, Nintendo Wii, and iPhone to name a few). We would be willing to work with AMD, if they approached us. We can't really give PhysX away for "free" for the same reason why a Havok license or x86 license isn't free: the technology is very costly to develop and support. In short, we are open to licensing PhysX to any company who approaches us with a serious proposal.

So much for PhysX being an open standard.
 

Keysplayr

Elite Member
Jan 16, 2003
21,209
50
91
I think that it should have been the other way around. All G9x parts should have been the 9xxx series to differentiate from G8x/8xxx series. So, I would have called the very first 8800GT a 9800GT and the die shrink to 55nm a 9800GT+. Something like that. At any rate I think people got the performance that they paid for no matter what the model was called.
 

Keysplayr

Elite Member
Jan 16, 2003
21,209
50
91
Originally posted by: Blazer7
3. Is it true that NVIDIA has offered to open up PhysX to ATi without stipulation so long as ATi offers its own support and codes its own driver, or is ATi correct in asserting that NVIDIA has stated that NV will never allow PhysX on ATi GPUs? What is NVIDIA's official stance on allowing ATi to create a driver at no cost for PhysX to run on their GPUs via OpenCL?

Jason Paul, GeForce Product Manager: We are open to licensing PhysX, and have done so on a variety of platforms (PS3, Xbox, Nintendo Wii, and iPhone to name a few). We would be willing to work with AMD, if they approached us. We can't really give PhysX away for "free" for the same reason why a Havok license or x86 license isn't free: the technology is very costly to develop and support. In short, we are open to licensing PhysX to any company who approaches us with a serious proposal.

So much for PhysX being an open standard.

What is your definition of open standard? To spend millions on a technology and give it out for free? PhysX is open as it can be licensed by anyone and used by anyone. If you wanted to use Havok, you would have to pay a licensing fee. So PhysX is open to anyone who wishes to license it.
 

Blazer7

Golden Member
Jun 26, 2007
1,099
5
81
Originally posted by: Keysplayr
Originally posted by: Blazer7
3. Is it true that NVIDIA has offered to open up PhysX to ATi without stipulation so long as ATi offers its own support and codes its own driver, or is ATi correct in asserting that NVIDIA has stated that NV will never allow PhysX on ATi GPUs? What is NVIDIA's official stance on allowing ATi to create a driver at no cost for PhysX to run on their GPUs via OpenCL?

Jason Paul, GeForce Product Manager: We are open to licensing PhysX, and have done so on a variety of platforms (PS3, Xbox, Nintendo Wii, and iPhone to name a few). We would be willing to work with AMD, if they approached us. We can't really give PhysX away for "free" for the same reason why a Havok license or x86 license isn't free: the technology is very costly to develop and support. In short, we are open to licensing PhysX to any company who approaches us with a serious proposal.

So much for PhysX being an open standard.

What is your definition of open standard? To spend millions on a technology and give it out for free? PhysX is open as it can be licensed by anyone and used by anyone. If you wanted to use Havok, you would have to pay a licensing fee. So PhysX is open to anyone who wishes to license it.


"Hello JC,

Ill explain why this function was disabled.

Physx is an open software standard any company can freely develop hardware or software that supports it. Nvidia supports GPU accelerated Physx on NVIDIA GPUs while using NVIDIA GPUs for graphics. NVIDIA performs extensive Engineering, Development, and QA work that makes Physx a great experience for customers. For a variety of reasons - some development expense some quality assurance and some business reasons NVIDIA will not support GPU accelerated Physx with NVIDIA GPUs while GPU rendering is happening on non- NVIDIA GPUs. I'm sorry for any inconvenience caused but I hope you can understand.

Best Regards,
Troy
NVIDIA Customer Care"

Rings a bell?

Somehow I can't imagine the words "open standard" and "have to pay royalties" in the same sentence.

I would have been a lot happier with nV if they had gone with the royalties thing from the start instead of creating expectations that PhysX would be a truly open standard. Because that's what they did.
 

error8

Diamond Member
Nov 28, 2007
3,204
0
76
Originally posted by: Keysplayr
Originally posted by: Blazer7
3. Is it true that NVIDIA has offered to open up PhysX to ATi without stipulation so long as ATi offers its own support and codes its own driver, or is ATi correct in asserting that NVIDIA has stated that NV will never allow PhysX on ATi GPUs? What is NVIDIA's official stance on allowing ATi to create a driver at no cost for PhysX to run on their GPUs via OpenCL?

Jason Paul, GeForce Product Manager: We are open to licensing PhysX, and have done so on a variety of platforms (PS3, Xbox, Nintendo Wii, and iPhone to name a few). We would be willing to work with AMD, if they approached us. We can't really give PhysX away for "free" for the same reason why a Havok license or x86 license isn't free: the technology is very costly to develop and support. In short, we are open to licensing PhysX to any company who approaches us with a serious proposal.

So much for PhysX being an open standard.

What is your definition of open standard? To spend millions on a technology and give it out for free? PhysX is open as it can be licensed by anyone and used by anyone. If you wanted to use Havok, you would have to pay a licensing fee. So PhysX is open to anyone who wishes to license it.

Sounds fair to me. If AMD wants PhysX, they have to pay for it. There is no way around it.

Have to say, this is a very good initiative with all these questions and answers from Nvidia. Great work Keysplayr. :)
 

Idontcare

Elite Member
Oct 10, 1999
21,118
58
91
Originally posted by: Keysplayr
What is your definition of open standard? To spend millions on a technology and give it out for free? PhysX is open as it can be licensed by anyone and used by anyone. If you wanted to use Havok, you would have to pay a licensing fee. So PhysX is open to anyone who wishes to license it.

DirectX isn't an open standard either.

http://en.wikipedia.org/wiki/DirectX

Does the world end now? :laugh:

Originally posted by: Scholzpdx
"Since its introduction, G92 has remained a strong price/performance product in our line-up. So why did we rebrand it? While hardware enthusiasts often look at GPUs in terms of the silicon core (i.e. G92) and architecture (i.e. GT2xx), many of our less techie customers instead think about GPUs simply in terms of performance, price, and feature set, summarized via the product name. The product name is an easy way to communicate how products with the same base feature set (i.e. DirectX 10 support) compare to each other in terms of price and performance. Let?s take an example ? what is the higher performance product, a 8800 GT or a 9600 GT? The average joe looking at an OEM web configurator or Best Buy retail shelf probably won?t know the answer. But if they saw a 9800 GT and a 9600 GT, they would know that a 9800 GT would provide better performance. By keeping G92 branding current with the rest of our DirectX 10 product line-up, we were able to more effectively communicate to customers where the product fit in terms of price and performance. At the same time, we tried to make it clear to technical press that these new brands were based on the G92 core so enthusiasts would know this information up front. "

Honestly, he didn't actually answer the question. 40nm is irrelevant as the GT200 is still 55nm, correct?

Also, the 8800 GT/9600 GT example doesn't make sense. The 9800 GT/8800 GT rebranding made a LOT of uninformed consumers pretty pissed when they found out that they didn't upgrade at all. Remember those threads? The 9xxx series shouldn't even exist, personally. Each one could have been renamed to an 8700 GT (9600 GT), an 8850 (9800 GT), and so forth. The rebranding ordeal really sucked for nVidia. It sounds little and petty now, but a ton of people were pissed.

I could complain about ATI and their 3xxx series essentially being the same as their 2xxx series, but the 3xxx series didn't carry a price premium as compared to the G92 series.

I think he did. While he did not state it explicitly, I think it is safe to assume that anyone discussing "GT200 derivatives aimed at the mainstream" implicitly understands this requires a die shrink in order to get the cost structure low enough that a mainstream part can be sold at a mainstream price. He proceeded to answer that question by stating they are working on it, that 40nm capacity is a limiting factor at the moment, and that up till now they had chosen instead to invest resources into managing G92b since 55nm capacity was available.
 

Blazer7

Golden Member
Jun 26, 2007
1,099
5
81
Originally posted by: error8
Originally posted by: Keysplayr
Originally posted by: Blazer7
3. Is it true that NVIDIA has offered to open up PhysX to ATi without stipulation so long as ATi offers its own support and codes its own driver, or is ATi correct in asserting that NVIDIA has stated that NV will never allow PhysX on ATi GPUs? What is NVIDIA's official stance on allowing ATi to create a driver at no cost for PhysX to run on their GPUs via OpenCL?

Jason Paul, GeForce Product Manager: We are open to licensing PhysX, and have done so on a variety of platforms (PS3, Xbox, Nintendo Wii, and iPhone to name a few). We would be willing to work with AMD, if they approached us. We can't really give PhysX away for "free" for the same reason why a Havok license or x86 license isn't free: the technology is very costly to develop and support. In short, we are open to licensing PhysX to any company who approaches us with a serious proposal.

So much for PhysX being an open standard.

What is your definition of open standard? To spend millions on a technology and give it out for free? PhysX is open as it can be licensed by anyone and used by anyone. If you wanted to use Havok, you would have to pay a licensing fee. So PhysX is open to anyone who wishes to license it.

Sounds fair to me. If AMD wants PhysX, they have to pay for it. There is no way around it.

I think that my point is clear. It's not that nV doesn't have the right to ask for royalties for licensing their own tech. It's all about the way the whole PhysX thing started and how it came up that pisses me off.

Have to say, this is a very good initiative with all these questions and answers from Nvidia. Great work Keysplayr. :)

I second that. :thumbsup:

 

Painman

Diamond Member
Feb 27, 2000
3,805
29
86
TL;DR. ATi has a kickass, sold-out VPU, nV has a press conference.

Did I miss anything?
 

Barfo

Lifer
Jan 4, 2005
27,554
212
106
Originally posted by: Painman
TL;DR. ATi has a kickass, sold-out VPU, nV has a press conference.

Did I miss anything?

Are you Wreckage's red team counterpart? :roll:
 

nOOky

Platinum Member
Aug 17, 2004
2,862
1,875
136
Me too. Does anyone from Nvidia ever peruse forums like this and give credence to any of our complaints or concerns? I ask because it's a popular enthusiast site. On Newegg's user reviews, for example, manufacturers have now started to respond to individual concerns. I would think the marketing/development/driver teams would use some of the info here for improvement.
 

yh125d

Diamond Member
Dec 23, 2006
6,907
0
76
Let's take an example: what is the higher-performance product, an 8800 GT or a 9600 GT? The average Joe looking at an OEM web configurator or Best Buy retail shelf probably won't know the answer. But if they saw a 9800 GT and a 9600 GT, they would know that a 9800 GT would provide better performance.


Yeah, picking 9800 GT for the name was a bad move and pissed a lot of people off. They had several other names that would have been much more appropriate. 9600 GTS/X, 9800 GS, 9700-something...

Woulda cleared a lot of confusion

Anywho, it seems that Nvidia would be open to licensing PhysX to ATI? That's good news if I'm understanding it right. I still don't care much for PhysX, but it's def good to hear Nvidia is open to it becoming less proprietary.

And no, I don't agree that nV should just make it open source; they developed it, and they deserve proper reimbursement for others using it through licensing fees (though not extravagant ones).
 

ZimZum

Golden Member
Aug 2, 2001
1,281
0
76
Originally posted by: Idontcare
DirextX isn't an open standard either.

http://en.wikipedia.org/wiki/DirectX

Does the world end now? :laugh:

Difference being, since Microsoft owns 90% of the PC market share, their releases such as DirectX become de facto standards. nVidia does not have this luxury. They are hoping devs adopt PhysX as a standard. But they are in an industry where proprietary standards have a history of failing. That was actually the whole point of DirectX: to do away with the proprietary vendor-specific APIs.
 

KIAman

Diamond Member
Mar 7, 2001
3,342
23
81
Originally posted by: Keysplayr
Week 1

1. Is NVIDIA moving away from gaming and focusing more on GPGPU? We have heard a lot about Fermi's compute capability, but nothing of how good it is for gamers.

Jason Paul, GeForce Product Manager: Absolutely not. We are all gamers here! But, like G80 and G200 before, Fermi has two personalities: graphics and compute. We chose to introduce Fermi's compute capability at our GTC conference, which was very compute-focused and attended by developers, researchers, and companies using our GPUs and CUDA for compute-intensive applications. Such attendees require fairly long lead times for evaluating new technologies, so we felt it was the right time to unveil Fermi's compute architecture. Fermi has a very innovative graphics architecture that we have yet to unveil.

Also, it's important to note that our reason for focusing on compute isn't all about HPC. We believe next generation games will exploit compute as heavily as graphics. For example:


Physical simulation: whether using PhysX, Bullet, or DirectCompute, GPU computing can add incredible dynamic realism to games through physical simulation of the environment.
Advanced graphical effects: compute shaders can be used to speed up advanced post-processing effects such as blurs, soft shadows, and depth of field, helping games look more realistic.
Artificial intelligence: compute shaders can be used for artificial intelligence algorithms in games.
Ray tracing: this is a little more forward-looking, but we believe ray tracing will eventually be used in games for incredibly photo-realistic graphics. NVIDIA's ray tracing engine uses CUDA.


Compute is important for all of the above. That's why Fermi is built the way it is, with a strong emphasis on compute features and performance.

In addition, we wouldn't be investing so heavily in gaming technologies if we were really moving away from gaming. Here are a few of the substantial investments NVIDIA is currently making in PC gaming:


PhysX and 3D Vision technologies

The Way It's Meant to Be Played program, including technical support, game compatibility testing, developer tools, antialiasing profiles, ambient occlusion profiles, etc.

LAN parties and gaming events (including PAX, PDX LAN, Fragapalooza, Million Man LAN, BlizzCon, and QuakeCon to name a few recent ones)

Nvidia's current GPUs already handle these "features." How does Fermi do it differently? Simply run faster? It still doesn't answer the question of why Fermi is the architecture to move towards from a gaming perspective.
 

Borealis7

Platinum Member
Oct 19, 2006
2,914
205
106
I love how one of their managers is called "Andrew Fear"! Wouldn't want to be working under him though... lol!

But I liked the first answer regarding Fermi's compute vs. graphics abilities. Basically they're saying "remain calm, you ain't seen nothin' yet".
 

aka1nas

Diamond Member
Aug 30, 2001
4,335
1
0
Originally posted by: Blazer7
3. Is it true that NVIDIA has offered to open up PhysX to ATi without stipulation so long as ATi offers its own support and codes its own driver, or is ATi correct in asserting that NVIDIA has stated that NV will never allow PhysX on ATi GPUs? What is NVIDIA's official stance on allowing ATi to create a driver at no cost for PhysX to run on their GPUs via OpenCL?

Jason Paul, GeForce Product Manager: We are open to licensing PhysX, and have done so on a variety of platforms (PS3, Xbox, Nintendo Wii, and iPhone to name a few). We would be willing to work with AMD, if they approached us. We can't really give PhysX away for "free" for the same reason why a Havok license or x86 license isn't free: the technology is very costly to develop and support. In short, we are open to licensing PhysX to any company who approaches us with a serious proposal.

So much for PhysX being an open standard.

This seemed like really the wrong question to ask. Can we ask them next time if they will add OpenCL (or Compute Shader) support to PhysX? That would likely make all these other PhysX questions moot. The last part of the above question is NOT the same thing: if PhysX ran on plain old OpenCL, other vendors wouldn't need to write a custom driver just for PhysX and could use their existing or forthcoming OpenCL drivers.

 

Wreckage

Banned
Jul 1, 2005
5,529
0
0
Originally posted by: ZimZum

Difference being, since Microsoft owns 90% of the PC market share, their releases such as DirectX become de facto standards.

NVIDIA has nearly 70%, so I guess PhysX is now the de facto standard. :confused:

Again that does not make it an open standard. Just like PhysX, DX is proprietary.

It's a simple fact some people refuse to accept.
 

mwmorph

Diamond Member
Dec 27, 2004
8,882
1
81
I like the answer to question #5. It's the best I've felt about Nvidia all year.

I do have a question keys:

Windows 7 has just been released, and with it DirectX 11 has hit retail customers. What is Nvidia's vision and strategy for DirectX 11's compute shader functionality, OpenCL, and CUDA? Beyond that, can Nvidia tell us about DirectX 11 and what we as customers can look forward to from a gaming perspective with Fermi?
 

error8

Diamond Member
Nov 28, 2007
3,204
0
76
A question from me: leaving all the rumors aside, can Nvidia at least give us a time frame in which Fermi will be released? I know it's not possible to say January 10th, for example, but at least something like spring, summer, or autumn.
 

happy medium

Lifer
Jun 8, 2003
14,387
480
126
I have a simple question. Will you be able to use a DirectX 10 8800 GTX / GTX 260 card for PhysX with a DirectX 11 Fermi card? If not, why not?