ATI Havok GPU physics apparently not as dead as we thought


Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
Originally posted by: BenSkywalker
Who is more important to Apple? Intel or Sickly NV.

You should study up a bit on the history of Apple, Intel has been a very, very small supplier for them on a historical basis. Apple is also very much the sort of company that would drop x86 whenever they felt like it and move to a completely different architecture- they have done it twice before.

You seem to think a VP at NV being the head of a group is a big deal.

I pointed out that he was, as some exceptionally dishonest people were acting like OpenCL was an AMD/Intel proposal from the start- they were brought on board after Apple and nVidia had already been working on the standard. These things are fact.

Lol, not even close.

Then quite frankly you have no idea what you are talking about. I explained the issues with the rather shockingly poor demo vid you linked, the "physics" in it were horribly broken and I pointed out easy ways for anyone to be able to tell. If you can't understand that, I can't help you. Go to school, learn a little bit about computer science perhaps?

Besides, that's in this game

No, it isn't. Supposedly it is in the engine, but we don't have proof of that either. No one involved claims it is in the game.

I said Intel and Apple were working on a standard outside of DX and it was all true

Apple has had OpenGL for many years now; what are you talking about? Seriously, do you have any idea at all what the difference between OpenGL and OpenCL is? Honestly, it sounds like you don't have the slightest understanding of any of the technologies involved. You give the impression that you can read press releases and then put together exceedingly poorly written posts.

Give me a break on the GL/CL thing, OK? We're talking CL here. If we were talking Khronos, then we could talk GL a little. Nice attempt at diversion. You don't know what was going on between Intel and Apple. But I can tell you for a fact that both Apple and I knew about Larrabee way before you did.

On the content of what's in the game Project Offset: this is their stance. It has been their stance since Intel took over.

Mark Subotnick
Executive Producer of Project Offset

Hello from your friendly neighborhood Executive Producer of Project Offset.

We are all back at work and attempting to recover from a great and very tiring GDC. We had a great time presenting some of our editor, tools, tech and the Meteor Destruction Video. Rémi and Ian gave a great presentation that was well received and attended. The videos and website refresh are also being very well received and we thank you all for your continued support.

We know you want to see gameplay and you want to know what the game will be like.
We know you want to know all about when you can play it and on what platform you will be able to play it.
We know you want to know why it has been a while since we showed you anything, and why we are still showing tech demos and examples of content and not the game.
We know you want to know all about it, and we share your excitement and wish we could tell you all.

Q: Has the game gone back to the drawing board since you last saw it?
A: In some ways, yes. We have had a design scrub post-Intel acquisition. The gameplay footage you have seen in the past was from a prototype. However, we did not throw away all work or assets from that effort. We will share all we can as soon as possible and when ready. I wish I could share it all right now, but we need to make sure what we share is what we can deliver and that we make no false promises to you, our fans. And yes, we work for Intel and we work closely with them to decide when to share with you what we have been up to. Please keep checking the site for updates and news. We will keep updating as often as possible.

Yes, it has been a while, and yes, this project and team started small and was then acquired by Intel. The tech and engine have been in development for some time, and post-acquisition they did need some re-work to target Intel architectures in the best way possible. We also took some time after completing the prototype of the game, which was created before the acquisition, to look at the size of the team, our resources, the games out on shelves currently and our target hardware architectures, and did some re-working of design and gameplay mechanics. We did a round or two of design scrubs. We also have some great new help in crafting our story in the best way possible, to share with you and let you experience it and get immersed in our world and characters.

We are thrilled with the progress we have made and continue to make. Our tech, our game, our tools, our story, and our progress are all exciting to us internally. We have pretty pictures, but we also have fun and exciting gameplay in store for you and a narrative in plan for the life of the franchise.

We will be sharing all we can with you as soon as possible. We know you are eager to see and learn. We are eager to share and to get your feedback.

Please keep the questions, feedback and support coming - and tell a friend.

What did you find exciting at GDC? There was a lot of buzz around a few new things and technologies.
What games most excited you?
Any keynotes that shocked, excited or surprised you?
Any news that interested you?

Please go to our forum and post, post, post. We also have weekly topics that we will blog about and would love your feedback on those. Again go to the forums and post away.

We love to hear your thoughts about us and the industry.

I will avoid giving the impression of promoting anyone's tech over any other, so I will not personally comment on what I liked and will leave that to the press and you all. As a gamer I am excited by a lot of the tech and news.

Some common themes this year at GDC:
· New gaming services
· iPhone games and all the people going to make apps for the iPhone
· How do we keep innovating and creating and pushing the bar while ensuring an ROI?
· Facial Animation and the technologies around them
· A continued push for strong community tools and support
· Free gaming business models

I am happy to see our industry still strong during such tough times. We are happy and grateful to be working and not only that but to be working on a game, engine, editor and world class tools. Thank you for all of your support.

Please share your thoughts on all topics, and keep the questions coming.

I am off to rest and recover some more, and then back to work making a fun game for you all to play.


http://forums.anandtech.com/me...103927&highlight_key=y
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
Give me a break on the GL/CL thing, OK? We're talking CL here.

Then why on Earth were you talking about it changing gaming on Apple's hardware? What do you really think is going to change in a profound manner because of CL? You are the one that brought it up, so you go ahead and explain it. You have also made comments about CL hurting DirectX..... that is rather interesting, to say the least. Given that OpenCL shares almost no functionality at all with DirectX, it is a tad bit confusing. It would be akin to saying that Blu-ray was going to kill off iTunes; it just doesn't make any sense.

But I can tell you for a fact that both Apple and I knew about Larrabee way before you did.

And I would care why? A bad idea with horrible execution time for a flawed concept based on bad predictions, not exactly something I'd be all that interested in honestly. But that isn't what this thread is about, I thought I obliterated enough of your delusions about Larrabee and how profoundly stupid ray tracing is today in the last discussion we had about this? But that is getting off topic.

You don't know what was going on between Intel and Apple.

Intel came on board with OpenCL after Apple and nVidia had been working on it for a while. As far as Larrabee may pertain to Apple- people do not buy Macs to game on. Larrabee may not suck horribly at media-based tasks, so it may work out for Apple depending on how much longer it takes Intel to actually get anything done, but it won't be because of the gaming performance (that is of a much smaller level of importance to Apple than general computing capabilities).
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
Originally posted by: BenSkywalker
Give me a break on the GL/CL thing, OK? We're talking CL here.

Then why on Earth were you talking about it changing gaming on Apple's hardware? What do you really think is going to change in a profound manner because of CL? You are the one that brought it up, so you go ahead and explain it. You have also made comments about CL hurting DirectX..... that is rather interesting, to say the least. Given that OpenCL shares almost no functionality at all with DirectX, it is a tad bit confusing. It would be akin to saying that Blu-ray was going to kill off iTunes; it just doesn't make any sense.

But I can tell you for a fact that both Apple and I knew about Larrabee way before you did.

And I would care why? A bad idea with horrible execution time for a flawed concept based on bad predictions, not exactly something I'd be all that interested in honestly. But that isn't what this thread is about, I thought I obliterated enough of your delusions about Larrabee and how profoundly stupid ray tracing is today in the last discussion we had about this? But that is getting off topic.

You don't know what was going on between Intel and Apple.

Intel came on board with OpenCL after Apple and nVidia had been working on it for a while. As far as Larrabee may pertain to Apple- people do not buy Macs to game on. Larrabee may not suck horribly at media-based tasks, so it may work out for Apple depending on how much longer it takes Intel to actually get anything done, but it won't be because of the gaming performance (that is of a much smaller level of importance to Apple than general computing capabilities).

Your first bolded point. Here, let me make it easy for you. Intel doesn't need DX to run its new tech. What does that mean? Let me think! Could it be that if Intel doesn't need DX, Intel's software and hardware will slide right into place on Snow OS with CL? Could this change things for Apple or Intel, if Intel is self-dependent? Maybe AMD and NV will say, hey, we don't want to be tied to DX. So I guess what I am saying is Larrabee will work on Apple as well as, if not better than, on MS. That's a game changer. Wait till you guys see both AMD and Intel lock their hardware to a platform. Coming very soon. The sooner the better. NV will work on Apple only if it's running MS. ATI can run on Apple only if it's on MS.

Larrabee can run on Snow OS natively, thanks to CL. Also, Apple has done some other things to help Intel CPUs with threading as well. It's going to be exciting. Of course, I am referring to DX gaming here.


Your second point bolded: got a link to go with that? I already showed that's a false statement. You get a link saying Apple approached NV on CL. Because this CL thing started a long time ago, between Intel and Apple, in '06. You will never accept this until we all see Larrabee performing on Apple's Snow OS alongside the i7. Maybe Intel/Apple cooked up a scheme to bring NV on board because NV's VP was head of the Khronos group, lol. By Apple going to NV chipsets. A move that hurt Apple big time. Lots of failures. And lost market share for Apple. You might ask why Intel/Apple would do this for Intel. Tough question. Let me think. Tough. Could it be... Fast forward: the year is 2010 and Intel is introducing Sandy Bridge. Sandy is a new arch. It doesn't run native x86. Yep, that's right, an Intel CPU that doesn't run native x86. x86 is ported over to Intel's new tech, called AVX.

So here's the payoff for Apple. All x86 programs ported to AVX will run natively on Snow OS. That's the deal; it's a mutual benefit to both parties. In the end, NV and MS will lose out. You can look for Apple to sell its OS to the public in 2011.
 

BFG10K

Lifer
Aug 14, 2000
22,709
2,978
126
Originally posted by: Frank Encruncher

Besides all that, most physics are terrible anyway.
Most will agree with me: until I can drop a wall on an opponent using a tank shell or fireball (genre of your choice), physics are wasteful.
Check out the interviews and trailers for Red Faction 3. All static fixtures (buildings, bridges, walls, etc.) are destructible because they use real physics principles to perform load-bearing calculations. If the engine deems you've done enough damage to the right places, the entire building can collapse.

When developers first started implementing buildings, a lot of buildings instantly collapsed because they weren't "built" properly, and the engine decided they weren't strong enough to support their own weight. This required a dramatic change in the way level design was approached.
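To make the load-bearing idea concrete, here is a toy sketch (purely illustrative; not Red Faction's or Havok's actual code, and the members, weights and strengths are made up): each piece of a structure passes its weight down to its supports, and anything overloaded or left unsupported gives way on the next pass.

// Toy "load-bearing" check: purely illustrative, not any shipping engine's code.
#include <cstdio>
#include <vector>

struct Member {
    float weight;     // weight this piece adds to whatever is below it
    float strength;   // load it can carry before failing
    int   support;    // index of the member it rests on, -1 = the ground
    bool  destroyed;  // has the player blown it up (or has it failed)?
};

// One relaxation pass: sum the load on every member, fail anything overloaded
// or resting on a destroyed support. Returns true if something new gave way.
bool collapsePass(std::vector<Member>& m) {
    std::vector<float> load(m.size(), 0.0f);
    for (size_t i = 0; i < m.size(); ++i) {
        if (m[i].destroyed) continue;
        for (int s = m[i].support; s != -1; s = m[s].support)
            load[s] += m[i].weight;              // weight flows down to every support
    }
    bool changed = false;
    for (size_t i = 0; i < m.size(); ++i) {
        if (m[i].destroyed) continue;
        bool unsupported = m[i].support != -1 && m[m[i].support].destroyed;
        if (unsupported || load[i] > m[i].strength) {
            m[i].destroyed = true;               // this piece fails on this pass
            changed = true;
        }
    }
    return changed;
}

int main() {
    // Three stacked wall sections: 0 is the ground floor, 2 is the top.
    std::vector<Member> wall = {
        {4.0f, 10.0f, -1, false},
        {4.0f,  6.0f,  0, false},
        {4.0f,  3.0f,  1, false},
    };
    wall[0].destroyed = true;                    // a tank shell takes out the base
    while (collapsePass(wall)) {}                // let the failure propagate
    for (size_t i = 0; i < wall.size(); ++i)
        std::printf("section %zu: %s\n", i, wall[i].destroyed ? "down" : "standing");
    return 0;
}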

The irony is that the whole thing runs under software physics: an in-house engine for the calculations, and Havok for the actual physics effects. It's also PC and console compatible.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
Have you noticed lately the rising animosity that MS is showing towards Apple? Lol. Some great thinker at MS finally connected the dots. This all started heating up when it was announced that x86 programs will be ported to AVX on Intel after late 2010.

Damn, it took the boys at MS a long time to catch on to what was going on right under their noses.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
Originally posted by: BenSkywalker
Who is more important to Apple? Intel or Sickly NV.

You should study up a bit on the history of Apple, Intel has been a very, very small supplier for them on a historical basis. Apple is also very much the sort of company that would drop x86 whenever they felt like it and move to a completely different architecture- they have done it twice before.

You seem to think a VP at NV being the head of a group is a big deal.

I pointed out that he was, as some exceptionally dishonest people were acting like OpenCL was an AMD/Intel proposal from the start- they were brought on board after Apple and nVidia had already been working on the standard. These things are fact.

Lol, not even close.

Then quite frankly you have no idea what you are talking about. I explained the issues with the rather shockingly poor demo vid you linked, the "physics" in it were horribly broken and I pointed out easy ways for anyone to be able to tell. If you can't understand that, I can't help you. Go to school, learn a little bit about computer science perhaps?

Besides, that's in this game

No, it isn't. Supposedly it is in the engine, but we don't have proof of that either. No one involved claims it is in the game.

I said Intel and Apple were working on a standard outside of DX and it was all true

Apple has had OpenGL for many years now; what are you talking about? Seriously, do you have any idea at all what the difference between OpenGL and OpenCL is? Honestly, it sounds like you don't have the slightest understanding of any of the technologies involved. You give the impression that you can read press releases and then put together exceedingly poorly written posts.

Bolded part, Ben.

It really takes a lot, Ben, to get under my skin. Even with your base attempts at baiting.

You saw in Meteor that the leaves didn't move to your liking, or the flags didn't do this or that.

Yet with all your great powers of observation, when I told you it was in the scene you said no, it's not. I bolded what ya said above. Now watch your little dress swirl, OK? Then watch that Meteor video again. Now take those swirling clouds, paint them red, and in the center put a dancer.

So what is it, baiting or what?

Ben, it's OK. When I get angry I walk away. I never go back. I got pretty pissed here once, in the water cooling section; I will never go back. As usual, I was right about the Extreme Systems BS on a waterblock and rad that was recently released. I tested the stuff myself after my daughter showed me her results. Let's just say they were very good. My test equipment for watercooling on PCs is as good as it gets, plus we test inside the case. There was a lot of lying going on and I got pretty freaking pissed about it. I can't say all the reasons why, but trust me, there are good reasons. This is something I am GREAT at, not good, great. So I take no shit on the subject; I am so good with large hydraulic systems it surprises even me, and making them smaller is too easy. Everything else I am pretty loose with.

 

instantcoffee

Member
Dec 22, 2008
28
0
0
Originally posted by: chizow
Really, what's the advantage? 9 more months of the same CPU physics we've seen for the last 5-6 years? And what about the consumers? The longer it takes for GPU physics adoption from both IHVs, the longer it'll take for developers to implement additional GPU physics effects in games.

The advantage is continuous hardware optimization for the API in the years to come, with major support. Optimizations for everyone. Not everything is good to throw at the GPU, since the GPU has other tasks than calculating physics.

For us consumers, it's better with Havok, which has constantly been used as hardware-agnostic middleware, in contrast to PhysX, which currently is used more to push hardware. It also takes longer with PhysX to get GPU-accelerated physics for everyone, since it's using CUDA as the platform. The result of PhysX's approach is that GPU-accelerated physics exists only as an addition to the game instead of a part of the game, as seen with Mirror's Edge.


LMAO. So let's get this straight. You think AMD would allow Nvidia to write drivers for their hardware? Here's where SunnyD starts lecturing you about the obvious licensing and IP problems you'd encounter.

The SDK isn't released to write the drivers, but to write applications for ATI cards that will access the card's resources. I don't think you understand what the SDK is for, if you think it's there so people can write drivers for ATI cards.

Rofl, that's funny, it seems you've done nothing but post misinformation and lies while blatantly ignoring facts to the contrary since you joined. As for Havok clearly being the better solution....lol...are you saying CPU physics is a better alternative to GPU physics? You're defending AMD's position to segregate the GPU physics market and you're advocating things that benefit all consumers? LOOOOOOOL.

Misinformation and lies come from you, and according to your post history, you have done that for years. Guerrilla marketing for Nvidia at any chance you get. You're even trying to twist facts to make bad publicity for AMD and good publicity for Nvidia. Just look at the quote from your post.
AMD is not segregating the GPU physics market. That's another lie from you. First of all, Havok belongs to Intel, not AMD. Secondly, Havok's OpenCL approach gives GPU physics to everyone, not only GeForce cards. I am defending AMD's choice of selecting a hardware-agnostic alternative versus something Nvidia uses to push hardware. Since I buy Nvidia and ATI GPUs, and AMD and Intel CPUs, Havok is the best alternative for me as a consumer.
That's the difference between us as well. You want to market Nvidia whenever you get the chance, even by lying as above. AMD's choice is not segregation but unification. With OpenCL, both ATI and Nvidia get Havok support by default. With CUDA, only Nvidia cards get PhysX (on the GPU). Nvidia's approach segregates the GPU market.

Past titles and licensing agreements show exactly that, history. The point of those announcements is to show that publishers are giving developers the choice to implement whatever middleware they choose. There are more Havok titles than PhysX, I've never claimed otherwise, but there's also absolutely no denying there are more GPU-accelerated PhysX titles than Havok titles, offering more advanced physics simulations that aren't possible on current CPUs. Do licensing agreements automatically translate into more PhysX titles? No. Do they offer more potential for adoption, more options for developers, and ultimately, a higher probability of GPU accelerated titles in the future? Absolutely yes.

I think you should read it again (and note that it's not about licensing agreements, but about what they actually use):

As a part of our long-standing partnership with Havok, nine out of our ten internal studios, including Relic, Rainbow and Volition, are actively using Havok Physics and other Havok products in development today.

That was funny in light of Nvidia's press release. The release from Havok is newer, btw. It's even funnier considering your post:

End result is you have some of the largest publishers on the PC (EA, 2K, THQ) and 2 of the 3 major consoles (Wii and PS3) along with some of the most influential game engines (UE 3.0 and Gamebryo) all signing and announcing major licensing agreements with PhysX in the past 8-9 months since Nvidia announced GPU accelerated PhysX. I'd say support is clearly growing in PhysX's favor, if anything

Makes your post about licensing agreements being history a bit contradictory to your previous post about those agreements. But everything is allowed when guerrilla marketing Nvidia, right?

LOL, clear as mud. The only thing you've made clear is that you have no clue about what you're talking about. And to a lesser degree, that you're bent on promoting misinformation over all else.

I think it's more your understanding that's a bit limited here. That CUDA provides support for OpenCL, DX11, etc. doesn't mean that OpenCL, DX11, etc. support CUDA. You cannot access CUDA through DX11 or OpenCL. It's not a two-way street.


LOL, which is the end result I've been advocating all along, widespread adoption of GPU accelerated physics which will ultimately lead to accelerated implementation in games. Nvidia will be able to claim support for both "in the best interest of consumers" while AMD and people like you advocate segregation of the market, ultimately delaying any widespread use of GPU physics.

The only thing you have been advocating is what you always advocate: Nvidia, CUDA and PhysX. Again you repeat the lie about AMD advocating segregation of the market. I'll repeat and add:
Intel owns Havok, not ATI.
Havok is currently the biggest physics middleware provider and existed before PhysX.
Havok's GPU acceleration is hardware agnostic through OpenCL. Both ATI and Nvidia benefit from it and will be able to get it by default. Unlike PhysX, where Nvidia won't let anyone else use it (on the GPU) unless they adopt CUDA as well.
As long as Nvidia is using PhysX to push hardware and CUDA, I don't think you will see widespread use of GPU PhysX physics.
With Havok on OpenCL, developers will be able to use GPU-accelerated physics and reach everyone. ATI supports it, Nvidia supports it and Intel will support it as well. Consumers win.

AMD is not advocating segregation of the market. That's a lie you should stop spreading. If anything, they are supporting Intel's Havok, which covers the whole market instead of a part of it.
 

Keysplayr

Elite Member
Jan 16, 2003
21,209
50
91
Instant coffee. I think it's pretty much safe to say, that you're marketing just as much as you think Chizow is. I mean, you're both calling each other liars. Why don't you both take this crap to PM's and spare us all. Thanks.
 

instantcoffee

Member
Dec 22, 2008
28
0
0
Originally posted by: Keysplayr
Instant coffee. I think it's pretty much safe to say, that you're marketing just as much as you think Chizow is. I mean, you're both calling each other liars. Why don't you both take this crap to PM's and spare us all. Thanks.

I disagree about that. The only thing I am marketing is my opinions. I'm not marketing ATI/AMD/Nvidia or Intel. What I am interested in, is their products. Chizow is an "independent (or not)" PR agent for Nvidia. His world is about Nvidia vs. ATI, Intel vs. AMD, while mine is about their products, not the companies. He's mostly crapping on all the competition of Nvidia.

I didn't initiate the name calling, but I did respond to it. Him lying about stuff, for the sake of Nvidia PR, isn't anything new, but I didn't want to go to that level before responding to his post. We can always take it on PM's, but I don't think Chizow would do that. Historically, Chizow goes after the person instead of the subject in discussions at some point in every discussion. I doubt he will change that.

I will stop any namecalling for now, but I will respond if he continues.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
Your first bolded point. Here, let me make it easy for you. Intel doesn't need DX to run its new tech. What does that mean? Let me think! Could it be that if Intel doesn't need DX, Intel's software and hardware will slide right into place on Snow OS with CL? Could this change things for Apple or Intel, if Intel is self-dependent? Maybe AMD and NV will say, hey, we don't want to be tied to DX.

Neither AMD nor nVidia has been tied to DX; for nV in particular, all the way back to the NV1, they haven't been tied to DX. Their hardware happens to run DirectX, and I think you will find most people like having MS in the position where they control DirectX, as it gives us a standard to build the industry around. Perhaps you are too young to remember, but the API wars of the early days of real-time 3D were not enjoyable as a gamer.

NV will work on Apple only if it's running MS.

nVidia parts on Apple hardware run OpenGL, as they have for many years now. Macs do not run DirectX natively, nor have they ever (well, the hardware is capable of it if you install Windows now, but not natively under Mac OS).

Fast forward: the year is 2010 and Intel is introducing Sandy Bridge. Sandy is a new arch. It doesn't run native x86. Yep, that's right, an Intel CPU that doesn't run native x86. x86 is ported over to Intel's new tech, called AVX.

Seriously, I think a comp sci class or two would help you out enormously. Either that or spend time reading online; start back with the Pentium Pro. Intel chips haven't run actual x86 code on an internal basis for years. Furthermore, the Itanium 'EPIC' architecture has also been around for many years. Nothing you are talking about is remotely new, and we know exactly how much of an impact it will have in the market. You seem to think Intel has divine foresight; how about NetBurst hitting 10GHz? Most of us will remember that foolishness.

This all started heating up when it was announced that x86 programs will be ported to AVX on Intel after late 2010.

Macs already run x86 natively on Mac OS. No need to fast forward to any point in time; it isn't something that is coming in the future- it has been happening for quite some time. Again, common knowledge and publicly available information for some time now.

When I told you it was in the scene you said no, it's not. I bolded what ya said above. Now watch your little dress swirl, OK? Then watch that Meteor video again. Now take those swirling clouds, paint them red, and in the center put a dancer.

Having a uniform environment impacted by all physical activity is good physics, no matter what the graphics look like. Having a bunch of disjointed elements working out of sync is bad physics, period. I made no mention of the graphics differences as we are discussing physics in this thread, and in that regard that demo was very poor.
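To make the "uniform environment" point concrete, here is a hedged toy sketch (nobody's actual engine; the wind function and constants are invented): every element samples the same global wind field on the same clock, so leaves, flags and cloth respond to the same gusts instead of running disjointed canned animations.

// Toy sketch: one shared wind field drives every scene element in sync.
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

// A single, global wind function of position and time. Every object queries
// this same field, so nothing can drift out of sync with the environment.
Vec3 windAt(const Vec3& p, float t) {
    float gust = std::sin(0.7f * t) + 0.3f * std::sin(2.3f * t + p.x * 0.1f);
    return { 5.0f * gust, 0.0f, 2.0f * std::cos(0.5f * t + p.z * 0.05f) };
}

struct Particle {           // stands in for a leaf, a flag vertex, a dust mote...
    Vec3 pos, vel;
    void step(float t, float dt) {
        Vec3 w = windAt(pos, t);                     // sample the shared field
        vel.x += (w.x - vel.x) * dt;                 // drag toward the wind velocity
        vel.z += (w.z - vel.z) * dt;
        pos = { pos.x + vel.x * dt, pos.y, pos.z + vel.z * dt };
    }
};

int main() {
    Particle leaf { {0, 2, 0}, {0, 0, 0} };
    Particle flag { {3, 5, 1}, {0, 0, 0} };
    for (float t = 0.0f; t < 2.0f; t += 0.1f) {      // both advance on the same clock
        leaf.step(t, 0.1f);
        flag.step(t, 0.1f);
    }
    std::printf("leaf x=%.2f  flag x=%.2f (both pushed by the same gusts)\n",
                leaf.pos.x, flag.pos.x);
    return 0;
}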

His world is about Nvidia vs. ATI, Intel vs. AMD, while mine is about their products, not the companies.

If that were the case, why would you be opposed to AMD offering support for Havok AND PhysX? No one that has been backing PhysX in this discussion has stated that nVidia shouldn't support both, that I can recall. Offering your customers the best product you are capable of shouldn't be too much to ask, even for AMD.
 

instantcoffee

Member
Dec 22, 2008
28
0
0
Originally posted by: BenSkywalker
If that were the case, why would you be opposed to AMD offering support for Havok AND PhysX? No one that has been backing PhysX in this discussion has stated that nVidia shouldn't support both, that I can recall. Offering your customers the best product you are capable of shouldn't be too much to ask, even for AMD.


This one was an answer to me, I think.

I'm not opposed to AMD offering support for Havok AND PhysX. If PhysX gets ported to OpenCL, AMD will automatically gain support for PhysX, as Nvidia does with Havok. I am opposed to the current strategy of running PhysX off CUDA. Nvidia says they might port it to OpenCL at a later stage, while Havok says they will use OpenCL. For me, it means I might like to have PhysX around at a later stage... :p

Also, with the extensive use of shaders in current and upcoming games, I'm more interested in Havok's more balanced CPU/GPU approach. This looks more interesting to me, since I am fond of SSAO (and also of Nvidia putting it in as an option in their drivers, which is a good thing in my book) and want to use shaders for that as well.

Currently, developers cannot use both Havok and PhysX in the same game, and it seems Havok has more to offer now than PhysX for me as a gamer.
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Just got my instant coffee, time to take out the trash. :)

Originally posted by: instantcoffee
The advantage is continuous hardware optimization for the API in the years to come, with major support. Optimizations for everyone. Not everything is good to throw at the GPU, since the GPU has other tasks than calculating physics.
You do realize both AMD and Intel also understand the future of physics is hardware acceleration, right? The only reason Intel would capitulate to a GPU-accelerated, non-x86-based solution would be to keep Havok relevant by matching PhysX's hardware capabilities.

For us consumers, it's better with Havok, which has constantly been used as hardware-agnostic middleware, in contrast to PhysX, which currently is used more to push hardware. It also takes longer with PhysX to get GPU-accelerated physics for everyone, since it's using CUDA as the platform. The result of PhysX's approach is that GPU-accelerated physics exists only as an addition to the game instead of a part of the game, as seen with Mirror's Edge.
PhysX is only used to push hardware? That's odd, prior to this week I thought it was used to push hardware PhysX capabilities previously impossible on existing hardware and software solutions (discounting Ageia PPU).

Also, how does it take longer with PhysX's approach just because it's CUDA? You think it would've taken 8-9 months for AMD to write some working CUDA drivers for AMD parts?

The SDK isn't released to write the drivers, but to write applications for ATI cards that will access the card's resources. I don't think you understand what the SDK is for, if you think it's there so people can write drivers for ATI cards.
Rofl, I don't think you understand; neither is needed to write a driver for AMD's hardware, but they certainly would be necessary for API and software validation to ensure the drivers actually did what they were written to do. The point is that AMD's claim about "closed and proprietary standards" preventing them from supporting PhysX is a farce.

Misinformation and lies come from you, and according to your post history, you have done that for years. Guerrilla marketing for Nvidia at any chance you get. You're even trying to twist facts to make bad publicity for AMD and good publicity for Nvidia. Just look at the quote from your post.
Rofl, misinformation and lies coming from me? I've been spot on with my analysis of hardware physics and all the different APIs in play since day 1, with few exceptions. I've repeatedly corrected you in this thread with facts and references disputing your false claims. Seeing as you haven't yet mentioned Havok FX or OpenGL, or claimed Nvidia can't support OpenCL, in this reply, it's obvious facts have prevailed.

But none of this is really any surprise given your brief posting history and propensity for posting misinformation, although I will say a low post count + high concentration of FUD certainly makes it easy to remember people like you. ;) I guess you don't recall when you first posted, guns a-blazing, full of "facts":
  • "Facts" right?

    Nvidia getting sued by Microsoft for buggy drivers......We don't have all the material for the lawsuit, but Microsoft obviously feels the evidence is good enough to bring to court. They probably have more evidence than the submitted reports, since they are suing Nvidia for bad drivers.

    Games crash on Nvidia cards even when they have "worked closely with the developers". A good example might be Assassin's Creed, where DX10.1 caused bugs on Nvidia cards, so it needed to be removed in the next patch. Or Fallout 3 running faster on ATI cards when it first came out, even though it was a TWIMTBP title.

So yes, please let me know if I ever post anything as egregiously false as that, or anything you've posted in this thread (or here ever, really) and make sure to back those claims up with actual facts and references and not whatever inaccurate "facts" you fabricate next.

AMD is not segregating the GPU physics market. That's another lie from you. First of all, Havok belongs to Intel, not AMD. Secondly, Havok's OpenCL approach gives GPU physics to everyone, not only GeForce cards. I am defending AMD's choice of selecting a hardware-agnostic alternative versus something Nvidia uses to push hardware. Since I buy Nvidia and ATI GPUs, and AMD and Intel CPUs, Havok is the best alternative for me as a consumer.
No, it's not a lie. If they had supported PhysX as it became available, and also supported Havok, you'd have a point. Instead they gave deceitful reasons about not supporting "closed and proprietary standards" and not moving forward with GPU physics until it "makes sense". The end result is that the resulting segregation of the GPU physics market has retarded the adoption of GPU physics for at least another 8-9 months, plus whatever lead time before Havok's OpenCL client is released for public consumption.

That's the difference between us as well. You want to market Nvidia whenever you get the chance, even by lying as above. AMD's choice is not segregation but unification. With OpenCL, both ATI and Nvidia get Havok support by default. With CUDA, only Nvidia cards get PhysX (on the GPU). Nvidia's approach segregates the GPU market.
Uh, no, I've pushed for GPU-accelerated physics from day 1 because I saw the potential for making games better. I honestly do not care what standards or middleware are being used, as long as the hardware I own supports it and the application results in better games. That's very different from all the AMD apologists providing reasons for why AMD shouldn't support a new, innovative feature that their hardware is perfectly capable of supporting and that would advance the industry.

Also, you seem to be forgetting that OpenCL was only ratified a few months ago and is very similar to CUDA (again, given Nvidia chaired the standard, and probably wrote the code to largely mirror CUDA). If AMD had worked on a CUDA driver, that effort would've been value-add toward an OpenCL driver, which is the same source of optimism Nvidia owners should have for seamless OpenCL Havok support. There is no technical reason the hardware cannot support it and the APIs are very similar, so the inverse should hold true for AMD and PhysX even if they were committed to OpenCL.
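For what it's worth, the similarity is easiest to see at the kernel level. A hedged side-by-side sketch (my own illustrative saxpy example, not anything from AMD, Nvidia or this thread), with the two dialects held as plain C++ string literals:

// Illustrative only: the same trivial "saxpy" kernel written in CUDA C and in
// OpenCL C, held here as C++ string literals for side-by-side comparison.
#include <cstdio>

static const char* cuda_kernel = R"(
__global__ void saxpy(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // CUDA: built-in thread indices
    if (i < n) y[i] = a * x[i] + y[i];
}
)";

static const char* opencl_kernel = R"(
__kernel void saxpy(int n, float a, __global const float* x, __global float* y) {
    int i = get_global_id(0);                        // OpenCL: equivalent global index
    if (i < n) y[i] = a * x[i] + y[i];
}
)";

int main() {
    // The host APIs differ (cudaMalloc/cudaMemcpy vs. clCreateBuffer/clEnqueue*),
    // but the data-parallel kernel code is nearly line-for-line the same.
    std::printf("CUDA version:%s\nOpenCL version:%s\n", cuda_kernel, opencl_kernel);
    return 0;
}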

I think you should read it again (and note that its not about licensing agreement, but what they actually use):

<links>

Makes your post about licensing agreements being history a bit controdictive to your previoius post about those agreements. But, everything is allowed when guerilla marketing Nvidia, right?
Rofl, you still don't understand; I've never claimed PhysX support is greater than Havok support. I claimed PhysX support is growing, as is clearly indicated by major publishers, platforms, and developers licensing the SDK. Surely you understand that allows for a greater probability that PhysX is used in games going forward, right? If PhysX were insignificant and not gaining support, why would these developers sign licensing agreements to use it to begin with?

Also, there's no need to engage in guerrilla marketing, or even gorilla marketing as it pertains to you, seeing as I've had to bludgeon you repeatedly with facts to make any progress, as these points should be clearly obvious to any reasonably minded individual. No one has claimed overnight support. That's the whole point and why AMD's subterfuge with regard to PhysX was so deceitful: game development has significant lead time, so earlier adoption of features means a higher likelihood of seeing those features in actual games.

I think it's more your understanding that's a bit limited here. That CUDA provides support for OpenCL, DX11, etc. doesn't mean that OpenCL, DX11, etc. support CUDA. You cannot access CUDA through DX11 or OpenCL. It's not a two-way street.
LMAO, my limited understanding? At least you haven't used Havok FX or OpenGL once in your replies yet, so we're clearly making progress at bringing your limited understanding up to speed.

Also, I haven't claimed it's a two-way street and completely compatible; they'd obviously need whatever software translation is required, whether via a wrapper or a custom driver, but again, given how similar the APIs are, there's no technical reason preventing support. Dave Baumann's (AMD) and Nadeem Mohammed's (Nvidia) statements here and elsewhere have absolutely backed my points on this.

The only thing you have been advocating is what you always advocate: Nvidia, CUDA and PhysX. Again you repeat the lie about AMD advocating segregation of the market. I'll repeat and add:
Intel owns Havok, not ATI.
Havok is currently the biggest physics middleware provider and existed before PhysX.
LMAO, it's plainly obvious now that AMD's previous smoke-and-mirrors stance on PhysX has done nothing but segregate the GPU physics market and retard the implementation of accelerated physics effects in games. While the end result might be PhysX and Havok both supporting AMD and Nvidia hardware, the cost will be another 8-9 months + however long it takes for OpenCL Havok to be made publicly available.

Havok's GPU acceleration is hardware agnostic through OpenCL. Both ATI and Nvidia benefit from it and will be able to get it by default. Unlike PhysX, where Nvidia won't let anyone else use it (on the GPU) unless they adopt CUDA as well.
And? You didn't even know what OpenCL was before this week, much less OpenCL Havok (again, refer to previous use of OpenGL + Havok FX), and it certainly doesn't absolve AMD of all the rubbish they've fed everyone about GPU physics for the past 8-9 months. As for the bit about Nvidia, lol, well again, it's obvious your arguments have no merit.

As long as Nvidia is using PhysX to push hardware and CUDA, I don't think you will see widespread use of GPU PhysX physics.
LMAO, you seem to be missing the big picture here. Nvidia is pushing GPGPU to extend the usefulness of their hardware and diversify into non-GPU markets, not a PhysX middleware they paid 100 million for. CUDA and, by extension, OpenCL are just a means to that end, and both are compute architectures largely designed and written by Nvidia to achieve that goal. They don't care what software exposes this hardware functionality; as long as their hardware is eating most of the pie (it is), they just want to grow that pie (it will with Havok, or PhysX, or both). The only way to ensure they'll continue to eat the majority of that growing pie is to support all standards and middleware while offering the best performance.

With Havok on OpenCL, developers will be able to use GPU-accelerated physics and reach everyone. ATI supports it, Nvidia supports it and Intel will support it as well. Consumers win.
Which would've been the same outcome 8-9 months ago if AMD supported PhysX.

AMD is not advocating segregation of the market. That's a lie you should stop spreading. If anything, they are supporting Intel's Havok, which covers the whole market instead of a part of it.
LMAO, you can ignore the last 8-9 months prior to OpenCL Havok's announcement all you like; it just makes you look like a revisionist. If they had supported PhysX in August, it would've been 100% of the GPU physics market, just as it will be going forward now (presumably).

Oh, and I don't really drink instant coffee; it tastes like shit and leaves a bad aftertaste.
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: instantcoffee
I disagree about that. The only thing I am marketing is my opinions. I'm not marketing ATI/AMD/Nvidia or Intel. What I am interested in, is their products. Chizow is an "independent (or not)" PR agent for Nvidia. His world is about Nvidia vs. ATI, Intel vs. AMD, while mine is about their products, not the companies. He's mostly crapping on all the competition of Nvidia.

I didn't initiate the name calling, but I did respond to it. Him lying about stuff, for the sake of Nvidia PR, isn't anything new, but I didn't want to go to that level before responding to his post. We can always take it on PM's, but I don't think Chizow would do that. Historically, Chizow goes after the person instead of the subject in discussions at some point in every discussion. I doubt he will change that.

I will stop any namecalling for now, but I will respond if he continues.
Uh no, I stop to call out blatant misinformation and lies, and it just so happens your posts are always chock full of them. :)

Again, please feel free to point out instances of me lying or posting misinformation. I think it's clearly obvious in all of our exchanges who is doing the fact-finding and correcting, given your stance and arguments flap in the wind like a PhysX- or Havok-enabled flag, while mine has been the same since August. LMAO. :)
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: BenSkywalker
Apple has had OpenGL for many years now; what are you talking about? Seriously, do you have any idea at all what the difference between OpenGL and OpenCL is? Honestly, it sounds like you don't have the slightest understanding of any of the technologies involved. You give the impression that you can read press releases and then put together exceedingly poorly written posts.
LOL, I'm surprised you actually have the patience to read and reply to his posts and still manage to show restraint. I just find it easier to ignore his posts seeing as they're so nonsensical and difficult to decipher. I won't deny getting a good laugh though, thanks. :)
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: evolucion8
Someone is looking for another nice vacation...
Don't worry, ole buddy, you're safe; there's no penalty for posting misinformation, FUD or lies provided you adequately demonstrate ignorance or a substandard level of understanding. And no, that's not a personal attack, it's a correct interpretation of policy:

  • From the links in ViRGE's post at the top:

    TOS and Guidelines:"You agree that you will not use our Forums to post any material, or links to any material, which is knowingly false and/or defamatory, inaccurate, abusive, vulgar, hateful, harassing, obscene, profane, sexually oriented, threatening, invasive of a person's privacy, or otherwise violative of any law.

    1) No trolling, flaming or personally attacking members. Deftly attacking ideas and backing up arguments with facts is acceptable and encouraged. Attacking other members personally and purposefully causing trouble with no motive other than to upset the crowd is not allowed.
I guess the real question is what it takes to establish a pattern to reasonably prove the "knowingly" portion for intent, seeing as its the same few people who continually refuse to acknowledge or accept facts integral to the discussion.
 

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
Originally posted by: chizow
Originally posted by: evolucion8
Someone is looking for another nice vacation...
Don't worry, ole buddy, you're safe; there's no penalty for posting misinformation, FUD or lies provided you adequately demonstrate ignorance or a substandard level of understanding. And no, that's not a personal attack, it's a correct interpretation of policy:

  • From the links in ViRGE's post at the top:

    TOS and Guidelines:"You agree that you will not use our Forums to post any material, or links to any material, which is knowingly false and/or defamatory, inaccurate, abusive, vulgar, hateful, harassing, obscene, profane, sexually oriented, threatening, invasive of a person's privacy, or otherwise violative of any law.

    1) No trolling, flaming or personally attacking members. Deftly attacking ideas and backing up arguments with facts is acceptable and encouraged. Attacking other members personally and purposefully causing trouble with no motive other than to upset the crowd is not allowed.
I guess the real question is what it takes to establish a pattern to reasonably prove the "knowingly" portion for intent, seeing as its the same few people who continually refuse to acknowledge or accept facts integral to the discussion.

It's not a personal attack, it's bashing; you are an nVidia fanboy, period. You will never understand reasoning and fair play; that's why people like you get banned for 3 weeks, and you will be again. I'm safe because I always back up my FUD with links on the web from reputable websites which are experts in FUD. So I know that I will be safe, but I'm not sure about you, pal.
 

josh6079

Diamond Member
Mar 17, 2006
3,261
0
0
Originally posted by: chizow
The point is that AMD's claim about "closed and proprietary standards" preventing them from supporting PhysX is a farce.

Instead they gave deceitful reasons about not supporting "closed and proprietary standards" and not moving forward with GPU Physics until it "makes sense".

Closed: not public; restricted; exclusive.

Proprietary: manufactured and sold only by the owner of the patent, formula, brand name, or trademark associated with the product.

I wouldn't say it's completely a farce or deceitful; while CUDA was offered to ATi, it is still closed and proprietary. OpenCL is not.

End result is that the resulting segregation of the GPU physics market has retarded the adoption of GPU physics for at least another 8-9 months plus whatever lead time before Havok's OpenCL client is released for public consumption.

Some would, and have, argued that nVidia imposing CUDA with PhysX has done that.

  • nVidia: Here's PhysX for ya. We got it to work on GPUs by using CUDA. You can have that too.

    ATi: We don't want to dick with CUDA - that's your shit. That's why we have Stream. Can't you just let us have PhysX?

    nVidia: Nope. You have to take CUDA with it if you want it.

To each their own perspective. On one hand, I can see how nVidia's hesitation to release non-CUDA-based PhysX has been a delay in its adoption on the GPU. On the other, I can see ATi's unwillingness to allow CUDA on their GPUs as a delay in adoption.

When one of the rival companies owns both entities, a delay in adoption is unavoidable. I doubt that, had the situation been reversed (that is, if ATi owned PhysX and coupled it to Stream), nVidia would have accepted such an offer either.

...OpenCL was only ratified a few months ago and is very similar to CUDA (again, given Nvidia chaired the standard, and probably wrote the code to largely mirror CUDA). If AMD had worked on a CUDA driver that effort would've been value add toward an OpenCL driver, which is the same source of optimism Nvidia owners should have for seamless OpenCL Havok support.

It's hard to say who did the bulk of the writing; the VP position nVidia had for OpenCL certainly helps show that they played their role in its development, but we won't know whether or not that translated to their domination of its architecture. Even if it did, ATi and Intel were also involved in its gestation. I'm sure both of their OpenCL support will be just fine.
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
@Evolucion8, you've proven my point better than anything else I could write here, so no need to further derail the thread. :)

Originally posted by: josh6079
I wouldn't say it's completely a farce or deceitful; while CUDA was offered to ATi, it is still closed and proprietary. OpenCL is not.
I'm not going to argue semantics about open or closed, but the fact remains Havok is every bit as closed and proprietary as PhysX, yet you're unwilling to acknowledge their support of Havok is not only contradictory, but completely deceitful in light of their reasons for not supporting PhysX? Also, you believe waiting what turned out to be 6-9 months for OpenCL ratification and support is a better alternative than support of both, given they're in no way mutually exclusive? I think you really should go back and read what was written about PhysX and AMD's support, or lack of it, as it's really too much to cut and paste:

Does AMD Block PhysX On Radeon Development?

Some would, and have, argued that nVidia imposing CUDA with PhysX has done that.
As opposed to what? The non-existent API that wasn't invented yet? CUDA pre-dates Nvidia's acquisition of PhysX; it's the foundation of their GPGPU efforts that started with the G80 over 2 years ago and is also the foundation of OpenCL. Naturally it makes more sense for Nvidia to use an existing API rather than wait for something to come around that was functionally identical.

  • nVidia: Here's PhysX for ya. We got it to work on GPUs by using CUDA. You can have that too.

    ATi: We don't want to dick with CUDA - that's your shit. That's why we have Stream. Can't you just let us have PhysX?

    nVidia: Nope. You have to take CUDA with it if you want it.

You forgot:

  • nVidia: We'll give you whatever help you need, the SDKs for both PhysX and CUDA are free to download, but you can invent your own API, drivers or whatever else you need to get PhysX to work, it's all gravy in our book.

    ATi: No thanks, we'll just wait 9 months for you to copy your API, re-name it, and then pretend we invented it along with Intel using their much better proprietary software, now that it makes sense. And don't worry, our fans and customers won't care either, because we've convinced them all it's Nvidia's fault.

It's hard to say who did the bulk of the writing; the VP position nVidia had for OpenCL certainly helps show that they played their role in its development, but we won't know whether or not that translated to their domination of its architecture. Even if it did, ATi and Intel were also involved in its gestation. I'm sure both of their OpenCL support will be just fine.
Again, you're welcome to read up on Derek's articles on OpenCL here on AT, but based on his comments and every other news bit I've read about it, it's very similar to CUDA.
 

josh6079

Diamond Member
Mar 17, 2006
3,261
0
0
Originally posted by: chizow
Originally posted by: josh6079
I wouldn't say it's completely a farce or deceitful; while CUDA was offered to ATi, it is still closed and proprietary. OpenCL is not.
I'm not going to argue semantics about open or closed, but the fact remains Havok is every bit as closed and proprietary as PhysX...

Yes, indeed, it is. But I was talking about CUDA and OpenCL.

...yet you're unwilling to acknowledge their support of Havok is not only contradictory, but completely deceitful in light of their reasons for not supporting PhysX?

I'm confused. Dave Baumann clarified that they'd be happy to use either Havok or PhysX. So long as it's not through CUDA, they're fine.

The OP showed Havok running on Radeons through OpenCL. Had PhysX been available through OpenCL, why wouldn't they have used it?

I think you really should go back and read what was written about PhysX and AMD's support, or lack of it, as its really too much to cut and paste:

Does AMD Block PhysX On Radeon Development?

Thank you. That was a very informative link.

Also, [do] you believe waiting what turned out to be 6-9 months for OpenCL ratification and support is a better alternative than support of both given they're in no way mutually exclusive?

Personally, I'm not sure. Cheng, who has more expertise and a knowledge of PC history than I, commented on whether or not it was a better alternative:

Godfrey Cheng:
As you know, proprietary interfaces for hardware acceleration on the PC haven't really been successful in the long term with developers (re: S3 Metal, 3dfx Glide, Cg). The solutions they were trying to solve were superseded by collaborative industry wide interfaces, which is why we think it is far better for us to be putting our efforts into advancing open, industry standards such as OpenCL that will ultimately grow the ecosystem for Stream Computing.

To summarize, it seems some believe that "proprietary interfaces", like CUDA, will be eclipsed by "collaborative industry interfaces" like OpenCL.

Seeing as there is a historical basis for such an assumption (e.g., his comments concerning S3 Metal, 3dfx Glide, and Cg), it seems valid to say that resources would be better spent on "collaborative industry interfaces" like OpenCL than on CUDA in the short term.

To what degree this has really "hurt" GPU-accelerated physics advancements is controversial.

Originally posted by: chizow
Some would, and have, argued that nVidia imposing CUDA with PhysX has done that.
As opposed to what? The non-existent API that wasn't invented yet?

If nVidia had released PhysX and PhysX alone to ATi, could ATi have used their Stream instead?

nVidia: We'll give you whatever help you need, the SDKs for both PhysX and CUDA are free to download, but you can invent your own API, drivers or whatever else you need to get PhysX to work, it's all gravy in our book.

So nVidia didn't force CUDA on ATi? They left both PhysX and CUDA mutually exclusive and free to take one or the other?

ATi: No thanks, we'll just wait 9 months for you to copy your API, re-name it, and then pretend we invented it along with Intel using their much better proprietary software, now that it makes sense.

OpenCL is not a "copy" of CUDA though. From a developer's standpoint, they are different and it depends on what they want.

Originally posted by: chizow
And don't worry, our fans and customers won't care either, because we've convinced them all it's Nvidia's fault.

I'm not really saying it's anyone's fault because I don't know what exactly was hurt.

Processing physics on processors other than the CPU has been around since Ageia's PPU, but physics itself hasn't really done anything spectacular in light of that.

Cheng gave AMD's opinion of the current state of physics:

Godfrey Cheng:
The primary technical challenge today for video games is rendering. Rendering will remain the limitation for realism and gameplay for the foreseeable future. It would be better to focus the GPUs on rendering to provide the best game play. Also remember that when you create more objects or rigid bodies with physics simulation, it further taxes the rendering system. There are scenarios where GPU physics simulation actually slows down gameplay and decreases the experience.

With the proliferation of quad-core CPUs from AMD and Intel, there is ample horsepower to run physics simulation on the CPU. Most games today can take advantage of two cores effectively, scaling to the third and fourth core yields diminishing performance. The game developers are realizing that there is available horsepower with the third and fourth cores for physics simulation.

Game developers will write code for the biggest installed base of hardware to ensure a big market for their games. The only certainty for the developers is that there will be a multi-core CPU in modern PCs. To write a game that supports a proprietary GPU-based physics API would mean a vastly different code base for the game developer as well as relegating this type of game experience to a small percentage of the computers. Clearly, this is not the desirable path for game developers and AMD.

Our strategy is to optimize our CPUs to run Havok's API and libraries and then to investigate how we can improve gameplay with offloading certain forms of physics simulation to the GPU. We have our theories and models, but we will not announce our product plans until we are ready to roll them out.
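A minimal sketch of the idea in the second paragraph of that quote (running the physics step on a spare CPU core while the main thread keeps rendering); stepPhysics and renderFrame are made-up placeholders for illustration, not any real engine's API:

// Hedged sketch: offload the physics step to a spare core with std::thread.
#include <cstdio>
#include <thread>
#include <vector>

struct World { std::vector<float> bodies; };

// Placeholder work: advance every rigid body by one fixed timestep.
void stepPhysics(World& w, float dt) {
    for (float& b : w.bodies) b += dt;         // stands in for real integration
}

void renderFrame(const World& w, int frame) {
    std::printf("frame %d rendered with %zu bodies\n", frame, w.bodies.size());
}

int main() {
    World world{std::vector<float>(1000, 0.0f)};
    for (int frame = 0; frame < 3; ++frame) {
        // Physics for the *next* frame runs on another core while this frame renders.
        World next = world;                                    // double-buffer the state
        std::thread physics(stepPhysics, std::ref(next), 1.0f / 60.0f);
        renderFrame(world, frame);                             // main thread keeps busy
        physics.join();                                        // sync before swapping
        world = std::move(next);
    }
    return 0;
}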

This addresses some of the questions I've personally had regarding physics in general. Does all physics really need a GPU to calculate it? One as powerful as my 9800 GTX+ or an 8800 GT? Why haven't quad-core processors become almost a necessity in PC gaming yet? Why have there been so few improvements to physics - regardless of what is processing them - over the past 5 years or so?

The GDC showed some pretty cool stuff. I really liked the cloth demo Ben linked earlier. But why has it taken so long to even get here?

What is processing them isn't the answer, because that has varied across CPUs, PPUs, and GPUs alike recently. Couple that with the fact that the G80 and its derivatives dominated the PC gaming market for so long, and I don't see any good reason why physics are still where they are.

Originally posted by: chizow
Again, you're welcome to read up on Derek's articles on OpenCL here on AT, but based on his comments and every other news bit I've read about it, it's very similar to CUDA.

Oh, I know it's similar. I just know that it's not identical, and where there are differences there can be reasons for choices. ATi made its choice for reasons that lie in those differences.

 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
Originally posted by: BenSkywalker
Your first bolded point. Here, let me make it easy for you: Intel doesn't need DX to run its new tech. What does that mean? Let me think! Oh, could it be that if Intel doesn't need DX, Intel's software and hardware will slide right into place on the Snow Leopard OS with CL? Could this change things for Apple or Intel, if Intel is self-sufficient? Maybe AMD and NV will say, hey, we don't want to be tied to DX.

Neither AMD nor nVidia has been tied to DX; for nV in particular, all the way back to the NV1, they haven't been tied to DX. Their hardware happens to run DirectX, and I think you will find most people like having MS in the position of controlling DirectX, as it gives us a standard to build the industry around. Perhaps you are too young to remember, but the API wars of the early days of real-time 3D were not enjoyable as a gamer.

NV will work on Apple only if it's running MS.

nVidia parts on Apple hardware run OpenGL, as they have for many years now. Macs do not run DirectX natively, nor have they ever (well, the hardware is capable of it if you install Windows now, but not natively under Mac OS).

Fast forward: the year is 2010 and Intel is introducing Sandy Bridge. Sandy is a new arch. It doesn't run native x86. Yep, that's right, an Intel CPU that doesn't run native x86. x86 is ported over to Intel's new tech, called AVX.

Seriously, I think a comp sci class or two would help you out enormously. Either that or spend time reading online, starting back with the Pentium Pro. Intel chips haven't run actual x86 code on an internal basis for years. Furthermore, the Itanium 'EPIC' architecture has also been around for many years. Nothing you are talking about is remotely new, and we know exactly how much of an impact it will have in the market. You seem to think Intel has divine foresight; how about NetBurst hitting 10GHz? Most of us will remember that foolishness.

This all started heating up when it was announced that x86 programs would be ported to AVX on Intel after late 2010.

Macs already run x86 natively on Mac OS. No need to fast forward to any point in time; it isn't something that is coming in the future, it has been happening for quite some time. Again, common knowledge and publicly available information for some time now.

When I told you it was in the scene, you said no, it's not. I bolded what you said above. Now watch your little dress swirl, OK? Then watch that Meteor video again. Take those swirling clouds, paint them red, then put a dancer in the center.

Having a uniform environment impacted by all physical activity is good physics, no matter what the graphics look like. Having a bunch of disjointed elements working out of sync is bad physics, period. I made no mention of the graphics differences as we are discussing physics in this thread, and in that regard that demo was very poor.
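To put "uniform environment" in concrete terms, here's a toy sketch (all of it invented for the example) where cloth points and loose debris sample the same wind on the same tick, so a gust that moves one necessarily moves the other; the "disjointed" version would give each system its own clock or its own wind:

```cpp
// Toy "one world" physics tick: every element reads the same wind sample,
// so cloth and debris can never react to different environments.
#include <cmath>
#include <cstdio>
#include <vector>

struct Particle { float x = 0.0f, vx = 0.0f; };

// Single wind function shared by the whole scene.
float wind(float t) { return 5.0f * std::sin(t); }

int main() {
    std::vector<Particle> cloth(100), debris(50);
    const float dt = 1.0f / 60.0f;

    for (int step = 0; step < 120; ++step) {
        const float w = wind(step * dt);   // sampled once per tick for everything
        for (Particle& p : cloth)  { p.vx += w * dt;        p.x += p.vx * dt; }
        for (Particle& p : debris) { p.vx += 0.5f * w * dt; p.x += p.vx * dt; }
        // A "bad physics" version would advance cloth and debris on separate
        // timers or separate wind fields, letting them visibly drift out of sync.
    }
    std::printf("cloth[0].x = %.2f, debris[0].x = %.2f\n", cloth[0].x, debris[0].x);
}
```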

His world is about Nvidia vs. ATI, Intel vs. AMD, while mine is about their products, not the companies.

If that were the case, why would you be opposed to AMD offering support for Havok AND PhysX? No one that has been backing PhysX in this discussion has stated that nVidia shouldn't support both, that I can recall. Offering your customers the best product you are capable of shouldn't be too much to ask, even for AMD.


OK, let's go!

Seriously, I think a comp sci class or two would help you out enormously. Either that or spend time reading online, starting back with the Pentium Pro. Intel chips haven't run actual x86 code on an internal basis for years. Furthermore, the Itanium 'EPIC' architecture has also been around for many years. Nothing you are talking about is remotely new, and we know exactly how much of an impact it will have in the market. You seem to think Intel has divine foresight; how about NetBurst hitting 10GHz? Most of us will remember that foolishness.

Let's start here, shall we? Even though I know that x86 is ported to AVX, that's all I know. I don't care, or someone similar would have come in who understands far more than I. ;)

Macs already run x86 natively on Mac OS. No need to fast forward to any point in time; it isn't something that is coming in the future, it has been happening for quite some time. Again, common knowledge and publicly available information for some time now.

Does it? Or is there a layer there? But really, that's not what this is about. Right now it's about CL and getting Apple ready for Larrabee and Grand Central/OpenCL on the Snow Leopard OS, to be released around the time Larrabee comes out or earlier. It may be out now; I haven't a clue. The x86 thing really comes into play with Sandy, as x86 is ported to AVX. Yeah, I read your sputter about that. I did what I should have: ignored it. x86 does not run native on Leopard; DX doesn't even run on Leopard. Boot Camp on MS makes it work.

Having a uniform environment impacted by all physical activity is good physics, no matter what the graphics look like. Having a bunch of disjointed elements working out of sync is bad physics, period. I made no mention of the graphics differences as we are discussing physics in this thread, and in that regard that demo was very poor

Yeah, I can see how you're viewing things. Everyone here can see through you; it's transparent as all hell. I have asked to see better. You gave a link to a Havok cloth demo.

I said it's in the scene. You deny it and spout some shit. Then I tell you where it's at and you come back with this drivel. I think that cloud swirl is cool as hell. It's also done with Havok cloth. Something cool about storm clouds: they all look different. There is no way that Project Offset can show anything you would like; every member reading this knows that.
I'm just wasting my time with you. It's this or go watch porn; I'd rather do this for now. LOL.

 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
X86 does not run native on Leopard

I honestly can't continue with this discussion. I have a hard time believing that any person capable of registering on these forums can be as ignorant as your posts make you seem. Do some research and figure out what you are talking about. I will give you a few quick pointers:

Windows isn't x86

x86 is an instruction set processors use

Windows isn't x86

x86 is what compilers output when they are creating code to run on x86-based processors

Windows isn't x86

New Macs run x86-based processors

And finally, Windows isn't x86
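If the distinction is still fuzzy, here's a tiny illustration; the assembly in the comments is roughly what an x86-64 compiler might emit (exact output varies by compiler, flags, and calling convention), and none of it depends on which OS the binary targets:

```cpp
// The instruction set is what the compiler targets; the OS is just what
// loads and schedules the resulting binary.
#include <cstdio>

int add(int a, int b) {
    return a + b;
    // Roughly what `g++ -O2 -S` produces on an x86-64 Linux box:
    //   lea  eax, [rdi + rsi]   ; a + b, arguments arrive in registers
    //   ret
    // A Windows compiler emits the same kind of x86 instructions, just with
    // a different calling convention.
}

int main() {
    std::printf("%d\n", add(2, 3));   // same x86 machine code idea on any OS
}
```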
 

Wreckage

Banned
Jul 1, 2005
5,529
0
0
Well, back on topic... I doubt we will see a game using Havok + GPU this year or even next year, so we are mostly just discussing speculation at this point.