"Inevitable Bleak Outcome for nVidia's Cuda + Physx Strategy"


Munky

Diamond Member
Feb 5, 2005
9,372
0
76
Originally posted by: Wreckage
Originally posted by: munky

It means when talking about the benefits of accelerated Physx on NV gpu's, they don't apply to any other platform that Physx happens to "support."

Well, since Havok does not currently offer support for any GPU, it is far too restrictive, limited, proprietary, etc., so people should support the more open platform that is PhysX.

That's basically what you are saying.

Incorrect. That is not what I'm saying, and what you just said is dead wrong. If gpu acceleration in Physx was "open", then it wouldn't rely on proprietary Cuda technology.
 

Wreckage

Banned
Jul 1, 2005
5,529
0
0
Originally posted by: munky

Incorrect. That is not what I'm saying, and what you just said is dead wrong. If gpu acceleration in Physx was "open", then it wouldn't rely on proprietary Cuda technology.

Havok is proprietary so stop ignoring that fact. CUDA/PHYSX were offered to AMD so they are the ones cockblocking a standard.

Besides if not for CUDA, we would have to wait until 2011 or later for a GPU physics game.

I will play my next gen gaming features now thank you.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
It means when talking about the benefits of accelerated Physx on NV gpu's, they don't apply to any other platform that Physx happens to "support."

They apply to the PS3. Just because x86 CPUs are very, very weak at FPU calcs doesn't mean that all CPUs are. Actually, IIRC even the CPU in the 360 handily bests current x86 CPUs at physics calculations.

If gpu acceleration in Physx was "open", then it wouldn't rely on proprietary Cuda technology.

That is a rather profoundly absurd statement. Linux won't run on my 360; does that mean that Linux isn't open? Sure, Red Hat could run on the 360 if MS ported it, but MS has no interest in doing so and would block anyone that tried. Red Hat on PCs runs on x86, which isn't open, and could run on POWER as used in the 360, which also isn't open, but that doesn't change the fact that Red Hat itself is (I'm not saying that PhysX is actually open, but the reasons you are using are just absurd).

Thing is, Nvidia would have to do absolutely no work to ensure accelerated PhysX worked on any platform that provided a compliant OpenCL stack.

Talk is nice, but is AMD going to actually support OpenCL? Given that their loyalists hype the technology to no end, why is it that only nVidia has drivers that support OpenCL available?
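
For reference, "a compliant OpenCL stack" just means a driver that implements the standard, vendor-neutral API. Here is a minimal sketch (assuming only the stock OpenCL 1.0 C headers, no vendor-specific calls) that lists whatever GPU devices the installed drivers expose, whoever made them:

// List every OpenCL platform and its GPU devices using only the
// vendor-neutral OpenCL 1.0 C API. Any driver that implements the
// spec (NVIDIA, AMD, or otherwise) shows up here unchanged.
#include <CL/cl.h>
#include <cstdio>
#include <vector>

int main() {
    cl_uint numPlatforms = 0;
    clGetPlatformIDs(0, nullptr, &numPlatforms);
    std::vector<cl_platform_id> platforms(numPlatforms);
    clGetPlatformIDs(numPlatforms, platforms.data(), nullptr);

    for (cl_platform_id p : platforms) {
        char name[256] = {0};
        clGetPlatformInfo(p, CL_PLATFORM_NAME, sizeof(name), name, nullptr);

        cl_uint numDevices = 0;
        // Ask each platform for its GPU devices; the call is identical
        // no matter whose driver answers.
        if (clGetDeviceIDs(p, CL_DEVICE_TYPE_GPU, 0, nullptr, &numDevices) != CL_SUCCESS)
            numDevices = 0;
        std::printf("Platform '%s': %u GPU device(s)\n", name, numDevices);
    }
    return 0;
}

A library layered on top of that (physics or anything else) would run wherever such a driver exists, which is exactly the scenario being debated here.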
 

Munky

Diamond Member
Feb 5, 2005
9,372
0
76
Originally posted by: Wreckage
Havok is proprietary so stop ignoring that fact. CUDA/PHYSX were offered to AMD so they are the ones cockblocking a standard.

Besides if not for CUDA, we would have to wait until 2011 or later for a GPU physics game.

I will play my next gen gaming features now thank you.

I never claimed Havok was open. Get your facts straight.

Originally posted by: BenSkywalker
They apply to the PS3. Just because x86 CPUs are very, very weak at FPU calcs doesn't mean that all CPUs are. Actually, IIRC even the CPU in the 360 handily bests current x86 CPUs at physics calculations.
The extra physics effects available on the PS3 aren't exclusive to PhysX, unlike the PC platform.
That is a rather profoundly absurd statement. Linux won't run on my 360; does that mean that Linux isn't open? Sure, Red Hat could run on the 360 if MS ported it, but MS has no interest in doing so and would block anyone that tried. Red Hat on PCs runs on x86, which isn't open, and could run on POWER as used in the 360, which also isn't open, but that doesn't change the fact that Red Hat itself is (I'm not saying that PhysX is actually open, but the reasons you are using are just absurd).
Last I checked, Linux wasn't owned by a HW company who then ported it to run on their proprietary HW.
 

Modelworks

Lifer
Feb 22, 2007
16,240
7
76
Originally posted by: Scali
Originally posted by: Modelworks
They do know what it is like to work in a production environment since they have had to do that for many years.

As do I.

Originally posted by: Modelworks
Yes, you are ignorant of how things work in the production environment. That requires experience. How can you be in touch with the competition when you said that you have never used the Unreal Engine?


Then you go out of your way with far-fetched arguments about how UnrealEngine is modular (which I never denied anyway), and throw gratuitous insults around.
Yes, you are right that UE is modular. That doesn't make me ignorant, it doesn't even go against anything I said. And it isn't relevant to this thread.
It's just that you HAD to win. Which you didn't, because your premise was false to begin with. Bye now.


Far-fetched? That would imply not true. Which, again, you would have understood had you used the engine. No knowledge of how UE3 uses physics = ignorant of that fact.

I already told you this is not a win/lose discussion, but you seem hung up on the 'win' idea.
 

Scali

Banned
Dec 3, 2004
2,495
1
0
Originally posted by: Modelworks
Far-fetched? That would imply not true.

No, far-fetched means:
Not likely; difficult to believe; outlandish; wild; impractical

Not necessarily not true.
Impractical seems very much applicable here.

Let me reiterate for you once more:
1) Someone claims PhysX can only run on nVidia GPUs and cannot be used for first-order physics.
2) I say that PhysX also runs on the CPU and that games made with the UnrealEngine such as Unreal Tournament use PhysX for everything in the game, including the first-order physics.
3) You claim that UnrealEngine uses other physics libraries as well.
4) I say it doesn't say anything about any other API than PhysX on www.unrealtechnology.com
5) You water down your claim to "PhysX is the default, but since the engine is modular, it is possible to replace it yourself"
6) I agree, but don't see how it is relevant to point 2), nor to your own point 3).
7) You start hurling insults ad nauseam.

Originally posted by: Modelworks
No knowledge of how UE3 uses physics = ignorant of that fact.

I have knowledge of how UE3 uses physics.
What I said is true, Unreal Tournament uses PhysX for first-order physics.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
The extra physics effects available on the PS3 aren't exclusive to PhysX, unlike the PC platform.

What you said was-

It means when talking about the benefits of accelerated Physx on NV gpu's, they don't apply to any other platform that Physx happens to "support."

So which do you mean? Do you actually mean it doesn't support more advanced features on other platforms, or only on PCs that don't have nV GPUs?

Last I checked, Linux wasn't owned by a HW company who then ported it to run on their proprietary HW.

nV has already ported PhysX to run on several other companies' proprietary hardware, if you hadn't noticed ;) x86, POWER and PPC are all proprietary, and they all run PhysX just fine. The only exception is the company that Intel is trying to sue out of business, the one that was just found to have been a victim of Intel violating antitrust laws in Europe, who decided that Intel was a better partner for them to team with.

Thought of from a different angle: if MS started porting DirectX to other platforms and Apple refused, would we not consider them to be utter idiots?
 

Modelworks

Lifer
Feb 22, 2007
16,240
7
76
Originally posted by: Scali


I have knowledge of how UE3 uses physics.
What I said is true, Unreal Tournament uses PhysX for first-order physics.

Then why did you make statements like

UnrealEngine 3 uses PhysX for everything.

That isn't true; it 'can' use PhysX, but it doesn't have to.

There is no 'other physics API' in the UnrealEngine, and I doubt that any other developers would be crazy enough to develop their own physics API for that, when PhysX does the job just fine. It defeats the point of using a physics API in the first place.

Developers crazy enough to develop or use their own? Like Havok? Like Bullet? Tokamak?

You were trying to use UE3 as an example that somehow PhysX was better than everything else out there. You didn't know how UE3 works; you got the info off the website, which you also quoted:

What physics engine does it use, other than PhysX?
I can't find any references to anything other than PhysX on www.unrealtechnology.com

Someone who knew how the physics worked would not have said that.
If you are commenting on how PhysX is or should be implemented in games, you should know what it currently does, how it works with current engines, and what the gains and drawbacks are, not say developers are crazy for replacing it with something else. Only someone with an agenda would say that, and we already have enough of them here.
 

Scali

Banned
Dec 3, 2004
2,495
1
0
Originally posted by: Modelworks
That isn't true; it 'can' use PhysX, but it doesn't have to.

Semantics.
It uses PhysX by default. Point was that PhysX can be used for first-order physics, and this is actually done in actual titles based on the UnrealEngine.
That should be clear from the context.
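
To make the "first-order" distinction concrete: first-order physics is simulation the game logic depends on, so it has to run on every platform (on the CPU if nothing else is available), while effects physics is optional eye candy. Below is a toy sketch of that split; every name in it is invented for illustration, it is not the PhysX or UnrealEngine API:

// Toy illustration of the "first-order" vs. "effects" physics split.
// All names here are made up for the example; no real engine API is implied.
#include <cstdio>

struct Crate { float y; float vy; };          // gameplay object: a falling crate

// First-order step: the game outcome depends on this, so it must run on
// every platform (on the CPU if no accelerator is available).
void stepGameplayPhysics(Crate& c, float dt) {
    c.vy += -9.81f * dt;                      // gravity
    c.y  += c.vy * dt;
    if (c.y < 0.0f) { c.y = 0.0f; c.vy = 0.0f; }  // comes to rest on the floor
}

int main() {
    Crate crate{5.0f, 0.0f};
    const float dt = 1.0f / 60.0f;
    for (int frame = 0; frame < 240; ++frame)
        stepGameplayPhysics(crate, dt);       // always runs, accelerator or not

    // Effects-only work (debris, cloth, mist) can scale with the hardware
    // without changing the outcome above.
    bool acceleratorPresent = false;
    int debrisParticles = acceleratorPresent ? 10000 : 500;

    // Gameplay decision that depends on the first-order result:
    std::printf("crate %s resting on the pressure plate (%d debris particles)\n",
                crate.y == 0.0f ? "is" : "is not", debrisParticles);
    return 0;
}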

Originally posted by: Modelworks
Developers crazy enough to develop or use their own? Like Havok? Like Bullet? Tokamak?

Some developers apparently ARE crazy enough to use a different one.
Doesn't matter, UnrealEngine only comes with PhysX by default. That was what I said.

Originally posted by: Modelworks
You were trying to use UE3 as an example that somehow PhysX was better than everything else out there.

Not at all. I have absolutely no idea where you would have gotten that idea (unless YOU are the one with the agenda). I merely stated that PhysX can be used for CPU-based first-order physics just like pretty much every other physics API out there, and that it actually is used as such in various titles.

Originally posted by: Modelworks
You didn't know how UE3 works; you got the info off the website, which you also quoted

I know how UE3 works (at least to the extent that I claimed, in that it uses PhysX for first-order physics by default). That doesn't mean that I know how every single licensed user chooses to use it. You seem to have problems distinguishing the two.

Originally posted by: Modelworks
Not say developers are crazy for replacing it with something else. Only someone with an agenda would say that, and we already have enough of them here.

Again, it seems that you have gotten weird ideas...
I think developers are crazy for replacing the physics library in an engine *period*. If you license the Valve Source engine and replace Havok with PhysX I would think you're equally crazy. Any developer would understand why I say that: it takes a lot of effort to change it around, and there is little or no gain for most games.
 

Munky

Diamond Member
Feb 5, 2005
9,372
0
76
Originally posted by: BenSkywalker
The extra physics effects available on the PS3 aren't exclusive to PhysX, unlike the PC platform.

What you said was-

It means when talking about the benefits of accelerated Physx on NV gpu's, they don't apply to any other platform that Physx happens to "support."

So which do you mean? Do you actually mean it doesn't support more advanced features on other platforms, or only on PCs that don't have nV GPUs?
I said "benefits of accelerated Physx", not benefits of accelerated physics. If the PS3 is capable of accelerating every other physics API, then it's not really an advantage of Physx, is it?

Last I checked, Linux wasn't owned by a HW company who then ported it to run on their proprietary HW.

nV has already ported PhysX to run on several other companies' proprietary hardware, if you hadn't noticed ;) x86, POWER and PPC are all proprietary, and they all run PhysX just fine. The only exception is the company that Intel is trying to sue out of business, the one that was just found to have been a victim of Intel violating antitrust laws in Europe, who decided that Intel was a better partner for them to team with.
PhysX started off as a C++ library, and x86 was already supported by PhysX before Nvidia got involved. It's not like any other platform had to adopt Nvidia's Cuda technology to use PhysX.

Thought of from a different angle: if MS started porting DirectX to other platforms and Apple refused, would we not consider them to be utter idiots?

PhysX is not nearly as dominant in PC gaming as DirectX is. There are viable alternatives to PhysX. From a different perspective: if Nvidia were a major player in the x86 market and created a compiler optimized for their HW, would anyone call Intel stupid for refusing to adopt NV's compiler and instead pushing their own?
 

Scali

Banned
Dec 3, 2004
2,495
1
0
Originally posted by: munky
I said "benefits of accelerated Physx", not benefits of accelerated physics. If the PS3 is capable of accelerating every other physics API, then it's not really an advantage of Physx, is it?

No, but it's a bit of a shame when people playing the Xbox 360 or PC port of the same game don't get the same physics as on the PS3.

Originally posted by: munky
PhysX is not nearly as dominant in PC gaming as DirectX is. There are viable alternatives to PhysX. From a different perspective: if Nvidia were a major player in the x86 market and created a compiler optimized for their HW, would anyone call Intel stupid for refusing to adopt NV's compiler and instead pushing their own?

PhysX isn't tied to Cuda at all. Cuda is just one of the many implementations of the library.
ATi would not have to implement Cuda, use nVidia's compilers or anything.
They were free to implement it any way they wanted... with their own Stream SDK for example.
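
In code terms, that argument is about an API with swappable acceleration backends. Here is a purely hypothetical sketch (the class and function names are invented for illustration and are not taken from the PhysX SDK) of one stable interface with vendor-specific implementations underneath:

// Hypothetical pluggable-backend design; names are invented for this example
// and do not come from the real PhysX SDK.
#include <cstdio>
#include <memory>

// The stable, vendor-neutral interface a game would code against.
class PhysicsBackend {
public:
    virtual ~PhysicsBackend() = default;
    virtual const char* name() const = 0;
    virtual void simulate(float dt) = 0;
};

// Reference implementation that always exists: plain CPU code.
class CpuBackend : public PhysicsBackend {
public:
    const char* name() const override { return "CPU"; }
    void simulate(float /*dt*/) override { /* integrate rigid bodies on the CPU */ }
};

// A vendor could ship its own accelerated implementation of the same
// interface (CUDA today, a Stream/OpenCL one tomorrow) without the game
// code changing at all.
class VendorGpuBackend : public PhysicsBackend {
public:
    const char* name() const override { return "vendor GPU"; }
    void simulate(float /*dt*/) override { /* dispatch the same work to the GPU */ }
};

std::unique_ptr<PhysicsBackend> pickBackend(bool gpuDriverAvailable) {
    if (gpuDriverAvailable)
        return std::make_unique<VendorGpuBackend>();
    return std::make_unique<CpuBackend>();     // identical results, just slower
}

int main() {
    auto physics = pickBackend(/*gpuDriverAvailable=*/false);
    std::printf("simulating with the %s backend\n", physics->name());
    physics->simulate(1.0f / 60.0f);
    return 0;
}

The game codes against the interface once; whether the work lands on the CPU, a CUDA device, or a Stream/OpenCL device is a backend detail.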
 

SunnyD

Belgian Waffler
Jan 2, 2001
32,675
146
106
www.neftastic.com
Originally posted by: BenSkywalker
Talk is nice, but is AMD going to actually support OpenCL? Given that their loyalists hype the technology to no end, why is it that only nVidia has drivers that support OpenCL available?

Funny you say that...

SANTA CLARA, CA - April 20, 2009 - NVIDIA Corporation, the inventor of the GPU, today announced the release of its OpenCL driver and software development kit (SDK) to developers participating in its OpenCL Early Access Program. NVIDIA is providing this release to solicit early feedback in advance of a beta release which will be made available to all GPU Computing Registered Developers in the coming months.

I've highlighted the relevant portion for you. OpenCL from Nvidia is basically a PR thing, as it's not publicly available for consumption. Ergo: OpenCL doesn't live in the end-user driver stack from Nvidia.

I'm sure if you ask AMD nicely enough they'd be happy to enroll you in their similar driver program. That's all I can say about that.

I can play the PR game too.

In the end, the industry almost always comes to agreement on standards. In the PC industry we are currently driven by "de facto" standards, which generally come into use faster but can limit choice for users. When we all agree on "open" standards, on the other hand, we differentiate on a level playing field.

And history suggests that is really good for consumers.

The CUDA and OpenCL battle will be fought over the next few years, with applications and, I suspect, users as the battleground. In an ideal world we could all save time and money by agreeing on one or the other. Given that we believe in open standards, we vote for OpenCL.

This is from Nigel Dessau - SVP at AMD. I think he pretty much answered your question.
 

GaiaHunter

Diamond Member
Jul 13, 2008
3,732
432
126
This is all very interesting from the industry and enthusiast viewpoint, but from the average consumer's viewpoint, all they want is the cheapest card that will allow them to play their game of choice TODAY!

So, until consumers hit the WoW shortcut (or whatever floats your boat - mine is GW atm) and it says "Sorry, but this program can't be executed since your graphics card doesn't support PhysX," nVidia won't sell more cards to this type of consumer based on the presence or lack of PhysX.
 

akugami

Diamond Member
Feb 14, 2005
6,210
2,552
136
@Scali


I think PhysX still has a very long way to go (and Havok even longer).

I do think that the future of hardware is in a higher state of flux. I think that there is a plausible chance of Intel and AMD investing heavily in CPU/GPUs that may squeeze nVidia out. Whether you think so or not is only an opinion, as are my thoughts and speculations.

Some of your arguments even seem to agree with what I'm saying. Again, it's my opinions and speculations vs. yours, but you seem intent on "being right" and winning.

It's laughable how little hardware accelerated PhysX adds to gaming at this moment. Some are harping on how great PhysX is but to me, at this moment, it's just a checkbox feature. I think there needs to be huge improvements to PhysX (or any hardware accelerated physics) before it goes mainstream and becomes a compelling feature.

We'll just leave this as my having chimed in my two cents and you yours. You disagree with some of my thoughts and I disagree with some of yours. If we keep arguing we'll just keep going in circles.
 

Scali

Banned
Dec 3, 2004
2,495
1
0
Originally posted by: akugami
Some of your arguments even seem to agree with what I'm saying.

Well certainly I see a market for CPU/GPU hybrids. Pretty obvious, as they're just an extension of the IGP that we know today, which is VERY popular (Intel actually has the biggest marketshare on the videochip market, and Intel ONLY makes IGPs currently, and they're not even that good. That says enough).

I just don't see them pushing out high-end GPU solutions anytime soon. The first generation of CPU/GPU hybrids will be little more than a simple IGP glued onto a CPU die. It's not going to be that different from having an IGP, except that perhaps it will be cheaper to manufacture.
It will take a long time and a lot of changes to the architecture before having a GPU on-die will actually get anywhere near the performance of a high-end videocard.

Originally posted by: akugami
It's laughable how little hardware accelerated PhysX adds to gaming at this moment. Some are harping on how great PhysX is but to me, at this moment, it's just a checkbox feature. I think there needs to be huge improvements to PhysX (or any hardware accelerated physics) before it goes mainstream and becomes a compelling feature.

Well, there's PhysX, and then there's using its features to create great effects. PhysX itself can do a lot of things, but you don't just add those to a game in a few days. Aside from that, if you were to go overboard with realism, you'd still run out of processing power.
I don't think PhysX itself is really the problem. What we need is ever more powerful GPUs and games that were designed from the beginning to make use of all types of physics effects, rather than adding them as an afterthought. I think CellFactor was a pretty good example of what physics acceleration is capable of. Perhaps in a few years we'll have REAL games with physics effects like that, and perhaps even more.
 

SirPauly

Diamond Member
Apr 28, 2009
5,187
1
0
I don't think it is laughable -- sure, more compelling content would be very welcome, but laughable? To each their own; many look at things differently.

 

SSChevy2001

Senior member
Jul 9, 2008
774
0
0
Originally posted by: Scali

Originally posted by: SSChevy2001
Yes, I know what taskmgr is. While I'm not a developer, I can understand that you just can't break up a game to run 100% on a quad core. What I expect to see is more titles favoring quad-core CPUs by more than just 5%.
You're not a developer, how do you know it's even POSSIBLE to do that?
Are you serious? Why do people come to sites like anandtech?
http://www.anandtech.com/cpuch...owdoc.aspx?i=3559&p=10

FarCry 2
Q9400 - 2.66Ghz - 6MB cache 54.8FPS
E8200 - 2.66Ghz - 6MB cache 42.3FPS

Even with half the cache per core, the quad core still sees a 30% increase.

Originally posted by: Scali

Originally posted by: SSChevy2001
Only some people will make it seem like these extra cloth, smoke, and debris effects can't run on current CPUs, which is not the case.
What makes you say that?
I've played Mirror's Edge with and without a PhysX GPU, so I'm speaking from personal experience.
Don't believe me? Read what others have said.

For example, while the cloth effects and smoke/mist in Mirror's Edge might work perfectly well on a mid-range CPU, the glass breaking effects will only run well on higher end CPU's. A physX supporting card is definitely not needed, if it's available though, you will get a very nice performance boost.
http://www.gamespot.com/pc/act...-1-47688581&pid=941949

everytime i have glass breaking around me it drops to like 5FPS with my brand new sapphire radeon 4870 toxic, phenom 9850, 4gigs of ram and plenty of hdd space so i dont think anything is botlenecking it. this games runs EXTREMELY SMOOTH with everything up its just when glass is breaking it craps out same with on my freinds comp. its driving me CRAZY!!!!!!! HELP ME!!!!!!!!:confused:
http://www.techspot.com/vb/all...2716-Mirrors-Edge.html

Originally posted by: Scali

Originally posted by: SSChevy2001
What's ironic is it's the only effect in the game that causes CPUs to crawl.
Is it? When I disabled PhysX acceleration on my GeForce, suddenly the game crawled on my Core2 Duo @ 3 GHz, even in the training level. There is no glass in the training level. The game just crawled everywhere.
Read the links I posted above or look for more yourself. I was shocked the first time I played it on my 4870 that everything worked just fine until that 1st glass section.

With this title it's an all or nothing deal, and most of the effects seem to work just fine. :roll:
 

Keysplayr

Elite Member
Jan 16, 2003
21,219
56
91
Originally posted by: SSChevy2001
Originally posted by: Scali

Originally posted by: SSChevy2001
Yes, I know what taskmgr is. While I'm not a developer, I can understand that you just can't break up a game to run 100% on a quad core. What I expect to see is more titles favoring quad-core CPUs by more than just 5%.
You're not a developer, how do you know it's even POSSIBLE to do that?
Are you serious? Why do people come to sites like anandtech?
http://www.anandtech.com/cpuch...owdoc.aspx?i=3559&p=10

FarCry 2
Q9400 - 2.66Ghz - 6MB cache 54.8FPS
E8200 - 2.66Ghz - 6MB cache 42.3FPS

Even with half the cache per core, the quad core still sees a 30% increase.

Originally posted by: Scali

Originally posted by: SSChevy2001
Only some people will make it seem like these extra cloth, smoke, and debris effects can't run on current CPUs, which is not the case.
What makes you say that?
I've played Mirror's Edge with and without a PhysX GPU, so I'm speaking from personal experience.
Don't believe me? Read what others have said.

For example, while the cloth effects and smoke/mist in Mirror's Edge might work perfectly well on a mid-range CPU, the glass breaking effects will only run well on higher end CPU's. A physX supporting card is definitely not needed, if it's available though, you will get a very nice performance boost.
http://www.gamespot.com/pc/act...-1-47688581&pid=941949

everytime i have glass breaking around me it drops to like 5FPS with my brand new sapphire radeon 4870 toxic, phenom 9850, 4gigs of ram and plenty of hdd space so i dont think anything is botlenecking it. this games runs EXTREMELY SMOOTH with everything up its just when glass is breaking it craps out same with on my freinds comp. its driving me CRAZY!!!!!!! HELP ME!!!!!!!!:confused:
http://www.techspot.com/vb/all...2716-Mirrors-Edge.html

Originally posted by: Scali

Originally posted by: SSChevy2001
What's ironic is it's the only effect in the game that causes CPUs to crawl.
Is it? When I disabled PhysX acceleration on my GeForce, suddenly the game crawled on my Core2 Duo @ 3 GHz, even in the training level. There is no glass in the training level. The game just crawled everywhere.
Read the links I posted above or look for more yourself. I was shocked the first time I played it on my 4870 that everything worked just fine until that 1st glass section.

With this title it's an all or nothing deal, and most of the effects seem to work just fine. :roll:

From your same link:

AMD PhenomII X3 720 2.8GHz 7.5MB cache 45fps
AMD PhenomII X4 920 2.8GHz 8MB cache 43.3fps

????
 

thilanliyan

Lifer
Jun 21, 2005
12,081
2,280
126
Originally posted by: SSChevy2001
Read the links I posted above or look for more yourself. I was shocked the first time I played it on my 4870 that everything worked just fine until that 1st glass section.

With this title it's an all or nothing deal, and most of the effects seem to work just fine. :roll:

I can confirm this as well. At first I ran with PhysX ON and no GeForce for the PhysX, and I saw the cloth and mist and I thought... why is this running this fast? I thought these effects needed a PhysX card to run fast... but then I got to the part where the officers shoot all the glass and it started crawling (and the thing is, it stays very slow even after you're way past that part of the level). Then I popped in the 8800GT for PhysX (with some hacks to get it working) and the same area was smooth as butter.
 

SSChevy2001

Senior member
Jul 9, 2008
774
0
0
Originally posted by: Keysplayr
From your same link:

AMD PhenomII X3 720 2.8GHz 7.5MB cache 45fps
AMD PhenomII X4 920 2.8GHz 8MB cache 43.3fps

????
More L3 cache per core, faster un-core clock, and less scaling between 3-4 cores?

X3 720 2MB L3 cache per core - 2.0GHz un-core clock
X4 920 1.5MB L3 cache per core - 1.8GHz un-core clock
 

Scali

Banned
Dec 3, 2004
2,495
1
0
Originally posted by: SSChevy2001
Are you serious? Why do people come to sites like anandtech?

It seems you didn't understand my question.

Originally posted by: SSChevy2001
Even with half the cache per core, the quad core still sees a 30% increase.

Okay, so if we assume the dual core had 100% CPU usage, that would translate to 50% CPU usage for the quad core.
You now see a 30% increase in performance.
That would roughly mean you now have 65% total usage. In other words, the extra 2 cores account for about 15% of the total capacity (roughly 7.5% of the total each, or about 30% utilization on each of those two cores).
How is that acceptable, when just now you were ranting that quad cores weren't used properly, your line of argument being that you don't see anywhere near 100% usage in Task Manager?
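
Spelling that arithmetic out (under the same assumption stated above, that the dual core runs fully loaded and the quad delivers roughly 30% more frames; the numbers are illustrative, not measured):

// Back-of-the-envelope version of the usage estimate above, assuming
// the dual core runs at 100% load and the quad delivers ~30% more FPS.
#include <cstdio>

int main() {
    const double busyCoresDual = 2.0;                          // two cores fully busy
    const double speedup       = 1.30;                         // the ~30% FPS gain quoted
    const double busyCoresQuad = busyCoresDual * speedup;      // ~2.6 cores' worth of work
    const double extraWork     = busyCoresQuad - busyCoresDual; // load carried by cores 3 and 4

    std::printf("total quad usage: %.0f%%\n", 100.0 * busyCoresQuad / 4.0);                  // ~65%
    std::printf("share of total per extra core: %.1f%%\n", 100.0 * (extraWork / 2.0) / 4.0); // ~7.5%
    std::printf("utilization of each extra core: %.0f%%\n", 100.0 * extraWork / 2.0);        // ~30%
    return 0;
}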

Originally posted by: SSChevy2001
Read the links I posted above or look for more yourself. I was shocked the first time I played it on my 4870 that everything worked just fine until that 1st glass section.

Well, maybe with a quad core. I have a dual core, as I said, and for me the whole game crawls without PhysX acceleration. Not a problem for me, since I have a GeForce to fix that. Imagine if I had to buy a new CPU for better physics effects... and still not get it playable at all times :)
 

dadach

Senior member
Nov 27, 2005
204
0
76
Originally posted by: SSChevy2001
Originally posted by: Keysplayr
From your same link:

AMD PhenomII X3 720 2.8GHz 7.5MB cache 45fps
AMD PhenomII X4 920 2.8GHz 8MB cache 43.3fps

????
More L3 cache per core, faster un-core clock, and less scaling between 3-4 cores?

X3 720 2MB L3 cache per core - 2.0GHz un-core clock
X4 920 1.5MB L3 cache per core - 1.8GHz un-core clock


LOL...of course the game will run faster on that X3
 

Scali

Banned
Dec 3, 2004
2,495
1
0
Originally posted by: dadach
Originally posted by: SSChevy2001
Originally posted by: Keysplayr
From your same link:

AMD PhenomII X3 720 2.8GHz 7.5MB cache 45fps
AMD PhenomII X4 920 2.8GHz 8MB cache 43.3fps

????
More L3 cache per core, faster un-core clock, and less scaling between 3-4 cores?

X3 720 2MB L3 cache per core - 2.0GHz un-core clock
X4 920 1.5MB L3 cache per core - 1.8GHz un-core clock


LOL...of course the game will run faster on that X3

Which would indicate the problem we (developers) are facing here.
Apparently it's easier to get three cores with a bit of extra cache (per core) and slightly faster un-core to perform well than it is to get 4 cores to perform well.
Cores/threads seem to be no substitute for cache/memory bandwidth.
 

Keysplayr

Elite Member
Jan 16, 2003
21,219
56
91
Originally posted by: dadach
Originally posted by: SSChevy2001
Originally posted by: Keysplayr
From your same link:

AMD PhenomII X3 720 2.8GHz 7.5MB cache 45fps
AMD PhenomII X4 920 2.8GHz 8MB cache 43.3fps

????
More L3 cache per core, faster un-core clock, and less scaling between 3-4 cores?

X3 720 2MB L3 cache per core - 2.0GHz un-core clock
X4 920 1.5MB L3 cache per core - 1.8GHz un-core clock


LOL...of course the game will run faster on that X3

Ah, good! So scaling from 2 to 3 cores nets an fps gain. 3 to 4 cores, not so much. Looks like all 4 cores aren't really being utilized 100% like the dual core probably was.

I also noticed Chevy made a comment about the Core 2 Duo having twice the cache available to each core, as opposed to half the cache per core on the Core 2 Quad.
These arguments are pretty thin.