
Charlie at SemiAccurate says: Physics hardware makes Kepler/GK104 fast

Personally, I read Charlie's articles alongside the Sunday Comics. I can always use a good laugh.😉

Seriously though, this article reads like it was written with an automatic essay writer. It's not even stream of consciousness; it's just disjointed and disfigured.

LOL. And I totally agree. Robotic and without the usual Charlie D emotion, positive or negative.
 
Now, I'm an open red ant - and I wouldn't buy green products for my GPU needs, but how is what I've read in this thread at all a bad thing?

Ignoring the performance changes (yes I get it, node change, premiums, rah-rah-rah), I'm reading the following:

$300 part that performs like another $300 part (i.e. what's expected of 78xx)
$300 part that in certain scenarios can perform like a $450 part (using the lower Tahiti numbers)
$300 part with <225W consumption

How is that bad at all? nVidia isn't doing anything new. From where I stand - only nVidia cards can use PhysX, only nVidia cards can use CUDA features, and now only nVidia cards can use nVidia-specific optimizations.

If the above is true for a $300 part, imagine what a $500+ part can do.

All the people who were expecting a $300 part to compete with a $500+ part have to get that out of their heads; what we're seeing is something that can literally change the game. Developers might not even have to include these optimizations - it could be an nVidia profile kind of thing.

This is one of those "free performance" situations. Sure, in specific scenarios, but who cares if it acts like a $300 card 95% of the time and a $500+ card 5% of the time - that's a pretty sweet perk.
 
$300 part that performs like another $300 part (i.e. what's expected of 78xx)
$300 part that in certain scenarios can perform like a $450 part (using the lower Tahiti numbers)
$300 part with <225W consumption

How is that bad at all?
Because the $450 part is performing like a $300 one due to gimped/crippled code paths, making the $300 part look better.
 
Because the $450 part is performing like a $300 one due to gimped/crippled code paths, making the $300 part look better.

What? How do you even conclude that from the articles posted? All I read is that in certain scenarios the $300 part will compete better, i.e. optimized.

Sorry, unless you got something to back up that claim, move on.
 
What? How do you even conclude that from the articles posted? All I read is that in certain scenarios the $300 part will compete better, i.e. optimized.
"Optimized" with Nvidia is sometimes just another word for slowing down the competitor's hardware. Crysis 2, Batman AA, etc. From what Charlie is saying, Nvidia is going to intensify their code paths to give them an even greater advantage in NV-sponsored titles.
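(To make concrete what a vendor-specific "code path" means in practice - a purely hypothetical C++ sketch, not taken from Charlie's article or any actual game, with the enum and function names invented for illustration - a title can simply ask the driver who made the GPU and branch on the answer:

    #include <GL/gl.h>
    #include <cstring>

    enum class PhysicsPath { GpuAccelerated, CpuFallback };

    // Requires a current OpenGL context; glGetString(GL_VENDOR) returns the
    // driver's vendor string (e.g. "NVIDIA Corporation").
    PhysicsPath pick_physics_path() {
        const char* vendor = reinterpret_cast<const char*>(glGetString(GL_VENDOR));
        // The sponsor's hardware gets the tuned path; everyone else gets the
        // slow fallback - exactly the practice being complained about above.
        if (vendor && std::strstr(vendor, "NVIDIA") != nullptr)
            return PhysicsPath::GpuAccelerated;
        return PhysicsPath::CpuFallback;
    }

Nothing stops the fallback from being artificially slow, which is the whole complaint.)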
 
Charlie said Nvidia is all for playing dirty, and fair only where it has to, e.g. optimizations - aka PhysX (don't take my interpretation for it - go check it out yourself)

I find it funny how everybody gets something different from his article.
 
"Optimized" with Nvidia is sometimes just another word for slowing down the competitor's hardware. Crysis 2, Batman AA, etc. From what Charlie is saying, Nvidia is going to intensify their code paths to give them an even greater advantage in NV-sponsored titles.

So, you're saying that a brand giving you an incentive to buy its products is... wrong?

Again, I'm a red ant, and I played all those games fine, but in this situation - they are claiming a $300 card will perform like a $450 card in specific scenarios (i.e. normal, as it is now). That's a huge advantage.

For those specific situations the Nvidia owners win, and big. That's a pretty sweet perk (as I said). The AMD side, well - get more involved.
 
I find it funny how everybody gets something different from his article.
Not really; Charlie is purposely being ambiguous. But this part seems fairly clear to me:
In the same way that AMD's Fusion chips count GPU FLOPS the same way they do CPU FLOPS in some marketing materials, Kepler's 3TF won't measure up close to AMD's 3TF parts. Benchmarks for GK104 shown to SemiAccurate have the card running about 10-20% slower than Tahiti. On games that both heavily use physics related number crunching and have the code paths to do so on Kepler hardware, performance should seem to be well above what is expected from a generic 3TF card. That brings up the fundamental question of whether the card is really performing to that level?

This is where the plot gets interesting. How applicable is the "PhysX block"/shader optimisations to the general case? If physics code is the bottleneck in your app, a goal Nvidia appears to actively code for, then uncorking that artificial impediment should make an app positively fly. On applications that are written correctly without artificial performance limits, Kepler's performance should be much more marginal. Since Nvidia is pricing GK104 against AMD's mid-range Pitcairn ASIC, you can reasonably conclude that the performance will line up against that card, possibly a bit higher. If it could reasonably defeat everything on the market in a non-stacked deck comparison, it would be priced accordingly, at least until the high end part is released.
So, you're saying that a brand giving you an incentive to buy its products is... wrong?
I'm going to assume you have not read his article, or just skimmed over it. And yes of course it is wrong, it fragments what is supposed to be a standards based platform. That is what the PC is, and why it is successful. Do you really want games from column "A" requiring hardware from the red team, and games from column "B" requiring hardware from the green team? In that case, AMD and NV may as well go out and make themselves a game console.
For those specific situations the Nvidia owners win, and big. That's a pretty sweet perk (as I said). The AMD side, well - get more involved.
No offense, but this is a talking point that could come directly from Nvidia. And it is certainly not a sweet perk for those people with older Nvidia hardware; said games will perform poorly for them as well, and needlessly so. You seem to be under the illusion that these code paths are there strictly to increase performance, but that is not so. A nice side effect is Nvidia can claim their hardware is much faster, but that speed boost is artificial. Before Nvidia had any DX11 hardware, they actually made the claim that their lowly $80 cards were faster than the 5870, because of PhysX, which is pure nonsense.

Physics calculations will run just as well on an AMD GPU (more or less) as on an Nvidia one. A standards-based physics engine would accomplish just that, and this is the path that game devs need to pursue, not one bought and paid for by Nvidia.
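(For what a standards-based approach could look like in code - a minimal hypothetical sketch, every name here invented, not any real engine's API - the game programs against one neutral physics interface, and any vendor, or a plain CPU implementation, can supply the backend:

    #include <vector>

    struct Body { float x, y, z, vx, vy, vz; };

    // One neutral interface; AMD, Nvidia, or a CPU fallback can implement it.
    class PhysicsBackend {
    public:
        virtual ~PhysicsBackend() = default;
        virtual void step(std::vector<Body>& bodies, float dt) = 0;
    };

    // Trivial reference backend: plain CPU integration, runs anywhere.
    class CpuBackend : public PhysicsBackend {
    public:
        void step(std::vector<Body>& bodies, float dt) override {
            for (auto& b : bodies) {
                b.x += b.vx * dt;
                b.y += b.vy * dt;
                b.z += b.vz * dt;
            }
        }
    };

    // The game only ever sees PhysicsBackend, never a vendor SDK,
    // so no title ends up tied to one brand of GPU.
    void run_frame(PhysicsBackend& physics, std::vector<Body>& world) {
        physics.step(world, 1.0f / 60.0f);
    }

That's the whole point: the title stays brand-neutral and the vendors compete on how fast their backend runs.)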
 
Now, I'm an open red ant - and I wouldn't buy green products for my GPU needs, but how is what I've read in this thread at all a bad thing?

Ignoring the performance changes (yes I get it, node change, premiums, rah-rah-rah), I'm reading the following:

$300 part that performs like another $300 part (i.e. what's expected of 78xx)
$300 part that in certain scenarios can perform like a $450 part (using the lower Tahiti numbers)
$300 part with <225W consumption

How is that bad at all? nVidia isn't doing anything new. From where I stand - only nVidia cards can use PhysX, only nVidia cards can use CUDA features, and now only nVidia cards can use nVidia-specific optimizations.

If the above is true for a $300 part, imagine what a $500+ part can do.

All the people who were expecting a $300 part to compete with a $500+ part have to get that out of their heads; what we're seeing is something that can literally change the game. Developers might not even have to include these optimizations - it could be an nVidia profile kind of thing.

This is one of those "free performance" situations. Sure, in specific scenarios, but who cares if it acts like a $300 card 95% of the time and a $500+ card 5% of the time - that's a pretty sweet perk.

It's not bad. If the rumor is true, it's just not what most people expected.

Thinking it over now, I really think this would be a kickass card for $300. I just have to see which NV-specific games really pan out performance-wise when it ships. As it is now, if it's only faster in PhysX titles that would be REALLY underwhelming.

Unless you only play Batman.
 
So, you're saying that a brand giving you an incentive to buy its products is... wrong?

Again, I'm a red ant, and I played all those games fine, but in this situation - they are claiming a $300 card will perform like a $450 card in specific scenarios (i.e. normal, as it is now). That's a huge advantage.

For those specific situations the Nvidia owners win, and big. That's a pretty sweet perk (as I said). The AMD side, well - get more involved.

It's not WRONG, it's just a dead end. PhysX has no traction right now and we are averaging one PhysX title per... 6-10 months? It's kind of like back in the day with the Sound Blaster 16, when games had to have *specific* support for it. Games would ship with a list of which soundcards they supported, e.g. Sound Blaster 16, Gravis, etc. Seeing PC graphics degenerate into that wouldn't be cool.

Why not open it up? Since 99% of games are multi-platform, most developers will not develop for it unless it can benefit all platforms.
 
Why not open it up? Since 99% of games are multi-platform, most developers will not develop for it unless it can benefit all platforms.
Because Nvidia believes that keeping PhysX closed gives them a competitive advantage, and that this outweighs anything they would profit from if it was open to all hardware. I personally think this is idiotic; they could make a tidy profit licensing the tech to anyone that wanted to use it. Doing this would make it much more popular. The obvious model to follow would be: free to end users, dev kits free, a license fee paid on a completed title.

But that's not how Nvidia rolls; Jensen thinks he is the inventor of graphics.
 
Because Nvidia believes that keeping PhysX closed gives them a competitive advantage, and that this outweighs anything they would profit from if it was open to all hardware. I personally think this is idiotic; they could make a tidy profit licensing the tech to anyone that wanted to use it. Doing this would make it much more popular. The obvious model to follow would be: free to end users, dev kits free, a license fee paid on a completed title.

But that's not how Nvidia rolls; Jensen thinks he is the inventor of graphics.

I can see their side of the argument certainly. It is a value-added feature to entice users to buy NV hardware - but at some point you have to face the reality that not many devs are using it. I'd certainly be more interested if more than 2 games per year received PhysX treatment.
 
I'd certainly be more interested if more than 2 games per year received PhysX treatment.
For argument's sake, suppose in the next year 80% of all games were PhysX enabled. The next year, all of them. That would cripple any other graphics maker, which would eventually lead to a monopoly. Now let's flip that around. Suppose Microsoft purchased AMD and made DirectX Radeon-only. That would leave Nvidia out in the cold; again, welcome your new monopoly overlords. Or at best, we would end up with two completely incompatible gaming bases. If you ever decided to switch brands, toss out all your games and start over.

Does anyone really want this to happen? If not, then what is an acceptable level of fragmentation? Like I said, if people really want that kind of system, buy a console. For people that like to build their own systems, upgrade them, switch brands, and do what the PC was designed to do, I say we stick with open standards.
 
Because the pricing of the $450 part and the expected 78xx is based on no competition. Unless AMD gets some unusual driver optimizations, the paper specs of the 78xx cards don't appear to be much better performance-wise (10-20%) than the 40nm cards in the same price bracket that they will be replacing.

It is appearing more and more that we consumers are going to experience price/performance stagnation with 28nm. I think this is one of the reasons AMD was in a hurry to get 28nm cards out the door; this way they have placed the onus on NVIDIA to adjust pricing. The GTX 580 remains obstinately in the $450-500 bracket.

Now, I'm an open red ant - and I wouldn't buy green products for my GPU needs, but how is what I've read in this thread at all a bad thing?

Ignoring the performance changes (yes I get it, node change, premiums, rah-rah-rah), I'm reading the following:

$300 part that performs like another $300 part (i.e. what's expected of 78xx)
$300 part that in certain scenarios can perform like a $450 part (using the lower Tahiti numbers)
$300 part with <225W consumption

How is that bad at all?
 
If the rumors of including a dedicated PhysX processor on die are true, then its performance would be crap for games without heavy usage of PhysX; this is undeniable, because that die space is not being used.

For PhysX games (not just TWIMTBP), it would be very fast. Makes perfect sense.

The only nonsense is why NV is taking this approach, as PhysX is even less popular now than years ago and next-gen console games certainly aren't going to be TWIMTBP given the AMD GPUs inside the consoles.
 
The only nonsense is why NV is taking this approach, as PhysX is even less popular now than years ago and next-gen console games certainly aren't going to be TWIMTBP given the AMD GPUs inside the consoles.
Good point. But here's a thought: maybe this is Nvidia's answer to AMD getting into all 3 next-gen consoles? Otherwise I don't know why they would do this, because like you said, if you don't utilize all the processing power of the GPU, you are going to have a disadvantage on non-optimized software.

Either way, did Nvidia not learn from a company that went bankrupt trying to force a proprietary API onto the masses? A company they ended up purchasing? Wouldn't that be like purchasing a Trojan horse, then getting fooled by a Trojan horse?
 
I don't think the decision has to do with any consoles on the horizon, because these designs, as you know, are settled years before they hit production.

So if it's true, NV felt they should push on ahead with PhysX from a few years back, when there were very popular titles and it appeared to be a trend on the up. Sadly it hasn't turned out for the better.

GK104 with 768 CUDA cores running at ~1GHz (no hotclocks like Fermi) is much worse than the GTX 580's 512 cores at 1.7GHz.
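(Back-of-the-envelope math on those figures - the core counts and clocks here are just the rumors quoted in this thread, and real throughput depends on far more than peak FLOPS. Peak single-precision rate is cores x clock x 2 FLOPs per clock, counting a fused multiply-add as two operations:

    GTX 580:       512 cores x 1.7 GHz x 2 ≈ 1.74 TFLOPS
    rumored GK104: 768 cores x 1.0 GHz x 2 ≈ 1.54 TFLOPS

So on paper the rumored configuration has roughly 12% less raw shader throughput than the GTX 580, before any architectural differences are counted.)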
 
Current rumors state that the upcoming consoles are going to be AMD based (I think it's been confirmed that the Wii U is AMD). Almost all PC games are now console ports with a few bells and whistles added (if we are lucky). Something in this strategy just doesn't add up if those rumors turn out to be accurate.

Considering the mass shift to console ports probably contributed to the decline of PhysX in PC games, taking this path could potentially set them up for the same thing, except it sounds like this time the performance of their GPU line rides on it.

Risky business to be sure. Good to see them shaking things up a bit at least. I'm anxious to see this card in action. Nvidia needs to hurry the hell up and release the darn thing!

They just pay people to add it later to win the benchmark wars.

This sounds too good to be true, and this guy is the only one coming up with all of it; sounds like a bucketload of ****.

This is starting to get closer and closer to the Bulldozer fiasco...

I thought of Bulldozer as well when I read the part about software needing to be highly optimized. I can't really imagine why nVidia would go this route, though.

Tahiti is MIMD, not SIMD

Facts aren't good for people's fantasies. 😉
 
So if it's true, NV felt they should push on ahead with PhysX from a few years back, when there were very popular titles and it appeared to be a trend on the up. Sadly it hasn't turned out for the better.

Arguably PhysX is mostly a customer retention tool. Once you buy PhysX games and NV hardware, you are much more likely to remain with NV for future GPUs, since you are tied into that ecosystem.

To me the ultimate irony was that even the vast majority of NV customers couldn't use PhysX (very well), since single low/mid GPUs account for the vast majority of sales/systems, and enabling hardware PhysX on those cards results in either a significant performance hit or necessitates a reduction in graphics settings, or both.
 
ORB says Charlie is completely wrong.

Charlie Demerjian knows nothing about Kepler; he's just speculating. In essence, he copied the information from my blog. I first talked about the problems with PCIe 3.0. I talked about the new PhysX feature, a small core with low power consumption. But the truth is different: the source from which he drew (MSI), the same as mine, is faked by Nvidia itself. Kepler has no PhysX block, and its performance is great against Tahiti everywhere, not just in games from RG; the performance is still what I said long ago, an absolute 7970 killer...

PS. This is the last Kepler info here till some real Chinese leaks with specs... but I can tell you, the REAL specs are floating around the web... you just have to find them... 🙂

fun stuff
 
If the rumors of including a dedicated PhysX processor on die are true, then its performance would be crap for games without heavy usage of PhysX; this is undeniable, because that die space is not being used.

For PhysX games (not just TWIMTBP), it would be very fast. Makes perfect sense.

The only nonsense is why NV is taking this approach, as PhysX is even less popular now than years ago and next-gen console games certainly aren't going to be TWIMTBP given the AMD GPUs inside the consoles.

Indeed.
 
I'm also very sceptical about Charlie's article. He says GK104 will be beaten by Pitcairn in (many?) cases when optimizations are not pro-Nvidia. However, Pitcairn should be around HD 6950-6970 level. That increase over GK104's predecessor, the GF114 chip, is just too small to be realistic.
Unless Nvidia completely opens PhysX, they wouldn't rely only on this feature and neglect 3D performance in general. My guess is that - performance-wise - Nvidia will gain on AMD compared to last generation's pecking order. Not by very much, maybe 15%, but losing to Pitcairn is out of the question.
 
The whole article reads as idiotic and lacking in sense. Is he trying to say Nvidia will release a mid-range card that is about equal to a 580 for $300? That is fine and makes sense for any mid-range card coming out on a new node, and the price hike for mid-range would be in keeping with everything 28nm seeming to cost more.

The rest is pure nonsense, which makes me believe none of what he is saying. The card will be faster than a 7970 in games where GPU PhysX is turned on? No shit, Sherlock, when has that not been the case?

I hope this rumour of Nvidia deciding to dedicate die space to an area that only performs GPU PhysX and no straight-up game rendering is not true. It's a total waste of die space and is not worth sacrificing raw FPS horsepower in every game to improve performance in the 15 or so games that have GPU PhysX; most of those titles are ancient now anyway.

Not buying it; they can't be that stupid. GPU PhysX has not fulfilled the build-it-and-they-will-come prophecy; on the contrary, for most game devs it's been build it and they will run away, given the minuscule adoption in so few games. It will only be of any worth when the reality of the effects it delivers in games matches the PR releases and YouTube tech videos. Not the current state of it, with games like BF3 blowing away anything GPU PhysX has done with plain old CPU physics and none of the GPU PhysX overhead/performance hit.
 