How the PlayStation 4 is better than a PC

Status
Not open for further replies.

galego

Golden Member
Apr 10, 2013
1,091
0
0
You just pulled that figure out of your backside. Come on, be realistic. You can already see that the UE4 demo on the PS4 is pushing less than the PC version. Overhead or not, we're not going to get the situation where the PS4 can produce visuals that top current PCs cannot.

Do you mean the demo developed in a couple of weeks, using non-final APIs, with no time for optimization or bug fixing (tessellation was broken)?

Do you mean the demo developed in an early kit limited to only 1.5 GB usable VRAM and running at only 27-29% of AH resources?

Do you mean the demo developed by the same developers (Epic) who are claiming that the PS4 is ahead of existing gaming PCs?
 

2is

Diamond Member
Apr 8, 2012
4,281
131
106
You are very confused. I have tried to explain this to you many times, but I see that my efforts were useless.

It would be better if you stopped making these kinds of third-party comments, because they really add nothing.

You haven't contributed in any meaningful way since you joined the forum. I'll stop replying to BS when BS stops being posted. ;)
 

Aikouka

Lifer
Nov 27, 2001
30,383
912
126
I don't think you understand the Carmack quote:

I'm going to give you a tip. When you make any sort of argument, you need to actually make it in a coherent manner. When someone calls you out on your poor arguments, you can't simply tell them that they don't understand some random source that apparently backs up your argument when it doesn't even say the same thing. You said, "the API/driver overhead is at least 2x," but that isn't even what your quote says, and it certainly doesn't back your ludicrous 9.2 TFLOPS claim.

Also, you are quite astute to know that I don't understand his quote. Frankly, I'm not sure how anyone could understand that quote since the latter half is in rather poor English. However, what we can take a look at is the first half, which is actually comprehensible. Given the PS4 has 1.8 TFLOPS, he's saying it will be about as good as a Radeon 7970 (using floating point processing power as a benchmark).

Although, isn't that quote from 2011? It's rather dangerous to simply think a quote always applies when it comes to the ever evolving state of computational hardware.
 

galego

Golden Member
Apr 10, 2013
1,091
0
0
I'm going to give you a tip. When you make any sort of argument, you need to actually make it in a coherent manner. When someone calls you out on your poor arguments, you can't simply tell them that they don't understand some random source that apparently backs up your argument when it doesn't even say the same thing. You said, "the API/driver overhead is at least 2x," but that isn't even what your quote says, and it certainly doesn't back your ludicrous 9.2 TFLOPS claim.

Good tip; now you need to apply it yourself. Your original complaint was:

I don't think you understand what overhead means if you think that twice as much overhead means twice as much raw performance.

My answer was a well-known quote from Carmack stating how API overhead (which is about 2x; read the Huddy article) implies that the console runs about 2x faster than the PC:

Consoles run 2x or so better than equal PC hardware, but it isn’t just API in the way, focus a single spec also matters.

I believe Carmack's quote is rather easy to understand. What part of "the console is faster than the PC" do you still not get?

Your appeal to the 9.2 TFLOP again proves my point that you and other posters in this thread are arguing without reading what is really being said... The 2x API overhead gives about 3.68 TFLOP

http://forums.anandtech.com/showpost.php?p=34999819&postcount=980
 
Last edited:

Enigmoid

Platinum Member
Sep 27, 2012
2,907
31
91
Do you mean the demo developed in a couple of weeks, using non-final APIs, with no time for optimization or bug fixing (tessellation was broken)?

Do you mean the demo developed in an early kit limited to only 1.5 GB usable VRAM and running at only 27-29% of AH resources?

Do you mean the demo developed by the same developers (Epic) who are claiming that the PS4 is ahead of existing gaming PCs?

See, this is the coherence claim.

Without optimization or bugfixing.

Now, what is the dev kit without any of these things?

It's basically a PC (8-core Jaguar, 7850-7870 class GPU, programmed pretty much identically to a PC; no special optimizations).

And no, you don't get crazy-sh*t performance gains doing a port from an i7 + 680 to a Jaguar + 7850/7870. It doesn't make sense, so unless you can give a lot of proof (which you definitely have not; a couple of links to possible dev claims is not concrete by any means) I'm going to call it BS.

And no offense but the relatively small difference in IQ between the two versions does not mean that the two are similar in performance. Crysis 3 at playable settings on a 7850 and a 7970 look very very similar (very little difference between high and very high settings).
 

galego

Golden Member
Apr 10, 2013
1,091
0
0
See, this is the coherence claim.

Without optimization or bugfixing.

Now, what is the dev kit without any of these things?

It's basically a PC (8-core Jaguar, 7850-7870 class GPU, programmed pretty much identically to a PC; no special optimizations).

And no, you don't get crazy-sh*t performance gains doing a port from an i7 + 680 to a Jaguar + 7850/7870. It doesn't make sense, so unless you can give a lot of proof (which you definitely have not; a couple of links to possible dev claims is not concrete by any means) I'm going to call it BS.

And no offense but the relatively small difference in IQ between the two versions does not mean that the two are similar in performance. Crysis 3 at playable settings on a 7850 and a 7970 look very very similar (very little difference between high and very high settings).

What are you talking about here? The demo was prepared on an early dev kit. Those kits use a custom FX eight-core chip and a custom R10XX graphics card... with BIOS tweaks.
 

Aikouka

Lifer
Nov 27, 2001
30,383
912
126
Good tip; now you need to apply it yourself. Your original complaint was:

I really ought to do the tech forums a favor and just start flagging your posts. Your flagrant deflections are rather juvenile in nature ("I know you are but what am I!?") and not befitting of this environment. If you wish to post such drivel in the future, then stick to here or maybe even here.

I made my point quite clear that you fail to properly back up your claims. In fact, your evidence doesn't even corroborate what you state! Now, I'll attempt to make it even clearer since it didn't sink in last time...

My answer was a well-known quote from Carmack stating how API overhead (which is about 2x, read Huddy article) implies that the console run about 2x faster than the PC:

Sigh, I really hope that you're a foreign person, because if English is your first language... God help you. You say that it's the overhead (caused by the Direct X API) that causes the console to be two times faster than a PC. Since you enjoy bolding parts that make your point, allow me to return the favor: YOUR OWN "EVIDENCE" REFUTES YOUR CLAIM. You keep bolding the first phrase and completely ignoring the middle one: "but it isn’t just API in the way" That statement alone means that he thinks it is not just the API (overhead) that causes it to be slower.

Yeesh... is that really so hard to comprehend?

I believe Carmack's quote is rather easy to understand.

I'm sorry, but just like I said, "focus a single spec also matters." is not good English, and I have no idea what he's trying to say.

What part of "the console is faster than the PC" do you still not get?

Faster than what PC? You seem to love to state that it's faster than anything, but Carmack's old quote strictly says twice as fast as equivalent hardware. Ignoring the fact that it's a dated quote, that makes it about as powerful as a 7970. Although, even with that... why is Thief going to be 30 FPS, and why does your quote-machine Carmack believe that next-gen games will mostly target 30 FPS?

Your appeal to the 9.2 TFLOP again proves my point that you and other posters in this thread are arguing without reading what is really being said... The 2x API overhead gives about 3.68 TFLOP

The only thing it proves is that we won't lap up your crap math like a delusional dog. 2.5x more performance, making it equivalent to two Titans? Please, I could use more laughs after an annoying day at work. :rolleyes:
 

2is

Diamond Member
Apr 8, 2012
4,281
131
106
What are you talking about here? The demo was prepared on an early dev kit. Those kits use a custom FX eight-core chip and a custom R10XX graphics card... with BIOS tweaks.

This reply addresses nothing of the points he brought up. You may as well have just not replied at all.
 

galego

Golden Member
Apr 10, 2013
1,091
0
0
Sigh, I really hope that you're a foreign person, because if English is your first language... God help you. You say that it's the overhead (caused by the Direct X API) that causes the console to be two times faster than a PC. Since you enjoy bolding parts that make your point, allow me to return the favor: YOUR OWN "EVIDENCE" REFUTES YOUR CLAIM. You keep bolding the first phrase and completely ignoring the middle one: "but it isn’t just API in the way" That statement alone means that he thinks it is not just the API (overhead) that causes it to be slower.

Yeesh... is that really so hard to comprehend?

I'm sorry, but just like I said, "focus a single spec also matters." is not good English, and I have no idea what he's trying to say.

You claim you "have no idea" but you insist on commenting and below you claim that his quote is outdated. LOL

The main contribution to the 2x "or so" is due to the API. See Huddy where he obtains about a 2x overhead for DX11.

You can substitute the label "overhead" in the previous equations with "overhead + single-spec" if that makes you feel better... but the factor to add to the equation is still 2x.

Faster than what PC? You seem to love to state that it's faster than anything, but Carmack's old quote strictly says twice as fast as equivalent hardware.


No. I did bold the part where it says "equal PC hardware". I used this to compute the equivalent PC hardware.

Ignoring the fact that it's a dated quote, that makes it about as powerful as a 7970.

In a previous post I already wrote

http://forums.anandtech.com/showpost.php?p=34999819&postcount=980

The 2x API overhead gives about 3.68 TFLOP

That is about the performance of a 7970 (3.790 TFLOP).

I wonder why you repeat what I have already said.

The only thing it proves is that we won't lap up your crap math like a delusional dog. 2.5x more performance and making it equivalent to two Titans. Please, I could use more laughs after an annoying day at work. :rolleyes:

What 2.5x?

Did you read developers saying that the PS4 is like a PC of 2014 or 2015?

Did you read one developer who wrote, in this same thread, how the PS4 could beat the Titan?
 
Last edited:

notty22

Diamond Member
Jan 1, 2010
3,375
0
0
I think I've pegged what's going on. Some programmer is testing his new code out on us: using a model of an 11-year-old console player, he created an auto-bot with 27-29% of the mental abilities of said player. All answers are automated based on that intelligence model.
If someone needs a LOL/sarcasm tag on this post, it's only because the number of times I have read the same misinformation, or arguments based on misquotes, is unbelievable; there can be no other explanation for this round-about discussion :)
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
You are the one repeating the same misguided point without reading what was said in the thread.

DX9 had a big overhead, and this was reduced in DX11 to something like 2x. Read some of the bit-tech links given before...

I read it and I think that issue is being exaggerated..

If draw calls were such a huge limitation on PCs, why is it that PC games are typically far more detailed than console games?

A good example is Batman Arkham City on PC. In this game, you can enable PhysX which adds a lot of extra objects to the game that would not ordinarily be there in the console versions.

The console version of Arkham City looks positively bare in comparison to the PC version with PhysX enabled..

Thing is, developers have lots of tricks (like instancing) to get around the draw call overhead associated with PC games..
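To make the instancing point concrete, here is a toy back-of-the-envelope model (all numbers are hypothetical, not from any benchmark) of how batching many objects into one instanced draw call slashes per-call CPU overhead:

```python
# Toy model of draw-call overhead; the per-call cost is an assumed,
# purely illustrative figure.
PER_CALL_OVERHEAD_US = 20.0  # hypothetical CPU cost per draw call, in microseconds

def submission_cost_us(num_draw_calls: int) -> float:
    """CPU time spent per frame just issuing draw calls."""
    return num_draw_calls * PER_CALL_OVERHEAD_US

# 10,000 objects drawn one call each, vs. a single instanced call
# that asks the GPU to repeat the mesh 10,000 times:
naive = submission_cost_us(10_000)   # 200,000 us of pure API overhead
instanced = submission_cost_us(1)    # 20 us

print(naive, instanced)
```

The rendered work is the same either way; only the CPU-side submission cost changes, which is why instancing is a standard workaround for API overhead rather than a GPU speedup.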
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
It is not about how a 10x more powerful modern PC from 2011 can do it better than an ancient console from 2006, but how you couldn't obtain the console quality using a comparable PC with the same hardware.
Sure, but in 2007, you could, and it wouldn't have to be all fuzzy.

More importantly, TFLOPS and API overhead are separate issues, as is pixel processing and memory bandwidth. TFLOPS are a BS number.

API overhead does not cause some GPU to only have <1/2 the TFLOPS. All those TFLOPS are still there. What it does is limit the overall scene complexity available at any given FPS, beyond the limits that are just there by the hardware. How much it does so will depend on the game's code, and content usage. Thanks to on-die memory controllers, on-die PCIe interfaces, and improvements in CPU cores and caching, PCs were able to eclipse the consoles within 1 year, and leave them in the dust within 3, even with all the overhead.

Due to average VRAM being 1-2GB now, and texture processing being a relatively cheap part of modern games (provided the GPU isn't crippled, which doesn't appear to be the case for either upcoming console); and due to us desktop users needing to wait for AVX2 to perform efficient MIMD without high latency (unless AMD can find marketing people and money to make it happen on their APUs in real programs), the new consoles will have some advantages out of the gate, compared to most gaming PCs.

However, as AVX2 becomes mainstream, and multiplatform games push VRAM, the same thing will happen all over again. This time, though, it might not happen with upscaling (IMO, the best thing to do would be MS and/or Sony to require allowing users to select non-native resolutions, if they want, and disallow forced upscaling entirely).
 
Last edited:

Aikouka

Lifer
Nov 27, 2001
30,383
912
126
You claim you "have no idea" but you insist on commenting and below you claim that his quote is outdated. LOL

I'm afraid that I'm going to start giving off a bewildered air, because now I have no idea what you're talking about! How can you suggest that there is a correlation between understanding something written in poor English and knowing when the quote was made? Keep in mind that I only stated prior that I thought the quote was from 2011. You tend to snap at anything you think is wrong, and you didn't refute that, so I assumed that I was correct.

The main contribution to the 2x "or so" is due to the API. See Huddy where he obtains about a 2x overhead for DX11.

Where are you getting the fact that the main contribution to reduced performance of PC hardware is related to Direct X-based overhead? Your only reference is some guy named Huddy, who talks about Direct X 11 having 2x the overhead. Keep in mind that I stated earlier that having more overhead does not necessarily linearly affect your performance. It all depends on the amount of time that it takes.

You wonder why people call you out on everything. You want to know why? Because you're a parrot. You know nothing at all. All you can do is quote articles, and make up results that are backed by shaky reasoning that not even the most professional of Jenga players would go near. I'll show you a few of these now...
No. I did bold the part where it says "equal PC hardware". I used this to compute the equivalent PC hardware.

You may have bolded it, but you sure as heck are not even paying attention to it. If you truly believe that quote, and you do seem to believe it enough to use it as "evidence" for your exorbitant claims, then you would not attempt to tell people that the PS4 may have around 9.2 PC-equivalent-TFLOPS. That's twice the capability of a GeForce GTX Titan; that number is far, far above the sort-of-between-7850-and-7870 that's in the PlayStation 4.

That is about the performance of a 7970 (3.790 TFLOP).

...and yet again, I shall repeat... why are you making some ridiculous claims of it having the equivalent of a 9.2 TFLOPS PC?

What 2.5x?

9.2 TFLOPS is your audacious guesstimate
3.68 TFLOPS is 2x the PS4's designated performance

9.2 TFLOPS / 3.68 TFLOPS = 2.5x
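For anyone losing track, the contested figures reduce to this arithmetic (1.84 TFLOPS is Sony's rated figure for the PS4 GPU; the multipliers are galego's claims, taken at face value):

```python
PS4_RATED_TFLOPS = 1.84          # Sony's rated GPU throughput for the PS4

doubled = 2 * PS4_RATED_TFLOPS   # the "2x API overhead" equivalence claim
claimed = 9.2                    # the earlier 9.2 TFLOPS claim

print(doubled)                      # 3.68
print(round(claimed / doubled, 1))  # 2.5, the unexplained extra factor
```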

Did you read developers saying that the PS4 is like a PC of 2014 or 2015?

Whenever someone makes a claim that includes knowledge of future products that are unknown to anyone in that person's field, how can I take that comment seriously? You can't! It's pure marketing fluff! The best you can do is make projections! However, here's another interesting point. If EPIC was forced to use underpowered developer kits that aren't really running on PlayStation 4 hardware, then how do they really know how powerful it will be? Didn't you say that they were pretty much using a specialized desktop?

Did you read one developer who wrote, in this same thread, how the PS4 could beat the Titan?

Is it going to beat the Titan in displaying Thief in 1080p at 60 FPS? :hmm:
 

galego

Golden Member
Apr 10, 2013
1,091
0
0
I read it and I think that issue is being exaggerated..

You think that all the developers are exaggerating. And yet all of them asked Sony to provide a low-level API plus close-to-metal access to avoid the overhead.
 

PrincessFrosty

Platinum Member
Feb 13, 2008
2,300
68
91
www.frostyhacks.blogspot.com
One of the biggest problems with the argument on to-the-metal optimisations is that they tend to come late in the development of console games.

They require specialist knowledge to implement, and often it's only done well after working with the system for an extended period of time; these optimizations take additional development effort, which not every developer is going to make to the same degree, if at all.

If 100% of games were perfectly optimised out of the gate at launch, then this argument would hold some weight, but the practical reality is that this is simply not true. Most games at launch make very bad use of the resources available, mostly because they've neglected optimisation in exchange for getting out the gate first. And we see a steady increase in quality as games mature on the platform.

The problem I have with this is that growth in raw hardware speed during this time of maturity far outpaces the growth in performance/quality due to optimizations; hardware growth is exponential.

If your goal is efficiency and value, then sure, the consoles make sense; if your goal is the best gaming experience you can get, then the PC is the way to go, with steady growth over time. Bottom line: console gaming is budget gaming, and that's perfectly fine. I'm not looking down on that, but pretending otherwise is just living outside of reality.
 

galego

Golden Member
Apr 10, 2013
1,091
0
0
Sure, but in 2007, you could, and it wouldn't have to be all fuzzy.

Doesn't matter if you move now from 2006 to 2007.

Prove that a gaming PC with the same FLOPS and memory runs the game with the same quality.

API overhead does not cause some GPU to only have <1/2 the TFLOPS. All those TFLOPS are still there.

Yes, all those TFLOPs are still there. Nobody said they would disappear from the GPU. Sorry, but what is being said is something different. Take a look at the 3-page bit-tech article where the overhead of the DX API is discussed. It covers the topic at some length. Look at the page starting with

So what sort of performance-overhead are we talking about here?

Pay attention also to the part where it says how developers avoid the API and its overhead when

you're looking to squeeze out as much performance as possible.

Due to average VRAM being 1-2GB now.

What average VRAM? According to Steam:

1GB: 32%
2GB: 4.55%
4GB: 0.5%

And I suspect that outside Steam the percentages are smaller.
 

galego

Golden Member
Apr 10, 2013
1,091
0
0
Where are you getting the fact that the main contribution to reduced performance of PC hardware is related to Direct X-based overhead? Your only reference is some guy named Huddy, who talks about Direct X 11 having 2x the overhead.

He says more than that. He talks about how overhead is directly related to performance, in the section of the article titled "The DirectX Performance Overhead". Just after discussing the 10x draw-call overhead of the API, he writes:

the PC can actually show you only a tenth of the performance if you need a separate batch for each draw call.

And all the other developers mentioned agree about how API overhead affects performance.
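Aikouka's earlier point, that more overhead does not linearly reduce performance, can be sketched with a toy frame-time model (all timings hypothetical): doubling API overhead only halves the frame rate when the CPU submission side, not the GPU, is the bottleneck.

```python
# Hypothetical frame-time model: CPU draw-call submission and GPU work
# overlap, so whichever side is slower sets the frame rate.
def fps(api_overhead_ms: float, gpu_work_ms: float) -> float:
    return 1000.0 / max(api_overhead_ms, gpu_work_ms)

gpu_bound = fps(api_overhead_ms=4.0,  gpu_work_ms=16.0)   # 62.5 FPS
doubled   = fps(api_overhead_ms=8.0,  gpu_work_ms=16.0)   # still 62.5 FPS
cpu_bound = fps(api_overhead_ms=32.0, gpu_work_ms=16.0)   # 31.25 FPS

print(gpu_bound, doubled, cpu_bound)
```

Huddy's worst case (a separate batch per draw call) corresponds to the CPU-bound regime; a well-batched game may never reach it.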

You may have bolded it, but you sure as heck are not even paying attention to it. If you truly believe that quote, and you do seem to believe it enough to use it as "evidence" for your exorbitant claims, then you would not attempt to tell people that the PS4 may have around 9.2 PC-equivalent-TFLOPS. That's twice the capability of a GeForce GTX Titan; that number is far, far above the sort-of-between-7850-and-7870 that's in the PlayStation 4.

I have already explained this to you three times now. The 2x API overhead gives an equivalence of 3.68 TFLOP, not 9.2 TFLOP. I already corrected your misunderstanding before. Why do you insist on conflating the 9.2 with the 3.68? What is your goal?

...and yet again, I shall repeat... why are you making some ridiculous claims of it having the equivalent of a 9.2 TFLOPS PC?

9.2 is an estimate of the real performance of the PS4. Why do you believe developers are saying that its performance is years ahead of PCs?

Whenever someone makes a claim that includes knowledge of future products that are unknown to anyone in that person's field, how can I take that comment seriously? You can't! It's pure marketing fluff! The best you can do is make projections!

Evidently, when the developer said that the PS4 is like a PC of 2014 or 2015, he was making projections. I believed that was obvious.

People in this forum are also making projections, for instance when they speculate about how fast 20nm dGPUs will be. Why didn't you react to those claims? You seem to react only when something favours the console.

However, here's another interesting point. If EPIC was forced to use underpowered developer kits that aren't really running on PlayStation 4 hardware, then how do they really know how powerful it will be? Didn't you say that they were pretty much using a specialized desktop?

What about Epic knowing the performance difference between the early kit that they received and the PS4?

What about Guerrilla Games running all kinds of simulations to find bottlenecks in the designed hardware and then claiming that the PS4 is beyond a high-end gaming PC?

What about Quantic Dream's testing?

We are already seeing in the first (PS4 tests) that we're doing a big difference [...] With the PlayStation 4, it's something that really is more like the PC of next year or two years.
 
Last edited:

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
Doesn't matter if you move now from 2006 to 2007.

Prove that a gaming PC with the same FLOPS and memory runs the game with the same quality.
Why? We could instead try to prove that the sky is cadmium yellow, rather than blue. IE, you keep wanting something proven that no one will argue against. The argument is that the fixed-hardware advantages gain only a small amount of time, relative to the life cycle of the device, and that they cannot make up for choices that fundamentally cripple the device, such as poor scalar IPC, and poor bandwidth, both of which plagued the last generation, from Sony and MS.

Yes, all those TFLOPs are still there. Nobody said they would disappear from the GPU (em. added). Sorry, but what is being said is something different. Take a look at the 3-page bit-tech article where the overhead of the DX API is discussed. It covers the topic at some length. Look at the page starting with
That's exactly what was being said, by you, complete with pulling some 9.2 TFLOPS from thin air.

Pay attention also to the part where it says how developers avoid the API and its overhead when
When you bold a statement that means nothing without a lot of context, and where the context largely voids it?

What average VRAM? According to Steam:

1GB: 32%
2GB: 4.55%
4GB: 0.5%
So, now you're trying to refute a claim with evidence supporting the claim? I really am not getting it. But, going through decompiled binaries is arduous, so I'm still here :)
 

HeXen

Diamond Member
Dec 13, 2009
7,837
38
91
[image: sinemadevri_com_napoleon-dinamit1.jpg]
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
Question for Cerb.

What kind of impact can we expect AVX2 to have on games?
Non-craptastic SIMD/MIMD.

The prior consoles, FI, had Altivec and VMX-128, both of which are rather excellent, high-performance, SIMD extensions for Power. They could really make use of them, too, pulling off Havok and PhysX, FI, that needed much faster CPUs or GPUs on the PC side.

MMX, SSE, and SSE2 were implemented in ways made to reduce the size increase they would add to the chips they were added to. A million xtors really mattered, right through the P3. While the theoretical performance has usually looked good, actual code gets hung up with x86 memory and register issues, except in the simplest cases.

AVX basically improves SSE2 code, if you recompile it.

AVX2 implements a vector instruction for almost every arithmetically useful scalar x86 instruction, allowing the implementation of almost any calculation-heavy loop, with either fully independent iterations, or close enough, with large loop bodies, to be attempted as a vector loop. While narrower than GPUs, this is basically how GPUs do compute.

I'd wager that GPUs will still be more computationally efficient, but it will allow such code to run with no significant latency compared to the scalar components, since it is using the same hardware, so it can just be like any other loop, just processed really fast (as high as 1 cache line per cycle, by Haswell's specs). Fast access to the GPU, with the same physical and virtual memory, will basically allow for the same thing, though with some latency disadvantage. Intel took a lot of potential steam from AMD, by releasing a spec, and a roadmap with CPUs, years early, and then a simulator, so x86 developers have been working on, and itching for, AVX2 support, instead of AMD's Fusion. AMD will also be supporting AVX2.

So, the console guys will use AVX, for a nice 2-3x speedup over code that would work well as SSE2 (assuming AMD's implementation is good), and fast enough communication to and from the GPU that things like efficient physics on the GPU may actually be viable. Since (a) it won't work on Intel platforms, (b) Intel has a different option to do similar work, and (c) AMD will also support Intel's ISA extension, nobody not being paid by AMD is going to do all that much with the AMD Fusion SoCs supporting these features in the PC world.
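As a rough illustration of what "a vector instruction for almost every arithmetically useful scalar instruction" buys, here is the same fused multiply-add loop written element by element and then as one 8-wide operation; the `fma8` helper below is only a stand-in for a single 256-bit AVX2 fused multiply-add over eight float32 lanes, not real intrinsics:

```python
# Eight lanes, like one 256-bit AVX2 register of float32 values.
a = [float(i) for i in range(8)]
b = [2.0 * i for i in range(8)]

# Scalar form: one multiply-add per element, per loop iteration.
scalar = [a[i] * b[i] + 1.0 for i in range(8)]

def fma8(x, y, z):
    """Stand-in for one AVX2 fused multiply-add across 8 lanes."""
    return [xi * yi + zi for xi, yi, zi in zip(x, y, z)]

# Vector form: the whole loop body collapses into a single wide operation.
vector = fma8(a, b, [1.0] * 8)

assert scalar == vector  # same results; the win is doing 8 lanes per instruction
```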
 