
How the PlayStation 4 is better than a PC

This thread is like the monster in a horror movie.

No matter how many times you think it is dead, it just keeps coming back, looking just like it did before.

31 pages now of speculation about a piece of hardware that is not even on the market.
 

galego

Golden Member
Apr 10, 2013
Things change as people learn more about them. PS3 games looked far worse than their 360 counterparts early on, and now they are pretty equal for the most part. And don't get excited, I'm not saying it will outperform a pair of 680's, I'm indulging the other side (you mainly) and providing a very fringe case where that could theoretically happen.

Of course, I love that it only took months to go from "my PC with a GTX 580 will destroy the PS4 on any game" to "the PS4 could outperform a pair of GTX 680s in SLI if the game uses more than 2 GB". Amazing!

And of course I can only agree with you once again. A next-gen game using more than 2 GB of VRAM is a "very fringe case". The early dev kits have more than 2 GB because Sony had an excess of memory modules and decided to charge developers for them. And developers such as Crytek asked Sony for 8 GB in the console because they plan to use less than 2...
 

BallaTheFeared

Diamond Member
Nov 15, 2010
There are GPU draw calls, there are CPU draw calls, and there are CPU draw calls that affect GPU performance.

Take a look at some of the links given to you, explaining how API overhead affects GPU performance.

I've already looked them over.

Instead of playing cat and mouse, please quote the relevant sections pertaining to GPU performance.

I never saw any.

I can't prove you wrong with what isn't there; next I'll be tasked with disproving the existence of God.


If PC overhead is so crippling, to the tune of 10x across both CPU and GPU, please explain why a C2Q and an 8800 GT are all it takes to completely destroy the IQ and resolution of consoles? The consoles even have the benefit of poorly coded ports.


Here is the problem: the same thing was said about the last generation of consoles... PC overhead, direct-to-metal benefits, yada yada, and they didn't last a year before the PC destroyed them. The current unreleased consoles are already in the position the 360 was in; the 7970 is to the PS4 what the G80 was to the 360. Except this time the PC is already ahead and, as history shows, will continue to evolve.

It's a dead horse: seven years ago console fans made the same arguments, six years ago they were proven wrong, and in six months you will be too.
 

nsafreak

Diamond Member
Oct 16, 2001
Wow.

Games are written from the ground up not to saturate the bus. Why is that difficult to understand?

[reaction image]
 

galego

Golden Member
Apr 10, 2013
Well, you said that there was 10 times more overhead on the GPU.

The quote you give is my reply to a question someone asked me about what Carmack said in a link I provided. Carmack was discussing the API overhead on the PS3 and 360.

Nowhere did I say that the PS4 would benefit from a 10x API overhead, among other reasons because DX11 reduced that. The API overhead I am considering for the PS4, and I have written this dozens of times, is about 2x.

I repeat my advice: you should start reading what I actually wrote instead of imagining things.
 

Cerb

Elite Member
Aug 26, 2000
This thread is like the monster in a horror movie.

No matter how many times you think it is dead, it just keeps coming back, looking just like it did before.

31 pages now of speculation about a piece of hardware that is not even on the market.
Oh, be quiet. If you keep it up, somebody will talk about braving exposure to the daystar, or communicating with females.
 

2is

Diamond Member
Apr 8, 2012
Of course, I love that it only took months to go from "my PC with a GTX 580 will destroy the PS4 on any game" to "the PS4 could outperform a pair of GTX 680s in SLI if the game uses more than 2 GB". Amazing!

And of course I can only agree with you once again. A next-gen game using more than 2 GB of VRAM is a "very fringe case". The early dev kits have more than 2 GB because Sony had an excess of memory modules and decided to charge developers for them. And developers such as Crytek asked Sony for 8 GB in the console because they plan to use less than 2...

Again, you're taking things out of context like you always do. I don't know if you're really that dense or that much of a fanboy or a bit of both.

In order for your 680 fantasy to come true, not only does the game need to use more than 2GB on PS4, but it also has to be ported so poorly to PC that anything less causes an issue.

Your fantasy, while not physically impossible, isn't likely to happen. Just like my fantasy that includes Jennifer Love Hewitt, Mila Kunis and Megan Fox is physically possible but not likely to happen. This is just an example to aid you in putting things INTO context, which you've shown an inability to do.

I can tell you're getting giddy at these make-believe scenarios (you're welcome btw), but the reality is that you won't actually see it happen. (sorry)

In your premature excitement, you also appear to have forgotten that the 8 GB is total system RAM, not memory dedicated to a frame buffer.
 

galego

Golden Member
Apr 10, 2013
No, there are not. There are only CPU draw calls. It's the CPU that handles the API, and it's the CPU whose time is wasted waiting on all those calls to complete. Vista never gaining massive market share basically caused DX9 to have a longer life than it otherwise would have, else it would be less of an issue.

http://www.google.com/search?q="gpu+draw+calls"

Moreover, as I said before, some CPU draw calls can affect GPU performance.
 

galego

Golden Member
Apr 10, 2013
I've already looked them over.

Instead of playing cat and mouse, please quote the relevant sections pertaining to gpu performance.

I never saw any.

From the introduction, page 1:

The Xbox 360's Xenos GPU has less than a tenth of the processing power of a top-end PC GPU, so why don't PC games look ten times better?

[...]

We often have at least ten times as much horsepower as an Xbox 360 or a PS3 in a high-end graphics card, yet it's very clear that the games don't look ten times as good. To a significant extent, that's because, one way or another, for good reasons and bad - mostly good - DirectX is getting in the way.

Huddy says that one of the most common requests he gets from game developers is: 'Make the API go away.'

And the entire page 2 is devoted to the API overhead issue; near the end of the page it mentions how one would obtain the best performance by programming the PS3's GPU close to the metal.
 

Jaydip

Diamond Member
Mar 29, 2010
Man, my head hurts after reading all this. It was fun though :p The PS4 is a good piece of hardware which can definitely outperform some midrange PCs at launch. But unlike the PS4, the PC is ever evolving, so it will be just a matter of time for midrange PCs to take the crown back.
 

Jaydip

Diamond Member
Mar 29, 2010
From the introduction, page 1:

And the entire page 2 is devoted to the API overhead issue; near the end of the page it mentions how one would obtain the best performance by programming the PS3's GPU close to the metal.

I can't believe people can quote something like this:

"The Xbox 360's Xenos GPU has less than a tenth of the processing power of a top-end PC GPU, so why don't PC games look ten times better?"

IQ and processing power don't scale linearly, period. Also, "looks ten times better" is a very subjective matter.
 

galego

Golden Member
Apr 10, 2013
In order for your 680 fantasy to come true, not only does the game need to use more than 2GB on PS4, but it also has to be ported so poorly to PC that anything less causes an issue.

Therefore we move from "the PS4 would outperform a pair of GTX 680s in SLI if the game used more than 2 GB of VRAM" to adding "if it also has..."

I love how the fantasy keeps escalating...

First, some said that the idea of an AMD APU in the PS4 was a fantasy.

After that, the idea that the PS4 could play games like a PC with an HD 7870 was a fantasy.

Then the fantasy moved to a PC with a GTX 680.

Now the fantasy is a pair of GTX 680s in SLI.


In your premature excitement, you also appear to have forgotten that the 8 GB is total system RAM, not memory dedicated to a frame buffer.

I know well that the 8 GB is total memory, but you omitted my point and also omitted that the dev kits have 8 GB of RAM and more than 2 GB for VRAM.

Do you maintain that PS4 games will use less than 2 GB for the GPU?
 

Cerb

Elite Member
Aug 26, 2000
http://www.google.com/search?q="gpu+draw+calls"

Moreover, as I said before, some CPU draw calls can affect GPU performance.
Draw calls send and commit some portion of a frame to a buffer. The programmer cannot avoid using them. In any given API there may be a limit to how much information may be sent per call. All such calls are effectively CPU overhead, and getting rid of them would require mark-up-like rendering data, which would be hard to optimize well.

I.e., a call that sets up a new object does not need to commit anything to the GPU's memory yet, but in DX9 it might have to. Doing so takes the CPU's time. The draw starts, the draw completes, and then the next draw starts. That leaves both the CPU and the GPU spending a lot of time not doing as much as they could. The API is called, and returns, inside the CPU. The thread is not going to move on to anything else until that call completes. Literally, the thread's program counter reaches a point where it makes the OS's API call, and does not itself increment until the OS's API call returns.
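
To picture what that serialization looks like in code, here is a minimal sketch of the per-object pattern described above. The function names are made-up stand-ins for a DX9-style API, not real Direct3D calls:

Code:
#include <vector>

// Hypothetical stand-ins for a DX9-style API (placeholder names,
// not real Direct3D signatures).
struct Matrix {};
struct Material {};
struct Mesh {};
struct Object { Matrix worldMatrix; Material material; Mesh mesh; };
void SetTransform(const Matrix&);
void SetMaterial(const Material&);
void DrawIndexed(const Mesh&);
void Present();

void RenderFrame(const std::vector<Object>& scene) {
    for (const Object& obj : scene) {
        // Each call crosses into the runtime/driver and only then
        // returns; the calling thread blocks at every one, so with
        // thousands of objects the CPU spends much of the frame here
        // while the GPU waits for work to be submitted.
        SetTransform(obj.worldMatrix);
        SetMaterial(obj.material);
        DrawIndexed(obj.mesh);
    }
    Present(); // flip the finished frame to the display
}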
 

galego

Golden Member
Apr 10, 2013
I can't believe people can quote something like this:

"The Xbox 360's Xenos GPU has less than a tenth of the processing power of a top-end PC GPU, so why don't PC games look ten times better?"

IQ and processing power don't scale linearly, period. Also, "looks ten times better" is a very subjective matter.

Why did you snip the part about developers saying "make the API go away"? Do you understand they say so because of the performance overhead? Why do you believe developers asked Sony for direct-to-the-metal programming (i.e., no API) for the PS4?
 

Aikouka

Lifer
Nov 27, 2001
The reason behind the PS4 having 8 GB of unified memory is that it runs x86 cores, so software can't use beyond the 3.75 GB limitation.

:confused:

Jaguar cores are AMD64-based (i.e. x86-64). It's been a heck of a long time since AMD64 came out, but if I remember correctly, a CPU implementing it can reference something around 2^44 bytes of memory (not 2^64). I believe that's the right number as it results in 16TB of memory, and I recall reading about the number 16 in relation to the new memory capacity.

Also 4k playback requires at least 4GB of its own.

Uhh... why? If that were true, then 1080p playback would require 1 GB of RAM, since it has a quarter of the pixels. In reality a 4K frame is tiny: 3840 × 2160 × 24 bits = 199,065,600 bits ≈ 24 MB.
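
A quick back-of-the-envelope check of that arithmetic (assuming 24 bits, i.e. 3 bytes, per pixel and ignoring alpha, padding, and extra buffering):

Code:
#include <cstdio>

// Size of a single decoded frame at 24 bits (3 bytes) per pixel.
// Real players add padding, several buffers, and decoder reference
// frames, but nothing remotely close to 4 GB.
int main() {
    const long long bpp = 3; // 24-bit color
    const long long fhd = 1920LL * 1080 * bpp;
    const long long uhd = 3840LL * 2160 * bpp;
    std::printf("1080p frame: %lld bytes (~%.1f MB)\n", fhd, fhd / 1048576.0);
    std::printf("4K frame:    %lld bytes (~%.1f MB)\n", uhd, uhd / 1048576.0);
    return 0;
}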

Speculation suggests this will improve gameplay performance; I for one agree (not drastically, but at least a few frames).

It would be nice to see some statistics on what sort of GPGPU work is going on in today's modern games. Is it simply limited to things such as PhysX, TressFX, etc.?

How PS4 Could Overpower a More Powerful PC

Pretty much summarizes the dozens of pages here

http://e-mpire.com/content.php/2056

I didn't even have to read past the first paragraph to spot sensationalism.

In addition, the components in PS4 are manufactured with high bandwidth in mind. Much higher than PC.
Not true. The memory buses in many of the higher-end graphics cards (especially from AMD) are definitely faster than the PS4's unified memory bus.
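
For reference, peak theoretical bandwidth is just the effective per-pin data rate times the bus width divided by 8. A quick sketch using the commonly published figures for these parts:

Code:
#include <cstdio>

// Peak theoretical bandwidth in GB/s: effective data rate per pin
// (Gbit/s) times bus width (bits), divided by 8 bits per byte.
// The rates and widths below are the commonly published specs.
double peakGBs(double gbitPerPin, int busBits) {
    return gbitPerPin * busBits / 8.0;
}

int main() {
    std::printf("PS4 (5.5 Gbps x 256-bit):     %.0f GB/s\n", peakGBs(5.5, 256)); // 176
    std::printf("GTX 680 (6.0 Gbps x 256-bit): %.0f GB/s\n", peakGBs(6.0, 256)); // 192
    std::printf("HD 7970 (5.5 Gbps x 384-bit): %.0f GB/s\n", peakGBs(5.5, 384)); // 264
    return 0;
}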

The PC GPUs have their own dedicated pool of GDDR5, while the CPU has its pool of system RAM that is usually a good bit slower.
This is true; however, the problem is with the takeaway, which is referenced further down. They seem to be suggesting that there's a problem with the CPU having less bandwidth. The fact of the matter is that there isn't a problem unless you're encoding or using an APU. We're talking about gaming and discrete cards, so it doesn't matter.

I don't think anyone is arguing about the unified memory architecture combined with an APU being more efficient for sharing data, but I still don't see any proof that this will magically make it a paragon of geometry-processing excellence. When it comes down to it, executing GPGPU commands faster may help you get to the point of building the frame sooner, but high-end GPUs still have more logical cores to actually build the geometry. No matter how many links you provide, you cannot necessarily overcome the hardware in these high-end, (relatively) expensive GPUs. The article does make a very obvious point (to the point where I'd issue them a Captain Obvious award): that the PS4 will be faster than a PC of the same price.
 

2is

Diamond Member
Apr 8, 2012
Therefore we move from "the PS4 would outperform a pair of GTX 680s in SLI if the game used more than 2 GB of VRAM" to adding "if it also has..."

I didn't "add" anything. That stipulation has always existed. I simply mentioned it this time around because it became clear that you are too dense to apply any sort of reasoning to this debate so I had to lay out the obvious.

There are two factors at play here. One is your comprehension, the other is your PS4/AMD/APU delusions of grandeur. Both of these factors are off the charts in opposite directions. It's important you not confuse one with the other which you appear to be doing here.

What do you hope to accomplish here? This thread is going EXACTLY like the last one, where you proved nothing and everyone else was telling you the same things they are here. You eventually exited the thread. What makes you think saying the same things to the same people in a different thread is going to be any different? That one went for what, a dozen pages? This one is going on three dozen. Do you not see a pattern?

It's clear you don't actually know what you're talking about. This is very evident in your draw call debate with Cerb, where he gave you a general technical description of how it works and you pulled up Google search results as your comeback. It's quite obvious that he knows what he's talking about and you don't. It's as if you expected him to say "oh crap, there are results with the words GPU draw call"; perhaps if he only knew as much as you did, that would be enough to put him where you think his place should be. But it didn't quite work out that favorably for you.

Are you trying to fool yourself? Because it doesn't look like you're fooling anyone else.
 

galego

Golden Member
Apr 10, 2013
Draw calls send and commit some portion of a frame to a buffer. The programmer cannot avoid using them. In any given API there may be a limit to how much information may be sent per call. All such calls are effectively CPU overhead, and getting rid of them would require mark-up-like rendering data, which would be hard to optimize well.

I.e., a call that sets up a new object does not need to commit anything to the GPU's memory yet, but in DX9 it might have to. Doing so takes the CPU's time. The draw starts, the draw completes, and then the next draw starts. That leaves both the CPU and the GPU spending a lot of time not doing as much as they could. The API is called, and returns, inside the CPU. The thread is not going to move on to anything else until that call completes. Literally, the thread's program counter reaches a point where it makes the OS's API call, and does not itself increment until the OS's API call returns.

^^ Bolding mine.

Thus, substituting a low-level API (a la libGCM) for the DX API, or eliminating the API completely, means the same GPU will give more performance. This is basically what I meant when I said that some CPU draw calls can affect GPU performance on PCs.

DX11 has improved performance a lot over DX9, but the overhead still exists on PCs. It is less, but it still exists.
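
One of the ways newer APIs and engines cut that per-object overhead is batching work into fewer calls, e.g. instancing. A minimal sketch of the idea, using hypothetical stand-in names loosely modeled on D3D11-style instanced drawing:

Code:
#include <cstddef>
#include <vector>

// Hypothetical stand-ins, loosely modeled on instanced drawing in
// D3D11; these are not real signatures.
struct Matrix {};
struct Mesh {};
void UploadInstanceData(const std::vector<Matrix>& transforms);
void DrawIndexedInstanced(const Mesh& mesh, std::size_t instanceCount);

// One API round trip per mesh type instead of one per object: the
// CPU-side overhead shrinks with the call count even though the GPU
// renders exactly the same geometry.
void RenderForest(const Mesh& tree, const std::vector<Matrix>& transforms) {
    UploadInstanceData(transforms);                // one buffer update
    DrawIndexedInstanced(tree, transforms.size()); // one draw call
}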
 

BallaTheFeared

Diamond Member
Nov 15, 2010
It's called a CPU bottleneck, on PCs.

It's nothing new and, with modern titles and modern hardware, for the most part not even a problem for multi-GPU setups.

This goes back to the "Performance loss where none is lost" argument of your past. It simply displays a total lack of technical understanding on your part.
 

Jaydip

Diamond Member
Mar 29, 2010
Why did you snip the part about developers saying "make the API go away"? Do you understand they say so because of the performance overhead? Why do you believe developers asked Sony for direct-to-the-metal programming (i.e., no API) for the PS4?

Honestly, you have no idea what you are talking about. There was a time when assembly was king and people believed you could achieve peak performance only with hand-written assembly code. Then came C/C++, which showed that if you have a good compiler and a strong language, the difference is mostly academic. Yes, there are still a couple of cases where assembly wins, but they are few and far between. If there were no API, it would be a nightmare for developers; I don't care who said this, it is stupid.
 