How the PlayStation 4 is better than a PC


galego

Golden Member
Apr 10, 2013
1,091
0
0
Actually, if you look at Steam, which has the most comprehensive database, most people have 4 GB of RAM, and a large and growing portion have 8 GB.

Most people also have 4 cores, compared to single-, dual-, or 6/8-core setups.

In that quote I was referring to VRAM on PCs. Most Steam gamers have only 1 GB of VRAM; the figures for 2 GB and 4 GB are 4.5% and 0.5%, respectively.

So again, the PS4 may even be a bit better than a high-end PC, though it is highly unlikely, as an i7-3970X coupled with 8 GB of 1800+ MHz memory and a Titan GPU is likely to still be faster than a PS4.

Note that Epic selected an i7 + 16 GB RAM + GTX-680 (2 GB VRAM) for its comparison demo with the PS4. The PS4 side of the demo used less than a third of the final specs. A Titan card is not 3x more powerful than the 680. Draw your own conclusion.
 

Aikouka

Lifer
Nov 27, 2001
30,383
912
126
Yeah, here everyone except you has an agenda. Carmack also has an agenda when he got frustrated because the same game running @ 60 fps on the console could not run @ 60 fps on a PC 10x more powerful.

http://www.computerandvideogames.co...d-that-pc-is-10-times-as-powerful-as-ps3-360/

Well, here's my question... what exactly are you trying to suggest with this? I read the article, and it sounds more like Carmack is displeased with having to be stuck using specific APIs on the PCs, which cause them to sometimes have to do more work. None of this means that a PC requires 10x the overall horsepower to produce something with the same graphical fidelity.
 

BallaTheFeared

Diamond Member
Nov 15, 2010
8,115
0
71
Yeah, here everyone except you has an agenda. Carmack also has an agenda when he got frustrated because the same game running @ 60 fps on the console could not run @ 60 fps on a PC 10x more powerful.

http://www.computerandvideogames.co...d-that-pc-is-10-times-as-powerful-as-ps3-360/

I said everyone has an agenda; mine, as a member of the PC master race, is to smite console hopes and dreams once every decade.

Carmack again?

http://arstechnica.com/gaming/2011/10/rage-on-pc-is-a-mess-but-you-can-fix-some-of-it/

RAGE is a disappointment on the consoles when it comes to gameplay. On the PC, the game may struggle to work at all, depending on your configuration. This is a sad state of affairs for a developer that was once known for its innovative, and in some cases revolutionary, PC releases.

Seems suspect that he made those comments just a few months before Rage came out :ninja:
 
Last edited:

toyota

Lifer
Apr 15, 2001
12,957
1
0
Good old Carmack: the man who brought back muddy 2003 textures yet found a way to make them use lots of VRAM, take up huge amounts of hard drive space, and even load slowly.
 
Last edited:

galego

Golden Member
Apr 10, 2013
1,091
0
0
Well, here's my question... what exactly are you trying to suggest with this? I read the article, and it sounds more like Carmack is displeased with having to be stuck using specific APIs on the PCs, which cause them to sometimes have to do more work. None of this means that a PC requires 10x the overall horsepower to produce something with the same graphical fidelity.

No. He clearly says it is an "overhead" issue. Read the entire thread, because the 10x overhead that Carmack mentions is the same 10x overhead issue mentioned by Lottes, Huddy, and me.

The new DirectX 11 has reduced the overhead, but it still exists. This is also mentioned by Huddy, Carmack, and me.

Once again... 2 TFLOP on a console does not equate to 2 TFLOP on a PC, just as 200 HP on a motorbike does not equate to 200 HP on a car.
 

SlickR12345

Senior member
Jan 9, 2010
542
44
91
www.clubvalenciacf.com
Carmack said that because his game Rage sucked, in the sense that it wasn't optimized for PCs at all, and he is now whining about it. But it's his problem, as they didn't offer low settings for Rage, so even mediocre PCs were forced to play at basically max settings.

To me that is a developer problem: not giving options and shipping weakly compressed, probably oversized textures instead of more, smaller ones.
 

2is

Diamond Member
Apr 8, 2012
4,281
131
106
Once again... 2 TFLOP on a console does not equate to 2 TFLOP on a PC, just as 200 HP on a motorbike does not equate to 200 HP on a car.

I find it funny that no one is saying or claiming this, yet you keep repeating it... I guess when your arguments have been destroyed time and again, you're left with little option but to (re)create an argument that never existed.
 

galego

Golden Member
Apr 10, 2013
1,091
0
0
What does this mean?

If it was only using 1/3 of the PS4's power, why didn't they enable Global Illumination like the PC version has?

They prepared the demo on an early dev kit with less performance, limited to only 1.5 GB of VRAM. They used non-final software tools and zero optimization. It is incredible that they came so close to an i7 + 16 GB + GTX-680 (2 GB VRAM).

Moreover, the PS4 demo runs @ 1080p, whereas the PC demo runs sub-1080p.
 
Last edited:

galego

Golden Member
Apr 10, 2013
1,091
0
0
I find it funny that no one is saying or claiming this, yet you keep repeating it... I guess when your arguments have been destroyed time and again, you're left with little option but to (re)create an argument that never existed.


http://i.imgur.com/UdpPV.gif

We're still not 4Chan
-ViRGE
 
Last edited by a moderator:

BallaTheFeared

Diamond Member
Nov 15, 2010
8,115
0
71
Read the quote by Ryan Shrout, then, because he says what I have been explaining to you for days.

It doesn't matter who says it; I'm not denying that it's true.

If it wasn't true, then the PS4 would be the world's worst computer; combining a low-power tablet CPU from the slowest x86 vendor and then stacking cores onto it would be nothing short of a Bulldozer for consoles.

The issue is your need to take things out of context to fit your agenda.


Let me try one last time to spell this out for you in the simplest way possible...

Draw calls are what they're discussing; a draw call is when the CPU tells the GPU to do something.

The CPU in the 360 is terrible compared to modern processors; however, in this case, console vs. PC, it's actually capable of producing more draw calls (thus not bottlenecking the graphics card) than the fastest x86-64 processor on the market overclocked to 5 GHz+.

For what you're saying to have any merit whatsoever, the current class of CPUs, from AMD to Intel, from FX to i5, would need to be incapable of pushing a 7870 to 30 fps at 1080p.

You haven't produced a single shred of evidence to support your claims in this thread; this is evident from your need to take comments by others out of context and spin them in a fashion that supports your point of view.

The real truth of the matter is that individual draw calls are the brute-force way to do textures; it's a poor way to program. You can batch draw calls to alleviate a lot of the overhead. This is why, despite the fact that the Xbox 360 can trounce an i7-3770K in draw calls, games like BF3, which use batched calls, look worlds better with far greater draw distances than either console, despite the consoles' ability to churn out far more draw calls than any gaming PC currently could ever dream of.
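A minimal C++/OpenGL sketch of the batching idea described above (illustrative only, not from the thread; the names vao, shader, u_model, indexCount, and the pre-uploaded per-instance matrix buffer are assumptions):

Code:
#include <GL/glew.h>
#include <vector>

// Naive path: one draw call per object. Each call pays the full
// driver/API overhead on the CPU (validation, state tracking, etc.).
void drawNaive(GLuint vao, GLuint shader, GLsizei indexCount,
               const std::vector<const float*>& modelMatrices)
{
    glUseProgram(shader);
    glBindVertexArray(vao);
    GLint loc = glGetUniformLocation(shader, "u_model");
    for (const float* m : modelMatrices) {
        glUniformMatrix4fv(loc, 1, GL_FALSE, m);    // per-object state change
        glDrawElements(GL_TRIANGLES, indexCount,
                       GL_UNSIGNED_INT, nullptr);    // one API call per object
    }
}

// Batched path: a single instanced draw covers every copy of the mesh.
// Per-instance transforms live in a buffer bound as a vertex attribute,
// so the CPU-side submission overhead is paid once instead of N times.
void drawBatched(GLuint vao, GLuint shader, GLsizei indexCount,
                 GLsizei instanceCount)
{
    glUseProgram(shader);
    glBindVertexArray(vao);
    glDrawElementsInstanced(GL_TRIANGLES, indexCount,
                            GL_UNSIGNED_INT, nullptr, instanceCount);
}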

When building a gaming PC the most important piece of hardware for graphics fidelity is the GPU. This was true in 2005, it was true before then, it's true now, and it will probably be true until something else happens.

Nobody is bagging on the PS4 because of its anemic AMD processor; everyone except you, it seems, is aware that the processor in a console, due to less API overhead, is capable of doing much more with much less than in a PC. The problem for you, and for consoles, is that the important piece of hardware, in this case the GPU, does not receive the same benefit. That's the reason Core 2 and G80 blew the consoles out of the water in 2006 (despite the 360 being capable of doing 15-20 times more draw calls).

The fundamental problem you're failing to grasp here is that the reported GPU in the PS4 is between the 7850 and the 7870 performance-wise. There is no magic pixie dust for anyone to sprinkle on that GPU to increase its actual performance past that. It is what it is, nothing more.
 
Last edited:

galego

Golden Member
Apr 10, 2013
1,091
0
0
Draw calls are what they're discussing; a draw call is when the CPU tells the GPU to do something.

The CPU in the 360 is terrible compared to modern processors; however, in this case, console vs. PC, it's actually capable of producing more draw calls (thus not bottlenecking the graphics card) than the fastest x86-64 processor on the market overclocked to 5 GHz+.

For what you're saying to have any merit whatsoever, the current class of CPUs, from AMD to Intel, from FX to i5, would need to be incapable of pushing a 7870 to 30 fps at 1080p.

You haven't produced a single shred of evidence to support your claims in this thread; this is evident from your need to take comments by others out of context and spin them in a fashion that supports your point of view.

The real truth of the matter is that individual draw calls are the brute-force way to do textures; it's a poor way to program. You can batch draw calls to alleviate a lot of the overhead. This is why, despite the fact that the Xbox 360 can trounce an i7-3770K in draw calls, games like BF3, which use batched calls, look worlds better with far greater draw distances than either console, despite the consoles' ability to churn out far more draw calls than any gaming PC currently could ever dream of.

When building a gaming PC the most important piece of hardware for graphics fidelity is the GPU. This was true in 2005, it was true before then, it's true now, and it will probably be true until something else happens.

Nobody is bagging on the PS4 because of its anemic AMD processor; everyone except you, it seems, is aware that the processor in a console, due to less API overhead, is capable of doing much more with much less than in a PC. The problem for you, and for consoles, is that the important piece of hardware, in this case the GPU, does not receive the same benefit. That's the reason Core 2 and G80 blew the consoles out of the water in 2006 (despite the 360 being capable of doing 15-20 times more draw calls).

The fundamental problem you're failing to grasp here is that the reported GPU in the PS4 is between the 7850 and the 7870 performance-wise. There is no magic pixie dust for anyone to sprinkle on that GPU to increase its actual performance past that. It is what it is, nothing more.

Your posts show that either you don't read or you ignore what you read.

We (at least I) were discussing OS/API/driver overheads. GPU draw calls were only introduced as one example of overhead at the API level. If you read some of the quotes given before, you would find the word "example".

We are discussing GPU performance losses on the PC due to overheads. This is the reason why you don't need the same GPU on a console as on a PC.

But you ignore that and continue insisting that the PS4 GPU (1.84 TFLOP) performs like a PC GPU between the 7850 (1.76 TFLOP) and the 7870 (2.56 TFLOP). You also believe that the GPU in the PS3 performs like the GPU in a PC of seven years ago, but as I said above:

2 TFLOP on a console does not equate to 2 TFLOP on a PC, just as 200 HP on a motorbike does not equate to 200 HP on a car.
I am curious. Do you also believe that a motorbike with 100 HP performs like a 100 HP car? Or is your misguided belief only for consoles?

You mention batched calls... Again, if you read what was said before, you would easily find that batched calls reduce the overhead but do not eliminate it. It is reduced to something like 2x, which is precisely the factor I have mentioned before in this thread.

Now look at this: 1.84 TFLOP x 2 = 3.68 TFLOP

Precisely, Epic selected a GTX-680 (3.1 TFLOP) for their PS4-PC comparison. Magic? No.

Finally, you fail to understand once again that the PS4 uses an APU with HSA and hUMA, two new technologies that improve the performance of the GPU. I believe it is very easy to grasp:

GPU + HSA + hUMA != GPU
 
Last edited:

BallaTheFeared

Diamond Member
Nov 15, 2010
8,115
0
71
We (at least I) are discussing OS/API/driver overheads. GPU draw calls were only introduced as one example of overhead at the API level. If you read some of the quotes given before, you would find the word "example".

We are discussing GPU performance losses on the PC due to API overhead. This is the reason why you don't need the same GPU on a console as on a PC.

But you ignore that and continue insisting that the PS4 GPU (1.84 TFLOP) performs like a PC GPU between the 7850 (1.76 TFLOP) and the 7870 (2.56 TFLOP). You also believe that the GPU in the PS3 performs like the GPU in a PC of seven years ago, but you are wrong. As I said above:

I am curious. Do you also believe that a motorbike with 100 HP performs like a 100 HP car? Or is your misguided belief only for consoles?

You mention batched calls... Again, if you read what was said before, you would easily find that batched calls reduce the overhead but do not eliminate it. It is reduced to something like 2x, which is precisely the factor I have mentioned before in this thread.

Finally, you fail to understand once again that the PS4 uses an APU with HSA and hUMA, two new technologies that improve the performance of the GPU. I believe it is very easy to grasp:

GPU + HSA + hUMA != GPU

Your first paragraph is redundant after the second; they both say the same thing, but the first attempts to refute what the second claims to do.

The second paragraph uses the same tactics you use in the CPU forum to bash Intel and praise AMD. That is to say, you state something as a fact despite it not being one, and after pages of discussing the same thing you are unable to produce any evidence of it being one.

I'm not ignoring anything; I'm stating that the GPUs in the 360 and in the PS4 are what they are, and there is no magic performance gain. I could take out my 9800 GT and easily beat anything the consoles can put out. That's not even a doubling in performance, and there is already hardware out now on the PC that is faster relative to the 360 than the G80 was. You have produced nothing to refute that; you're making claims here. I cannot disprove God when there is no proof of existence, just like I can't disprove a performance gain that has never been documented or proven, not by you in this thread, or by anyone else ever.


<The longer a thread becomes the higher the likelihood of an unrelated car analogy>

You don't need to eliminate it; eliminating it would mean the CPU is idle on draw calls. How can you not grasp this yet still comment on it relentlessly? It only needs to be reduced to the point where it is no longer preventing the GPU from operating at the desired fps: on consoles that's typically 30 fps, on PCs that's typically 60. Modern Intel and some AMD CPUs have no problem with draw calls in gaming, especially once overclocked. You still can't figure out that draw-call performance only matters once you don't have enough, much like RAM.
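A back-of-envelope C++ sketch of that point (the per-call costs and the call count below are assumed placeholder numbers, not measurements from the thread): the claimed 10x overhead only matters if CPU submission time actually eats into the frame budget or starves the GPU.

Code:
#include <cstdio>

int main() {
    const double frameBudgetMs = 1000.0 / 60.0; // 60 fps PC target
    const double pcCostUs      = 5.0;   // assumed PC cost per draw call
    const double consoleCostUs = 0.5;   // assumed console cost (the "10x" claim)
    const int    callsPerFrame = 2000;  // assumed scene complexity

    double pcMs      = callsPerFrame * pcCostUs / 1000.0;
    double consoleMs = callsPerFrame * consoleCostUs / 1000.0;

    std::printf("frame budget: %.1f ms\n", frameBudgetMs);
    std::printf("PC submission: %.1f ms, console submission: %.1f ms\n",
                pcMs, consoleMs);
    // With these numbers both fit comfortably inside the budget, so the 10x
    // difference changes nothing; it only starts to matter once the PC-side
    // submission time approaches the frame budget.
    return 0;
}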


Sure it does, whatever you say, JF_AMD. I'm guessing IPC went up too, right? Make some more baseless, unconfirmed, PR-spun statements, please; it's been a while since we've had a discussion together.
 
Last edited:

2is

Diamond Member
Apr 8, 2012
4,281
131
106
Recreating a non existant argument again galego? You must be getting desperate, it usually takes you longer than just a few minutes to repeat the same irilivemt line.
 

Headfoot

Diamond Member
Feb 28, 2008
4,444
641
126
lazygamer.net said:
Thirdly, said Cerny, "The original AMD GCN architecture allowed for one source of graphics commands, and two sources of compute commands. For PS4, we've worked with AMD to increase the limit to 64 sources of compute commands -- the idea is if you have some asynchronous compute you want to perform, you put commands in one of these 64 queues, and then there are multiple levels of arbitration in the hardware to determine what runs, how it runs, and when it runs, alongside the graphics that's in the system."

I'm highly skeptical of the inference that, because AMD's GCN has three work queues, all PC graphics architectures also have only three work queues.
Case in point, from nVidia's GK110 whitepaper: Hyper-Q
nVidia GK110 whitepaper said:
Hyper-Q enables multiple CPU cores to launch work on a single GPU simultaneously, thereby dramatically increasing GPU utilization and significantly reducing CPU idle times. Hyper-Q increases the total number of connections (work queues) between the host and the GK110 GPU by allowing 32 simultaneous, hardware-managed connections (compared to the single connection available with Fermi).

nVidia was claiming some pretty big efficiency improvements in compute from adding Hyper-Q, so clearly this move on the PS4 is aimed at compute. It should provide a good efficiency improvement. However, the PS4 will be second to market with this feature, as you can already get it on any GK110 card.

Source: http://www.nvidia.com/content/PDF/kepler/NVIDIA-Kepler-GK110-Architecture-Whitepaper.pdf
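To make the "work queues" idea concrete, here is a small host-side C++ sketch against the CUDA runtime (illustrative only; the stream count and the async copies standing in for real compute kernels are assumptions, and it presumes a Hyper-Q-capable GK110-class card):

Code:
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    const int    numStreams = 8;        // independent work queues
    const size_t bytes      = 1 << 20;  // 1 MiB of work per queue

    cudaStream_t streams[numStreams];
    float *hostBuf[numStreams], *devBuf[numStreams];

    for (int i = 0; i < numStreams; ++i) {
        cudaStreamCreate(&streams[i]);
        cudaMallocHost((void**)&hostBuf[i], bytes); // pinned, so copies overlap
        cudaMalloc((void**)&devBuf[i], bytes);
    }

    // Work submitted to different streams has no ordering between streams.
    // On Hyper-Q hardware those queues map to separate hardware connections
    // instead of being serialized through a single one; real compute kernels
    // would be launched on these same streams.
    for (int i = 0; i < numStreams; ++i) {
        cudaMemcpyAsync(devBuf[i], hostBuf[i], bytes,
                        cudaMemcpyHostToDevice, streams[i]);
    }

    for (int i = 0; i < numStreams; ++i) {
        cudaStreamSynchronize(streams[i]);
        cudaStreamDestroy(streams[i]);
        cudaFreeHost(hostBuf[i]);
        cudaFree(devBuf[i]);
    }
    std::printf("submitted work on %d independent streams\n", numStreams);
    return 0;
}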
 
Last edited:

BrightCandle

Diamond Member
Mar 15, 2007
4,762
0
76
Balla is absolutely right. Draw-call overhead only matters when you can't keep up, and in this case PCs really can keep up. In many ways it only matters because it wastes CPU cycles and could potentially limit the max FPS. Modern PC games do not have a problem getting to 60 or even 120 fps when the GPU bottleneck is removed, so the 10x overhead isn't really a problem. It does force the game to push its game-world updates onto a separate thread, and other aspects of the game also move off the main thread to give the DX interface maximum performance, but that is part of the complexity of using many cores.

You could argue that the overhead effectively costs a PC one core, as one core is entirely spent interfacing with DirectX and rendering the scene, but that is normally the only thread that gets remotely high CPU load in a lot of games anyway.

Once the draw calls are done, everything else is on the GPU: the draw calls tell it how to set up a scene, and then the GPU takes over and has to actually draw it, which is where the raw specs come in. The overhead doesn't factor into the rendering performance at all. The great majority of that compute number then goes into post-processing effects and anti-aliasing. It's here where the PS4 is at 7850 levels of performance, and it's this that will dominate its image quality, as presumably its fixed-function hardware is the same as AMD's usual as well. There is no magic speed-up pill for this part; the TFLOPS delivered are all you have, and they limit how much post-processing you can do at 30 fps.

The PS4 targeting 30 fps will, however, give it nearly double the time to render each image, and that roughly doubles the effective power per image. That will get it a long way towards a high-end PC, but obviously at the cost of a less smooth image.
 

galego

Golden Member
Apr 10, 2013
1,091
0
0
Recreating a non existant argument again galego? You must be getting desperate, it usually takes you longer than just a few minutes to repeat the same irilivemt line.

existant? irilivemt?... Who is "getting desperate" to post my dear friend?
 

blckgrffn

Diamond Member
May 1, 2003
9,686
4,346
136
www.teamjuchems.com
The Gamasutra article I linked to way back specifically called out some rendering techniques they would be able to do on the PS4 that are not possible on the PC, and that would elevate its relative performance over the long term.

This is due to the granularity of compute vs. rendering they are able to achieve, and the ability to use a small bit of compute to markedly improve the efficiency of the rendering.

I don't understand the jargon enough to know whether it was BS or not, but the reader comments (there), which I assume are more in the know, seemed positive.
 

BallaTheFeared

Diamond Member
Nov 15, 2010
8,115
0
71
The Gamasutra article I linked to way back specifically called out some rendering techniques they would be able to do on the PS4 that are not possible on the PC that would elevate its level of relative performance over the long term.

This is due to granularity of compute vs rendering they are able to achieve, and the ability to use a small bit of compute to markedly improve the efficiency of the rendering.

I don't understand the jargon enough to know whether it was BS or not, but the reader comments (there) which I assume are more in the know seemed positive.

Yes, I read that; compute in this case seems to be directed towards physics.
 

galego

Golden Member
Apr 10, 2013
1,091
0
0
Balla is absolutely right. Draw call overhead only matters when you can't keep up, and in this case PCs really can keep up. In many ways it only matters because it wastes CPU cycles and could potentially limit the max FPS. Modern PC games do not have a problem getting to 60 or even 120 fps when the GPU bottleneck is removed, thus the overhead of 10x isn't really becoming a problem. It does force the game to push its game world updates onto a separate thread and other aspects of the game also move off the main thread to give the DX interface maximum performance but that is part of the complexity of using many cores.

Lottes says just the opposite:

The real reason to get excited about a PS4 is what Sony as a company does with the OS and system libraries as a platform, and what this enables 1st party studios to do, when they make PS4-only games. If PS4 has a real-time OS, with a libGCM style low level access to the GPU, then the PS4 1st party games will be years ahead of the PC simply because it opens up what is possible on the GPU. Note this won't happen right away on launch, but once developers tool up for the platform, this will be the case. As a PC guy who knows hardware to the metal, I spend most of my days in frustration knowing damn well what I could do with the hardware, but what I cannot do because Microsoft and IHVs wont provide low-level GPU access in PC APIs. One simple example, drawcalls on PC have easily 10x to 100x the overhead of a console with a libGCM style API...
And he works for the competition (Nvidia)...

The PS4 targeting 30 fps will, however, give it nearly double the time to render each image, and that roughly doubles the effective power per image. That will get it a long way towards a high-end PC, but obviously at the cost of a less smooth image.

The high-end PC with the i7 and the GTX-680 can only render the Elemental demo @ 30 fps when the image quality is reduced to sub-1080p. If the high-end PC is forced to run at 1080p, then it cannot achieve 30 fps.
 
Last edited:

R0H1T

Platinum Member
Jan 12, 2013
2,583
164
106
I am excited for the new consoles. I just don't let my excitement fool me into making claims like this APU being 2.5x more powerful than a 680 graphically and matching an i7 computationally.
Perhaps some things get lost in translation, but following the thread from the first page, what I see is that the PS4 will deliver 2~3x the performance without the OS overhead that PCs have to suffer; in other words, with all the raw performance metrics being equal, a PS4 should theoretically deliver much better results than an equivalent PC-based gaming system!
 

Erenhardt

Diamond Member
Dec 1, 2012
3,251
105
101
Yes I read that, compute in this case seems to be directed towards physics.

Can it dig? With all these features improving GPGPU, it may be a good mining device. Imagine if it mines bitcoins while you are at work, or sleeping, or just not playing games. Then it could pay for itself over its lifetime!
 

blckgrffn

Diamond Member
May 1, 2003
9,686
4,346
136
www.teamjuchems.com
Yes I read that, compute in this case seems to be directed towards physics.

That wasn't what I took from the full interview.

Compute was going to be used to cull unseen portions of the scene before they were rendered (as far as I could tell). Because you only render what is necessary, the perceived performance would (could) be much higher.

The article mentioned this would likely only be optimized for in exclusive titles, and only a year or two into the console's lifetime, but it would keep its perceived level of performance high for a longer period of time.

Because of HSA and the optimizations made to GCN for this application, the PS4 architect was certain this type of application performance optimization would not be available on other platforms for some time.

This seems to dovetail with their desire for longer life cycles (the ability to extract a higher level of perceived performance from current-gen hardware vs. other competing platforms).

It seems clever and interesting to me. For folks claiming the PS4 is boring, I think it certainly has some interesting aspects. Like the Cell, there are some exciting possibilities. Unlike the Cell, the PS4 will be a solid piece of hardware straight away regardless.
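For readers wondering what that culling step looks like, here is a plain C++ sketch of the idea (my reading of the article, not its code; the bounding-sphere test and data layout are assumptions, and on the PS4 the same filter would presumably run as asynchronous GPU compute rather than on the CPU):

Code:
#include <vector>
#include <cstddef>

struct Vec3   { float x, y, z; };
struct Plane  { Vec3 n; float d; };            // n.p + d >= 0 means "inside"
struct Object { Vec3 center; float radius; };  // bounding sphere

static float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Spend a little compute up front: keep only objects whose bounding sphere
// intersects the view frustum, so the renderer only pays for visible objects.
std::vector<size_t> cullVisible(const std::vector<Object>& objects,
                                const Plane frustum[6])
{
    std::vector<size_t> visible;
    for (size_t i = 0; i < objects.size(); ++i) {
        bool inside = true;
        for (int p = 0; p < 6; ++p) {
            if (dot(frustum[p].n, objects[i].center) + frustum[p].d
                < -objects[i].radius) {        // fully behind this plane
                inside = false;
                break;
            }
        }
        if (inside) visible.push_back(i);      // only these get draw calls
    }
    return visible;
}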
 
Last edited:
Status
Not open for further replies.