How the PlayStation 4 is better than a PC


Erenhardt

Diamond Member
Dec 1, 2012
You are bashing the good old consoles because the games look like shit. But that doesn't say anything about GPU performance. What is hurting your eyes is low-resolution textures, which are an effect of the 256MB of video memory. You can't do anything about that.

You can have a hi-poly object, but if you slap a low-res upscaled texture on it, it will look like crap. On the other hand, if you take a low-poly object and slap a nice 4K texture on it, suddenly it looks so much better... all those details, man!

If you want to compare GPU performance, you need to say what you are comparing:
There is more to it than just shader performance.
IMHO: We know too little about the PS4 to say anything. We can only say: bye bye, ugly textures.
"No one will need more than 637 kb of memory for a personal computer."
;)
 

Spjut

Senior member
Apr 9, 2011
The consoles' draw call advantage theoretically means the consoles can have more unique objects on screen at once. PC games can look better by using the extra GPU horsepower to increase the resolution, AA, etc., but they may be unable to match the next-gen consoles in having as much "unique stuff on screen" as possible.

I personally think Battlefield 4 under DX11.1 vs. the PS4/Xbox3 will be a good initial comparison between console hardware and PC hardware.
 

futurefields

Diamond Member
Jun 2, 2012
The consoles' draw call advantage theoretically means the consoles can have more unique objects on screen at once. PC games can look better by using the extra GPU horsepower to increase the resolution, AA, etc., but they may be unable to match the next-gen consoles in having as much "unique stuff on screen" as possible.

I personally think Battlefield 4 under DX11.1 vs. the PS4/Xbox3 will be a good initial comparison between console hardware and PC hardware.

Why DX11.1?

Is the difference over 11 that big? Shame on Microsoft, because I refuse Windows 8.
 

Cerb

Elite Member
Aug 26, 2000
The consoles' draw call advantage theoretically means the consoles can have more unique objects on screen at once.
Windows Vista came out in 2006, and OpenGL isn't dead. Today, Blizzard and some random MMO devs are stuck with engines that can't draw lots of trees efficiently. Neither prior console had enough RAM, GPU power, or GPU bandwidth to make much of that advantage, even though they could bypass DX9.

Which console is the better choice, the new PlayStation or the new Xbox?
That depends on the final choices of MS and Sony. MS is rumored to be sticking to an always-online requirement and per-account registration of all games (i.e., no used market, which is what all the DRM is really about, anyway). If they do that, all Sony has to do is not do that to have a far better option, regardless of hardware.
 

Spjut

Senior member
Apr 9, 2011
Why DX11.1?

Is the difference over 11 that big? Shame on Microsoft, because I refuse Windows 8.

I just thought that if we are to compare consoles and PCs, it's fairer if the PC versions are using the latest DirectX/OpenGL.

I don't know how much improvement DX11.1 can or will bring, but repi has mentioned a couple of times on his Twitter that DX11.1 is being used to further reduce CPU overhead, so there are obviously some worthwhile improvements over DX11.
 

RussianSensation

Elite Member
Sep 5, 2003
If the PS3 had the power of two 6800GTs, then why does it run COD4 at 1024x600?

Not a chance the PS3's GPU was that powerful.

Here are the specs:

7950GT = 550 MHz GPU, 24 TMUs, 8 vertex pipelines, 16 ROPs, 44.8 GB/s over a 256-bit memory bus
7800GT = 450 MHz GPU, 20 TMUs, 7 vertex pipelines, 16 ROPs, 32 GB/s over a 256-bit memory bus
PS3 (RSX) = 550 MHz GPU, 24 TMUs, 8 vertex pipelines, 8 ROPs, 22.4 GB/s over a 128-bit memory bus


We can say with near-100% certainty that the GPU inside the PS3 was slower than a 7800GT, much slower than a 7950GT, and miles slower than an X1950XTX.

Memory bandwidth and ROPs are some of the most critical components of a GPU. Halving both of those will drop GPU performance 30-40%.

Radeon X1950XTX 512MB (DX9.0c) = 27 VP
7950GT 256MB = 16.1 VP
7800GT 256MB = 12.3 VP
PS3 ~ 60% of a 7950GT = 9.66 VP, maybe 10 VP
6800GT = 8.8 VP
http://forums.anandtech.com/showthread.php?t=2298406
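
For anyone who wants to check the arithmetic, here is the estimate above as a quick Python sketch. The 0.6 scaling factor is the assumption stated above (halved ROPs/bandwidth costing roughly 40% versus a full 7950GT), not a measured number:

```python
# Back-of-the-envelope recreation of the VP estimate above. The 0.6
# factor is the assumption that halved ROPs and memory bandwidth cost
# the PS3's RSX roughly 40% versus a full 7950GT.
cards_vp = {
    "Radeon X1950XTX 512MB": 27.0,
    "7950GT 256MB": 16.1,
    "7800GT 256MB": 12.3,
    "6800GT": 8.8,
}
ps3_vp = cards_vp["7950GT 256MB"] * 0.6  # ~9.66 VP
print(f"PS3 (RSX) estimate: {ps3_vp:.2f} VP")
for name, vp in cards_vp.items():
    print(f"  {name}: {vp / ps3_vp:.2f}x the PS3 estimate")
```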

The Xbox 360's GPU was better, with ATI estimating it at roughly Radeon X1800XT 256MB (DX9.0c) level -- 16.2 VP -- but it was still crippled by 8 ROPs, a 100 MHz lower GPU clock, and thus nearly half the memory bandwidth of a 2900GT:
http://www.gpureview.com/show_cards.php?card1=537&card2=

People keep saying the GPUs in the PS3/360 were near top-of-the-line when they came out, and that's not accurate - it's one of the biggest myths that keeps persisting. By the time the PS3 came out, even before G80 dropped, the X1950XTX mopped the floor with it - at least 2.5x faster. The GPUs in the PS3/360 were nowhere near the X1950XTX series, as they were severely crippled in the ROP/memory-bandwidth area - arguably the most critical area for GPUs. The main reason the Xbox 360's GPU came out looking slightly better overall is its 512MB of console memory and the fact that it had a unified shader architecture competing against the fixed pixel/fragment pipelines of the G71 unit in the PS3.

Since modern high-end flagship GPUs use 185-240W of power, it was never reasonable to expect the PS4 to house such a GPU in a small box, regardless of price.
 

BallaTheFeared

Diamond Member
Nov 15, 2010
You can't just look at the hardware; you have to look at the game design.

Little to no AA and a low resolution take the bottleneck off the bandwidth and ROPs and focus the power on the other parts, where it's equal. The PS3 also has the Cell processor.

Games are designed to take advantage of texture fill, which the PS3 isn't lacking in. They also use streaming textures, which work well on consoles because of their low memory buffer and high draw call potential. Of course it looks, and is, terrible for PCs, which do not suffer from low buffer space and do not enjoy high draw call performance.
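
To make the streaming point concrete, here's a toy Python sketch (all names and numbers are illustrative, not from any real engine) of how a streamer decides what to keep resident in a small buffer:

```python
# Toy sketch of texture streaming: with a small memory budget, keep
# resident only the mip resolution each object needs at its current
# distance, and stream the rest in on demand.
import math

def resident_mip_resolution(distance, full_res=2048, base_distance=10.0):
    # Each doubling of distance beyond base_distance halves the texture
    # resolution worth keeping in the small console buffer.
    drops = max(0, int(math.log2(max(distance, base_distance) / base_distance)))
    return max(64, full_res >> drops)

for d in (5.0, 20.0, 80.0, 320.0):
    print(f"distance {d:>5}: keep {resident_mip_resolution(d)}px mip resident")
```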

They also moved to deferred rendering engines, which again aren't great for PCs and hate AA, but it allows them to continue to focus on the strengths of consoles. Anyone using Voodoo Power on PC to compare directly to consoles is simply delusional. VP consists of resolutions and AA modes consoles never see. It's unfortunate that users ignore the fact that consoles have much less pixel demand, as is evident from their design: they use far lower resolutions than any PC hardware is tested at, and they either don't use AA or get free AA through embedded DRAM not listed in the GPU specs.

Of course, when those games come to PC they are bumped in resolution and often get MSAA added, along with higher-bandwidth textures, more effects, and real lighting, which makes any comparison based on PC performance futile.
 

futurefields

Diamond Member
Jun 2, 2012
You can't just look at the hardware; you have to look at the game design.

Little to no AA and a low resolution take the bottleneck off the bandwidth and ROPs and focus the power on the other parts, where it's equal. The PS3 also has the Cell processor.

Games are designed to take advantage of texture fill, which the PS3 isn't lacking in. They also use streaming textures, which work well on consoles because of their low memory buffer and high draw call potential. Of course it looks, and is, terrible for PCs, which do not suffer from low buffer space and do not enjoy high draw call performance.

They also moved to deferred rendering engines, which again aren't great for PCs and hate AA, but it allows them to continue to focus on the strengths of consoles. Anyone using Voodoo Power on PC to compare directly to consoles is simply delusional. VP consists of resolutions and AA modes consoles never see. It's unfortunate that users ignore the fact that consoles have much less pixel demand, as is evident from their design: they use far lower resolutions than any PC hardware is tested at, and they either don't use AA or get free AA through embedded DRAM not listed in the GPU specs.

Of course, when those games come to PC they are bumped in resolution and often get MSAA added, along with higher-bandwidth textures, more effects, and real lighting, which makes any comparison based on PC performance futile.

Exactly - consoles have to make a lot of design compromises to achieve performance similar to a PC's, because they lack the brute power. Glad we agree!
 

Essence_of_War

Platinum Member
Feb 21, 2013
I got the image from Google. Can you prove that?

You brought the evidence; you defend it.

Is it from somewhere reputable, or was it just the first Google Images result that popped up? Do you trust the source at a level deeper than "if it's on the internet, it must be true"?
 

2is

Diamond Member
Apr 8, 2012
I think it's important to point out that Google is a search engine, not a source. I know it sounds obvious, but there appears to be at least one person using those terms interchangeably, and they are anything but.
 

Sleepingforest

Platinum Member
Nov 18, 2012
Don't forget that Google presents what it thinks you want. So if you type in things like "scene where PS4 equals PC" enough times, all the images will shift to reflect that.
 

MrK6

Diamond Member
Aug 9, 2004
This thread has gone completely inane. You're all arguing about which of two completely different machines is "better." "Stupidity" doesn't even approach the topic.

As far as the technical discussion goes, lower RAM bandwidth has much less of an impact at the lower resolutions consoles display at. It's an easy way to save space and power consumption with little if any performance hit. The same thing was done with mobile GPUs for years, until the latest iterations of GDDR5 came out.
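
As a rough illustration of that point (the overdraw and frame-rate figures here are assumptions, and this counts only color writes, ignoring Z and texture traffic):

```python
# Rough sketch of why render resolution dominates framebuffer traffic.
# Overdraw and fps values are illustrative assumptions; real numbers
# vary per game.
def color_write_traffic_gbps(width, height, fps=30, bytes_per_pixel=4, overdraw=3.0):
    return width * height * bytes_per_pixel * overdraw * fps / 1e9

print(f"1024x600 @30:  {color_write_traffic_gbps(1024, 600):.2f} GB/s")
print(f"1920x1080 @30: {color_write_traffic_gbps(1920, 1080):.2f} GB/s")  # ~3.4x the pixels
```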
 

Cerb

Elite Member
Aug 26, 2000
You can't just look at the hardware, you have to look at the game design.

Little to no AA and a low resolution take the bottleneck off the bandwidth and ROPs and focus the power on the other parts, where it's equal.
So going straight for inferiority from the start is a good thing? You can't go for a low resolution and then call anything equal. Bandwidth is a necessity for performance. It can come from RAM or it can come from cache, but not having it is crippling.

The PS3 also has the Cell processor.
And MS got its main CPU core :(. Sad all around, outside of FLOPS on spec sheets.

Games are designed to take advantage of texture fill, which the PS3 isn't lacking in. They also use streaming textures, which work well on consoles because of their low memory buffer and high draw call potential. Of course it looks, and is, terrible for PCs, which do not suffer from low buffer space and do not enjoy high draw call performance.
Except that they're mostly designed just like PC games, but with embedded tricks to make them work on lower-end hardware... often not very well, either. Frankly, the consoles never did have the GPU power to actually take advantage of not having to wait so often for draws, even with DX9. Not only do both upcoming consoles have enough GPU power for that, they can also start from scratch without using APIs that impose such inefficiencies.

They also moved to deferred rendering engines, which again aren't great for PCs and hate AA, but it allows them to continue to focus on the strengths of consoles.
What deferred rendering wasn't good for was AA and HDR, in DirectX 9.0c. Even so, I'll take FXAA and better lighting over poor lighting but nice AA for such games. Deferred rendering is perfectly good on PCs, and with DX10 it can support HDR and AA at the same time. Deferred rendering does require additional rendering passes, which is not good for raw performance on any hardware. A single frame might be split into as many as 3 passes, but this gives the 2nd and 3rd passes more information than the 1st pass had, since the whole scene's geometry (and possibly lighting info) is available to them; subsequent passes also may not need to process every pixel over again.
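
A minimal sketch of that pass structure, with toy data and made-up names (real engines use more buffers and more passes); the point is just the control flow of geometry pass then lighting pass:

```python
# Minimal sketch of a deferred pipeline's pass split.
from collections import namedtuple

Fragment = namedtuple("Fragment", "depth albedo normal")

def geometry_pass(fragments):
    """Pass 1: rasterize everything once, keeping only the nearest
    fragment per pixel. The result is the G-buffer."""
    gbuffer = {}
    for pixel, frag in fragments:
        if pixel not in gbuffer or frag.depth < gbuffer[pixel].depth:
            gbuffer[pixel] = frag
    return gbuffer

def lighting_pass(gbuffer, light_dirs):
    """Pass 2: shade each visible pixel once per light. No shading work
    is wasted on geometry that pass 1 proved to be hidden."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    return {pixel: sum(frag.albedo * max(0.0, dot(frag.normal, l)) for l in light_dirs)
            for pixel, frag in gbuffer.items()}

# Two fragments land on pixel (0, 0); only the nearer one gets lit.
frags = [((0, 0), Fragment(5.0, 0.8, (0, 0, 1))),
         ((0, 0), Fragment(2.0, 0.5, (0, 0, 1)))]
print(lighting_pass(geometry_pass(frags), [(0, 0, 1)]))  # {(0, 0): 0.5}
```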

Anyone using Voodoo Power on PC to compare directly to consoles is simply delusional. VP consists of resolutions and AA modes consoles never see. It's unfortunate that users ignore the fact that consoles have much less pixel demand, as is evident from their design.
A 1920x1080 display is a 1920x1080 display. The pixel demand has been identical. We're just willing to bitch about it and not buy into any acceptance of such horridness (an issue unique to the last generation of consoles).
 

BallaTheFeared

Diamond Member
Nov 15, 2010
So going straight for inferiority from the start is a good thing? You can't go for a low resolution and then call anything equal. Bandwidth is a necessity for performance. It can come from RAM or it can come from cache, but not having it is crippling.

You have some liberties when games are designed for your system and then cobbled together for another. Games are designed around a specific hardware set, at a certain resolution, with a specific FPS target. You can optimize perfectly for the exact piece of hardware you're targeting; with PC you just add a couple of generic tiers and port it out.

As for bandwidth: the higher your res, the more you need; the lower your res, the less you need. So comparing the specific specs of the 7900GT to the PS3 is disingenuous, for the reasons I've already stated.


And MS got its main CPU core :(. Sad all around, outside of FLOPS on spec sheets.

Luckily they got a more advanced GPU.

Except that they're mostly designed just like PC games, but with embedded tricks to make them work on lower-end hardware... often not very well, either. Frankly, the consoles never did have the GPU power to actually take advantage of not having to wait so often for draws, even with DX9. Not only do both upcoming consoles have enough GPU power for that, they can also start from scratch without using APIs that impose such inefficiencies.

They have less GPU power now, relative to the current PC market, than they did back then. What is being discussed is that, like before, PC games will quickly eclipse the settings console games ship with, providing higher IQ due to having more power.

What inefficiencies, draw calls? You have to do them regardless of API.

The question is whether the API overhead causes a noticeable loss in GPU power: given the same GPU and a modern processor, would the PC fail to match the IQ, res, and FPS offered by the exact same GPU in the console?

What deferred rendering wasn't good for was AA and HDR, in DirectX 9.0c. Even so, I'll take FXAA and better lighting over poor lighting but nice AA for such games. Deferred rendering is perfectly good on PCs, and with DX10 it can support HDR and AA at the same time. Deferred rendering does require additional rendering passes, which is not good for raw performance on any hardware. A single frame might be split into as many as 3 passes, but this gives the 2nd and 3rd passes more information than the 1st pass had, since the whole scene's geometry (and possibly lighting info) is available to them; subsequent passes also may not need to process every pixel over again.

The performance loss from AA is huge, and even AMD, which is in both consoles, is trying to push Forward+ over deferred rendering.

A 1920x1080 display is a 1920x1080 display. The pixel demand has been identical. We're just willing to bitch about it and not buy into any acceptance of such horridness (an issue unique to the last generation of consoles).

The point is that consoles aren't rendering a 1080p image on a 1080p screen.
 

Cerb

Elite Member
Aug 26, 2000
You have some liberties when games are designed for your system and then cobbled together for another. Games are designed around a specific hardware set, at a certain resolution, with a specific FPS target. You can optimize perfectly for the exact piece of hardware you're targeting; with PC you just add a couple of generic tiers and port it out.

As for bandwidth: the higher your res, the more you need; the lower your res, the less you need. So comparing the specific specs of the 7900GT to the PS3 is disingenuous, for the reasons I've already stated.
That's just describing why they resorted to doing the wrong thing: lack of bandwidth. The resolutions of the displays are largely identical between PCs and consoles. You need more bandwidth not merely for higher resolutions, but simply to process more values. Add a normal map? More bandwidth needed at the same res. Got a window, or water (transparency)? Ditto. And so on.

Last time, Sony didn't have enough bandwidth for games of the release era, while devs for the Xbox had to hope they could make good use of the small eDRAM.

Nintendo's decision to wait it out a few years (SD-only) was a very good one, as far as the games went (though they're set to become like Atari, or Sega, now :().

They have less GPU power now, relative to the current PC market, than they did back then. What is being discussed is that, like before, PC games will quickly eclipse the settings console games ship with, providing higher IQ due to having more power.
That didn't hurt the PS2, did it? Games need to be made within the scope of the hardware they run on. They tried to believe they had some kind of major edge, instead of focusing on making the games look good within the limitations they had, and the results rarely looked good.

What inefficiencies, draw calls? You have to do them regardless of API.
That is and has been a red herring since the start. DirectX 9 required draw calls, often flushing to GPU memory, for the simplest of things, like creating a texture object. It's not that you don't have to make draw calls; it's that supporting Windows XP and DX9 in the game engine was the real issue: DX9 required way too many such calls as scene complexity grew, while DX10 can batch most of the work into a small number of calls. The combination of OpenGL becoming a small niche for PC gaming (and the XB360 not using it) and Vista not being a major success is basically the draw call limitation issue. It's not there because consoles are on x86, or because of some lacking GPU feature, but because developers needed to support WinXP/DX9 on the Windows/PC side of things, and/or were continuing to use an older engine that they had lots of custom tools for.
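
As a toy illustration of that batching difference (the "calls" here are a made-up stand-in for API submissions, not real D3D code; the point is how call count scales with scene complexity):

```python
# Toy model of per-object draws vs. instanced draws.
def draw_forest_per_object(num_trees):
    # DX9-style: set per-object constants, then draw, for every tree.
    calls = 0
    for _ in range(num_trees):
        calls += 2  # one state/constant update + one draw call
    return calls

def draw_forest_instanced(num_trees):
    # DX10-style instancing: upload one buffer of per-tree transforms,
    # then a single draw covers every instance.
    return 2  # one buffer update + one DrawInstanced-style call

print(draw_forest_per_object(10_000))  # 20000 submissions, CPU-bound
print(draw_forest_instanced(10_000))   # 2 submissions, scene size irrelevant
```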

The question is whether the API overhead causes a noticeable loss in GPU power: given the same GPU and a modern processor, would the PC fail to match the IQ, res, and FPS offered by the exact same GPU in the console?
But that won't happen, will it? First, we won't have the same GPU - ours typically have more bandwidth to play with. Second, we'll typically get far more powerful GPUs before games that look better than the last console generation come out (a trend that using plain x86 might change). It's an enticing idea, but our sea of change, at some significant efficiency cost (mostly in power consumption), nets us better hardware quicker, every time, and will continue to do so. We had more raw power on the day of release last time, and we will this time, too. At least this time the consoles will have lots of VRAM, and the CPUs aren't paper tigers.

The performance loss from AA is huge, and even AMD, which is in both consoles, is trying to push Forward+ over deferred rendering.
Making scenes look better has a performance cost? Who knew? AA only remains "free" until some game starts giving some GPU a workout. It's happened over and over again on our PCs. There are also other ways to skin the cat - tiling is coming back, too, with DX10+ (IIRC, Frostbite is going this route). The point is, deferred rendering hasn't been a "bad on PC" vs. "good on console" thing. Pretty much every game that's made a point of it has ended up with much better lighting and/or other special shader effects, and the performance efficiency is also good on the PC. It's a trade-off that is orthogonal to PCs vs. consoles. It eats bandwidth and GPU time on both, for some combination of reduced GPU computational load, sometimes smaller intermediate buffers, and/or just better imagery.

The point is that consoles aren't rendering a 1080p image on a 1080p screen.
Exactly. They pushed for new hardware, marketed the use of it, and then decided to use it like crap. If they couldn't do all the fancy GPU work at native res, then they shouldn't have been doing it. 1024x600, 1152x640, 960x544, etc.? Inexcusable. They should have either gone SD-only (large amounts of upscaling can look OK; small amounts look bad) or stuck to real HD. The issue is that there is only a small number of acceptable resolutions (480p 4:3, 480p 16:9, 720p, 1080p) and one ideal (your TV's native, which is probably 768 lines, screwing you over either way :)).
 

sxr7171

Diamond Member
Jun 21, 2002
While several of you are bickering and baiting over the PS4 vs. PC, you're missing the best aspect of this new console. The PS4 and the 3rd-gen Xbox are going to raise the minimum bar for across-the-board performance.

Games being better coded for x64 and multi-threading, and using plenty of system RAM, are very good things.

Exactly. And because these games will run on systems which are basically PCs, some developers have already said they will develop their new games for PC and cut them down as necessary for the consoles. This is a great boon for PC gamers. Our games will not be ports with better textures and a few extra features; our games will be the real thing, while they port PC games down to consoles and optimize them as best they can for those consoles. So we are going to be in for a real treat this next generation.
 

galego

Golden Member
Apr 10, 2013
"For The Witness we're mostly interested in the base machine and how fast it is -- the fact it has faster RAM than a PC, which really helps in shuttling graphics resources around, and since it's not running a heavyweight operating system like Windows that gets in the way of your graphics," he told Edge. "Rendering stuff through Windows has an impact on performance. Since a console is just about games, that doesn't happen, and the equivalent game will run faster. And if you can target to specific hardware you can make it run faster, too."


Blow's not wrong, really. PCs need their memory to do all sorts of things, while a console is a dedicated machine, and if your game is targeted at a specific platform, it'll naturally run better. The PlayStation 3 knows all about that, famously acquiring rubbish ports of games like Modern Warfare 2 and Skyrim. If anything, the PS4 should mark an age where at least Bethesda can get it right. That'll be nice.

http://www.destructoid.com/jon-blow-ps4-games-to-run-faster-than-pc-248474.phtml
 

galego

Golden Member
Apr 10, 2013
Nobody still making the "Photoshopped pics" claim? OK, then let us continue with some more shots:

[Crysis 2 PC vs. Xbox 360 comparison screenshots: Crysis_2_PC_Xbox_360_Vergleich_1.jpg through _8.jpg]

Now, for those of you who still believe that 1.84 TFLOPS on a PC is the same as 1.84 TFLOPS on the PS4, and for those of you still denying the overhead mentioned by Carmack, Huddy, Lottes, and many others... could you name a seven-year-old gaming PC with 512 MB (RAM + VRAM) and the same GPU that can play Crysis 2 or a game of equivalent quality?
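
For reference, the 1.84 TFLOPS figure on both sides of this comparison is the same theoretical peak calculation (using the publicly reported PS4 GPU configuration); peak throughput says nothing about API overhead either way:

```python
# Where the 1.84 TFLOPS figure comes from: a theoretical peak that
# counts one fused multiply-add as two floating-point operations.
shader_cores = 1152   # 18 GCN compute units x 64 shaders each
clock_ghz = 0.8       # 800 MHz
flops_per_clock = 2   # one FMA = two FLOPs
peak_tflops = shader_cores * flops_per_clock * clock_ghz / 1000
print(f"{peak_tflops:.2f} TFLOPS")  # ~1.84
```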
 

Olikan

Platinum Member
Sep 23, 2011
Now, for those of you who still believe that 1.84 TFLOPS on a PC is the same as 1.84 TFLOPS on the PS4, and for those of you still denying the overhead mentioned by Carmack, Huddy, Lottes, and many others... could you name a seven-year-old gaming PC with 512 MB (RAM + VRAM) and the same GPU that can play Crysis 2 or a game of equivalent quality?

I think nobody in this thread really doubts that consoles are more efficient at the same FLOP count... but how that makes them equivalent to a >3 TFLOP GPU is the problem.
 