
How the PlayStation 4 is better than a PC


2is

Diamond Member
Apr 8, 2012
4,281
131
106
Wow.

Games are written from the ground up not to saturate the bus. Why is that difficult to understand?



So you're saying the developers looked at PC hardware and had a conference and it went something like this:

Developer A: Guys, how are we going to get around this slow PCIe bus?
Developer B: Let's just use 25% of it!?
Developer A: Great idea!

Never mind the fact that PC games are modular in that you can adjust settings to suit your hardware capabilities.
 

galego

Golden Member
Apr 10, 2013
1,091
0
0
CAN it outperform SLI 680s? Yeah, potentially it can, if your SLI 680s are 2GB cards and the game uses more than 2GB of VRAM. Short of that, there's no way it's going to outperform a pair of SLI 680s.

I really love to see how the performance of the new PS4 changes so radically before it is even released... :whistle:

First I was told that the PS4 could only play tablet-like games.

After that, it could play games like a PC with a graphics card weaker than an HD 7870.

Then that moved to GTX 680-level performance "with optimization".

Now, magically, the PS4 could outperform a pair of GTX 680s in SLI if "the game uses more than 2GB of VRAM".

Did I mention that early dev kits have 8 GB RAM and more than 2 GB VRAM? If I didn't before, then I mention this little detail now.
 

2is

Diamond Member
Apr 8, 2012
4,281
131
106
I really love to see how the performance of the new PS4 changes so radically before it is even released... :whistle:

First I was told that the PS4 could only play tablet-like games.

After that, it could play games like a PC with a graphics card weaker than an HD 7870.

Then that moved to GTX 680-level performance "with optimization".

Now, magically, the PS4 could outperform a pair of GTX 680s in SLI if "the game uses more than 2GB of VRAM".

Did I mention that early dev kits have 8 GB RAM and more than 2 GB VRAM? If I didn't before, then I mention this little detail now.

Things change as people learn more about them. PS3 games looked far worse than their 360 counterparts early on, and now they are pretty equal for the most part. And don't get excited: I'm not saying it will outperform a pair of 680s. I'm indulging the other side (you, mainly) and providing a very fringe case where that could theoretically happen. I'd say the odds of that happening are about as good as my winning the Powerball. And to give you an idea of how slim I think those odds are, I don't even play Powerball, so I'd have to find a winning ticket floating in the wind.

The difference between you and me is that you read something you want to hear and roll with it. I look at that same bit of information, analyze it, and try to figure out in what way it could be possible, and I provided a scenario that COULD happen, though one that is highly unlikely TO happen.

And I wouldn't worry about what you've mentioned. I have no doubt that anything you have to say, you've said at least half a dozen times already. ;)
 

Rakehellion

Lifer
Jan 15, 2013
12,181
35
91
Never mind the fact that PC games are modular in that you can adjust settings to suit your hardware capabilities.

Holy shit. Computer Science 101?

After data is transferred across the bus, it still has to be processed, which is the bulk of the work.

Homework: Does upgrading your memory from PC3-1066 to PC3-2133 double your framerates? Why or why not?

Bonus question: If you write a game that targets a 4Gbit graphics bus and only a 1Gbit bus is available, what happens to your framerates?
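For anyone who wants rough numbers, here's a quick back-of-envelope sketch (Python; the per-frame traffic figure is an illustrative assumption, not a measurement of any real game) of why the bus transfer is a small slice of the frame and the processing that follows is the bulk of the work:

Code:
# Back-of-envelope: how long does a frame's worth of PCIe traffic take,
# compared with the frame budget? All figures below are assumptions.
PCIE2_X16_GBPS = 8.0           # ~8 GB/s usable, PCIe 2.0 x16
PCIE3_X16_GBPS = 15.75         # ~15.75 GB/s usable, PCIe 3.0 x16
FRAME_BUDGET_MS = 1000.0 / 60  # ~16.7 ms per frame at 60 fps

# Assume the game pushes ~10 MB of new data over the bus each frame
# (dynamic buffers, draw data, streamed textures), which is a generous guess.
traffic_per_frame_gb = 0.010

for name, bw_gbps in (("PCIe 2.0 x16", PCIE2_X16_GBPS),
                      ("PCIe 3.0 x16", PCIE3_X16_GBPS)):
    transfer_ms = traffic_per_frame_gb / bw_gbps * 1000.0
    print(f"{name}: {transfer_ms:.2f} ms of a {FRAME_BUDGET_MS:.1f} ms frame")

# ~1.3 ms vs ~0.6 ms out of 16.7 ms: the copy is cheap either way, and the
# shading/compute that follows dominates. That is also why doubling your
# system RAM clock doesn't come close to doubling framerates.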
 

galego

Golden Member
Apr 10, 2013
1,091
0
0
galego, sorry but the gpu is 7850 level from a hardware perspective and that is a fact. I did not say that it could not be way more efficient though which of course it will be. the laughable part is that you think it will be 10 times more efficient. AGAIN they claim the current consoles were so efficient too yet the gpu in there is not even twice as fast as a desktop equivalent even after all this time to work with it. funny how you just ignored that part.

The laughable part is that I did not say the GPU in the PS4 will be 10 times more efficient. I think you should start reading what I actually wrote instead of imagining things...
 

2is

Diamond Member
Apr 8, 2012
4,281
131
106
Holy shit. Computer Science 101?

After data is transferred across the bus, it still has to be processed, which is the bulk of the work.

Homework: Does upgrading your memory from PC3-1066 to PC3-2133 double your framerates? Why or why not?

Bonus question: If you write a game that targets a 4Gbit graphics bus and only a 1Gbit bus is available, what happens to your framerates?

Processed? So the processing is the bottleneck now? Not the bus?

Actually you're right about that! The processing IS the bottleneck. That's why if I upgrade from a 660 to a Titan, I see a performance gain while upgrading from PCIe 2.x to 3.x doesn't!

It took a small village, but I'm glad you're starting to learn something.
 

toyota

Lifer
Apr 15, 2001
12,957
1
0

Rakehellion

Lifer
Jan 15, 2013
12,181
35
91
http://www.opengl.org/wiki/Performance

The Frame Time is not proportional to the amount of things that you render

Bus bandwidth: There is a finite limit to the rate at which data can be sent from the CPU to the graphics card. If you require too much data to describe what you have to draw - then you may be unable to keep the graphics card busy simply because you can't get it the data it needs.

Also, the bulk of what a modern game is doing on the GPU is fragment shader processing. Vertex and other operations are trivial in comparison.
 

Rakehellion

Lifer
Jan 15, 2013
12,181
35
91
Actually you're right about that! The processing IS the bottleneck. That's why if I upgrade from a 660 to a Titan, I see a performance gain while upgrading from PCIe 2.x to 3.x doesn't!

Exactly the way it was intended. Because Nvidia is not selling buses.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
Chicken and egg problem.

No game requires 8GB of VRAM yet because it doesn't exist.
Except with weak IGPs, of course.
No game requires PCIe 32x because it doesn't exist.
It probably won't, either. PCIe will get a bit faster, but 32x wouldn't take care of the bus problem, which is one of time. It's not how many GBs can be sent each way per second, but how long it takes for any data to go there, be used, and a result come back. I.e., forget gigabytes: how much time will it take for, say, 640 bytes? That's where any and every peripheral bus has been, is now, and will be, slow. Bandwidth is really fine as it is. We need PCIe 4.0 more for 1x-4x devices than for 16x ones.
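To put a toy number on that (Python; the round-trip latency figure is an order-of-magnitude assumption, not a measurement of any particular platform):

Code:
# How long does it take to send 640 bytes to the card and get a result back?
# The bandwidth term is tiny; the fixed round-trip cost dominates.
payload_bytes = 640
bus_bandwidth_bytes_per_s = 8e9   # ~8 GB/s (PCIe 2.0 x16)
round_trip_latency_s = 2e-6       # assumed ~1 us each way, purely illustrative

wire_time_s = payload_bytes / bus_bandwidth_bytes_per_s
total_s = wire_time_s + round_trip_latency_s

print(f"time on the wire  : {wire_time_s * 1e9:7.0f} ns")
print(f"round-trip latency: {round_trip_latency_s * 1e9:7.0f} ns")
print(f"total             : {total_s * 1e9:7.0f} ns")

# Doubling the bandwidth (a hypothetical 32x slot) halves the 80 ns wire time
# and leaves the ~2000 ns of fixed latency alone, which is the point above.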
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
It's not how many GBs can be sent each way per second, but how long it takes for any data to go there, be used, and a result come back.

Seems like people who should know this are simply ignoring it because arguing bandwidth suits their position better. :rolleyes:
 

galego

Golden Member
Apr 10, 2013
1,091
0
0
Draw calls are not GPU, they're CPU-related. The 10x thing is draw calls: how many different calls the system can make.

The PS3 and 360 can do more draw calls than the 3960x running 5GHz on every core.

This is the overhead they're talking about, it has nothing to do with gpu performance.

There are gpu draw calls, there are cpu draw calls, and there are cpu draw calls that affect gpu performance.

Take a look at some of the links given to you explaining how API overhead affects GPU performance.
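If it helps, here's a toy sketch (Python; the per-call CPU cost is an assumed figure, not a measurement of any real driver) of how per-draw-call overhead on the CPU ends up capping what the GPU can deliver, which is the effect those links describe:

Code:
# Toy model: every draw call costs the CPU a fixed slice of API/driver time
# before the GPU ever sees the work. The 10 us figure is an assumption.
cpu_cost_per_call_us = 10.0
frame_budget_ms = 1000.0 / 60.0   # ~16.7 ms at 60 fps

for draw_calls in (500, 2000, 5000):
    submit_ms = draw_calls * cpu_cost_per_call_us / 1000.0
    if submit_ms <= frame_budget_ms:
        verdict = "fits in the 60 fps budget"
    else:
        verdict = f"caps the frame at ~{1000.0 / submit_ms:.0f} fps"
    print(f"{draw_calls:5d} calls: {submit_ms:5.1f} ms of CPU submit time, {verdict}")

# Past a couple thousand calls the CPU burns the whole frame just issuing
# work and the GPU sits idle. A thinner console API (or batching on the PC)
# shrinks that per-call cost, which is how CPU-side overhead ends up
# affecting GPU performance.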
 

2is

Diamond Member
Apr 8, 2012
4,281
131
106
Exactly the way it was intended. Because Nvidia is not selling buses.

No, Nvidia sells GPUs, and Titan has by far the most powerful gaming GPU in existence today, which can be saturated to 100% utilization by the PCIe bus operating at roughly 25% of its total capacity. That makes your earlier claim null and void.

Glad we got to the bottom of that. :)
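For a rough sense of scale on that 25% figure (Python; assumes a PCIe 3.0 x16 slot, and the 25% comes from the claim above, not from a measurement):

Code:
# What does 'the bus at roughly 25% of its capacity' amount to?
pcie3_x16_gbps = 15.75    # ~15.75 GB/s usable on PCIe 3.0 x16
utilisation = 0.25        # the ~25% figure claimed above

used_gbps = pcie3_x16_gbps * utilisation
print(f"~{used_gbps:.1f} GB/s used out of {pcie3_x16_gbps:.2f} GB/s available")
# About 3.9 GB/s, roughly PCIe 2.0 x8 territory, i.e. plenty of headroom left
# on the link even while the GPU itself is pegged at 100%.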
 

2is

Diamond Member
Apr 8, 2012
4,281
131
106
It's not how many GBs can be sent each way per second, but how long it takes for any data to go there, be used, and a result come back.

I'm assuming you're referring to latency?

It'll be interesting to see how that compares once the PS4 is released. I mean really compares, not what a developer says. Wonder if AT will do a technical write-up on PS4.
 

galego

Golden Member
Apr 10, 2013
1,091
0
0
Actually it appears you don't, hence your need to attack members rather than points.

Saying you are wrong and trying to explain why you are wrong is not attacking you.

That is different from you making irrelevant appeals to AMDZONE and stuff like that.
 
Jan 31, 2013
108
0
0
Off Topic:
This thread is full of so much fail.

On Topic:
lol more nonsense. they say the same thing with current consoles too but all it takes is a 9600gt to exceed the same experience as on consoles. and a 9600gt is not quite even twice as fast as the gpu in the ps3.
The PS4 won't be rocking 9600GT-equivalent hardware; it will be rocking HD 78xx-equivalent hardware.

and? that does not make the 7850 level of graphics some magical gpu that can utilize 8gb of ram. wait until games come out and this thread will be looked back on as a joke. that will be especially true in a few years when what little technical advantage the ps4 has at launch will likely be surpassed on even mobile devices.
Seeing as how games still won't exceed 720p/1080p, there will be a whole lot of extra memory to spare. The reason behind the PS4 having 8GB of unified memory is that it runs x86 cores, so software can't use beyond the 3.75GB limitation. Also, 4K playback requires at least 4GB of its own. So you figure you've got an even 4GB split between the two compute units at all times (which is more than enough). Also, like I said above, even though people are saying the graphics performance is meant to match the 7850, I assure you there should be much more power than a 7850 sitting on die. The PlayStation 4's APU has 18 GCN compute units, which means it has 1152 stream processors (18 x 64). The unified memory, I am willing to bet, will be 800MHz QDR. So you can expect 7870 levels of performance from the on-die GPU, even though it's right smack in between the 7850 and the 7870 specification-wise. I guess that argument lies in where Sony plans on having the core frequency set.

The whole idea behind this thread is that statistically similar desktop hardware will be a fraction faster once packaged into an APU. No buses to create latency, and the CPU is on the same die to feed the GPU. If anything, APUs should provide a much smoother gameplay experience, since there is no possibility of a hiccup. Especially with unified memory: the CPU loads up what needs to be crunched on the GPU, allocates a pointer for that memory area, and then simply passes it directly to the GPU, cutting out a lot of the twists and turns that a normal desktop computer (with dedicated graphics) would have to go through to accomplish the same goal. Speculation suggests this will improve gameplay performance, and I for one agree (not drastically, but at least a few frames).
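A quick sanity check on the compute-unit arithmetic above (Python; the 800 MHz core clock is a guess on my part, since, as said above, the open question is where Sony sets the frequency):

Code:
# Sanity-checking the GCN arithmetic: 18 CUs at an assumed 800 MHz core clock.
compute_units = 18
sps_per_cu = 64                   # GCN: 64 stream processors per compute unit
core_clock_ghz = 0.800            # assumed, not a confirmed spec

stream_processors = compute_units * sps_per_cu
# One fused multiply-add (2 FLOPs) per stream processor per clock.
tflops = stream_processors * 2 * core_clock_ghz / 1000.0

print(f"{stream_processors} stream processors, ~{tflops:.2f} TFLOPS "
      f"at {core_clock_ghz * 1000:.0f} MHz")
# ~1.84 TFLOPS, which is exactly why this lands between an HD 7850
# (~1.76 TFLOPS) and an HD 7870 (~2.56 TFLOPS): it comes down to the clock.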
 

Rakehellion

Lifer
Jan 15, 2013
12,181
35
91
No, Nvidia sells GPUs, and Titan has by far the most powerful gaming GPU in existence today, which can be saturated to 100% utilization by the PCIe bus operating at roughly 25% of its total capacity. That makes your earlier claim null and void.

It nullifies a claim I never made, or one you didn't understand.
 

toyota

Lifer
Apr 15, 2001
12,957
1
0
Off Topic:
This thread is full of so much fail.

On Topic:

The PS4 won't be rocking 9600GT-equivalent hardware; it will be rocking HD 78xx-equivalent hardware.


Seeing as how games still won't exceed 720p/1080p, there will be a whole lot of extra memory to spare. The reason behind the PS4 having 8GB of unified memory is that it runs x86 cores, so software can't use beyond the 3.75GB limitation. Also, 4K playback requires at least 4GB of its own. So you figure you've got an even 4GB split between the two compute units at all times (which is more than enough). Also, like I said above, even though people are saying the graphics performance is meant to match the 7850, I assure you there should be much more power than a 7850 sitting on die. The PlayStation 4's APU has 18 GCN compute units, which means it has 1152 stream processors (18 x 64). The unified memory, I am willing to bet, will be 800MHz QDR. So you can expect 7870 levels of performance from the on-die GPU, even though it's right smack in between the 7850 and the 7870 specification-wise. I guess that argument lies in where Sony plans on having the core frequency set.

The whole idea behind this thread is that statistically similar desktop hardware will be a fraction faster once packaged into an APU. No buses to create latency, and the CPU is on the same die to feed the GPU. If anything, APUs should provide a much smoother gameplay experience, since there is no possibility of a hiccup. Especially with unified memory: the CPU loads up what needs to be crunched on the GPU, allocates a pointer for that memory area, and then simply passes it directly to the GPU, cutting out a lot of the twists and turns that a normal desktop computer (with dedicated graphics) would have to go through to accomplish the same goal. Speculation suggests this will improve gameplay performance, and I for one agree (not drastically, but at least a few frames).
way to take things out of context. perhaps if you actually paid attention to what and who I was replying to, then you could have saved us both a lot of typing. so just for you, let me explain at least one thing again. people here are acting like the ps4 gpu could be up to 10x more efficient than an equal gpu in a pc. I said that similar nonsense was said about the current console gpus, yet in reality the ps3 gpu is maybe twice as fast as the desktop equivalent. twice as fast at this point is a joke unless you consider a 9600gt a good gaming card for the last few years.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
Which is BS because, like I said, console games have more object pop-in than anything.
That is very much a function of limited RAM, as are uber-fuzzy textures.

What I really hate is how it bleeds over when that happens. Playing Divinity II (a replay of ED, will be my first play of FoV), for instance, which doesn't suffer from many common console game issues (they fixed most of the "why is it this way on the PC?" problems with early patches): there's no significant popping, but there's tons of near-distance LOD fading, which is entirely unnecessary for PCs, yet exists even at the highest settings. It's one of those games where they traded pop-in for fade-out, such that revisiting a spot just after a fight may be disorienting, because the lighting and shadow quality changed as more got loaded or unloaded while moving around during the fight (but with no distracting pop-in/out).

Also, it's hard to decide whether to stay with the Hunter's set, or go to a Wild Dwellers plus some uniques, again. Dweller's is more balanced, and allows me higher defenses, but less powerful, while Hunter's requires enchants to take care of its weaknesses, and has the ugly tattoo, but makes my character's butt look better (I see that view a lot while playing, after all). Tough decisions. At least they didn't go to the lengths of making the more powerful set use a chainmail bikini...some games have, and really, that's a level of cheesecake that makes it hard to enjoy playing. Now back to your regularly scheduled console v. PC thread...

I don't expect miracles, but the lack of any of that would be very good for the future, and 8GB should allow it to happen, whereas with last gen it would have needed substantially different code bases and engine configurations, so many devs wouldn't do it.

They still have memory for the CPU and GPU separated by a slow bus.
Nope. They use the CPU's memory. With newer Intel HDs, they even share the CPU's cache. It's low bandwidth, but much faster than going from the CPU over PCIe to the GPU, then to the GPU's RAM, or the other direction. When the console devs are talking about the buses being slow, that's what they're talking about. Bandwidth is an issue, but a minor one, in comparison, since they can't opt to throw more power at it, like we can, and can't expect drop-in generation performance gains, like we can.

There are gpu draw calls, there are cpu draw calls, and there are cpu draw calls that affect gpu performance.
No, there are not. There are only CPU draw calls. It's the CPU that handles the API, and it's the CPU whose time is wasted waiting on all those calls to complete. Vista never gaining massive market share basically caused DX9 to have a longer life than it otherwise would have, else it would be less of an issue.
 

Acanthus

Lifer
Aug 28, 2001
19,915
2
76
Lol

Jaguar vs i7.

Upper-Mid-Range Sea Islands vs Sea Islands.

You have got to be kidding.

I remember when "Cell" was going to change the world.

How can you all fall for Sony marketing over and over and over?
 