
NextGen Console Graphics And Effects on PC (E3 Coverage!)


AnandThenMan

Diamond Member
Nov 11, 2004
3,991
627
126
On the CPU it's a huge issue, but not on the GPU.
The GPU can't do everything itself, hence the reason unified memory is so valuable. As for Hawken, to each his own but the destructibles look poor to me. Also how do the various blocks of destruction affect gameplay? And don't tell me that is all done in the GPU, if so then remove the CPU and see how far you get.

And this:
So despite there being a lot of communication and data transfer between the CPU and GPU over the PCIe bus, it doesn't appear to be impacting performance.
Compared to what? How do you know doing the same thing with a unified memory space would not enable even better visuals?
 

aigomorla

CPU, Cases & Cooling Mod | PC Gaming Mod | Elite Member
Super Moderator
Sep 28, 2005
21,071
3,575
126
And don't tell me that is all done in the GPU, if so then remove the CPU and see how far you get.


ROFL...
I will pull a Nostradamus!
You pull the CPU and the result is....

Your board will not turn on... your board will not boot... it will not POST... it will just sit there and stare at you going WTF do you want me to do?
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Then why overclock RAM?
Go back to DDR1 CL2! Almost no latency whatsoever!

I never said bandwidth was unimportant, only that latency was more important for desktop because most desktop apps tend to be serial in nature so if a thread or cycle stalls due to latency, there is a big performance penalty.

I call you out on this. Give me links, numbers, graphs. Show me that GDDR5 has so much latency.

I did a search on Google and got nothing conclusive as far as hard numbers are concerned.

Man, it would be faster to send pigeons than to use this GDDR shit!
If today's apps don't benefit from additional bandwidth, that doesn't mean it can't be beneficial. It only means that they can be further optimized to take advantage of it.
Bandwidth doesn't help?

You really need to pay closer attention because you're just imagining stuff. I NEVER SAID bandwidth doesn't help. I said that the extra bandwidth afforded by GDDR5 (in an APU package similar to the PS4) would not be of any use in a desktop environment, because desktop applications cannot use that much bandwidth effectively.

Your graph even shows that. Up to a certain point, there is a performance increase but then it levels off.

And WinRAR is one of the very few desktop apps that are actually bandwidth sensitive. Most desktop apps are latency sensitive rather than bandwidth sensitive.

Still, that's not to say that latency and bandwidth are mutually exclusive or anything. You can have BOTH low latency and high bandwidth.

Here's an AIDA64 cache and memory benchmark I ran on my computer:

[image: AIDA64 cache and memory benchmark results]


Now compare that to the AMD Athlon 64 3000+ from years ago, which used single channel, low latency DDR:

[image: Athlon 64 3000+ memory latency benchmark]


So as you can see, despite running DDR3 2133 in quad channel mode, my memory access latency isn't significantly higher than that of the Athlon 64 3000+, which used single channel DDR memory with a CAS latency of 2, while my bandwidth is FAR higher.
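To put rough numbers on that comparison, here's a quick back-of-envelope sketch (my own figures, not taken from the benchmarks above, assuming the Athlon 64 ran single-channel DDR-400 at CL2 and the newer rig runs quad-channel DDR3-2133 at CL11; measured latency also includes the memory controller, so real results will differ):

```python
# Back-of-envelope sketch: theoretical peak bandwidth and raw CAS latency.
# Assumptions (not measurements): single-channel DDR-400 CL2 for the old
# Athlon 64 setup, quad-channel DDR3-2133 CL11 for the newer one.

def peak_bandwidth_gbs(data_rate_mtps, channels, bus_bytes=8):
    """Peak bandwidth = transfers/s * bytes per transfer * channels."""
    return data_rate_mtps * 1e6 * bus_bytes * channels / 1e9

def cas_latency_ns(cas_cycles, data_rate_mtps):
    """CAS latency in ns = CL cycles / command clock (half the data rate)."""
    return cas_cycles / (data_rate_mtps / 2) * 1e3

print(f"DDR-400 x1:   {peak_bandwidth_gbs(400, 1):5.1f} GB/s, "
      f"CL2  = {cas_latency_ns(2, 400):.1f} ns")
print(f"DDR3-2133 x4: {peak_bandwidth_gbs(2133, 4):5.1f} GB/s, "
      f"CL11 = {cas_latency_ns(11, 2133):.1f} ns")
# Bandwidth is roughly 21x higher, while raw CAS latency stays near 10 ns.
```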
 

blastingcap

Diamond Member
Sep 16, 2010
6,654
5
76
Latency absolutely is a huge issue when you're doing calculations on the CPU: AI, physics, etc. Shuffling data back and forth on a slow bus limits what is possible. Why do you think PhysX is largely limited in what it can do? Pretty much all PhysX content looks the same, variations of the same routines.

Agreed that the new consoles will help push utilization of multi-core, which is why the various devs have been recommending 8-core AMD processors. PS4 game code will naturally carry over well to desktop AMD.

Please note that cross-platform gamedevs program for the lowest common denominator. It used to be that xb360/PS3 were the laggards, so PC gamers just got crappy ports of games that were hamstrung by inferior console hardware.

Now, the picture has changed somewhat.

CPU: XBO/PS4 both have CPUs built from a bunch of slow cores. There's much noise made over 8 cores, but at the end of the day, even if you manage to fully saturate them simultaneously (unlikely), a fast quad-core desktop Intel CPU might still be able to keep up.

GPU: XBO is the laggard here, PS4 stronger, PC can be upgraded to be stronger still. I would not be surprised if multiplatform devs aim for a target fps of, say, 30 on XBO, and just let PS4 gamers have better framerates but the same visuals.

Latency: Even if PCs are the relative laggards here, gamedevs have had a long history of catering to the lowest common denominator. They might figure out some tricks that can only be done with low-latency console advantages, but rest assured they will have a fallback plan for PCs. Similar to how games might have special DX11 Tessellation or GPU PhysX but also a fallback plan to DX9/10 and non-GPU PhysX if those are not available.
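To make that fallback idea concrete, here's a tiny hypothetical sketch (not from any real engine) of the kind of capability check a cross-platform game might do at startup:

```python
# Hypothetical sketch of the lowest-common-denominator fallback described
# above: use the fancy path when the hardware supports it, otherwise degrade.
def pick_render_and_physics_path(has_dx11: bool, has_gpu_physx: bool) -> str:
    if has_dx11 and has_gpu_physx:
        return "dx11_tessellation_gpu_physx"  # full effects on capable PCs
    if has_dx11:
        return "dx11_cpu_physics"             # DX11 rendering, CPU physics
    return "dx9_cpu_physics"                  # baseline path everyone can run
```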

From what I've heard so far, the main benefit of the PS4's low-latency unified memory is that you can do things like physics simulations and switch back to rendering frames, back and forth, more quickly. So maybe PS4 gamers get the console equivalent of PhysX and PC gamers don't, or at least not as well. Who cares? It's the same way console gamers don't really care that TressFX/PhysX/tessellation are PC-exclusives.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
The GPU can't do everything itself, hence the reason unified memory is so valuable. As for Hawken, to each his own but the destructibles look poor to me. Also how do the various blocks of destruction affect gameplay? And don't tell me that is all done in the GPU, if so then remove the CPU and see how far you get.

Well of course the CPU needs to be involved (for game logic) and there is certainly a lot of communication going on between the CPU and GPU. What I mean is that the calculations for the physics are done on the GPU and not the CPU. No current CPU could handle that level of physics as we're talking about hundreds of thousands of particles.

As far as gameplay goes, you can collapse an entire building or a wall on an enemy, or blow up the ground underneath him, making him fall down into a pit.

This is all beta still and it's not fully refined, but no game has ever seen calculated destruction physics on this scale.

Compared to what? How do you know doing the same thing with a unified memory space would not enable even better visuals?

You're the one who made the assertion that latency was a massive problem on the PCIe bus, not me. But you have to ask yourself, if latency is such a huge problem for PCIe, then how is it that a game like Hawken can exist?
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
You can't find it? Here:
http://forums.anandtech.com/showpost.php?p=35069149&postcount=12
GDDR5 latency is nowhere near that bad. It all comes down to memory controller latency - for that, we need to wait until those come to desktop.

LOL did you even read BrightCandle's post? He said and I quote:

Conversely DDR3 has much less bandwidth, but it delivers responses considerably quicker. Its latency is 1/10th that of GDDR5. The nature of GDDR5 is that it matches its usage. A GPU is a massively parallel device; it reads memory in sequential chunks, and if one of its processors is stuck waiting on memory it doesn't matter much. Whereas a CPU is very random in its access of memory and needs to hide as much of the latency as it can or its performance will plummet. That is, the memories are designed around the different usage patterns that a GPU and CPU see.

This is exactly what I've been saying. DDR3 is optimized for desktop usage which is latency sensitive, while GDDR5 is optimized for graphics which is throughput oriented in nature.

You're right though, that it will come down to how good the memory controller is on that APU. If it's really good, it should be able to mask a lot of the latency associated with GDDR5. Cache will help as well undoubtedly.
 

AnandThenMan

Diamond Member
Nov 11, 2004
3,991
627
126
As far as gameplay goes, you can collapse an entire building or a wall on an enemy, or blow up the ground underneath him making him fall down into a pit..
You won't be able to do this if you don't own an Nvidia card?
You're the one who made the assertion that latency was a massive problem on the PCIe bus, not me. But you have to ask yourself, if latency is such a huge problem for PCIe, then how is it that a game like Hawken can exist?
I'm making the assertion because it makes sense, although you're exaggerating; I'm not trying to say it is a "massive problem". Right now, I can't show hard data to back that up because so far the PS4/Xbone are the only pieces of hardware with such a memory configuration and they are not out yet. But at the same time, you're making a claim that Hawken is not being hampered by PCIe intercommunication, yet there is nothing to compare this to, meaning we don't know exactly how much better it would be without the data shuffling.

Let's put it this way, the PC architecture is extremely mature in its current form, so when I see things like a unified memory space for the CPU/GPU, I feel this is a very significant step forward and is a much needed innovation, especially from a gaming perspective.
 

Ajay

Lifer
Jan 8, 2001
16,094
8,114
136
LOL did you even read BrightCandle's post? He said and I quote:



This is exactly what I've been saying. DDR3 is optimized for desktop usage which is latency sensitive, while GDDR5 is optimized for graphics which is throughput oriented in nature.

I don't think it's going to make a real difference in consoles. Pretty much all the apps will be streaming apps. It's not like people will be running large Excel spreadsheets or CAD drawings on a console. There will be small differences; somebody will time how long it takes Netflix to load on Xbox One vs PS4, but in the end that won't be a discriminating factor for buyers.
 

Erenhardt

Diamond Member
Dec 1, 2012
3,251
105
101
LOL did you even read BrightCandle's post? He said and I quote:



This is exactly what I've been saying. DDR3 is optimized for desktop usage which is latency sensitive, while GDDR5 is optimized for graphics which is throughput oriented in nature.

You're right though, that it will come down to how good the memory controller is on that APU. If it's really good, it should be able to mask a lot of the latency associated with GDDR5. Cache will help as well undoubtedly.

And did you read what he was referring to? Now LOL@yourself. He was referring to the 200ns someone measured, which was the latency between the CPU and the VRAM on a graphics card (PCIe latency and whatnot).
Actual memory latency:

GDDR5 timings as provided by the Hynix datasheet:
CAS latency = 10.6 ns
tRCD = 12 ns
tRP = 12 ns
tRAS = 28 ns

DDR3 timings for a Corsair 2133 kit at 11-11-11-28:
CAS = 10.3 ns
tRCD = 10.3 ns
tRP = 10.3 ns
tRAS = 26.2 ns

What is it? Not even 20%?
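For anyone who wants to sanity-check those numbers, here's a small sketch (mine, not from the thread) converting the DDR3-2133 11-11-11-28 cycle counts to nanoseconds and comparing them against the GDDR5 figures quoted above, whose cycle counts aren't listed here and so are taken as given:

```python
# Convert DDR3 timings from clock cycles to nanoseconds and compare against
# the GDDR5 figures quoted above (used as given, per the Hynix datasheet).

def cycles_to_ns(cycles, data_rate_mtps):
    """ns = cycles / command clock; the command clock is half the data rate."""
    return cycles / (data_rate_mtps / 2) * 1e3

ddr3_cycles = {"CAS": 11, "tRCD": 11, "tRP": 11, "tRAS": 28}   # 11-11-11-28
gddr5_ns    = {"CAS": 10.6, "tRCD": 12, "tRP": 12, "tRAS": 28}

for name, cyc in ddr3_cycles.items():
    ddr3_ns = cycles_to_ns(cyc, 2133)
    penalty = (gddr5_ns[name] / ddr3_ns - 1) * 100
    print(f"{name}: DDR3 {ddr3_ns:.1f} ns vs GDDR5 {gddr5_ns[name]:.1f} ns ({penalty:+.0f}%)")
# Worst case (tRCD/tRP) comes out around +16%, i.e. "not even 20%".
```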
 

Enigmoid

Platinum Member
Sep 27, 2012
2,907
31
91
Early games, maybe. But going by what the various devs are saying, extracting the most out of the PS4, for example, is going to be a much shorter learning curve than it was with the PS3's cryptic hardware.

True, but

1) It'll carry over quite a bit to the PC, being built on x86.

2) You will see fewer improvements over the lifespan of the console, because the first games will be comparatively better than the first Xbox 360/PS3 games, where the devs had to learn how to code for the hardware.

Needless to say, if the most they could achieve was around 2x on the PS3/Xbox 360, I'd expect it to be less this generation given that the architectures are shared.

how is that even possible when the console is frame limited to 30fps?

Who even plays games at 30fps on a PC?

Anyhow, until the console is able to play at least 45-60FPS constant... don't even bother comparing it to a PC.
Anyone who's played a console and then a high grade gaming PC always complains....

CONSOLES STUTTER WAY TOO MUCH @ 30FPS... WAY MORE THAN SLI MICRO STUTTERS!!!

and you know a lot of people whine about SLI micro stutters... that's nothing compared to stutter at 30fps vs 60.

This has absolutely nothing to do with what I was saying.

i.e. consoles generally have a ~2x performance advantage, so given the same hardware the game would run at half the fps on the PC. (Obviously you need more power or to turn down settings....)
 

Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
The reason why Xbox One games looked so good in the presentation - they run on Geforce:
http://www.cinemablend.com/games/Xb...ng-Windows-7-With-Nvidia-GTX-Cards-56737.html

D:

LOL. This really has to make one think: will the graphics of these games look that good once they are running on much lesser hardware on the console? And if they do look the same, will they run at 60fps like the demos, or at 30fps?

On the other hand, this proves to us that porting games to a PC should be very simple.
 

Jaskalas

Lifer
Jun 23, 2004
35,794
10,088
136
Wow, I see a lot of PC gamers in full denial mode.

No need to deny anything.

Next Gen Consoles will catch up to PC... then go nowhere. PC will keep on sprinting ahead with each new year of hardware and software. PC not having to wait for a new generation will allow it to have a new "Crysis" in a year or two.
 

Hitman928

Diamond Member
Apr 15, 2012
6,701
12,379
136
The reason why Xbox One games looked so good in the presentation - they run on Geforce:
http://www.cinemablend.com/games/Xb...ng-Windows-7-With-Nvidia-GTX-Cards-56737.html

D:

This is very common, especially since MS is rumored to be way behind on the actual system. Most likely, for games that are coming to both XBO and PC, they had the PC version on display. For XBO exclusives, they probably had some sort of VM set up if they were even playable at all (most likely the XBO exclusives were just target renders; just a guess, as I don't follow this stuff too closely to know what was playable and what wasn't on the floor).

The same thing happened back in 2005, for those with a short memory: the X360 demos were running on Mac towers back then. That doesn't change anything that's been talked about in this thread, though.
 

RampantAndroid

Diamond Member
Jun 27, 2004
6,591
3
81
I believe one of the devs of Metro LL said that "generally you can get about twice as much out of a console as an equivalent PC" (paraphrased), and he was talking about the PS3/Xbox 360. I would expect it to be much less for the early games of the PS4.

Can you define what equivalent means for a PC here?
 

mikegg

Golden Member
Jan 30, 2010
1,976
577
136
The reason most PC games do not look as good as they could is because they are all console ports. Very few AAA games these days are PC exclusives that aren't purposely made to run on most PCs (MMOs, etc).

That game looks great, but is expected to come to PC as well. Same for many other games that were shown.

The issue with your comparisons is you are comparing last gen console ports to next gen console games. It's not that PCs are not fast enough, it's that old consoles held them back. Not to mention most game devs put next to zero optimization into the PC ports, making them run worse than they should.

1. The Division was not announced for PC. My guess is that there will be a PC version after they release it for consoles.

2. I know PCs receive mostly ports. Check the second post of this thread on page 1. I already said so.

3. My main argument is that I've seen better looking games on Xbox 360/PS4 than I have ever seen for PC. That's the ONLY argument here. Someone else here said PC had graphics like The Division in 2011, which is absolutely false and stupid to say.
 

AnandThenMan

Diamond Member
Nov 11, 2004
3,991
627
126
Someone else here said PC had graphics like The Division in 2011, which is absolutely false and stupid to say.
I have not owned a console in, well, basically forever, and I am impressed with what I've seen so far on the PS4. I might break tradition and actually buy a console this generation.

Coming from a die-hard PC gamer.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
You won't be able to do this if you don't own an Nvidia card?

You won't, because AMD cards don't support PhysX. This could change in the future though.

I'm making the assertion because it makes sense, although you're exaggerating; I'm not trying to say it is a "massive problem". Right now, I can't show hard data to back that up because so far the PS4/Xbone are the only pieces of hardware with such a memory configuration and they are not out yet. But at the same time, you're making a claim that Hawken is not being hampered by PCIe intercommunication, yet there is nothing to compare this to, meaning we don't know exactly how much better it would be without the data shuffling.

If the PCI-E controller were not located on the CPU die, then maybe latency would be more of an issue.

Let's put it this way, the PC architecture is extremely mature in its current form, so when I see things like a unified memory space for the CPU/GPU, I feel this is a very significant step forward and is a much needed innovation, especially from a gaming perspective.

UMA makes sense in low-power devices like tablets, smartphones and consoles. But in high-end gaming rigs or workstations? I don't think so.
 