NextGen Console Graphics And Effects on PC (E3 Coverage!)


AnandThenMan

Diamond Member
Nov 11, 2004
3,991
626
126
You must be blind. The amount of detail and texture clarity in the Star Citizen trailer destroys the Division.
I must also be blind, because that Star Citizen clip didn't impress me in the slightest. It's okay, but it looks generic and not even close to being something that wows me.
Except you have a HUGE problem with that argument. The next-gen consoles are using the same hardware as current x86 PCs, the same video card APIs, the same DirectX APIs, etc., etc...
The PC is a very wasteful platform when it comes to computing cycles; this is repeated over and over again, but somehow many simply choose to ignore it. You cannot compare the same hardware on a PC and a console. Even if they were 100% identical, the console would still extract at least 2X the performance out of the same hardware, often much more. Also, there are no PCs currently using 8GB of unified GDDR5 system memory that I am aware of. I'm hoping we will see this tech come to the PC soon, but even if we do, we won't realize the same performance.

The upside: x86 in the Xbone/PS4 is probably the best thing to happen to PC gaming, ever. Especially for people with AMD hardware, since all the next-gen games will be coded first on the AMD architecture.
 

finbarqs

Diamond Member
Feb 16, 2005
3,617
2
81
lol, I really love the name 'Xbone'. It just sounds like the Xbox got boned. Boned by the PS4!
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
After watching the 1080p video of The Division, I find it pretty much on par with this: http://www.youtube.com/watch?v=CUT6skIrvnI, which I've found to be the best-looking game to date (especially at 1440p, just wow). Yes, there are some flaws (trees), but the amount of detail and the high-resolution textures more than make up for them.

I watched the video in 1080p as well (you can watch it here), and to me it looked very blurry. Texture detail was definitely compromised to deal with the sheer number of objects.

And that is what I'm seeing in The Division: the amount of detail in the environment. 2GB of VRAM, imo, just won't cut it on the PC for much longer. I've had Crysis 3 go as high as 3GB of usage on my Titan at 1440p. Whatever comes out for the PS4 is going to end up looking even better, and will require even more power than we already have on the PC.

Not really. Just because a game will utilize 3GB of VRAM does not mean it REQUIRES it. Crysis 3 runs perfectly smooth (averaging about 40 FPS) at 2560x1440 on very high settings, with SMAA set to 1x and V-sync off, on my overclocked GTX 580 SLI rig, despite the cards having only 1.5GB each.

CryEngine 3 is so memory efficient that a lot of detail can be rendered without requiring obscene amounts of VRAM. Crysis 3 also makes excellent use of system memory to buffer the VRAM, which a lot of game engines these days don't do because they're primarily designed for the memory-deficient Xbox 360 and PS3 (*cough* Unreal Engine 3 and Dunia 2 *cough*).
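
For anyone wondering what "using system memory to buffer the VRAM" means in practice, here's a crude sketch of the general idea (entirely my own illustration, not actual CryEngine code; the names and numbers are made up): keep the full texture set in system RAM and only keep the most recently used textures resident within a fixed VRAM budget, evicting the least recently used ones when it fills up.

Code:
#include <cstddef>
#include <cstdio>
#include <list>
#include <unordered_map>
#include <utility>

// Crude illustration: textures live in system RAM; only the most recently
// used ones are kept "resident" within a fixed VRAM budget (LRU eviction).
class VramCache {
public:
    explicit VramCache(std::size_t budgetBytes) : budget_(budgetBytes) {}

    // Request a texture for rendering; returns true if it was already resident.
    bool touch(int textureId, std::size_t sizeBytes) {
        auto it = index_.find(textureId);
        if (it != index_.end()) {                 // already in VRAM: mark as recently used
            lru_.splice(lru_.begin(), lru_, it->second);
            return true;
        }
        while (used_ + sizeBytes > budget_ && !lru_.empty()) {  // evict least recently used
            used_ -= lru_.back().second;
            index_.erase(lru_.back().first);
            lru_.pop_back();
        }
        lru_.emplace_front(textureId, sizeBytes);  // "upload" from the system RAM copy
        index_[textureId] = lru_.begin();
        used_ += sizeBytes;
        return false;
    }

private:
    std::size_t budget_;
    std::size_t used_ = 0;
    std::list<std::pair<int, std::size_t>> lru_;  // front = most recently used
    std::unordered_map<int, std::list<std::pair<int, std::size_t>>::iterator> index_;
};

int main() {
    VramCache cache(1536u * 1024 * 1024);          // pretend 1.5GB card (GTX 580-class)
    for (int frame = 0; frame < 3; ++frame)
        for (int tex = 0; tex < 200; ++tex)
            cache.touch(tex, 16u * 1024 * 1024);   // 200 x 16MB textures > the 1.5GB budget
    std::printf("done\n");
    return 0;
}

A real engine does this asynchronously and per mip level, but the budget-plus-eviction idea is the gist of how you show more texture detail than the card's VRAM can hold at once.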
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
I must also be blind, because that Star Citizen clip didn't impress me in the slightest. It's okay, but it looks generic and not even close to being something that wows me.

Well, everyone has their subjective tastes, but graphics can be judged objectively, and the texture detail in Star Citizen is objectively superior to The Division's.

You can watch it here and see for yourself.

Also, there are no PCs currently using 8GB of unified GDDR5 system memory that I am aware of. I'm hoping we will see this tech come to the PC soon, but even if we do, we won't realize the same performance.

And there's a good reason for this. GDDR5 has very high latency and would not be a good choice for system memory on a PC.

The upside: x86 in the Xbone/PS4 is probably the best thing to happen to PC gaming, ever. Especially for people with AMD hardware, since all the next-gen games will be coded first on the AMD architecture.

If history is any indication, it won't really matter that the PS4 and Xbox One use GCN.
 

CakeMonster

Golden Member
Nov 22, 2012
1,621
798
136
You cannot compare the same hardware on a PC and a console. Even if they were 100% identical, the console would still extract at least 2X the performance out of the same hardware, often much more.

I know it's impossible to put an exact number on this, but do you have any sources for that statement? It still seems rather high.
 

AnandThenMan

Diamond Member
Nov 11, 2004
3,991
626
126
I don't think the latency of GDDR5 memory is the actual problem; the challenge is in building the memory controller itself. But the latency issue is far worse when you're shuffling data back and forth over the PCIe bus.
I know it's impossible to put an exact number on this, but do you have any sources for that statement? It still seems rather high.
There have been statements from game devs; 2X would be the lower end of the claim. But common sense also applies: look at the available power of the Xbox 360, compare it to a PC with about the same power, and then consider visually what was done on each platform.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
There is no precedent for this gen of console hardware. This time, the entire architecture is from one vendor.

It still doesn't matter. During the optimization process, the games will still be optimized for both vendors and while I am no programmer, I would imagine the game would run on a separate pathway specifically for NVidia GPUs.
 
May 13, 2009
12,333
612
126
I don't think the latency of GDDR5 memory is the actual problem; the challenge is in building the memory controller itself. But the latency issue is far worse when you're shuffling data back and forth over the PCIe bus.

There have been statements from game devs; 2X would be the lower end of the claim. But common sense also applies: look at the available power of the Xbox 360, compare it to a PC with about the same power, and then consider visually what was done on each platform.
I have no doubt I could get the same visual quality out of an 8800 GTX at 720p/30 FPS as is available on current consoles. So I call BS on 2X.
 

AnandThenMan

Diamond Member
Nov 11, 2004
3,991
626
126
It still doesn't matter. During the optimization process, the games will still be optimized for both vendors and while I am no programmer, I would imagine the game would run on a separate pathway specifically for NVidia GPUs.
It's not just the GPU, it's the entire architecture.
 

Erenhardt

Diamond Member
Dec 1, 2012
3,251
105
101
It still doesn't matter. During the optimization process, the games will still be optimized for both vendors and while I am no programmer, I would imagine the game would run on a separate pathway specifically for NVidia GPUs.

Ever asked yourself why Metro LL works better on NV GPUs than on AMD?

Or why it runs better on the PS3 than on the Xbox 360?
http://www.eurogamer.net/articles/digitalfoundry-metro-last-light-face-off

That will no longer be the case. Game devs make their profits on consoles; PC sales are not even comparable. They design games around consoles, both of which will be GCN-powered. If NV wants games to be optimized for them, they need to throw money and people at games. Looking at the upcoming GE campaign, I don't see how they plan to compete.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
I call this a substance that comes out of a bull's rear end.

I can't help that you're ignorant :whistle:

Why on Earth would anyone want to use GDDR5 (you do know what the G stands for, right?) for desktop memory, when GDDR5 is specifically designed to maximize bandwidth and absolutely no desktop application requires such high levels of bandwidth? Also, the latency penalty would be horrible, since the CPU operates mostly in a serial manner, unlike the highly parallel GPU.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
I don't think the latency of GDDR5 memory is the actual problem; the challenge is in building the memory controller itself. But the latency issue is far worse when you're shuffling data back and forth over the PCIe bus.

Latency is definitely part of the problem. The other part is cost. Using GDDR5 for system memory wouldn't increase performance because desktop applications cannot utilize such high bandwidth, and the latency penalty would likely decrease performance because most desktop applications are serial in nature.

Also, latency isn't much of an issue on GPUs, because whenever a thread stalls on a memory access, the GPU can switch to another one thanks to its massively parallel nature.

Just look at the PCIe bus: performance is barely affected by going from x16 to x8, so obviously neither latency nor external bandwidth is much of a problem for the GPU.
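
For some rough context, here's a back-of-envelope comparison (my own ballpark numbers only: dual-channel DDR3-1600, a 256-bit GDDR5 bus at 5.5 GT/s like the PS4's, and PCIe 2.0 at roughly 500 MB/s per lane):

Code:
#include <cstdio>

int main() {
    // Theoretical peak bandwidth = transfer rate x bus width in bytes.
    // All figures are assumed, ballpark values for illustration only.
    double ddr3_dual = 1600e6 * 8 * 2;  // DDR3-1600, two 64-bit channels
    double gddr5_256 = 5500e6 * 32;     // 5.5 GT/s GDDR5 on a 256-bit bus
    double pcie2_x16 = 16 * 500e6;      // PCIe 2.0, ~500 MB/s per lane
    double pcie2_x8  = 8 * 500e6;

    std::printf("DDR3-1600 dual channel  : %6.1f GB/s\n", ddr3_dual / 1e9);
    std::printf("GDDR5 256-bit @ 5.5GT/s : %6.1f GB/s\n", gddr5_256 / 1e9);
    std::printf("PCIe 2.0 x16            : %6.1f GB/s\n", pcie2_x16 / 1e9);
    std::printf("PCIe 2.0 x8             : %6.1f GB/s\n", pcie2_x8 / 1e9);
    return 0;
}

That works out to roughly 25.6, 176, 8 and 4 GB/s respectively. The GPU's local memory bandwidth dwarfs the PCIe link either way, which is why halving the lanes barely shows up in benchmarks, and why GPUs hide latency with parallelism instead of relying on low-latency DRAM.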

It's not just the GPU, it's the entire architecture.

I still don't see a problem. Performance on both AMD and Intel CPUs should increase, because developers will be forced to write more parallel game code for multicore processors.
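
To make that concrete, here's a minimal sketch (my own illustrative example, not taken from any real engine) of what "more parallel game code" means: split the per-frame entity update across however many cores the CPU reports.

Code:
#include <algorithm>
#include <cstdio>
#include <functional>
#include <thread>
#include <vector>

// Illustrative only: a trivial per-entity update, spread across all cores.
struct Entity { float x, y, vx, vy; };

static void update_slice(std::vector<Entity>& es, std::size_t begin, std::size_t end, float dt) {
    for (std::size_t i = begin; i < end; ++i) {
        es[i].x += es[i].vx * dt;  // stand-in for real per-entity work (AI, physics, ...)
        es[i].y += es[i].vy * dt;
    }
}

int main() {
    std::vector<Entity> entities(100000, Entity{0.f, 0.f, 1.f, 1.f});
    const unsigned cores = std::max(1u, std::thread::hardware_concurrency());
    const std::size_t chunk = entities.size() / cores;

    std::vector<std::thread> workers;
    for (unsigned c = 0; c < cores; ++c) {
        std::size_t begin = c * chunk;
        std::size_t end   = (c + 1 == cores) ? entities.size() : begin + chunk;
        workers.emplace_back(update_slice, std::ref(entities), begin, end, 1.0f / 60.0f);
    }
    for (auto& w : workers) w.join();

    std::printf("updated %zu entities on %u threads\n", entities.size(), cores);
    return 0;
}

Whether it's done with raw threads like this or with a proper job system, code structured this way scales with core count, which benefits an 8-core AMD FX chip and a quad-core Intel chip alike.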
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Ever asked yourself why Metro LL works better on NV GPUs than on AMD?

Ever asked yourself why Far Cry 3 and Crysis 3 run faster on NVidia hardware despite being part of AMD's Gaming Evolved program?

And these benchmarks look like they were taken before the patch which solved a lot of issues on AMD hardware and increased performance.
 

AnandThenMan

Diamond Member
Nov 11, 2004
3,991
626
126
Latency absolutely is a huge issue when you're doing calculations on the CPU: AI, physics, etc. Shuffling data back and forth over a slow bus limits what is possible. Why do you think PhysX is so limited in what it can do? Pretty much all PhysX content looks the same: variations of the same routines.

Agreed that the new consoles will help push utilization of multi-core, which is why the various devs have been recommending 8-core AMD processors. PS4 game code will naturally translate well to desktop AMD.
 

Erenhardt

Diamond Member
Dec 1, 2012
3,251
105
101
I can't help that you're ignorant :whistle:

Why on Earth would anyone want to use GDDR5 (you do know what the G stands for, right?) for desktop memory, when GDDR5 is specifically designed to maximize bandwidth and absolutely no desktop application requires such high levels of bandwidth? Also, the latency penalty would be horrible, since the CPU operates mostly in a serial manner, unlike the highly parallel GPU.
Then why overclock RAM?
Go back to DDR1 at CL2! Almost no latency whatsoever!
I call you out on this. Give me links, numbers, graphs. Show me that GDDR5 has so much latency.
Man, it would be faster to send pigeons than to use this GDDR shit!
If today's apps don't benefit from additional bandwidth, that doesn't mean it can't be beneficial. It only means they can be further optimized to take advantage of it.
Bandwidth doesn't help?
[memory bandwidth scaling benchmark chart]
 

Enigmoid

Platinum Member
Sep 27, 2012
2,907
31
91
There have been statements from game devs, 2X would be the lower end of the claim. But also common sense applies, look at the available power of the Xbox360 and compare that to a PC with about the same power, then consider visually what was done on each platform.

I believe one of the devs of Metro LL said that "generally you can get about twice as much out of a console as an equivalent PC" (paraphrased), and he was talking about the PS3/Xbox 360. I would expect that for the early PS4 games it will be much less.
 

aigomorla

CPU, Cases&Cooling Mod PC Gaming Mod Elite Member
Super Moderator
Sep 28, 2005
21,042
3,522
126
No way. Graphics definitely look better on the One than any PC games now.

I'm sorry.

You must have viewed them on a really crappy PC... no offense...

I believe one of the devs of Metro LL said that "generally you can get about twice as much out of a console as an equivalent PC" (paraphrased), and he was talking about the PS3/Xbox 360. I would expect that for the early PS4 games it will be much less.

How is that even possible when the console is frame-limited to 30 FPS?

Who even plays games at 30 FPS on a PC?

Also, the devs at Metro totally didn't live up to their promise.
They said they wouldn't invest in multiplayer so they could increase game content and all the other stuff.
Yet... after playing Last Light... I wish we had less content and MULTIPLAYER activated.
We expected LL to be like the original... and 75% of us who played the original were disappointed in LL.

This thread has been rehashed, and it never gets past fan worship.
It's like Apple vs. Intel from way back... and you know how that went...

Anyhow, until the console can hold at least a constant 45-60 FPS... don't even bother comparing it to a PC.
Anyone who has played on a console and then on a high-grade gaming PC always complains....

CONSOLES STUTTER WAY TOO MUCH @ 30FPS... WAY MORE THAN SLI MICRO-STUTTER!!!

And you know a lot of people whine about SLI micro-stutter... that's nothing compared to the stutter of 30 FPS vs 60.
 

AnandThenMan

Diamond Member
Nov 11, 2004
3,991
626
126
I would expect that for the early PS4 games it will be much less.

Early games, maybe. But going by what the various devs are saying, extracting the most out of the PS4, for example, is going to be a much shorter learning curve compared to the PS3's cryptic hardware.
 

blastingcap

Diamond Member
Sep 16, 2010
6,654
5
76
Game devs that derive lots of sales from consoles have a vested interest in promoting consoles.
 

aigomorla

CPU, Cases&Cooling Mod PC Gaming Mod Elite Member
Super Moderator
Sep 28, 2005
21,042
3,522
126
I just hope game makers don't get strong-armed into console-only royalty deals.

Or I hope big daddy EA decides to go up against Sony and buy out all the game makers Sony has forced royalties from.

Mmm... then I might not be as angry at EA as I am now...
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Latency absolutely is a huge issue when you're doing calculations on the CPU: AI, physics, etc.

On the CPU it's a huge issue, but not on the GPU.

Shuffling data back and forth over a slow bus limits what is possible. Why do you think PhysX is so limited in what it can do? Pretty much all PhysX content looks the same: variations of the same routines.

Do you mean PhysX on the CPU or the GPU? PhysX is limited mostly by developers, because they want gameplay to be similar across the various configurations. Look at Hawken, for instance: fully destructible levels that impact gameplay are now possible, and that's running on the GPU, not the CPU.

So despite there being a lot of communication and data transfer between the CPU and GPU over the PCIe bus, it doesn't appear to be impacting performance.
 