
Will a 7950 keep pace with PS4 @ 1080p

Yeah right, dream on.

Next-gen straight-up PC guts are a zillion times easier to code for than the current-gen Frankensteinian architecture, with no dumb memory-size bottlenecks and with the HSA secret sauce on top. The optimization will be through the roof. If you think needing a 320 SP 3870 just to match a positively ancient 40 SP Xenos GPU in Skyrim is bad enough, get ready for a new world of hurt.
 
Balla doesn't like facts, so he makes up irrelevant comparisons to put down the PS4, because he hates consoles.

Let's see Witcher 2 running on a Core 2 Duo and X1900XTX 256MB. The interesting part is that by the time the PS3 came out, the 8800GTX had launched, and the 8800GTX was at least 3.12x faster than the X1800XT/7950GT 256MB. The GPU in the PS4 is at minimum at HD7850 2GB level. The Titan is about 2.5x faster than the HD7850 2GB and is expected to be the fastest single GPU for all of 2013.

That means the PS4's GPU is actually better off relative to the fastest single PC GPU than the PS3's was. The PS4 will also allow lower-level access to the hardware for coding, something that was missing for most of the PS3's life for 90% of developers.
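To put that relative-position argument in numbers, here is a minimal sketch built only on the two ratios quoted above; the 3.12x and 2.5x figures are the poster's estimates, not benchmarks of mine.

```python
# Rough "how far behind the fastest single GPU" comparison, using only the
# ratios quoted above (3.12x and 2.5x are the poster's estimates, not measured).

generations = {
    "PS3 era (8800GTX vs 7950GT/X1800XT class)": 3.12,
    "PS4 era (Titan vs HD7850 2GB class)": 2.50,
}

for label, flagship_ratio in generations.items():
    # The console-class GPU delivers roughly 1/ratio of the flagship's performance.
    console_share = 1.0 / flagship_ratio
    print(f"{label}: ~{console_share:.0%} of the flagship")

# Prints ~32% for the PS3 era vs ~40% for the PS4 era, which is the
# "better off relative to the fastest single PC GPU" claim in numbers.
```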

If he still uses a 9800GT to play games on, then he doesn't like PCs either, lol.
 
That means the PS4's GPU is actually better off relative to the fastest single PC GPU than the PS3's was. The PS4 will also allow lower-level access to the hardware for coding, something that was missing for most of the PS3's life for 90% of developers.

100% wrong 🙄

The 360 GPU was faster than the 7900 GTX.

G80 came out a year later.

[attached image: 0031.jpg]


I dunno why you insist on misinformation.

PS4 is way behind the curve, it's in a worse position than the 360 was compared to the rest of the market.

By the time the PS4 comes out, 20nm will be ~half a year away. And they don't even need 20nm; Titan is already well past the PS4's potential.


Why don't you use this time to tell me how draw call overhead is the reason the PS4 will be so much more powerful GPU-wise; I'm dying for another good laugh from you, RS. Or how x86 and DX11.1-type API optimizations are going to change the entire game. :hmm:
 
Who cares how powerful the PC is compared to the PS4? Save for a small handful of developers, almost everyone codes for the console and then gives us half-baked ports. We can run the most powerful hardware around, but the software isn't making use of it. The PS4 is going to give an amazing experience due to an immensely better-thought-out hardware package and developers spending the money to code for it. Even though the PC market continues to defy the naysayers claiming it's dying, we continue to be second-class citizens. (There are even quarters where PC revenue exceeded PS3 revenue for a given company.)

http://www.pcgamer.com/2012/08/01/ea-financial-results-revealed/

We should be treated better than we are when it comes to software and developer focus.
 
I'm kinda with balla here. Look at straight console ports on a 540M. Same GFLOPS (~250), and a 540M can run any straight console port (leaving out watered-down BF3 or Crysis 3) at approximately the same settings as the Xbox 360 (720p, low). Skyrim could run on medium at 768p (higher than what the console version renders at) at ~30-40 fps. Sure, the PC now has more RAM and CPU power, but what you're not considering is that when devs do a console port, they basically don't try to optimize anything because they don't have to; there is so much excess CPU power and RAM/VRAM that they can be very inefficient (BioShock Infinite's VRAM usage is something like 1.8 GB at 1080p) and not cause problems. It's not that they don't optimize for PC so much as they are deliberately wasteful and don't bother with low-end systems, because those users are unlikely to play the game. It's a port, and they don't give a crap about using more RAM because any PC that can play the game is going to have 2+ GB (and they don't want to spend any more money than they have to). So they release crappy ports that make people think the game is more demanding than it is.
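For what it's worth, the "~250 GFLOPS on both sides" comparison follows from the usual peak-throughput formula (ALU count × 2 FLOPs per multiply-add × shader clock). A minimal sketch; the shader counts and clocks below are my own illustrative assumptions, not figures from the post:

```python
# Peak single-precision throughput estimate: ALUs * 2 (multiply-add) * clock in GHz.
# The shader counts and clocks are illustrative assumptions, not from the post.

def peak_gflops(alus: int, shader_clock_ghz: float) -> float:
    return alus * 2 * shader_clock_ghz

print(f"GT 540M:     ~{peak_gflops(96, 1.344):.0f} GFLOPS")   # 96 CUDA cores @ 1.344 GHz -> ~258
print(f"Xenos (360): ~{peak_gflops(240, 0.5):.0f} GFLOPS")    # 240 ALUs @ 0.5 GHz -> ~240
```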

There's also the fact that they don't support games on older hardware (no driver optimizations for older GPUs when a title releases). Look at game performance before Nvidia or AMD release drivers with optimizations; the gain can be a nice 40+%. Would a 7900 or X1900 receive that driver boost? No.

I never really got that secret console sauce thing.
 
100% wrong 🙄
The 360 GPU was faster than the 7900 GTX.
G80 came out a year later. I dunno why you insist on misinformation.

My comment/quote was clearly discussing the PS3. I didn't say anything about the Xbox 360's GPU in relation to the 8800GTX. Also, the specs for the next Xbox haven't been confirmed, so in fairness I am comparing PS3 to PS4; for all we know, the final specs on the 720 are still subject to change.

No way was the GPU in 360 faster than 7900GTX (22.8 VP). That's impossible if you look at the specs:

Xenos GPU: 500MHz, 240 VLIW-5 shaders, 16 TMUs, 8 ROPs, with only 22.4GB/sec memory bandwidth
vs.
2900GT: 600MHz, 240 VLIW-5 shaders, 16 TMUs, 16 ROPs, with 51.2GB/sec memory bandwidth

The penalty of halving the memory bandwidth on the 2900GT and dropping the ROPs from 16 to 8 would result in at least a 30% reduction in its performance, and a further 16.7% reduction in GPU clock speed would mean it ends up at least 40% slower than a real 2900GT. The 2900GT has a VP rating of 26, and 26 × (1 - 40%) = 15.6, which is roughly around X1800XT level at best. The 7900GTX trashes this.
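That estimate is just successive derating; here's a minimal sketch reproducing it (the 30% and 16.7% penalties are the poster's assumptions, not measured numbers):

```python
# Start from the 2900GT's quoted VP rating and apply the assumed penalties for
# halved memory bandwidth / ROPs and for the 600 MHz -> 500 MHz clock drop.

vp_2900gt = 26.0
bandwidth_rop_penalty = 0.30   # assumed hit from 51.2 -> 22.4 GB/s and 16 -> 8 ROPs
clock_penalty = 1 - 500 / 600  # ~16.7% lower core clock

estimated_vp = vp_2900gt * (1 - bandwidth_rop_penalty) * (1 - clock_penalty)
print(f"Estimated Xenos-like VP rating: {estimated_vp:.1f}")
# ~15.2, in line with the "at least 40% slower" / ~15.6 figure above,
# and well short of the 7900GTX's 22.8.
```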

Why do you keep linking benchmarks with a 9800GT when the GPU inside the Xbox 360 is miles worse than the anemic 2900GT? Why don't you post screenshots of BioShock Infinite running on a heavily downclocked 2900GT limited to 22.4GB/sec of memory bandwidth and see what you get?

You brought up the 10MB of eDRAM; let's talk about that:

"On this platform I’d be concerned with memory bandwidth. Only DDR3 for system/GPU memory pared with 32MB of “ESRAM” sounds troubling. 32MB of ESRAM is only really enough to do forward shading with MSAA using only 32-bits/pixel color with 2xMSAA at 1080p or 4xMSAA at 720p. Anything else to ESRAM would require tiling and resolves like on the Xbox360 (which would likely be a DMA copy on 720) or attempting to use the slow DDR3 as a render target.

I’d bet most titles attempting deferred shading will be stuck at 720p with only poor post process AA (like FXAA). If this GPU is pre-GCN with a serious performance gap to PS4, then this next Xbox will act like a boat anchor, dragging down the min-spec target for cross-platform next-generation games.”
~ Source
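The 32 MB claim in that quote is straight render-target arithmetic: the color target plus a depth target, times the MSAA sample count, has to fit in ESRAM. A minimal sketch of that budget; the 32-bit depth buffer alongside the 32-bit color target is my assumption for illustration:

```python
# Render-target footprint = width * height * MSAA samples * (color + depth) bytes per pixel.
# Assumes 32-bit color and a 32-bit depth/stencil buffer per sample (illustrative).

def rt_megabytes(width: int, height: int, msaa: int,
                 color_bytes: int = 4, depth_bytes: int = 4) -> float:
    return width * height * msaa * (color_bytes + depth_bytes) / (1024 ** 2)

print(f"1080p, 2xMSAA: {rt_megabytes(1920, 1080, 2):.1f} MB")  # ~31.6 MB -> just fits in 32 MB
print(f" 720p, 4xMSAA: {rt_megabytes(1280, 720, 4):.1f} MB")   # ~28.1 MB -> fits
print(f"1080p, 4xMSAA: {rt_megabytes(1920, 1080, 4):.1f} MB")  # ~63.3 MB -> needs tiling or DDR3
```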

The GPU inside the Xbox 360 was still weak once you go down to the details. We know for a fact that dropping a GPU's memory bandwidth in half drastically tanks its performance. AMD estimated that the GPU inside the Xbox 360, while unified shader, was comparable to the X1800XT, which is why I referenced that figure. But again, this was a special case of comparing unified shaders against fixed pixel-pipeline tech. In reality, the GPU inside the PS4 keeps the entire memory bandwidth of the HD7850/7870 (actually bests those). If the approach used on the Xbox 360/PS3 had been applied to the PS4's design, we'd end up with an HD7850 with half its memory bandwidth and half its ROPs chopped off! (Note: the RSX in the PS3 also has half the ROPs and half the memory bandwidth of the 7950GT...look it up.)

The graphics that the PS3/360 consoles put out relative to their crippled GPUs are very good. There is no way you could get games that look like the PS3's on a 7950GT with just 8 ROPs and 22.4GB/sec of memory bandwidth.

PS4 is way behind the curve, it's in a worse position than the 360 was compared to the rest of the market.

That was also a special time in GPU history, which saw the transition from fixed pixel-pipeline GPU architectures to unified shaders. Because of this alone, the GPU inside the Xbox 360 was way ahead of its time, although it was still crippled on the ROP/memory bandwidth side. Of course, back then high-end GPUs didn't use 230-250W of power like the GTX480/580/Titan/HD7970GHz do. Comparing the PS4 against modern GPUs like the GTX680/Titan/7970GHz as if it were the same situation the Xbox 360 was in is misleading.

By the time the PS4 comes out, 20nm will be ~half a year away. And they don't even need 20nm; Titan is already well past the PS4's potential.

Oh wow, a $1,000 GPU is better than a $500 console? What's next, a $2,500 PC is better than a $500 console? You can't play games on a Titan alone; you need other components like a case, PSU, CPU, memory, mobo, etc. It's funny how in your world the PS4 is compared to $500-1,000 GPUs, but the total cost of the PC that houses such GPUs is ignored, as is the cost of upgrading those GPUs over the next 6-7 years. :whistle:

Balla, you seem to constantly ignore the fundamentals and constraints and just state everything as if there are no engineering limitations. Even if the PS4 had dual Titans, you'd still find a way to crap all over it. Sounds like you just hate consoles. Despite the Titan's amazing specs, where are the PC games that look like the UE3.5 Samaritan demo or the UE4 Infiltrator demo? They are nowhere to be found. You end up spending $1,000 to play high-res console ports, and by the time next-gen games like Witcher 3 start arriving, we'll have a $500 GPU with the performance of a Titan from the Maxwell/Volcanic Islands families. You said yourself that 20nm GPUs would arrive in 2014, which makes buying high-end GPUs today to future-proof for the PS4's games over the next 6-7 years a total waste of money.

Why don't you use this time to tell me how draw call overhead is the reason the PS4 will be so much more powerful GPU-wise; I'm dying for another good laugh from you, RS. Or how x86 and DX11.1-type API optimizations are going to change the entire game. :hmm:

Why don't you tell us about PC games that look 10x better than God of War 3, Uncharted 3, and The Last of Us, or a racing game that looks 10x better than Forza Motorsport 4, since the Titan has 20x+ the power of the Xbox 360/PS3 GPUs? The HD7950 is less than 50% more powerful than the GPU inside the PS4. By the time all the optimizations and benefits of coding directly to the metal are taken into account, the PS4 is going to be good enough to play next-gen games for the next 6 years, while the HD7950 will be too slow, just like the 7950GT/X1800XT/X1950XTX are all too slow for today's PC games.

I guess all the professional game developers / hardware makers who discuss API overheads on the PC, and Windows OS / DX inefficiencies are talking out of their ***.
 
However, your link does not address new techniques to reduce draw-call overhead on PCs, nor was AMD too keen on DX11 multithreading at that time (Nvidia had it for a while, which was why the 580 did so well against the 6970 in titles like Civ V that had multithreaded DX11 enabled).


This all ignores some major facts that seem to go unaddressed, such as that you can submit draw calls in batches ( http://blogs.amd.com/play/2011/12/12/bf3techinterview/ ) and can increase the PC's draw-call capability several-fold using DX11's multithreading.


Looks like you missed something. Same article, a bit further down. It talks about DX11 improvements, but it's still a problem: 5,000 draw calls is the maximum for PC games (on 2011 hardware), while console games use 10,000-20,000 (and with the next-gen consoles, the console GPUs will be much stronger and better able to take advantage of this):
Now the PC software architecture – DirectX – has been kind of bent into shape to try to accommodate more and more of the batch calls in a sneaky kind of way. There are the multi-threaded display lists, which come up in DirectX 11 – that helps, but unsurprisingly it only gives you a factor of two at the very best, from what we've seen. And we also support instancing, which means that if you're going to draw a crate, you can actually draw ten crates just as fast as far as DirectX is concerned.

But it's still very hard to throw tremendous variety into a PC game. If you want each of your draw calls to be a bit different, then you can't get over about 2-3,000 draw calls typically - and certainly a maximum amount of 5,000. Games developers definitely have a need for that. Console games often use 10-20,000 draw calls per frame, and that's an easier way to let the artist's vision shine through.
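The instancing point in that quote (ten crates as cheap as one, as far as the API is concerned) comes down to draw-call counts. A minimal sketch of the idea; the scene contents are invented purely for illustration:

```python
# Naive submission issues one draw call per object; instancing groups identical
# meshes so each unique mesh costs a single (instanced) draw call.
# The object counts below are made up for illustration.

scene = {"crate": 500, "barrel": 200, "tree": 1500, "rock": 800}

naive_calls = sum(scene.values())   # one draw call per object
instanced_calls = len(scene)        # one instanced draw call per unique mesh

print(f"Naive:     {naive_calls} draw calls per frame")      # 3000
print(f"Instanced: {instanced_calls} draw calls per frame")  # 4

# This is why instancing helps with repeated geometry but does nothing for the
# "every draw call is a bit different" case the article describes.
```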

DX11 multithreaded rendering apparently also isn't a fit for all use cases. Battlefield 3 is confirmed not to be using it, and Far Cry 3 had it patched out at release.

Nvidia also doesn't have a clear advantage in Civ5 anymore, despite AMD's lack of support for full DX11 multithreaded rendering.

Oh, here you can also read what the guy who created FXAA has to say about the PS4:
http://www.neogaf.com/forum/showthread.php?t=510076


When it comes to trusting people, I'd rather go with the ones working in the industry than with the self-proclaimed experts roaming the forums.
 
All hail the mythical console!

http://www.youtube.com/watch?v=mFgVR2Ke0XY

You're good, OP. Some weird things are happening right now because AMD is in the box, but trust me, you're good.

Aren't you the one who is hating on consoles just because AMD is in BOTH boxes? And don't give me the "oh, but I have a 7950" excuse.

Look at the PS3 and Xbox 360: they can actually run Crysis 3... A system with barely any RAM and an X1800XT/7900GTX-equivalent GPU, running Crysis 3.

Let that sink in for a moment

Now take a 7850, add 8 GB of RAM to the system and an x86 CPU... can you imagine the potential? The PS4 will RAPE current PCs, Titan included, when you take optimization into account.

And this is coming from someone who has never owned a console and has no plans to.
But you gotta face the facts: different worlds here.
 

I think that balla's point is that PC games can often do quite well with limited bandwidth. Compare a console port (say Dishonored) with the Xbox version. With a 630M (~300 GFLOPS) you can play at 768p on high at ~50 fps. Yes, you have more VRAM and slightly more bandwidth (28 GB/s, which really isn't needed; overclocking the memory is not going to affect performance because the GPU really only requires ~20. I had a 540M and only core clocks mattered). That is significantly better than the console version (yes, I am ignoring VRAM, RAM, and CPU). Either way, looking strictly at GPU performance, there is no (or little) console magic sauce.

You are also forgetting driver tweaks. How well did Tomb Raider run on Nvidia cards before Nvidia released a driver for it? There is a substantial improvement with newer drivers. Do older cards get these tweaks? No. A lot of the time it's not that the hardware can't play it, but rather that Nvidia and AMD don't bother tweaking really old cards.
 
I don't think a 7950 will be able to touch a PS4. Maybe at the PS4's launch, because it will be running games meant for current-gen consoles, but later on there will be no comparison. Just like a card from when the PS3 launched cannot run more recent games at the same quality settings as a PS3 (30fps @ 1080p for most games).

The way it typically goes is that when a console first launches, it surpasses current PCs. Then, as things go on, PCs pass the consoles up. We are currently in the "PCs surpass consoles" time frame, but in a year we won't be.
 
In this case the standard platform is x86 and DX11.1, or did I miss something?

The point is that driver optimizations stop long before console optimizations do for the same hardware. In 2 or 3 years, AMD will spend little to no effort improving the 7950's performance in new games, while the 7850/7870-class GPU in the PS4 will keep getting more and more optimized. Eventually the 7950 won't be able to run multiplatform games at the same detail levels as the PS4 hardware.
 
A 7950 already struggles in modern games at 1080p ultra without MSAA; it struggles to maintain 60 fps and often fails. This situation will get worse by the end of the year and a lot worse over the next 12 months. Within 12 months you won't even be thinking of playing games on ultra, but more like high or very high.
 
I don't think a 7950 will be able to touch a PS4. Maybe at the PS4's launch, because it will be running games meant for current-gen consoles, but later on there will be no comparison. Just like a card from when the PS3 launched cannot run more recent games at the same quality settings as a PS3 (30fps @ 1080p for most games).

The way it typically goes is that when a console first launches, it surpasses current PCs. Then, as things go on, PCs pass the consoles up. We are currently in the "PCs surpass consoles" time frame, but in a year we won't be.

Epic confirmed that UE4 was running in real time on a GTX680, which is not that much faster than a 7950 overall. UE4 was shown running in real time on a PS4, but the demo was much less intensive than the one shown off on PC.

Ubisoft has shown Watch Dogs running on PC only; they haven't shown what it looks like on console... wonder why (they did not disclose the specs of the PC, though, so it was likely SLI or similar).

I highly doubt you're anywhere close. A 7950 is already ahead of the PS4's GPU on specs. PC versions of games often come with higher-resolution textures as well.

The biggest issue for consoles is weak CPU power. This makes CPU-heavy games with lots of stuff going on struggle. The GPU might be fine for 1080p, but the CPU will be the bottleneck most of the time. Multithreaded engines could help for sure, but we will have to wait and see.


A 7950 already struggles in modern games at 1080p ultra without MSAA; it struggles to maintain 60 fps and often fails. This situation will get worse by the end of the year and a lot worse over the next 12 months. Within 12 months you won't even be thinking of playing games on ultra, but more like high or very high.

Console games always lock to 60fps, do they? News to me. I don't expect every game to run at a locked 60fps at 1080p on a PS4. The UE4 demo and Deep Down are likely best-case scenarios, and when you throw complex AI and physics calculations in there, you might start to see image quality go down as either the CPU or the GPU becomes constrained.

We will have to see but I really am not expecting a miracle here. I'm expecting positive progress over the current consoles that's for sure.
 
A 7950 already struggles in modern games at 1080p ultra without MSAA; it struggles to maintain 60 fps and often fails. This situation will get worse by the end of the year and a lot worse over the next 12 months. Within 12 months you won't even be thinking of playing games on ultra, but more like high or very high.

Struggles.... Come on man. No it doesn't. What modern games do you speak of?
 
Struggles.... Come on man. No it doesn't. What modern games do you speak of?

Crysis 3 and Tomb Raider, but... look at the complexity of what's going on in those games. No console game even comes close. Besides, I have found that many times you don't need 60fps for a quality game experience. Pretty much every single GPU available except the GTX Titan (and even that in some cases) struggles in these games. So I do think it's a little unfair to say, especially when there are no console games anywhere close to that level of detail to compare to.
 