Nvidia: Not Enough Money in a PS4 GPU for us to bother


Grooveriding

Diamond Member
Dec 25, 2008
9,147
1,330
126
Exactly. John Carmack clearly states the PC has layers of inefficiency. A game console is a purpose-designed gaming device with close-to-the-metal programming access and a lightweight, highly optimized OS.

http://www.pcper.com/reviews/Editor...-Graphics-Ray-Tracing-Voxels-and-more/Intervi

" I don't worry about the GPU hardware at all. I worry about the drivers a lot because there is a huge difference between what the hardware can do and what we can actually get out of it if we have to control it at a fine grain level. That's really been driven home by this past project by working at a very low level of the hardware on consoles and comparing that to these PCs that are true orders of magnitude more powerful than the PS3 or something, but struggle in many cases to keep up the same minimum latency. They have tons of bandwidth, they can render at many more multi-samples, multiple megapixels per screen, but to be able to go through the cycle and get feedback... “fence here, update this here, and draw them there...” it struggles to get that done in 16ms, and that is frustrating."

The PS4 GPU cannot be compared to an HD 7850. It's much more powerful and capable in a console. A PS4 GPU running a console OS could be as powerful as a desktop HD 7970 GPU running Windows 7 with DX11 drivers. Here is an interesting quote from John Carmack:

https://twitter.com/ID_AA_Carmack/status/50277106856370176

I agree on this. The GPU in the PS4, despite being on par with a 7850, will probably be able to deliver 7970-level performance, maybe better.

Console gamers are going to wet their nappies when they get a look at Battlefield 4 on the PS4. It's going to sell consoles like crazy over the Christmas season. When a new console arrives, it gets marketed on its new and more powerful technology, generally in the context of visuals. Compared to the crap pile console gamers have been using for the past 10 years, what they'll get on the PS4 is going to blow them away.

I'm still curious to see if they go for native 1080p or do 720p upscaled for better framerates. With the given specs of the PS4, they can easily do native 1080p @ 30fps with incredible visuals.
 

NTMBK

Lifer
Nov 14, 2011
10,519
6,028
136
To give you all a quick idea of the benefits that can come from a dedicated gaming machine having the CPU and GPU on one die, with stupidly fast interconnects and close to no latency: when the 360's CPU and GPU were shrunk onto the same die, they needed to add an extra "FSB replacement" block to simulate the latencies that were present in the original hardware setup, so that the extra performance wouldn't break compatibility with the original model.
 

Keysplayr

Elite Member
Jan 16, 2003
21,219
56
91
You know what, this argument was similar with AMD and its on-die memory controller and Intel with its Northbridge design. Only in a few situations was there any real difference in performance between them. Everybody claimed there was too much communication latency in the Intel design and AMD had virtually none. And yet, we all survived.
 

wand3r3r

Diamond Member
May 16, 2008
3,180
0
0
You know what, this argument was similar with AMD and its on-die memory controller and Intel with its Northbridge design. Only in a few situations was there any real difference in performance between them. Everybody claimed there was too much communication latency in the Intel design and AMD had virtually none. And yet, we all survived.

Yet Intel proceeded to bring the memory controller on-die? (no supposed benefit :D) /ot
 

NTMBK

Lifer
Nov 14, 2011
10,519
6,028
136
You know what, this argument was similar with AMD and its on-die memory controller and Intel with its Northbridge design. Only in a few situations was there any real difference in performance between them. Everybody claimed there was too much communication latency in the Intel design and AMD had virtually none. And yet, we all survived.

a) Intel then added an on-die memory controller, so they clearly thought it was a good idea too.

b) PCIe latencies, combined with APIs like DirectX creating buffering latencies, are MASSIVE. Accesses to main memory don't even compare.

c) The real benefits are in letting developers use totally new algorithms which wouldn't have been possible before. In the past, the only realistic method was to have the CPU parcel up a big chunk of work and send it off to the GPU, never to be seen again (until it's displayed on screen), because the communication overhead is just too high. But now that they have a fixed platform which kills those costs, they can fine-grain their tasks, with back-and-forth communication between the two.
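To make the contrast in c) concrete, here's a rough CUDA sketch (my own toy example, with a made-up kernel name) of the two styles: one big fire-and-forget dispatch per frame, versus a loop where the CPU inspects a result and decides the next step, paying the full round-trip cost every iteration:

```cpp
// Rough CUDA sketch of coarse-grained vs fine-grained CPU/GPU work
// (my own illustration; the kernel is a stand-in for real work).
#include <cstdio>
#include <cuda_runtime.h>

__global__ void simulateChunk(float* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] += 1.0f;                    // stand-in for real work
}

int main() {
    const int n = 1 << 20;
    float* d_data = nullptr;
    cudaMalloc(&d_data, n * sizeof(float));
    cudaMemset(d_data, 0, n * sizeof(float));

    // Style 1: the discrete-GPU way. Parcel up one big chunk, fire it off,
    // and don't look at it again until the frame is done.
    simulateChunk<<<(n + 255) / 256, 256>>>(d_data, n);
    cudaDeviceSynchronize();                       // one round trip per frame

    // Style 2: fine-grained back-and-forth. The CPU inspects a result and
    // decides what to run next, step by step. Cheap on shared-memory hardware,
    // painful across PCIe because every step pays the full round-trip cost.
    float sample = 0.0f;
    for (int step = 0; step < 64; ++step) {
        simulateChunk<<<(n + 255) / 256, 256>>>(d_data, n);
        cudaMemcpy(&sample, d_data, sizeof(float), cudaMemcpyDeviceToHost);
        if (sample > 32.0f) break;                 // CPU-side decision between steps
    }
    printf("sample after loop: %f\n", sample);
    cudaFree(d_data);
    return 0;
}
```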
 
Last edited:

BladeVenom

Lifer
Jun 2, 2005
13,365
16
0
Some people will believe anything. Just like the PS3 was going to be a supercomputer with movie like graphics.
 
Feb 19, 2009
10,457
10
76
You're taking it from an elitist perspective. I don't need superb graphics to enjoy a game. I have yet to find a PC game that gives me goosebumps like the old PS1 and PS2 games did back then. The first MGS, Soul Reaver, good old Final Fantasy VII, Vagrant Story, Silent Hill, Driver... the list is endless.

I'd gladly go back to PS1/2 graphics to have another golden era again.

I enjoyed a lot of JP console RPGs as well, but PC had: Baldur's Gate, Fallout 1/2, Planescape: Torment and many others. Not to mention the plethora of RTS and turn-based strategy games. You don't need great graphics, but once you get accustomed to great graphics over the years, it's very, very difficult to enjoy a newly released game with crap graphics.
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
To give you all a quick idea of the benefits that can come from a dedicated gaming machine having the CPU and GPU on one die, with stupidly fast interconnects and close to no latency: when the 360's CPU and GPU were shrunk onto the same die, they needed to add an extra "FSB replacement" block to simulate the latencies that were present in the original hardware setup, so that the extra performance wouldn't break compatibility with the original model.

How many die shrinks were there between the original Xbox 360 and when it merged the CPU and GPU onto the same die? I realize there are benefits to be had by sharing transistors, but it's also not a fair argument given the technological advantage of die shrinks and design improvements. Saying an A10 APU is superior to a 9800 GTX plus an equivalent CPU is kinda :rolleyes: when we're talking about several years' difference.
 

BallaTheFeared

Diamond Member
Nov 15, 2010
8,115
0
71
I can't say for sure how fast a mobile AMD 8-core chip will be, but given that Bulldozer and Piledriver both have some of the sorriest bandwidth I've ever seen... Maybe it will be great, or maybe it will be a case of more bandwidth than the cores can use.

It's really only interesting as far as the CPU goes; the GPU is already ~30% or more slower than Titan (best case, a few years down the road when they go fully direct-to-hardware), a level of performance which should be offered by GM104 on 20nm when it arrives. However, the more important aspect is the CPU, and how much performance they can get out of it, IMO.
 
Last edited:

NTMBK

Lifer
Nov 14, 2011
10,519
6,028
136
How many die shrinks were there between the original Xbox 360 and when it merged the CPU and GPU onto the same die? I realize there are benefits to be had by sharing transistors, but it's also not a fair argument given the technological advantage of die shrinks and design improvements. Saying an A10 APU is superior to a 9800 GTX plus an equivalent CPU is kinda :rolleyes: when we're talking about several years' difference.

This was the second die shrink, if my memory is correct- I think it was 90->65->45nm.

Don't get too hung up on the A10. In currently available APUs, there are a few factors that limit their potential-

a) Sheer quantity of graphics hardware. Top end Trinity has a fifth of the number of shaders of the HD6970, the discrete GPU it has most in common with (VLIW4). However, if you attempt to increase this number dramatically then you run into the next problem:

b) Memory bandwidth. Dual channel DDR3 just doesn't offer that much, as we know from years of the GPU market. Even something low end like a HD6670 is bottlenecked by DDR3 memory, and that doesn't have to share the bandwidth with CPU cores.

c) Split memory spaces. Llano and Trinity offer some features for memory snooping between CPU and GPU (the Onion and Garlic buses), but these are still limited. Without the two working directly in the same memory space, you lose a lot of the inherent efficiencies of an APU design: you need to copy resources between the two, waste chunks of memory by keeping duplicates of resources, and run into utilisation problems (the CPU or GPU uses up all of its partition while there is spare space going to waste on the other side).

d) Software design limitations. Pretty much every single current PC game is written to run well on discrete graphics cards. Even once APUs start exposing a common memory space, they still won't be used efficiently by current games. These are built on APIs which presume an object worked on by the CPU cannot be worked on by the GPU in the same spot, and vice versa. Large quantities of memory and performance are going to go to waste.

The PS4 fixes all four of these issues. Totally new frameworks built from the ground up for this platform, with buckets of graphics power and the bandwidth to feed it. Not to mention that rumours are pointing towards the next Xbox having essentially the same design.
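To make point c) above concrete, here's a rough CUDA sketch (my own toy example; managed memory is just the closest PC analogue to a truly shared space) of what split memory spaces cost versus working on one shared allocation:

```cpp
// Rough CUDA sketch of point (c), my own illustration. With split memory
// spaces you keep two copies of a resource and pay an explicit transfer each
// time ownership changes hands; with a shared space (managed memory is the
// closest PC analogue) both processors just touch the same allocation.
#include <cstring>
#include <cuda_runtime.h>

__global__ void gpuPass(float* verts, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) verts[i] *= 2.0f;                   // GPU's half of the work
}

int main() {
    const int n = 1 << 20;

    // Split spaces: duplicate buffers plus copies in both directions.
    float* h_verts = new float[n];                 // CPU copy of the resource
    float* d_verts = nullptr;                      // GPU duplicate of the same data
    cudaMalloc(&d_verts, n * sizeof(float));
    memset(h_verts, 0, n * sizeof(float));
    cudaMemcpy(d_verts, h_verts, n * sizeof(float), cudaMemcpyHostToDevice);
    gpuPass<<<(n + 255) / 256, 256>>>(d_verts, n);
    cudaMemcpy(h_verts, d_verts, n * sizeof(float), cudaMemcpyDeviceToHost);

    // Shared space: one allocation, no duplicates, no explicit copies.
    float* verts = nullptr;
    cudaMallocManaged(&verts, n * sizeof(float));
    for (int i = 0; i < n; ++i) verts[i] = 1.0f;   // CPU writes it...
    gpuPass<<<(n + 255) / 256, 256>>>(verts, n);   // ...GPU works on it in place
    cudaDeviceSynchronize();
    float check = verts[0];                        // ...CPU reads the result directly

    cudaFree(d_verts);
    cudaFree(verts);
    delete[] h_verts;
    return check == 2.0f ? 0 : 1;
}
```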

It's really only interesting as far as the CPU goes; the GPU is already ~30% or more slower than Titan (best case, a few years down the road when they go fully direct-to-hardware), a level of performance which should be offered by GM104 on 20nm when it arrives. However, the more important aspect is the CPU, and how much performance they can get out of it, IMO.

In terms of raw horsepower, yeah, it's considerably weaker. But this design means that it can support whole new features that a standard PC design just can't do. Look at GPGPU physics: PhysX can produce some seriously nice-looking graphical effects (especially with the GPGPU power of GK110), but it can't affect the underlying simulation of the game world. But imagine if all of those thousands of simulated particles actually impacted the world, knocking players out of place, deflecting bullets, washing bridges away in a torrent of GPGPU fluid, whatever. That's what you could realistically achieve by killing the communication cost between CPU and GPU.
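A toy version of what I mean, as a CUDA sketch (my own illustration, nothing like a real physics engine): the GPU integrates the particles, and the CPU reads the results back every frame so they can actually shove the gameplay state around. That feedback step is exactly what only becomes affordable once the communication cost between CPU and GPU is near zero.

```cpp
// Toy CUDA sketch of GPU particles feeding back into gameplay (my own
// illustration, not any engine's code). The GPU integrates the particles; the
// CPU reads the results back and lets them push a "player" around. This
// feedback step is what gets prohibitively expensive across PCIe.
#include <cstdio>
#include <cuda_runtime.h>

struct Particle { float x, vx; };

__global__ void integrate(Particle* p, int n, float dt) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) p[i].x += p[i].vx * dt;             // simple explicit Euler step
}

int main() {
    const int n = 4096;
    Particle* particles = nullptr;
    cudaMallocManaged(&particles, n * sizeof(Particle));
    for (int i = 0; i < n; ++i) particles[i] = { 0.0f, 1.0f + 0.001f * i };

    float playerX = 5.0f;                          // gameplay state on the CPU
    for (int frame = 0; frame < 60; ++frame) {
        integrate<<<(n + 255) / 256, 256>>>(particles, n, 1.0f / 60.0f);
        cudaDeviceSynchronize();                   // make GPU results visible to CPU

        // Gameplay feedback: particles that reach the player knock them back.
        for (int i = 0; i < n; ++i)
            if (particles[i].x >= playerX) playerX += 0.01f;
    }
    printf("player pushed to x = %.2f\n", playerX);
    cudaFree(particles);
    return 0;
}
```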
 

Keysplayr

Elite Member
Jan 16, 2003
21,219
56
91
a) Intel then added an on-die memory controller, so they clearly thought it was a good idea too.

b) PCIe latencies, combined with APIs like DirectX creating buffering latencies, are MASSIVE. Accesses to main memory don't even compare.

c) The real benefits are in letting developers use totally new algorithms which wouldn't have been possible before. In the past, the only realistic method was to have the CPU parcel up a big chunk of work and send it off to the GPU, never to be seen again (until it's displayed on screen), because the communication overhead is just too high. But now that they have a fixed platform which kills those costs, they can fine-grain their tasks, with back-and-forth communication between the two.

It is a more elegant design and could also cut costs to produce, but the benefits were minimal in most applications including gaming.

b) You exaggerate. Even if you aren't, how does that affect us? The end users? Will we know the difference? Will I be playing RE7 and saying to myself, "Wow, I'm so glad this console has the CPU and GPU in a single die, if it hadn't I might not be able to play this game the exact same way." ?? No way, man. Sell that somewhere else.

As for developers, that may prove beneficial, but I'm an end user. What do I care about developers and how hard they work? They are getting my money one way or the other.
 

blackened23

Diamond Member
Jul 26, 2011
8,548
2
0
Agreed. Tim Lottes has NO IDEA what he's talking about, he's clearly an idiot when he praises the PS4. The engineers and developers in this thread have far, far more knowledge than any of these guys, I certainly put no faith in the words of Timothy Lottes. Again, he's an idiot.

John Carmack as well. Who is that guy? So when these guys mention the fact that DX11 has significant latency overhead, I put more faith in the words of the excellent engineers in this very thread. Their engineering and development skills are proven, while John Carmack is just a newcomer with no experience. Really, screw that Carmack guy.

“If PS4 has a real-time OS, with a libGCM style low level access to the GPU, then the PS4 1st party games will be years ahead of the PC simply because it opens up what is possible on the GPU. Note this won’t happen right away on launch, but once developers tool up for the platform, this will be the case.

As a PC guy who knows hardware to the metal, I spend most of my days in frustration knowing damn well what I could do with the hardware, but what I cannot do because Microsoft and IHVs wont provide low-level GPU access in PC APIs. One simple example, drawcalls on PC have easily 10x to 100x the overhead of a console with a libGCM style API.”

Not sure what Lottes is smoking when he makes this statement.
 
Last edited:

Keysplayr

Elite Member
Jan 16, 2003
21,219
56
91
What I can't stand is that they make consoles with only meager processing power by "today's" standards. A GTX 660 or 7870 (and that's even if they go this high) being put into a console now is already lower mid-range in today's lineup, and soon to be bumped down even further if both AMD and Nvidia release new lineups in time for the holiday season. Don't get me wrong, they are great cards and I have a GTX 660 and love it, but there are literally 5 grades above it on Nvidia's side and 4 on AMD's side. And if they plan on running all games at 1080p or better with AA and PhysX and TressFX and whatever, they need to make them a bit more powerful.
They make them this way for longevity, I suppose, but it still should be stronger. IMHO.
 

Keysplayr

Elite Member
Jan 16, 2003
21,219
56
91
Agreed. Tim Lottes has NO IDEA what he's talking about, he's clearly an idiot when he praises the PS4. The engineers and developers in this thread have far, far more knowledge than any of these guys, I certainly put no faith in the words of Timothy Lottes. Again, he's an idiot.

John Carmack as well. Who is that guy? So when these guys mention the fact that DX11 has significant latency overhead, I put more faith in the words of the excellent engineers in this very thread. Their engineering and development skills are proven, while John Carmack is just a newcomer with no experience. Really, screw that Carmack guy.



Not sure what Lottes is smoking when he makes this statement.

"Will I be playing RE7 and saying to myself, "Wow, I'm so glad this console has the CPU and GPU in a single die, if it hadn't I might not be able to play this game the exact same way."

Address this ^ .

You don't have to be an engineer to have common sense, or at least to have experienced something similar for yourself. It was all hype at the time: "AMD has an on-die memory controller and it's so superior to Intel's FSB." No, it wasn't. It was simply more elegant, and Intel proved this when C2D came out still using an FSB. There are some instances where the integrated memory controller did show superiority, but not in gaming.
This argument, while not exactly the same, does have similarities to that old one.
 
Last edited:

notty22

Diamond Member
Jan 1, 2010
3,375
0
0
Agreed. Tim Lottes has NO IDEA what he's talking about, he's clearly an idiot when he praises the PS4. The engineers and developers in this thread have far, far more knowledge than any of these guys, I certainly put no faith in the words of Timothy Lottes. Again, he's an idiot.

John Carmack as well. Who is that guy? So when these guys mention the fact that DX11 has significant latency overhead, I put more faith in the words of the excellent engineers in this very thread. Their engineering and development skills are proven, while John Carmack is just a newcomer with no experience. Really, screw that Carmack guy.



Not sure what Lottes is smoking when he makes this statement.

What line of thought are you trying to convey? An engineer discussing the technical aspects of console hardware (the PS4, the next Xbox, or past units, whatever): how does that have anything to do with the financials of who designs the hardware? Who got the contracts? Most bizarre strawman argument ever.
Are you moving to a console now because of the praise you are citing? Should Nvidia and AMD stop R&D on PC gaming? What are you smoking?
 

The Alias

Senior member
Aug 22, 2012
646
58
91
"Will I be playing RE7 and saying to myself, "Wow, I'm so glad this console has the CPU and GPU in a single die, if it hadn't I might not be able to play this game the exact same way."

Address this ^ .
.

Yes you would be, because the console, with its extremely low latencies and without the PC's bloated DX11 API, will give the GPU and CPU a lot more time for computations, reclaiming a lot of the time they would otherwise spend jumping through hoops.
 

VulgarDisplay

Diamond Member
Apr 3, 2009
6,188
2
76
Yes you would be, because the console, with its extremely low latencies and without the PC's bloated DX11 API, will give the GPU and CPU a lot more time for computations, reclaiming a lot of the time they would otherwise spend jumping through hoops.

The experiences will be pretty similar, but the PC will be brute forcing its way to comparable performance.
 

Keysplayr

Elite Member
Jan 16, 2003
21,219
56
91
Yes you would be, because the console, with its extremely low latencies and without the PC's bloated DX11 API, will give the GPU and CPU a lot more time for computations, reclaiming a lot of the time they would otherwise spend jumping through hoops.


:: buzzer sound ::

Correct answer is, "No you wouldn't tell the difference".

Incredible exaggeration on your part. The actual difference is minute.
What do you mean, "without the PC's bloated DX11 API"?????
Wouldn't a console, with or without the CPU/GPU on the same die, utilize a DX11 API?
 

The Alias

Senior member
Aug 22, 2012
646
58
91
The experiences will be pretty similar, but the PC will be brute forcing its way to comparable performance.

With all the optimization that the PS4 will allow, the difference between the PC's 7850 and the PS4's version would be huge, even more so than it was between the 360's GPU and its PC counterpart.
 

Keysplayr

Elite Member
Jan 16, 2003
21,219
56
91
You don't understand. I'm not speaking about any comparison to any PC.
Or maybe I'm not understanding the point you're trying to make.
 

VulgarDisplay

Diamond Member
Apr 3, 2009
6,188
2
76
With all the optimization that the PS4 will allow, the difference between the PC's 7850 and the PS4's version would be huge, even more so than it was between the 360's GPU and its PC counterpart.

That's pretty much what I'm saying. A PC will need much more powerful hardware to achieve the same performance.
 

The Alias

Senior member
Aug 22, 2012
646
58
91
You don't understand. I'm not speaking about any comparison to any PC.
Or maybe I'm not understanding the point you're trying to make.
My point is that a 7850 on-die, without PCIe latency, memory-copy latency, and the rest, would make this thing MUCH more capable than it would be with them.
 

BladeVenom

Lifer
Jun 2, 2005
13,365
16
0
With all the optimization that the PS4 will allow, the difference between the PC's 7850 and the PS4's version would be huge, even more so than it was between the 360's GPU and its PC counterpart.

Maybe a few years from now in PS4 exclusive games. Of course by that time a 7850 will be low end.
 
Last edited: