[ArsTechnica] Next-gen consoles and impact on VGA market


Lonbjerg

Watching retarded monkeys throw fecal matter at each other is more insightful into the workings of game code than participating in the debates about game code structures in this forum. Also, you are talking about x86-based development; you don't have a lot of choice but to use scripted physics in such a development environment (outside of PhysX, and perhaps Bullet if that ever gets anywhere).

That was an awkward way of saying you are wrong :whistle:
 

BenSkywalker

That was an awkward way of saying you are wrong

Really? So when calculating intersections on effects with layered particles, are you using the view frustum to determine intersections, or are you calculating your particles in physical space and using vertex-level intersection calculations?

Trivial question.
 

Lonbjerg

Really? So when calculating intersections on effects with layered particles, are you using the view frustum to determine intersections, or are you calculating your particles in physical space and using vertex-level intersection calculations?

Trivial question.

No...you are trying to throw up a red herring.
You were/are wrong...how long will you go in order to try and cover that up behind smoke and mirrors? ;)
 

BenSkywalker

No...you are trying to throw up a red herring.

No, it is a question directly related to this discussion on several different levels.

One of the ways I mentioned is the best way to do it on PCs right now, the other the best way to do it on the PS3 (the 360 would probably be a wash, I'd have to test it). If you have the slightest clue about the discussion involved, you should know which is which.

One is going to be more reliant on shader and fill, the other is going to be more reliant on FP performance. One is going to be more computationally intensive with increased resolution, the other is going to be more computationally intensive with increased granularity.
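To make the contrast concrete, here's a rough C++ sketch (my own toy illustration; the types, names and thresholds are made up, not production code):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Screen-space approach: compare the particle's projected depth against the
// scene depth already in the depth buffer ("soft particle" style). Cost
// scales with resolution, so it leans on shader and fill rate.
bool intersectsScreenSpace(float particleDepth, float sceneDepthAtPixel,
                           float epsilon = 0.01f) {
    return std::fabs(particleDepth - sceneDepthAtPixel) < epsilon;
}

// Physical-space approach: test the particle against actual geometry at the
// vertex level. Cost scales with particle count and mesh granularity, so it
// leans on FP throughput instead.
bool intersectsWorldSpace(const Vec3& p, float radius,
                          const Vec3* verts, int vertCount) {
    for (int i = 0; i < vertCount; ++i) {
        float dx = p.x - verts[i].x;
        float dy = p.y - verts[i].y;
        float dz = p.z - verts[i].z;
        if (dx*dx + dy*dy + dz*dz < radius*radius) return true;
    }
    return false;
}
```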

My question was carefully chosen based on what was relevant to this thread and multiple streams of conversation. It is also elementary in its complexity, and if anyone can't understand it then they should have a good idea why I laugh at the pathetic discussions you all have about physics on this forum.

You were/are wrong...how long will you go in order to try and cover that up behind smoke and mirrors? ;)

I'd like people to demonstrate even a sliver of understanding; that would certainly go a long way toward proving they have something worthwhile to say :)
 

Lonbjerg

I'd like people to demonstrate even a sliver of understanding; that would certainly go a long way toward proving they have something worthwhile to say :)

Pot, kettle, black...you ignore the one factor having the biggest impact on physics calculations (if you want them done in real time).

As long as you do that, you can try and play smart and use "fancy" words...you are still off the mark.

But what can you expect from a guy who thinks the Cell is an outstanding CPU...
 

BenSkywalker

Pot, kettle, black...you ignore the one factor having the biggest impact on physics calculations (if you want them done in real time).

What do you think that would be exactly? I can think of a ton that people could use depending on their perspective, most of them are only relevant to a certain subset of factors.

As long as you do that, you can try and play smart and use "fancy" words...you are still off the mark.

If anyone took 'Basics of 3D' at the middle school level, every term I used in my question would have been covered on day one. It is *very* simple. It doesn't require any sort of advanced, or even mediocre, understanding - it is as simple as you can get.

But what can you expect from a guy who thinks the Cell is an outstanding CPU...

When did I say the Cell was an outstanding CPU? It is incredibly powerful and very well suited for specific tasks. I'd take several current ARM-based consumer processors over Cell for general-purpose computing (all of them being in the sub-1-watt category).
 

Lonbjerg

What do you think that would be exactly? I can think of a ton that people could use depending on their perspective, most of them are only relevant to a certain subset of factors.

It's been the crux of real-time physics since AGEIA...o_O



If anyone took 'Basics of 3D' at the middle school level, every term I used in my question would have been covered on day one. It is *very* simple. It doesn't require any sort of advanced, or even mediocre, understanding - it is as simple as you can get.

Physics != 3D...o_O



When did I say the Cell was an outstanding CPU? It is incredibly powerful and very well suited for specific tasks. I'd take several current ARM-based consumer processors over Cell for general-purpose computing (all of them being in the sub-1-watt category).

Translate that into:
It is an in-order CPU that only excels in a very FEW limited cases and performs subpar in all other areas.
For instance, its memory controller is optimized for throughput at the expense of latency...thus hampering it, like all other CPUs, for real-time physics.

I just found an ooooooooooold link, back from the days when hardware physics was spanking new:
http://www.blachford.info/computer/articles/PhysX1.html

You should read it.
 

Arzachel

Edit- That last comment needs a bit of expanding. On this forum you have your group of AMD loyalists who bash any advancement in physics simulations without having a clue what they are talking about. You also have a group of nVidia loyalists who champion advancements in physics simulations without having a clue what they are talking about. With visuals pushing on the uncanny valley, advances in physical simulations are the best way to increase immersion at this point, no matter what kind of hardware you cheerlead. How well that is being done is another discussion entirely, but the general point is valid.

While I generally agree with you, I find both of these points pretty flawed. Reducing the PhysX debate to Nvidia vs. AMD is pretty ignorant considering the issue is anything but straightforward. And the uncanny valley is only an issue if the "hunt" for photorealism with little regard for artistic direction continues, which is a dead end in any case.
 

Lonbjerg

While I generally agree with you, I find both of these points pretty flawed. Reducing the PhysX debate to Nvidia vs. AMD is pretty ignorant considering the issue is anything but straightforward. And the uncanny valley is only an issue if the "hunt" for photorealism with little regard for artistic direction continues, which is a dead end in any case.

If everything gets physically simulated, there is no need for "an artist's touch"...except for "photoshopping" graphics.
That would be like "correcting" raytracing...kinda silly ^^
 

BenSkywalker

It's been the crux of real-time physics since AGEIA...

AGEIA was a brief-lived babe in physics simulations; not sure what your point is supposed to be?

Physics != 3D...

We are talking about physics being applied to a 3D world; none of the terms I used are specific to physics, they are very basic 3D rendering elements.

For instance, its memory controller is optimized for throughput at the expense of latency...thus hampering it, like all other CPUs, for real-time physics.

Unless you are doing trivial levels of physics computations, you will almost always be compute limited in physics calculations. The SRAM you have direct access to for the SPEs should be more than enough for handling most computation sets; then it is a matter of moving data (and again, unless you are doing trivial computations, you will be compute limited).
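The pattern looks roughly like this in plain C++ (a sketch of the structure only, not Cell SDK code; on a real SPE the transfers are asynchronous DMA, which is what actually lets the copy overlap the math, and I'm assuming the total is a nonzero multiple of the chunk size):

```cpp
#include <cstring>
#include <cstddef>

constexpr std::size_t CHUNK = 4096;  // stand-in for a small local-store budget

void fetch(float* dst, const float* src, std::size_t n) {
    std::memcpy(dst, src, n * sizeof(float));  // stand-in for a DMA "get"
}

float integrate(const float* buf, std::size_t n) {
    float acc = 0.0f;
    for (std::size_t i = 0; i < n; ++i) acc += buf[i] * buf[i];  // heavy math here
    return acc;
}

float processStream(const float* data, std::size_t total) {
    static float local[2][CHUNK];     // two small "local store" buffers
    float result = 0.0f;
    fetch(local[0], data, CHUNK);     // prime the first buffer
    for (std::size_t off = 0, b = 0; off < total; off += CHUNK, b ^= 1) {
        std::size_t next = off + CHUNK;
        if (next < total)             // kick off the next transfer...
            fetch(local[b ^ 1], data + next, CHUNK);
        result += integrate(local[b], CHUNK);  // ...while crunching this one
    }
    return result;
}
```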

You should read it.

OK....... it agrees with what I'm saying......? :)

Reducing the PhysX debate to Nvidia vs. AMD is pretty ignorant considering the issue is anything but straightforward.

I absolutely agree. Advancing physics on this forum comes down to precisely that argument though. You can't even talk about physics simulations on the consoles without it being brought up.

And the uncanny valley is only an issue if the "hunt" for photorealism with little regard for artistic direction continues, which is a dead end in any case.

Do you see anyone trying to stray far from precisely that though? I guess you could say Squenix, but they aren't exactly thought of too much by most gamers who avoid consoles. Not saying I disagree with you, but we can see the valley on the road ahead of us and it seems the gas is to the floor ;)

That would be like "correcting" raytracing...kinda silly ^^

Raytracing is actually rather horribly inaccurate for anything not reflective. It is terrible for refraction or diffuse. A lot of people like to talk about it like it's the holy grail; I was using ray tracing closing in on twenty years ago, and it isn't all that great outside a couple of small uses (admittedly, its impact is *very* dramatic in those uses).
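To illustrate with a made-up C++ sketch (mine, not from any real renderer): a mirror bounce is one exact ray, while diffuse has no single-ray answer and needs a pile of averaged Monte Carlo samples:

```cpp
#include <cstdlib>

struct Vec3 { float x, y, z; };

// Mirror reflection: one exact ray per hit. This is the case ray tracing nails.
Vec3 reflect(const Vec3& d, const Vec3& n) {
    float k = 2.0f * (d.x*n.x + d.y*n.y + d.z*n.z);
    return { d.x - k*n.x, d.y - k*n.y, d.z - k*n.z };
}

// Diffuse: the light integral over a hemisphere has no single-ray answer, so
// you average many random sample rays (crude, unnormalized directions here).
// traceRay is a placeholder for the renderer's recursive trace call.
float shadeDiffuse(int samples, float (*traceRay)(const Vec3&)) {
    float sum = 0.0f;
    for (int i = 0; i < samples; ++i) {
        Vec3 dir { float(std::rand()) / RAND_MAX - 0.5f,
                   float(std::rand()) / RAND_MAX,        // y >= 0: upper hemisphere
                   float(std::rand()) / RAND_MAX - 0.5f };
        sum += traceRay(dir);                            // hundreds of rays vs. one
    }
    return sum / float(samples);
}
```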
 

Lonbjerg

You should look at the architecture of the PPU again...it even talks about the PS3 in my link...as PPU > Cell for physics.
 

Red Hawk

Vector processing and physical simulations are what supercomputers are used for most often, both of which are commonplace in gaming.

Maybe. But still -- those are supercomputer chips. Not the small, efficient chips that are used in a game console. That would be like saying, oh, jet engines are better for cars than internal combustion because jets are used on airplanes. There's no question that jets are good for planes and have advantages over internal combustion, but it does absolutely nothing to show those advantages are good for everyday use in cars. Similarly, there's no question that other architectures are better than x86 for certain kinds of supercomputers, but it does nothing to show those advantages are good for everyday use like in a game console. Why not actually show the architecture being used to exceed x86 in actual gaming performance on the PC level?
 

Cerb

Maybe. But still -- those are supercomputer chips. Not the small, efficient chips that are used in a game console.
Most of them are PC CPUs with a few features turned on to make them more expensive. A few have actually been none other than game consoles. Only IBM and Fujitsu still make CPUs mainly for supercomputer use, and I think IBM is the only one marketing them well beyond supercomputers.
 

RussianSensation

The big elephant in the room was never addressed: PS3 games don't look any better than Xbox 360 games 6 years later. Since developers complained that the Wii was underpowered, we know they pay attention to the level of hardware inside consoles when developing and porting games. We therefore cannot use the excuse that developers just didn't take the time to make PS3 games look better -- they probably tried, but the Cell is either: (1) not much more powerful than the 3.2 GHz PowerPC Tri-Core Xenon in the Xbox 360, or (2) extremely difficult and expensive to code for to actually achieve that theoretical performance. If even 6 years later developers couldn't figure out a way to extract that maximum theoretical performance -- whether due to bottlenecks that affect game code but not supercomputer applications, or because doing so would have been prohibitively expensive -- then it doesn't matter at all how great the Cell could have been. If a CPU isn't easy to code for to run games fast, it's poorly engineered. Considering all the developers who worked on the PS3 keep saying that the Cell is very inefficient and a pain to code for, it was a failure.

For example, if I run SuperPi, it will be much faster on a 2500K than on an FX-8150. Do games actually run 2x faster on a 2500K than on an FX-8150? No, not even close. That means there are probably some applications that work very well on the Cell, but not games. The best racing game on the PS3 (GT5) has worse graphics than the Xbox 360's best racing game (Forza 4), and also doesn't run at a smooth 60 fps like Forza 4 does. Dark Souls drops to chugging frame rates in Blighttown on the PS3, while a $50 Phenom II X4 and a $60 HD6670 video card can play the game smoothly at 30 fps.

The Cell has not lived up to the hype, and based on how PS3 games look, its performance is hardly better than the Tri-Core Xenon in the 360, never mind a modern Core i7 CPU. It's not even close. The only thing that matters is real-world gaming performance for consoles, and the Cell shows none of its advantages, for a variety of reasons.
 

Ancalagon44

The big elephant in the room was never addressed: PS3 games don't look any better than Xbox 360 games 6 years later...

In before PS3/GT5 fanboys insult your mother!

My take on it is that the Xbox 360 simply had a more powerful graphics chip, and its memory was laid out in a fashion that made things much easier on devs (a unified 512MB instead of 256MB + 256MB as in the PS3). That is the major reason for the difference; the other reason is that while the Cell is theoretically more powerful than the 360's CPU (Xenon or Xenos or whatever), in practice it is much more difficult to extract performance out of it. This means that, without significant optimization, it will be far behind.

Say what you will about the Cell, but that means it's a failure for its purpose - game development. Considering that most games are cross-platform and developed under a tight budget with strict deadlines, making your console more difficult to develop for was just silly.

Evidently Sony agrees, judging by the rumours that the PS4 uses an AMD Fusion chip. That will make things MUCH easier.
 

RussianSensation

The notion that PS3's Cell is somehow still superior to modern Core i7 CPUs for games is absurd. Taken from a professional's mouth:

'Limits Hit For PS3 & Xbox 360': What Devs Want From Next-Gen Consoles
"The Amazing Spider-Man
Brant Nicholas, Executive Producer,

"At the moment, developers have hit the limits of what the current generation of hardware is capable of in terms of data capacity, memory, processing power and streaming speed from the media. As a former artist myself, I am not really waiting for that extra blade of grass to make the world more visually stunning. For the next gen, I’m personally looking forward to the increased processing power and what this will allow people to achieve with in-game AI-driven decision-making. The next set of hardware is shaping up to be a major leap forward in terms of the power of the consoles."

Notice how he is discussing processing power limitations on AI. Last time I checked, it was the CPU that was responsible for AI calculations. Sounds to me like developers have fully maxed out what current consoles' CPUs such as the Cell have to offer, and specifically they are complaining that these CPUs are actually too slow to take AI to the next level. It can't be that the Cell is more powerful than modern x86 CPUs for games if actual game developers are stating that all the processing power has been exhausted within reasonable means and game development timelines. BenSkywalker's comments that the Cell is supposedly 4-5 CPU generations ahead of Core 2 Duo and still faster than modern i7 CPUs don't align with what the people who actually make the games say. If the Cell were more powerful than x86 CPUs for game development, the developers would have no faster CPU to choose from......and yet they are already agreeing that the next generation will be a major leap in terms of processing power. The only logical conclusion is that the Cell was horribly inefficient to code for, which either resulted in it being slow, or it was slow for games to begin with, or a combination of these factors.
 

BlockheadBrown

The big elephant in the room was never addressed: PS3 games don't look any better than Xbox 360 games 6 years later.
It depends on the developer and the tools used to create the game.
Dragon Age: Origins has better textures on the PS3.
Dragon's Dogma has more particle and atmospheric effects on the PS3 (you can see it in the demo just after the dragon burns everything around).

Mass Effect 2 had "enhanced" graphics on the PS3.
There appears to be more detail (going from memory here) in the PS3 version of Mass Effect 3 (unsubstantiated). However, the 360 version plays more smoothly (personal experience).

There are other examples out there. The point is, there have been improvements. It just depends on how the devs chose to either apply them or not.

We therefore cannot use the excuse that developers just didn't take the time to make PS3 games look better -- they probably tried, but the Cell is either: (1) not much more powerful than the 3.2 GHz PowerPC Tri-Core Xenon in the Xbox 360, or (2) developers couldn't figure out a way to extract that maximum theoretical performance due to other bottlenecks that affect game code vs. applications that are used in supercomputers.


  1. You're guessing too much here (and I do respect your views and approach to things :) )
  2. Many have stated publicly that there were difficulties taking full advantage of the PS3. This is something Sony touted at one point, tying it to the system's potential longevity.
All that said, it doesn't mean that the system isn't capable. It really boils down to what's possible/feasible with available resources within time constraints.

The best racing game on the PS3 (GT5) has worse graphics than the Xbox 360's best racing game (Forza 4), and also doesn't run at a smooth 60 fps like Forza 4 does.
Apples and oranges. Different games, dev teams, artists, priorities, etc. Forza is easily the better of the two games, in my opinion. That doesn't mean it's perfect, though. I've seen plenty of threads pointing out Forza 4's faults.

Dark Souls drops to chugging frame rates in Blighttown on the PS3, while a $50 Phenom II X4 and a $60 HD6670 video card can play the game smoothly at 30 fps.
Again, it depends on system optimizations. With Demon's Souls, you'd think they'd have the PS3 streamlined. We know that they focused on the 360 for this release. To what extent, I have no idea. :)
ADDED: Here you're comparing to a PC. IMHO, that's an unfair fight. :) Console-to-console should be the comparison.

The Cell has not lived up to the hype, and based on everything we know in actual games, its performance is hardly better than the Tri-Core Xenon in the 360, never mind a modern Core i7 CPU. It's not even close. The only thing that matters is real-world gaming performance for consoles, and the Cell shows none of its advantages.
I disagree, but that doesn't mean that I think the 360 is garbage either. :)
 

BlockheadBrown

My take on it is that the Xbox 360 simply had a more powerful graphics chip, and its memory was laid out in a fashion that made things much easier on devs...

Agreed - mostly. :)
 

BlockheadBrown

...the Cell is supposedly 4-5 CPU generations ahead of Core 2 Duo and still faster than modern i7 CPUs don't align with what the people who actually make the games say. If the Cell were more powerful than x86 CPUs for game development, the developers would have no faster CPU to choose from......and yet they are already agreeing that the next generation will be a major leap in terms of processing power. The only logical conclusion is that the Cell was horribly inefficient to code for, which either resulted in it being slow, or it was slow for games to begin with, or a combination of these factors.

Note that I'm not addressing comparisons to PC specs in any way in my remarks.
 

Olikan

The notion that PS3's Cell is somehow still superior to modern Core i7 CPUs for games is absurd. Taken from a professional's mouth:

Cell is shit and awesome at the same time....

I am really under the impression that APUs were inspired by Cell...

Its general computing is very weak, just total shit compared to today's CPUs... I won't be surprised if Bobcat is better than it.

At the same time, Cell is very good at running parallel code and geometry... Cell is actually better than the PS3's GPU at some traditional GPU tasks... (RSX was already weak on its release day.)
Cell is actually the reason the PS3 can have the same graphics power as the Xbox 360...

Maybe, with AVX2, Haswell will have the same power as those SPEs (or better)...
 

Cerb

they probably tried, but the Cell is either: (1) not much more powerful than the 3.2 GHz PowerPC Tri-Core Xenon in the Xbox 360, or (2) extremely difficult and expensive to code for to actually achieve that theoretical performance.
It's both, but #2 is more true. You can't make a huge team write better code, so it takes more expertise and a ton more time instead, and it would be hard to make something generally applicable in an engine vs. per-game. The small local memory and high latencies are performance killers, and they require much more time and specialized work to fix, but a whole lot less to just ignore. When a developer hits on a suitable test case, the Cell can help make things better. But that's as much luck as anything else.

Partly because of that, most console physics is done using the main core's AltiVec, with some VMX here and there, which is infinitely easier to program, can make good use of the high GHz, and has been superior to x86's vector extensions per core/clock for a very long time (one of several good reasons not to use x86, though AVX2 has some serious promise). It can probably be minimally affected by the small caches, too, though I'm not 100% on that.
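For flavor, here's a minimal x86 SSE sketch of that kind of vectorized step (my own toy example; AltiVec's vec_madd would do the multiply-add as one fused op):

```cpp
#include <immintrin.h>

// Integrate particle positions four floats at a time: p += v * dt.
void integratePositions(float* pos, const float* vel, int n, float dt) {
    __m128 vdt = _mm_set1_ps(dt);                   // broadcast dt to all lanes
    int i = 0;
    for (; i + 4 <= n; i += 4) {                    // four lanes per iteration
        __m128 p = _mm_loadu_ps(pos + i);
        __m128 v = _mm_loadu_ps(vel + i);
        p = _mm_add_ps(p, _mm_mul_ps(v, vdt));      // p += v * dt
        _mm_storeu_ps(pos + i, p);
    }
    for (; i < n; ++i) pos[i] += vel[i] * dt;       // scalar tail
}
```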

For GFX, there is also (3) that Xenos was generally superior to RSX, and the limited use of eDRAM can do a lot to help make up for the limited memory bandwidth.

I am really under the impression that APUs were inspired by Cell...
Nah. The Cell is a miniaturized mainframe all over again, with multiple statically allocated SPUs and no hardware multitasking (the S as in special-function, not synergistic). Just like narrow in-order processors and unusual memory schemes, some things that let hardware be stupid and fast keep being allowed to come up to the surface and be tested out in the market. Then they do it, learn why they hadn't been doing it, and it doesn't happen again for another 10-15 years (by the same group of management, anyway). I think it could have evolved into something much better (smaller processes would allow a lot to be added, while still shrinking total size), but the bad PR from the PS3 pretty much killed it.

Fusion came about because AMD did not have the high-efficiency vector processing expertise to do it in their CPUs, so it is more efficient to do it by merging CPU and GPU over time, rather than building out from their CPU. Ultimately, the idea is that regular processing and graphics will differ only in that running graphics code powers up raster-only units on the CPU, and probably uses a different algorithm for prioritizing DRAM memory accesses.

Not long after researchers started hacking GPUs for parallel computing, it was a foregone conclusion that the GPU itself would eventually become an artifact of large processors, not unlike outboard FPUs, MMUs, caches, etc. The question is how to do it with limited knowledge and resources.

Maybe, with AVX2, Haswell will have the same power as those SPEs (or better)...
In practice, it absolutely will. Any C/C++ programmer who knows loop X can be run fully in parallel can make that happen, and there are none of the downsides the Cell had, like local memory, shared high-latency buses, and low-bandwidth unit-to-unit communication. I've argued that existing code and incidental performance improvements won't be much higher than in the past, because compilers are really stupid. But for any new code being written with support for AVX2 in mind, it'll be duck soup...or worth the effort, if it's more difficult.
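A trivial made-up example of what I mean by a fully parallel loop; there are no loop-carried dependencies, so a compiler building for AVX2 (e.g. -O2 -mavx2) can run it eight floats wide:

```cpp
// y = a*x + y over independent elements: each iteration touches only its own
// index, so the compiler is free to vectorize. __restrict promises the arrays
// don't alias, which is what lets auto-vectorization actually kick in.
void axpy(float* __restrict y, const float* __restrict x, float a, int n) {
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];   // independent iterations -> 8-wide on AVX2
}
```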
 

RussianSensation

For GFX, there is also (3) that Xenos was generally superior to RSX, and the limited use of eDRAM can do a lot to help make up for the limited memory bandwidth.

Ya, that's my point too. I said that Sony made a mistake in balancing their capital budget allocation in regard to PS3's hardware: $230 for the Cell vs. $70 for RSX, based on the Merrill Lynch cost breakdown analysis at the PS3's launch. I can't find a more reputable source, so I have to go with that for now. From this link, I see that MS spent $141 on the R500 GPU. MS put more of their eggs into a faster GPU, and it paid off, since the PS3 didn't really come out as the best-looking console this generation. Some may argue the best PS3 games look a little better than the 360's exclusives, but it's too close to call; the simple fact that the PS3 launched 1 year later and still couldn't convincingly deliver better graphics shows it was a poor design.

Even if we all agree that the PS3's Cell > the Xenon Tri-Core CPU in the Xbox 360, in the end it hardly made much of a difference since the RSX GPU was weaker. So what was the point, exactly, of pairing such a complex CPU with a weaker GPU if the GPU ended up the bigger bottleneck in the first place? It seems to me MS product planners got it right. They were able to launch the console 1 year earlier, priced much cheaper, and it still ended up with a faster GPU. That's an embarrassment for Sony. The PS3 should have been a lot more powerful overall than the Xbox 360. 12 months after the 360 launched, AMD had even faster chips. Sony execs put all their eggs into the PS3's Cell, thinking that by spending more $ on the CPU they would achieve superior graphics.

We'll see how the next generation pans out, but if the PS4 does have a substantially faster GPU than the Xbox 720, then this time I am willing to bet that the PS4's exclusives will actually look much better than the 720's exclusives. If it were up to me, I'd just take a $100 AMD APU and pair it with at least a mobile version of an HD7850-class GPU. I wouldn't spend even $1 extra over $100 on a CPU until that GPU budget was maxed out within reason. The GPU will become the greater bottleneck much more quickly when next-generation games based on the CryEngine 3.4, Unreal Engine 4 and Frostbite 2.0 engines launch. As graphical effects become even more complex (a global lighting model via DirectCompute shaders, contact-hardening shadows, tessellation, bokeh depth of field, HDAO), the load will continue to shift more and more onto the GPU. Even now you have to pair a Core i5 2500K with something like a GTX 670 SLI setup to even start seeing major CPU bottlenecks, and even then they won't show up in graphically demanding games like Crysis 1/Warhead, Metro 2033, or The Witcher 2. CPU speed is far more critical for strategy/MMO games such as Shogun 2, Warcraft 3, Diablo 3, WoW, Starcraft 2, and Guild Wars 2, and those aren't the genres that are popular on consoles in the first place. IMO, what next-generation consoles need first are powerful GPUs, since they'll help immensely for FPS, racing, action-adventure and third-person style games.
 

Blitzvogel

"decent symmetrically cored general purpose execution with high GFLOPS" I think is the emphasis here.

Xenon was this.

Jaguar is looking to be as such as well, though 256-bit FPUs on Jaguar cores with lots of cache would really make things interesting, since it would give 360 developers the OoO capability and large cache that have been an issue with Xenon, while keeping core size down, yet doubling the per-clock GFLOPS per core. Even i3s and FX-4100s, with 8 total issues at most (across the 2 cores or modules respectively), are enough for modern, current games.
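Back-of-envelope, with assumed numbers (a 3.2 GHz clock and a fused multiply-add counted as two FLOPs; purely illustrative, not vendor specs):

```cpp
#include <cstdio>

// Peak per-core GFLOPS = clock * SIMD lanes * FLOPs per lane per cycle.
// Doubling the FPU width from 128-bit (4 floats) to 256-bit (8 floats)
// doubles the peak, all else equal.
int main() {
    const double clockGHz = 3.2;                  // assumed console-class clock
    const double fmaFlops = 2.0;                  // multiply-add counts as 2
    double peak128 = clockGHz * 4 * fmaFlops;     // 128-bit FPU: 4 float lanes
    double peak256 = clockGHz * 8 * fmaFlops;     // 256-bit FPU: 8 float lanes
    std::printf("128-bit: %.1f GFLOPS/core, 256-bit: %.1f GFLOPS/core\n",
                peak128, peak256);                // 25.6 vs. 51.2: exactly 2x
    return 0;
}
```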
 

Red Hawk

It depends on the developer and the tools used to create the game.
Dragon Age: Origins has better textures on the PS3.
Dragon's Dogma has more particle and atmospheric effects on the PS3 (you can see it in the demo just after the dragon burns everything around).

Mass Effect 2 had "enhanced" graphics on the PS3.
There appears to be more detail (going from memory here) in the PS3 version of Mass Effect 3 (unsubstantiated). However, the 360 version plays more smoothly (personal experience).

Don't know about Dragon's Dogma, but Dragon Age: Origins and Mass Effect 3 had rather pervasive performance problems on the PS3. DA:O was a rather ugly-looking game on either console, and the PC had higher resolution textures. Mass Effect 2 on the PS3 didn't really have performance problems (which leaves the ME3 performance problems something of a mystery), but DigitalFoundry's analysis found no substantial evidence of higher resolution assets or effects, with the changes to the lighting being largely a matter of taste. Same with ME3, only with fewer differences in lighting.

There are other examples out there. The point is, there have been improvements. It just depends on how the devs chose to either apply them or not.

I've followed DigitalFoundry's analyses for a while; the 360 usually comes out on top, and when it doesn't, it's either not by a large margin or because of issues not related to the actual comparative potential of the consoles.
 

BenSkywalker

You should look at the architecture of the PPU again...it even talks about the PS3 in my link...as PPU > Cell for physics.

That is ignoring abstraction layers. The Cell *is* the CPU; it doesn't have that rather huge amount of overhead to deal with.

The big elephant in the room was never addressed: PS3 games don't look any better than Xbox 360 games 6 years later.

The Last of Us, Uncharted 3.

The only thing that matters is real-world gaming performance for consoles, and the Cell shows none of its advantages, for a variety of reasons.

That would be a fantastic point, if it were true. Go build a PC with an i7 and a 7800 GT, put 256MB of RAM in it, and see how games run. The CPU is not the only factor.

Considering that most games are cross-platform and developed under a tight budget with strict deadlines, making your console more difficult to develop for was just silly.

It was a wager. If the PS3 had held the same market position as the PS2, it would have worked out extremely well for them. The PS2 had the same type of processor in it; yes, it was MIPS, but it was a single general-purpose CPU core plus two vector processors, and they utterly obliterated the competition in the market.

Ya, that's my point too. I said that Sony made a mistake in balancing their capital budget allocation in regard to PS3's hardware: $230 for the Cell vs. $70 for RSX, based on the Merrill Lynch cost breakdown analysis

That analysis is flat-out wrong. I can only assume they were front-loading the initial R&D to pay for all of it in the first couple of years, which I guess from an accounting standpoint may make sense, but the CPU itself is very cheap. The killer cost at the PS3's launch was the Blu-ray drive. Sony again made a wager, that the PS3 would allow them to win the high-def format war against HD DVD.

The best racing game on the PS3 (GT5) has worse graphics than the Xbox 360's best racing game (Forza 4), and also doesn't run at a smooth 60 fps like Forza 4 does.

I linked a video above; if you clicked on it, you would see they have a framerate tracker going too.

Taken from a professional's mouth:

I quote Carmack, you quote an executive producer. That about sums up the content of your discourse :)

If you notice, I'm not arguing with most of what Cerb is saying, because his points are valid. He is bringing up issues with the architecture and system layout that have an actual impact on usage. You just keep saying "360 good, PS3 bad". As far as them using the RSX: the original plan was to use dual Cells and no GPU at all; everyone pointed out to KK that that was going to be an abject failure, so the RSX ended up being swapped in to replace the second Cell very late in development - it *increased* the cost of the system.
 