[ArsTechnica] Next-gen consoles and impact on VGA market


Blitzvogel

Platinum Member
Oct 17, 2010
2,012
23
81
These days, the extra resolution, FPS, and AA are pretty much all that PC gaming has going for it (in terms of graphics), but when you consider that so many of the titles are multiplayer specific, that is a very big deal.

Yes, realistically the Radeon 7770 represents what could be considered the current "ultimate sweet spot" for graphics rendering throughput in relation to power requirements and cost, as well as having the same overall feature set (as in compatibility and capabilities) as Pitcairn and Tahiti, which makes it a great target for consoles. In a closed, code-to-the-metal environment, with a proper CPU and enough memory, Cape Verde could probably run any current PC game maxed out at 1080p and 30 FPS with some reasonable form of AA.

What is the most powerful PC you need to realistically match any console game today when it comes to multiplats @ 720p? I would say an i3 and a Radeon 4670 at most.

Going back in time, I could run Mirror's Edge at 1080p with 2x AA (and no PhysX) on an Athlon II X2 3.0 GHz, 2 GB DDR3, and a Radeon 4670 1 GB on WinXP. Neither the PS3 nor the 360 could ever dream of that.

Both Cell and Xenon have vastly superior GFLOPS compared to older PC CPUs, which makes them useful for games like BF3, where a quad-core AMD or a newer Intel dual-core is necessary for smooth multiplayer gameplay, since so much of what goes on is very math heavy. The deficiencies of the consoles' CPUs, namely their in-order architecture and small caches, are made up for by SMT and their GFLOPS, assuming the kinks in the code are worked out.

Neither console CPU (especially the Cell) is really suited for common PC computing. Not by a long shot. They are GFLOPS monsters (for their time), because that is what a gaming system really needs. Good AI and background code can be written on a relatively low silicon budget (think Halo 1 and 2 on Xbox, or the original FEAR, which runs just fine on very old CPUs!). Physics, animation, sound, etc. need GFLOPS, and that is what Cell and Xenon provided, along with just enough general-purpose CPU capability to make all of these things workable at gaming performance, with longevity beyond the best dual-core PC processors of yesteryear.
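For a sense of scale, the headline GFLOPS claims come from simple peak-throughput arithmetic. A rough sketch follows; the Cell figures are the commonly cited paper numbers, the dual-core PC figures are my own assumed ballpark, and sustained performance is far lower in both cases.

```python
# Peak single-precision GFLOPS = units x FLOPs/cycle/unit x clock (GHz).
# Paper numbers only, not sustained performance.

def peak_gflops(units, flops_per_cycle, clock_ghz):
    return units * flops_per_cycle * clock_ghz

# Cell in the PS3: 7 usable SPEs, 4-wide SP FMA (8 FLOPs/cycle) at 3.2 GHz
cell_spes = peak_gflops(7, 8, 3.2)      # ~179 GFLOPS

# Assumed ballpark for a contemporary dual-core PC CPU: 2 cores,
# SSE mul + add (8 SP FLOPs/cycle) at 2.4 GHz
core2_duo = peak_gflops(2, 8, 2.4)      # ~38 GFLOPS

print(f"Cell SPEs (peak):  ~{cell_spes:.0f} GFLOPS")
print(f"Core 2 Duo (peak): ~{core2_duo:.0f} GFLOPS")
```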
 
Last edited:

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Polyphony will almost certainly run their games at 4K resolution on the PS4. You take a top-tier console developer and it is amazing what they can do with fixed hardware. It is a completely different development process than dealing with PCs, where you need to worry about code flexibility. It tends to be why console devs suck at PC ports and PC devs suck at console games (nothing this generation has changed that perception either).

Yes, just like PS3 ran games at 1080P....

Gran Turismo 5 = 1280x720 (4xAA), 1280x1080 (QAA, TAA selectable), 1280x720 (QAA) in 3D mode

And despite that 'revolutionary and ground-breaking' Cell architecture you keep talking about, GT5 looks worse than Forza Motorsport 2 or 3 do on the Xbox 360. Compared to the PC, GT5 is now at least 1 generation behind the best-looking racers like Project Cars.

I remember Sony's launch presentation; didn't they say the system was capable of 2 TFLOPs of computing power? It looks like you bought into that hype, just as Sony did when IBM sold them the Cell architecture (AnandTech touched on that in the pulled article on how worthless theoretical flop specs were used to sway Sony execs). What Sony should have done is used an AMD CPU and put $150 into the GPU. Instead, the Cell was the most expensive component in the PS3 other than the Blu-ray drive, and it did nothing at all to differentiate the PS3's graphics from a 1-year-old Xbox360; many console-ported games actually had superior performance and graphics on the Xbox360. Perhaps PS3's best looking game, Uncharted 3, is rendered at only 896x504 in 3D because the RSX+Cell combination cannot handle much more. So much for your idea of a powerhouse console and the Cell being 4-5 generations ahead of the Intel Core 2 Duo, the modern CPU at the time.

Carmack was also very critical of consoles. He basically said catering to consoles negatively affected the design of the game because they lost track of how fast PC hardware was evolving. Instead, they wasted time on MegaTexture and other GPU-saving techniques because they were limited by the amount of memory in the PS3/360. He said that for the next game they will just make the game specifically for the PC and worry about porting it to consoles later. Part of the reason Rage was such a failure on a technical level is that Carmack didn't focus on making it a PC game first and a console game second. He admitted as much himself in the interview. Trying to optimize the code for Rage so that it would even run on consoles prolonged the development time and negatively affected how well that game actually ran on the PC.
 
Last edited:

Blitzvogel

Platinum Member
Oct 17, 2010
2,012
23
81
Sony is to blame for the Cell debacle. It was well designed, but Sony overestimated its capabilities for graphics rendering, which made the double-Cell approach they originally intended for the PS3 unrealistic in the face of dedicated GPUs. Using RSX was the quick fix to the rendering problem since G70 was already out and ready. Adding the 256 MB of GDDR3 was either a counter to the 360 being equipped with 512 MB of RAM, or simply the need of a PC-derived dedicated GPU of its era for a real, low-latency frame buffer directly attached to it. The PS3 is in many ways a Frankenstein, since many of its features were quick fixes to problems created by the Cell being the original vision and hub of processing in the system. Cell in itself isn't a bad chip; it was Sony's unrealistic vision. Cell + G94 (assuming Nvidia had a 64-shader G80-derived part ready, as it envisioned) and an appropriate amount of RAM would have made an amazingly capable system, where Cell could truly show its ability to calculate excellent physics and animation, with the G80-class part handling all the rendering, unlike RSX, which depended on the Cell to make up for its deficiencies against the 360's superior Xenos.

Now, Cell + G94 or a derivative of it would've been hella expensive, but it would've been appropriate for the period. A 64-shader, 32 TMU, 16 ROP part like G94 would, I think, have been a perfect fit for the PS3 in terms of power and thermal requirements. It would've affected the memory system, though. As expensive as it might have been initially, a single unified 1 GB pool of XDR would've been good enough for Cell + G94, and might have been cheaper than a complicated memory system using two different memory types and the complicated set of memory buses needed to make it happen.
 
Last edited:

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
The problem is Cell + G80 was impossible without delaying the console for 9 months and using a crippled G80 mobile chip. This is because:

1) The desktop G80 chips launched 2 days after PS3 launch. That means Sony wouldn't have been able to start manufacture of PS3 with ready G80 chips 6 months prior to launch as NV itself hasn't even launched the G80 on the desktop;

2) Physical size limitation - You can't fit an 8800 desktop card inside a PS3, and no mobile versions were available when PS3 launched;

3) Power consumption - NVIDIA recommended at least a 450W power supply that can deliver up to 30A on the +12V rail for the cards. PS3 in total used 240W of power. They would have needed to sell a 450W PSU on the side......

4) Cost - The cheapest card cost almost as much as the retail price of the PS3! ($599 for the 8800GTX and $449 for 8800GTS 640mb)

The whole approach of pairing a $230 CPU with a $70 GPU sounds like a poor strategy by itself, as you said. The heart of a console for gaming is the GPU, not the CPU. Thus, ideally, for the best graphics you'd rather have a $70-80 CPU and a $210-220 GPU.
 

Blitzvogel

Platinum Member
Oct 17, 2010
2,012
23
81
The problem is Cell + G80 was impossible without delaying the console for 9 months and using a crippled G80 mobile chip. This is because:

1) The desktop G80 chips launched 2 days after PS3 launch. That means Sony wouldn't have been able to start manufacture of PS3 with ready G80 chips 6 months prior to launch as NV itself hasn't even launched the G80 on the desktop;

Very true, but Sony might have been able to work with Nvidia to get some sort of G80 derivative ready for the PS3. I wouldn't be surprised if Nvidia for all intents and purposes deliberately pushed G70 specifically, since the PS3 could've been a threat to their PC market.

2) Physical size limitation - You can't fit an 8800 desktop card inside a PS3, and no mobile versions were available when PS3 launched;

A graphics card and a console are not the same thing. Why would you even mention such an idea? All the power regulation hardware present on 8800 cards would've been folded into the PS3's PSU, so not all the same components would be present. Even so, the original model PS3's mobo is quite huge. However, G80 itself was a huge die at 90 nm, which is why I mentioned a G94-like chip (which was G80-derived). It wouldn't be anywhere near as big as the full 128-shader G80, even at 90 nm.

3) Power consumption - NVIDIA recommended at least a 450W power supply that can deliver up to 30A on the +12V rail for the cards. PS3 in total used 240W of power. They would have needed to sell a 450W PSU on the side......

Original-style PS3s had a 380 watt PSU. No joke. Also, the extra components involved in a PC, like a large memory array, a sound card, multiple HDDs, etc., wouldn't be needed. A 90 nm G94-like chip wouldn't need that much power anyway.

4) Cost - The cheapest card cost almost as much as the retail price of the PS3! ($599 for the 8800GTX and $449 for 8800GTS 640mb)

The 8800GTS 320MB was around $300. Remember that G80 was a huge, expensive die at 90 nm. I'm mostly referring to a G94-type GPU, so it would be a smaller and hence cheaper chip. The extra cost of the PCB, fan, and other components specific to a graphics card wouldn't exist either, since similar components pull double duty for the entire system in a console. With a graphics card you're paying for more than just the graphics processing die.

The whole approach of pairing a $230 CPU with a $70 GPU sounds like a poor strategy by itself, as you said. The heart of a console for gaming is the GPU, not the CPU. Thus, ideally, for the best graphics you'd rather have a $70-80 CPU and a $210-220 GPU.

Cell's original intent was of course beset by Sony's delusions of grandeur. As a CPU by itself, in combination with a proper GPU like G94, it would've been amazing, and I think we would've seen some really interesting interactions and features beyond what the best Core 2 Duos and perhaps Core 2 Quads could've done: 3D volumetric and interactive smoke, procedural destruction, water (at least more commonly), etc., in grander amounts and combinations than what we've seen, perhaps on par with Nvidia PhysX implementations such as in Mirror's Edge or Mafia II. Plus, G94 is probably a good 2x as powerful as RSX, even if it were clocked at 500 MHz. G94 is my favorite GPU from Nvidia simply because it was such an excellent combination of features and performance in a very efficient, reasonable and fast package, just like RV770 and Cape Verde from ATi/AMD.

At least in the early days, a setup like my first PC build from early 2007, with a fast dual-core Athlon (I had a Windsor!), an 8800GTS 320 and 2 GB of fast DDR2, could easily best the PS3 in multiplatform games like CoD4 (at 1440x900, 4x AA, 60 FPS!). I will say that I think Cell has given the system the legs to exist these 6 years, simply because it can aid the RSX in some pretty important aspects. It's just a more difficult system to develop for, which has affected its adoption and development negatively, especially in the early days.
 
Last edited:

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
Gran Turismo 5 = 1280x720 (4xAA), 1280x1080 (QAA, TAA selectable), 1280x720 (QAA) in 3D mode

In race mode, GT5 has several different modes and it runs some of them at 1920x1080 w/AA.

GT5 looks worse than Forza Motorsports 2 or 3 do on the Xbox 360.

Care to find anyone to back up the utterly idiotic assertion that Forza 2 looks better than GT5? I'd like to see you link anyone who backs that viewpoint up. I've heard people make the comparison to 4; normally it comes down to GT5's premium cars looking better than anything in Forza 4, while Forza 4 has better backgrounds and nothing like GT5's legacy cars. I bought a 360 for the Forza series, and I doubt you can find anyone who is willing to compare F2 to GT5.

Compared to the PC, GT5 is now at least 1 generation behind the best looking racers like Project Cars.

The PC got a generational edge against a six year old console? Wow, what a shocker ;)

It looks like you bought into that hype like Sony did when IBM sold the Cell architecture to them (AnandTech touched on that in the pulled article on how worthless theoretical flop specs were used to sway Sony execs).

Anand's article got pulled because it was laughably inaccurate. He talked to PC devs who had no idea how to deal with AMP, particularly not combined with in-order explicit code, let alone manual SRAM management over register allocation.

Carmack was also very critical of consoles.

Absolutely he was; it's one of the reasons I like to point out that he made it clear that Cell was considerably more powerful than x86 CPUs. It isn't like he was trying to talk the consoles up; he bashed them many times for many different things. Raw CPU power was the one area where the consoles were miles ahead of PCs when they launched, and in the case of the PS3 it is *still* ahead of PCs. That is the point I am making: swapping to x86 must come with huge financial savings to be remotely viable. x86 is too slow in gaming when compared to more exotic architectures. Those same architectures would be garbage for general-purpose uses, but for gaming they excel.

From a power perspective, x86 processors are very poor for gaming, looking at it from a performance/watt or performance/$ ratio.

Cell's original intent was of course beset by Sony's delusions of grandeur.

You should lay that one at KK's feet; it was his delusion. I was very vocal about what a terrible choice they were making at the time too (the dual-Cell setup would have been a massive fail). That does bring up the point of the current consoles' reported specs, though. One year out, Sony was planning on shipping the PS3 without a GPU at all; they made a change very late in development because they weren't getting the results they wanted. Nothing is preventing them from doing that again (the rumored specs could be a target Sony wants devs to shoot for, with the finalized hardware ending up considerably more powerful; that wouldn't even be unusual in the console space. This isn't the spec they talk up to the public and try to overhype, but the targets the devs are given well in advance).
 
Last edited:

jpiniero

Lifer
Oct 1, 2010
16,780
7,233
136
Well, one of the biggest problems of consoles in general has been lack of memory. Remember the N64 Expansion Pak? The extra memory it provided allowed some games to run better or at 640x480. I imagine if the PS3 had come with 512+256 you might have seen more games run at proper 720p.

You know people will complain 5 years from now about how little memory the 720/PS4 have. That is if they aren't playing f2p games on their tablets!
 

Lonbjerg

Diamond Member
Dec 6, 2009
4,419
0
0
In race mode, GT5 has several different modes and it runs some of them at 1920x1080 w/AA.

I'd like to see your source on that, since GT5 Prologue is not running native 1080p; it's running 1280x1080 upscaled to 1080p.

AFAIK only the car selection and menus are 1920x1080.
 

Red Hawk

Diamond Member
Jan 1, 2011
3,266
169
106
According to Digital Foundry's analysis, the "1080p" mode is indeed just 1280x1080 horizontally upscaled with some AA applied. That is a fair bit better than plain 720p, but it's not true 1080p by any stretch of the imagination.
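For scale, here is the pixel arithmetic behind those mode figures (nothing here is measured; it's just multiplication on the resolutions named above):

```python
# Pixel counts for the render targets discussed above.
upscaled = 1280 * 1080      # GT5's "1080p" mode render target
full_hd  = 1920 * 1080      # true 1080p
hd_720p  = 1280 * 720       # plain 720p

print(f"1280x1080 is {upscaled / full_hd:.0%} of true 1080p's pixels")   # ~67%
print(f"...and {upscaled / hd_720p:.1f}x the pixels of 720p")            # 1.5x
```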
 

Tuna-Fish

Golden Member
Mar 4, 2011
1,663
2,526
136
x86 CPUs are very weak at gaming computations, the POWER architecture is far better suited

Bullshit. Firstly, there is not much real difference in performance between the ISAs; it all comes down to the CPU implementations. And as far as those go, a good fat OoO core is vastly better than a restricted, crappy in-order architecture.
(PC devs are the only ones you see complaining about in order architecture

That's bullshit. Every time I've cracked open a beer with game devs who have to actually work on these pieces of shit, if the discussion turns to programming, they tend to go on long tirades about just how awful the Xenon/Cell are.

they have been spoiled by Intel's phenomenal compilers).

That's a complete non-sequitur. Do you have any idea what you are talking about?

Having better compilers makes more restricted architectures easier to deal with. Working on good, smart OoO CPUs is better because you don't need that kind of hard work from the compiler and the programmer to get good results. Also, most of the industry is not even using Intel's compilers; mostly, they tend to use the crappy MS ones, just because it's easier.

Nothing is stopping IBM from producing an OoO POWER based CPU if the console makers are willing to take the rather sizeable hit in perf/watt.

In the real world, every time we've had comparable designs (in manufacturing, ISA, et al.) where one is in-order and the other isn't, the OoO core wins in perf/watt. Just look at ARM Cortex-A9 vs A8, PPC 470 vs pretty much anything else PPC; the list is long...
As of 2010 the fastest six-core i7 was able to exceed half of Cell's floating point performance

That's complete and utter nonsense. Peak throughput is not performance. By Microsoft's numbers, and corroborated by my experience, Xenon can typically sustain about 0.2 IPC per thread (x6). You can get more than that out of a (hyperthreaded) P4. Since the P4 instructions typically do more (load+op vs load-store), the difference is even larger than it seems. Cell is harder to compare because its programming model is completely different, but in practice the SPEs spend most of their time stalling for data from the DMA controller, and their utilization is worse than that.
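A rough back-of-the-envelope on what that means, assuming the 0.2 IPC figure above, Xenon's 3.2 GHz clock, and a purely illustrative ~1.0 sustained IPC per core for a contemporary out-of-order desktop chip:

```python
# Sustained-throughput sketch, not a benchmark. The 0.2 IPC/thread figure
# comes from the post above; the desktop IPC and both clocks are assumptions.

xenon_ipc_per_thread = 0.2
xenon_threads = 6              # 3 cores x 2 hardware threads
xenon_clock_ghz = 3.2

desktop_ipc_per_core = 1.0     # assumed sustained IPC for an OoO core
desktop_cores = 2
desktop_clock_ghz = 2.4        # e.g. an early Core 2 Duo

xenon_gips = xenon_ipc_per_thread * xenon_threads * xenon_clock_ghz
desktop_gips = desktop_ipc_per_core * desktop_cores * desktop_clock_ghz

print(f"Xenon sustained:        ~{xenon_gips:.1f} G instructions/s")
print(f"Desktop dual-core est.: ~{desktop_gips:.1f} G instructions/s")
```

On those assumptions the two come out in the same ballpark, and the load+op vs load-store point above pushes things further toward the desktop part.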

in gaming the overwhelming majority of the code base is floating point.

Not true. Gaming does have more FP than most other loads, but most of the executed instructions are still control and data flow.

Speaking of which, just because you can multiply two numbers together really fast does not actually mean that your CPU is good at FP. You need to actually get the data there, and that is precisely where Cell/Xenon fail really badly.
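A toy roofline-style calculation makes the "get the data there" point concrete. All of the numbers below are illustrative assumptions, not specs for any real chip:

```python
# Achievable FLOPS is capped by whichever is lower: raw compute, or how
# fast operands can be streamed in. Illustrative numbers only.

peak_gflops = 200.0        # assumed peak FP throughput
bandwidth_gbs = 25.0       # assumed usable memory bandwidth (GB/s)
flops_per_byte = 0.5       # arithmetic intensity of the workload

achievable = min(peak_gflops, bandwidth_gbs * flops_per_byte)
print(f"Achievable: ~{achievable:.1f} of {peak_gflops:.0f} peak GFLOPS")
# -> ~12.5 GFLOPS: a data-starved kernel sees a small fraction of peak.
```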

When we talk about GPUs it is very easy to see the superiority of PCs, when it comes to CPUs PCs are actually shockingly poor when looking at gaming explicitly. In relative terms, the CPUs the consoles shipped with were four to five generations ahead of PCs when looked at from a GPU based perspective.

No. In real loads, the console CPUs were at least a generation or two behind desktop CPUs at their respective launches. Even in really contrived loads, the PC CPUs caught up at the release of Conroe.

Don't confuse limitations of the GPU or RAM with the CPUs. When Carmack was posting notes on Rage one of the things he mentioned was that the PS3 had an advantage over the 360 and PC because it had so much raw CPU power they could use an entirely different and more intensive compression method to save IO overhead, this was versus the i7 too. That's the 'terrible' POWER architecture. Wasn't very good as a general purpose CPU, it is a *beast* at floating point computations as configured in the consoles particularly when compared to the extremely weak x86 offerings of the time era.

It might be worth reading the follow-ups on how that Rage PS3 thing turned out... It didn't work as well as he said it would.

Forgot to mention, perf/xtor is significantly higher on the POWER parts too, a "cheap" AMD CPU is considerably more expensive on a performance basis then the rather tiny POWER chips that MS used this generation. Not saying they won't go x86, but if they do it won't be to save money.

Umm, no? Don't forget to count the entire memory subsystem. Right now, the best performance/xtor among synthesizable, available designs is a race between Bobcat, Cortex-A15 and PPC 470. All of which are OoO.
 

Red Hawk

Diamond Member
Jan 1, 2011
3,266
169
106
In race mode, GT5 has several different modes and it runs some of them at 1920x1080 w/AA.

As I noted earlier, no, GT5 does not run at true 1920x1080. At best it runs at upscaled 1280x1080.

Anand's article got pulled because it was laughably inaccurate. He talked to PC devs who had no idea how to deal with AMP particularly not combined with in order explicit code let alone manual SRAM management over register allocation.

OK then. Where's your interview with actual game developers going on about how great the Cell is compared to x86 processors?

Absolutely he was; it's one of the reasons I like to point out that he made it clear that Cell was considerably more powerful than x86 CPUs. It isn't like he was trying to talk the consoles up; he bashed them many times for many different things. Raw CPU power was the one area where the consoles were miles ahead of PCs when they launched, and in the case of the PS3 it is *still* ahead of PCs. That is the point I am making: swapping to x86 must come with huge financial savings to be remotely viable. x86 is too slow in gaming when compared to more exotic architectures. Those same architectures would be garbage for general-purpose uses, but for gaming they excel.

From a power perspective, x86 processors are very poor for gaming, looking at it from a performance/watt or performance/$ ratio.

[citation needed]
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
That is the point I am making: swapping to x86 must come with huge financial savings to be remotely viable.

No, swapping to x86 will have to come with restrictions they don't want to have. They didn't choose IBM, and lately ARM, because of performance, regardless of what they are capable of, but because the cores are synthesizable and they can get a license to do practically whatever they want as far as integrating them. x86 just doesn't fit well with that.

x86 is too slow in gaming when compared to more exotic architectures. Those same architectures would be garbage for general-purpose uses, but for gaming they excel.

...except every time anyone has ever actually tested it. Can you show any single benchmark for that? The latest crappy port, FI, Dark Souls, apparently doesn't go below max frames even on total POS desktops, while it drops much lower on the console, and is starved for VRAM to boot. The chances that the GPU is causing the FPS drops are almost 0, given the poor image quality.

From a power perspective, x86 processors are very poor for gaming, looking at it from a performance/watt or performance/$ ratio.

If you're only working at 1/2 the pixels and 1/4 the framerate, it might look that way, but we like higher resolutions and framerates, and we don't use that much power to get there. The cost comparison is misleading, because the console is only a console. If the same level of integration were applied to PCs, they'd be cheaper (see AIOs). Some makers even tried integrating midrange GPUs onto mobos some years ago, for better value (it didn't catch on).
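Putting rough numbers on that "1/2 the pixels and 1/4 the framerate" point (the resolutions and frame rates here are just illustrative assumptions):

```python
# Pixel-throughput comparison; the targets are assumed examples.
console_pixels_per_s = 1280 * 720 * 30     # sub-1080p at 30 fps
pc_pixels_per_s      = 1920 * 1080 * 60    # 1080p at 60 fps

ratio = pc_pixels_per_s / console_pixels_per_s
print(f"The PC target pushes ~{ratio:.1f}x the pixels per second")   # ~4.5x
```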
 

jpiniero

Lifer
Oct 1, 2010
16,780
7,233
136
The latest crappy port, FI, Dark Souls, apparently doesn't go below max frames even on total POS desktops,

I haven't seen any benchmarks, but based on this one youtube video I found (heh) the HD 3000 struggles badly in DS.
 

Ancalagon44

Diamond Member
Feb 17, 2010
3,274
202
106
Absolutely he was; it's one of the reasons I like to point out that he made it clear that Cell was considerably more powerful than x86 CPUs. It isn't like he was trying to talk the consoles up; he bashed them many times for many different things. Raw CPU power was the one area where the consoles were miles ahead of PCs when they launched, and in the case of the PS3 it is *still* ahead of PCs. That is the point I am making: swapping to x86 must come with huge financial savings to be remotely viable. x86 is too slow in gaming when compared to more exotic architectures. Those same architectures would be garbage for general-purpose uses, but for gaming they excel.

What is this I dont even....

You do know that the Cell has about 200 million transistors? Modern x86 CPUs are over a billion transistors. You must be the only person who says that Cell does not suck in real life.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
In race mode, GT5 has several different modes and it runs some of them at 1920x1080 w/AA.

Right, the replay mode looks great, but the actual game doesn't run at 1920x1080, and it doesn't have that 60 fps sense of speed either. GameTrailers has the review. They also mentioned that while GT5's core/premium group of cars (200) are well done, the rest (800 standard cars) look very plain, lack cockpit views and the ability to be customized in the same way as the premium models, and are in other words not up to the rest of the game's standards. In local split-screen mode with a friend, GT5 cannot render other AI cars in the game, which is most likely a limitation of the PS3's graphics/CPU capabilities.

"The most disappointing and inconsistent aspect of Gran Turismo 5 is by far the visual presentation. It's hard to appreciate how good a car looks when its covered with jagged flickering shadows and poorly blended dust trails. Unbelievably the background elements like crowds and trees are crudely constructed or completely 2-dimensional. The chasm and quality between the standard and the premium models is extreme indeed. Presentation Score 8.6. " ~ Gamertrailers Review (see link above)

And now Forza Motorsport 4 Review from Gametrailers @ 7 min mark:

"...intricately detailed racing models, for all 500 of its cars, with smooth shading, image-based lighting and reflection techniques that really let them shine. The handful of Auto Vista models [~25 of them] go beyond that with fully detailed engines and small touches like dials and labels. Presentation Score: 9.1"

In other words, Forza 4's car models are more consistent overall and the game runs faster than GT5 does. It looks like the Xbox360 is actually a more powerful console than the PS3. Whether this is because of cost cutting or the PS3's inability to handle 60 fps with many detailed cars like Forza 4 can, I can't say for certain, but what I can say from this comparison without a doubt is that Cell's power advantages are nowhere to be seen against the Xbox360. The Xbox360 itself pales in comparison to a $50 AMD Phenom II X4 + $60 HD6670 setup today. So by extension, if the PS3 cannot convincingly outperform the Xbox360, it cannot be faster than modern AMD/Intel CPUs for games.

If the Cell is supposedly 4-5 generations ahead of Core 2 Duo (the modern architecture at the time of PS3) as you claimed, why do PS3 games look so outdated, and hardly any better than Xbox360 games? Dark Souls on PS3 vs. PC version.....Uncharted 3 doesn't even run at native 1280x720....etc. etc.

The performance difference between the Radeon R500 in the Xbox360 and the RSX G70 isn't that large. We can only conclude that the Cell in the PS3 is no better in the real world than the 3-core PowerPC architecture in the 360. That's because those 7 SPE units were horribly slow and inefficient in the real world, and the PS3 only has 1 PowerPC core vs. 3 in the Xbox360; all in all, both CPUs are horribly slow compared to even a single Nehalem Core i7 core.

The Cell has always been overhyped, only good for theoretical marketing benchmarks, Folding@Home and other non-gaming tasks. Not once has it been shown that the Cell is faster than any modern AMD or Intel CPU in games. Also, I don't ever recall Carmack saying that the Cell can actually outperform modern Intel/AMD CPUs, since it sure didn't in Rage when the game was finished.

Care to find anyone to back up the utterly idiotic assertion that Forza 2 looks better than GT5? I'd like to see you link anyone who backs that viewpoint up. I've heard people make the comparison to 4; normally it comes down to GT5's premium cars looking better than anything in Forza 4, while Forza 4 has better backgrounds and nothing like GT5's legacy cars.

Sorry, I mixed up 2/3 with 3/4. Forza 4 definitely looks better than GT5. Here is a 2 min video head-to-head on graphics:
http://www.gametrailers.com/videos/fg0u7h/forza-motorsport-4-forza-4-vs--gran-turismo-5

Where is that magical 4-5 generations ahead Cell advantage in Games?

Raw CPU power was the one area where the consoles were miles ahead of PCs when they launched, and in the case of the PS3 it is *still* ahead of PCs.

Raw CPU power is meaningless, as I told you already. You can't compare Intel and AMD CPUs using theoretical floating point calculations and extrapolate that to games either. The Bulldozer FX-8150 is superior to a 2500K based on theoretical floating point and integer performance, but loses badly to the 2500K in actual games. That's because there is a lot more to running game code fast than theoretical raw performance.

Case in point going back to Forza 4 vs. GT5. While you keep touting that Cell is way ahead of PCs, the fact of the matter is PS3 doesn't even have superior graphics to the Xbox360, a 1-year-older console. That means the $$$ that Sony spent on the Cell didn't materialize into any tangible benefits compared to the 3-core PowerPC Xenon in the 360. So how are you arguing that the "alien" technology in the Cell is so much superior to modern x86 CPUs when the Cell + NV GPU combo couldn't even best the Xbox360?? That's ironic.

[Side-by-side screenshots: PS3's GT5 vs. Xbox 360's Forza 4]

That is the point I am making: swapping to x86 must come with huge financial savings to be remotely viable. x86 is too slow in gaming when compared to more exotic architectures. Those same architectures would be garbage for general-purpose uses, but for gaming they excel.

No. Sony will ditch the Cell for 2 reasons: (1) the Cell architecture is outdated for games compared to modern OoO CPU designs; (2) they won't be stupid enough to fall for the theoretical TFLOP/GFLOP marketing BS again and then take on massive losses by pairing a $230 CPU with an anaemic $70 GPU. I am pretty sure Sony executives will have learned from this major mistake and will do everything to put a faster GPU in the PS4 than what the Xbox720 will have. Sony has learned by now that what drives graphics is the GPU, not the CPU.

Even though the PS3 launched 1 year later and had the supposedly superior Cell, it couldn't really outperform the Xbox360 convincingly. Actually, most cross-platform games run faster/smoother on the Xbox360. What saved the PS3 were free online gaming, the Blu-ray drive and a wider variety of games/stronger exclusives. PS3's graphics couldn't really topple the 360. That just goes to show that the GPU is the most important component of a console for graphics, and I am willing to bet that Sony won't make the same mistake again. The Cell will be ditched because it was too expensive and slow (i.e., awful price/performance and performance/watt for games).

I am also willing to bet that Sony's balance for CPU/GPU will change from that 3-to-1 ratio ($230 to $70). Perhaps they'll go with some AMD APU and a dedicated AMD GPU for CrossFire, but the focus will be on the GPU side this time because that's really where you gain the edge, not on the CPU side. This is no different on the PC, where beyond a $225 2500K you can pretty much spend up to $500 on a GPU and still be GPU limited in demanding titles.

From a power perspective, x86 processors are very poor for gaming, looking at it from a performance/watt or performance/$ ratio.

Do you always argue without facts like that? What's next, are you going to argue that ARM CPUs are better for gaming from a performance/watt perspective than Haswell will be? I am sure ARM CPUs are better for their uses, but your statement is too broad to draw anything useful from it.

Here are the facts:

Performance/watt argument for the Cell debunked
The Nvidia GeForce Go 7950GTX at 90nm is rated at 45W TDP. That's a full-fledged G71xM chip with 24/8 pixel/vertex pipelines, 16 ROPs and a 256-bit / 512MB memory configuration. The RSX in the PS3 is a 24/8, 8 ROP, 128-bit/256MB 90nm configuration of this chip, so at most it was using 45-50W of power. However, the PS3 in total used 240W of power under gaming. Since the 2 most power-consuming components for gaming are the GPU and CPU, that means the Cell was using the most power in the PS3, by logical deduction. There goes your performance/watt argument. (I'll address the performance/$ argument at the end of my post.) The Cell was not at all power efficient. What has allowed Sony to reduce the Cell's power consumption is constant die shrinks from 90nm. The PowerPC/Cell architecture itself is horribly inefficient from a performance/watt perspective compared to modern CPUs.
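The deduction in that paragraph is just a power-budget subtraction. A sketch using the figures quoted above; the split of the non-CPU/GPU remainder is my own assumption:

```python
# Power-budget deduction. Only the 240W total and the ~50W RSX estimate
# come from the post above; the "everything else" figure is assumed.

ps3_total_w = 240      # system draw under gaming, per the post
rsx_w       = 50       # upper estimate for RSX, per the Go 7950GTX TDP
other_w     = 60       # assumed: Blu-ray drive, PSU losses, RAM, I/O, HDD

cell_w = ps3_total_w - rsx_w - other_w
print(f"Implied Cell draw: ~{cell_w} W")   # -> ~130 W, the largest single consumer
```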

Performance/$ argument for the Cell debunked
As of December 2009, the IBM Cell Broadband Engine cost Sony just $37.73. What you are saying is that Sony will ditch the Cell for what you claim are inferior x86 CPUs to save money. How can you even use the cost-savings argument when even the AMD A8-3850K will cost Sony more to use than continuing with the Cell? Sony is ditching the Cell because it's horrendously slow for games and because it has poor performance/watt, despite being so cheap. There goes your performance/$ argument.

If you put a $90 Phenom II X6 into a PS4, it would mop the floor with the Cell in modern games, plain and simple. Even the $110 A8-3870K would be at least 2x faster for gaming than the Cell. As far as I am aware, you are the only person in the last 6 years on our forum who games on both a PC and a PS3 but continues to claim that the Cell is superior for games to a modern x86 CPU, and yet you haven't once shown how this is evident in real-world games. Even if the Cell is theoretically more powerful than a Core i7 3770K, it is only on paper. Unless programmers or designers can harness the power of the Cell to show us what it's capable of, it's just blowing smoke. In 6 years, the PS3 has not shown us that the Cell is capable of anything amazing for games compared to the Xbox360, and especially compared to the PC.

I know you really really love your PS3 and have defended it for 6 years now every time someone brings up the point that Cell was a flop. Maybe it's time to let go and look forward to PS4 and Sony finally embracing the superior x86 CPUs or some other modern CPU architecture. PS3 couldn't provide better graphics overall than the Xbox360, and we know by extension that the Xbox360 was far inferior to a Core 2 Duo + 8800GTS system. By today's standards the Cell architecture is hopelessly outdated for games even against a budget Phenom II X4 CPU.
 
Last edited:

jpiniero

Lifer
Oct 1, 2010
16,780
7,233
136
I am pretty sure Sony executives will have learned from this major mistake

The mistake was making the console so expensive. MS beat Sony because its console was cheaper. Period. Coming out earlier helped, and so did Kinect, but it was really about being cheaper.

put a faster GPU in the PS4 than what Xbox720 will have

The rumors suggest they are both getting a 6670.

why do PS3 games look so outdated, and hardly any better than Xbox360 games

Because it makes sense for a developer who is making a game for both consoles to keep it consistent. GT5 isn't one of them, but most of the second-party games look pretty decent (MGS4, GoW3, the Uncharted games, etc).

No. Sony will ditch the Cell for 2 reasons

If Sony is using PD, it'll be because of reasons other than technical, i.e., AMD gave them a godfather offer.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
So just like the specs on console, it seems some people (not you) just eat console-PR raw...

Or you could check out this cool new thing called reality: several GT5 modes run at 1920x1080; the racing is scaled.

Firstly, there is not much real difference in performance between the ISAs, it all comes down to the CPU implementations.

x86 CPUs use decode hardware so they can function as if they were one of the more modern architectures. This has xtor overhead.

Every time I've cracked open a beer with game devs who have to actually work on the pieces of shit, if the discussion turns into programming, they tend to go on long tirades about just how awful the Xenon/Cell are.

If they are PC devs, it isn't surprising. GPGPU is a lot more difficult to code for than x86 too; that doesn't mean it isn't profoundly faster at the tasks it is well suited for.

In the real world, every time we've had comparable designs (in manufacturing, ISA, et all) where one is in-order and the other isn't, the OoO core wins in perf/watt.

Do you have anything resembling a viable example of that? All of the ones you listed are entirely different architectures. OoO has the advantage that it is much easier to deal with: you can throw lousy code at it and it copes much better.

No. In real loads, the console CPUs were at least a generation or two behind desktop CPUs at their respective launches.

Not remotely close to being accurate. Carmack disagrees with whoever you heard that from, too.

You do know that the Cell has about 200 million transistors? Modern x86 CPUs are over a billion transistors.

And they still flat out lose in straight throughput. x86 CPUs are terrible in terms of performance/watt or performance/xtor.

Forza 4 v GT5-

http://www.youtube.com/watch?v=lHdI053Qq3g&feature=related

Side-by-side comparison, no commentary. There are trade-offs with either game; Forza 4 is certainly shinier. Cell's advantage over Xenon would be in physics for a racing game, where GT5 has a rather decisive edge (Xenon likely could do much better, but Turn 10 seems to go for more of an arcade/accessible feel for their games).

While you keep touting that Cell is way ahead of PCs, the fact of the matter is PS3 doesn't even have superior graphics to the Xbox360, a 1-year-older console.

http://www.youtube.com/watch?v=VsqE2255064

If you put a $90 Phenom II X6 into a PS4, it would mop the floor with the Cell in modern games, plain and simple.

You claim you know more about coding than John Carmack; which games have you released on these systems? He says Cell handily bests an i7.

They won't be stupid enough to fall for the theoretical TFLOP/GFLOP marketing BS again and then take on massive losses by pairing a $230 CPU with an anaemic $70 GPU.

Holy stupid comment. Sony owns rights to Cell; they can produce it themselves. Wherever you saw the $230 CPU figure, it is profoundly wrong, not even remotely close.

Maybe it's time to let go and look forward to PS4 and Sony finally embracing the superior x86 CPUs or some other modern CPU architecture.

Do you know anything about CPU design? x86 is so bad that they use decode hardware to translate x86 into uOps, which they do to mimic a POWER-style layout for their chips, because x86 was such a shockingly bad architecture that floating point wasn't even native. The x86 design isn't modern, and it isn't remotely close to superior for gaming or any other FP-intensive tasks.

http://www.top500.org/

Cell, SPARC and Nvidia dominate. x86 is a great general-purpose processor; it sucks for highly optimized code.

Edit-

For people asking for Carmack's comments about Cell's power: I couldn't find the particular write-up I saw, but around the 9:00 mark of this video he talks about it. He actually comments on how much of a PITA it is, but its power isn't in question:

http://www.youtube.com/watch?v=hapCuhAs1nA

If you keep watching he also talks about how x86 is at a big disadvantage for getting into the consoles due to cost, and that's compared to MIPS and POWER(and ARM).
 
Last edited:

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
I haven't seen any benchmarks, but based on this one youtube video I found (heh) the HD 3000 struggles badly in DS.

The HD 3000 is Intel's IGP on SB, which is still not as capable as the XB360's GPU. Llano or better, or any AIB card higher than a Radeon x450, ought to be.

x86 CPUs use decode hardware so they can function as if they were one of the more modern architectures. This has xtor overhead.

So does every other processor, save for some GPUs and other super-simple application-specific ones. The Cell and Xenon are no exceptions. The xtor overhead is actually pretty small, too, and has sometimes been lower than the overhead of RISC decoders. Decode bandwidth is x86's real issue, since every instruction's length has to be determined, then bytes shifted for the next instruction. Making that superscalar is not simple. RISC CPUs have a much easier time of it, even with support for smaller modes like MIPS16 or Thumb.

If they are PC devs, it isn't surprising. GPGPU is a lot more difficult to code for than x86 too; that doesn't mean it isn't profoundly faster at the tasks it is well suited for.

No, if they have done anything but arcade/console development, because that and HPC have basically been the only areas where in-order has stayed around for long periods of time in the presence of complicated programs. Anyone with any experience beyond that will hate in-order for CPUs, and for good reason.

Do you have anything resembling a viable example of that? All of the ones you listed are entirely different architectures. OoO has the advantage that it is much easier to deal with: you can throw lousy code at it and it copes much better.

You can also throw good code at it and have it handle that much better. The simple fact is that you can't handle more than a small handful of outstanding cache misses without OOOE. It's been tried, and they can give paper specs, but it never works out in practice. You can't take code that fundamentally needs to branch to a non-PC-relative location (or a large relative offset), or that fundamentally needs to fetch a calculated address, and not stall all the time with in-order. OOOE also allows execution and completion while waiting on L1 hits to known addresses that couldn't be loaded earlier for consistency reasons, and lets the core keep doing some work while going to L2. They could get away with not bothering while clock speeds were going up and up in the 80s and 90s, but the 00s made a trick from 1964 necessary all over again.
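A crude latency model of the outstanding-miss point. The miss latency, miss count and overlap factor below are all assumed numbers; the sketch only illustrates why keeping several misses in flight matters:

```python
# Crude model: an in-order core stalls on each miss serially, while an
# out-of-order core can keep several misses in flight at once.
# All numbers are assumptions for illustration.

misses = 100
miss_latency_cycles = 300     # assumed DRAM round trip
mlp = 6                       # assumed outstanding misses sustainable with OoO

in_order_cycles = misses * miss_latency_cycles
ooo_cycles = (misses / mlp) * miss_latency_cycles

print(f"In-order miss stall: {in_order_cycles} cycles")
print(f"OoO miss stall:      {ooo_cycles:.0f} cycles (~{in_order_cycles / ooo_cycles:.0f}x fewer)")
```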

Do you know anything about CPU design? x86 is so bad that they use decode hardware to translate x86 into uOps, which they do to mimic a POWER-style layout for their chips, because x86 was such a shockingly bad architecture that floating point wasn't even native. The x86 design isn't modern, and it isn't remotely close to superior for gaming or any other FP-intensive tasks.

Can you find a modern application-type RISC CPU that does not have and use decode hardware to convert instructions to uops (hint: you'll be looking for a while, even if you might find one or two)? They basically all do it today. Not having a somewhat decoupled back-end is only done for tiny uC-style CPUs, and thanks to Moore's Law, it's getting less common by the year even there.

Not merely that, but which ISAs include FP as a requirement? PPC doesn't, that's for sure. It's also extra on ARM (even ARM64, with them allowing it as optional for the R-profile: designed-by-committee madness, IMO). MIPS, too. Oh, those horrible ISAs! :rolleyes: But wait, what common ISA does? Wait for it....wait...wait...x86! IEEE 754 FP became a de facto requirement for Pentium compatibility, and is a stated requirement for x86_64.

When x86 still had separate FP chips, so did RISCs, for the most part, if they even supported FP at all. Common integration started happening around the same time, and for the same reason: enough xtors could be put on a chip to do proper IEEE 754 at decent speeds and emulate the old proprietary decimal types fast enough (Decimal, with a capital D, being a notable exception, of course), and everybody converged on that one standard for FP (also, Intel was the one that had the smarts to not give the problem just to their engineers, which ended up giving birth to IEEE 754).
 
Last edited:

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
The mistake was making the console so expensive. MS beat Sony because its console was cheaper. Period. Coming out earlier helped, and so did Kinect, but it was really about being cheaper.

The cost of the console is tied to the cost of the components. The Cell cost $230, by far the most expensive part of the PS3 other than the Blu-ray drive. So yes, including the Cell was a major mistake that contributed to the exorbitant ~$900 cost to manufacture the PS3, while not allowing the PS3 to have graphics superior overall to the 1-year-older Xbox360. Basically, the Cell was a total waste of money. They could have gone with the same higher-clocked 3-core, dual-threaded PowerPC CPU the 360 used and the PS3 would have lost nothing in performance.

The rumors suggest they are both getting a 6670.

The same rumours suggest the PS4 will have dual graphics via an AMD APU and a dedicated GPU. That means the PS4 could have CrossFire capability: A8-3870K (HD6550D) + HD6670 ~ HD6770 level of performance. This hasn't been officially confirmed, but if Sony goes the AMD APU + dedicated GPU route, it should have a stronger graphics subsystem than the Xbox720, which is probably going to spend money on Kinect 2.0 instead of faster graphics.

Because it makes sense for a developer who is making a game for both consoles to keep it consistent. GT5 isn't one of them, but most of the second party games look pretty decent (MGS4, GoW3, Uncharted games, etc).

Yes, but BenSkywalker here keeps touting the Cell's capability, and yet even Sony's exclusive titles hardly look better than the Xbox360's exclusives. Actually, as far as I know, none of the modern Sony exclusives even run at 1080p, having crippled texture quality and resolution instead:

Uncharted 3 = 896x504 in 3D
GT5 = 1280x720 (QAA) in 3D
God of War III = 1280x720 (MLAA)
Metal Gear Solid 4: Guns of the Patriots = 1024x768 (2xAA, temporal)

In other words, Cell's superiority is just smoke and mirrors.

If Sony is using PD, it'll be because of reasons other than technical, ie: AMD gave them a godfather offer.

No, it'll be because Sony has finally seen the light that modern CPU architectures, even Phenom II and the AMD A8, are far superior in gaming performance to the 1 PowerPC core / 7 SPE Cell abomination, which not once throughout the PS3's life proved its $230 worth.
 
Last edited:

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
You claim you know more about coding than John Carmack; which games have you released on these systems? He says Cell handily bests an i7.

You seem to love posting baseless opinion with no facts to back up your claims. The same guy admitted they didn't properly code the PC game. Even he knows Rage was a consolized disaster. Even Doom 3 modded looks better than Rage.

John Carmack on Rage PC coding:
""The release on the PC not working for over half of our customers because of the driver issues that we had really was inexcusable on our part. "It was optimistic naïveté," he continued. We tried to get the drivers to work the way they were supposed to, but we weren't smart about the fact that most people wouldn't have these drivers if everything had gone right." Despite the game's problems with graphics drivers affecting more than half of Rage's player base, Carmack still has a soft spot for the PC platform: "PC, when its working right, is far, far better." ~ Source

or here as of August 2012:

"In addition, Carmack revealed that the BFG version of Doom 3 will run with 60Hz on consoles in 3D mode. This basically means that Doom 3: BFG will be a 30fps game for both X360 and PS3. The PC version, on the other hand, will offer true 120Hz 3D stereoscopic experience, meaning that PC gamers will enjoy 60fps at high resolutions. Carmack admitted that RAGE lacked dynamic shadows and lights, something that had a huge impact on it. Last but not least, Carmack admitted that PCs are superior to consoles." ~ Source

Oops....

Got a source where he says exactly that, i.e., that Sony's Cell can outperform an Intel Core i7 in gaming? This is coming from the same John Carmack who said the overall performance of a modern PC is 10-20x faster than the PS3/360 consoles.

Dude, just admit it here and now that you are a hardcore PS3 fanboy. That's what it's really about. For 6 years you have kept repeating the same story. You should probably lock yourself in a room when the PS4 has graphics 2-3x better and uses an AMD or some newer OoO PowerPC architecture, because you will have been wrong on both of your outrageous claims:

1) OoO CPUs are worthless, but OoO is used because of inefficient x86 compilers/code
2) The Cell > AMD/Intel CPUs for games.

Seriously, I sometimes wonder if you have learned anything at all since the Pentium 4 days about how modern CPUs actually work. If the Cell architecture were worth keeping, Sony would keep it. It's obviously outdated and never lived up to the hype. Its performance advantage was never realized in the PS3, since theoretical CPU performance is worthless, as many people have stated here but you continue to ignore.

PS4 won't use the Cell because:

"The abandonment of the Cell architecture would thrill the many game developers who have struggled with the complex chipset, but it could also be viewed as the admission of a mistake.
Cell was the pet project of PlayStation creator Ken Kutaragi, who dreamed that the chip—a "Power Processing Element" married to eight "Synergistic Processing Elements"—would make the PS3 the most impressive gaming console ever. He spoke of a home equipped with multiple devices that were powered by Cell, all of them linking to each other to increase the computational power driving any of the devices. Cell was not the revolution Sony hoped and hyped that it would be. It also never managed to make the PS3 appear to be significantly more powerful than the year-older Xbox 360. "

http://kotaku.com/5889410/playstati...sources-say-which-leads-to-some-wild-theories

Cell architecture = failure in every regard: cost, performance, the ability to extract that performance through efficient programming, performance/watt, performance/$.

If in 6 years not a single developer on the planet could actually harness the power of the Cell to utilize its 4-5 generations of performance ahead of a Core 2 Duo (your claim), then it was a marketing flop/failure. If in 6 years after all the games are tallied, it's very difficult to conclude that PS3 has convincingly better graphics than a 1-year-old competitor, that also shows us that the Cell was a marketing failure/flop. It doesn't matter if the Cell is theoretically 1 TRILLION times faster than a Core i7 on paper. The only thing that matters is real world performance and on that metric, the Cell was a disaster.
 
Last edited:

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
Not merely that, but what ISAs include FP as requirements? PPC doesn't

You guys keep bringing up PPC, so maybe I should stick to talking about the 80286, which clearly means that x86 is in-order explicit. POWER not only requires general FP but also SIMD. PowerPC is legacy and has been for years.

In other words, Cell's superiority is just smoke and mirrors.

A CPU is not a GPU. Look those terms up.

The Cell cost $230

Inaccurate, ignorant drivel. Sony owned its own fabs, paid for Cell development, and is not licensing it. The actual cost, given the die size at launch, would have been closer to $50.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
Dude just admit it here and now that you are a hardcore PS3 fanboy

I understand what I'm talking about, so we are coming from two profoundly different places on this topic. You want to bash the PS3? How about the segmented RAM's overhead for moving data from one pool to the other, and how many cycles you waste versus having a full, or at least closer to full, UMA? How about we discuss the amount of memory wasted by Sony's OS and how that negatively impacts developers, limits what they are able to do, and in some cases has rather severe impacts on game performance? We could discuss the use of such high-latency RAM paired with the CPU, particularly one whose vector units are so sensitive to latency.

What I'm not going to do is be some ignorant inbred hillbilly and try to pick on them for an area where they are *very* strong.

Its performance advantage was never realized in the PS3 since theoretical CPU performance is worthless as many people have stated here but you continue to ignore.

Theoretical performance isn't worthless; it simply doesn't exist in a vacuum. Development ease of use is a consideration, one that Cell's designers didn't account for. That doesn't change what it is capable of doing, nor what it has done.

Cell architecture = failure in every regard, cost, performance, ability to extract that performance through efficient programming, performance/watt, performance/$.

Cell was much cheaper for Sony than x86, and its performance/watt and performance/$ still utterly obliterate x86. Again, I point you back to the most powerful computers in the world.

An FYI that you seem to be profoundly ignorant of: Cell is not the PS3.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
You guys keep bringing up PPC, so maybe I should stick to talking about the 80286, which clearly means that x86 is in-order explicit. POWER not only requires general FP but also SIMD. PowerPC is legacy and has been for years.

Freescale's e200, FI, is Power 2.03 compatible, and has no FP. PowerPC is what the majority of non-server CPUs with 32-bit support are dubbed, including the CPUs in the Cell and Xenon. The IBM ones have model names prefixed with PPC, thus allowing the name to live on. It's as convoluted and complicated as you would expect from a company as large and byzantine as Big Blue.
 

Red Hawk

Diamond Member
Jan 1, 2011
3,266
169
106
Do you know anything about CPU design? x86 is so bad that they use decode hardware to translate x86 into uOps, which they do to mimic a POWER-style layout for their chips, because x86 was such a shockingly bad architecture that floating point wasn't even native. The x86 design isn't modern, and it isn't remotely close to superior for gaming or any other FP-intensive tasks.

http://www.top500.org/

Cell, SPARC and Nvidia dominate. x86 is a great general-purpose processor; it sucks for highly optimized code.

So...in a discussion about a processor's gaming ability, linking to a website about supercomputers is relevant...how? :confused: Never mind the fact that several computers using Intel Xeon processors, and even an AMD Opteron system, are on the top 10 list for June.
 
Last edited: