Discussion [TweakTown] Guerrilla dev: PS3's Cell CPU is by far stronger than new Intel CPUs


Hitman928

Diamond Member
Apr 15, 2012
5,394
8,288
136

Even desktop chips nowadays, the fastest Intel stuff you can buy is not by far as powerful as the Cell CPU, but it's very difficult to get power out of the Cell. I think it was ahead of its age, because it was a little bit more like how GPUs work nowadays, but it was maybe not balanced nicely and it was too hard to use. It overshot a little bit in power and undershot in usability, but it was definitely visionary

Interesting perspective. I completely disagree, but interesting. I think it's true in a way, with the Cell having the different types of compute units, especially the SPEs, which made it very powerful (on paper) for SIMD workloads compared to CPUs at the time, but I don't see how it could be considered any more powerful than a modern APU.
 

Mopetar

Diamond Member
Jan 31, 2011
7,948
6,245
136
I think most 30 fps next-gen games should be 30 fps because of the GPU, so the PC version will see significantly better framerates.

Well, there were some, like Fallout, where the engine is so old that the frame rate is tied to a bunch of other systems, and increasing it too much starts to result in bizarre behavior. Of course, that same engine is such an unoptimized mess that it was hard to get the frame rate that high without a beastly rig anyway.

I guess if the console games aren't really optimized around the CPU then it's a perfect excuse to bump up the resolution if you can't even get past 70 FPS at lower resolutions anyway.
 

arandomguy

Senior member
Sep 3, 2013
556
183
116
Well thankfully the Xbox Series X news seemed to mention 60 fps or higher, but we'll see how closely that guideline/target is adhered to.

I feel your problem there is that you are (unknowingly?) conflating PC with Windows 10. I'm primarily a Linux user, and I see a lot of optimization potential for PC gaming at the OS level.

We're discussing different things. I was referring to developer (game) side optimizations and design rules.

I understand that whole argument that developers will just get lazy and won't optimize as hard, but there's so much of an increase in resources from the CPU side that it's hard to imagine that happening right away.

I guess I'm on the more pessimistic side of this.

Take for instance the physics discussion in this thread. That is a design aspect that is comparatively easy to leverage and that can saturate the new hardware's capabilities.

Actually that will make porting games from console to PC way easier. Zen 2 on console to Zen 2 on PC. Done and dusted.

Note that PC gaming performance will always be "a console generation" ahead of consoles due to the PC's rapid iteration and modular nature. PCs and consoles will be similar for all of one year and then *BOOM* Zen 3 and RTX 3080s will hit the shelves. Also, PCs should have better GPUs and cooling, and therefore faster-running components and better gaming performance.

The issue isn't getting the games running on the PC. I can also see an unintended consequence here, in that it doesn't cost much to port (as in, just get a game running) relative to how much it costs to optimize (get it running well, with PC-specific tuning). So yes, everything will get ported, since it'll be easy to recoup those costs, but it won't need to run well (lower rate of return).

While I still see high gen-to-gen growth rates for GPUs, I don't feel the same for CPUs. Core count, and therefore aggregate core performance, will likely scale up, but not so much per-core performance. Just using the Xbox Series X figures, we can conservatively conclude that Zen 2 per-core performance is well over 3x that of the current consoles. We are simply not going to see anywhere near that wide a per-core gap again, barring some fundamental tech breakthrough, within the next 6 years.

I think most 30 fps next-gen games should be 30 fps because of the GPU, so the PC version will see significantly better framerates.

But you're right, it's gonna be much harder to have very high frame rates, and that should be a good thing. PC gamers have been complaining since the PS3 era that PC games are held back because of consoles. If a developer fully uses the CPU to create a 30 fps game, that should be a very interesting game in terms of its logic. And if the PC version is only able to run at 45 fps, that's not too bad with FreeSync.

I'm not so sure about the GPU. What we are already seeing, and will see more of, is "smarter" use of GPU resources via dynamic quality adjustments. Next-gen consoles will likely lean much more heavily on those techniques, so that effective GPU capability rises faster than the raw improvements.

I will concede that the how-much-FPS argument (and really, this is an age-old argument) comes down to preference, so not everyone will share the same concerns. It's just that, personally, having been used to a consistent 60 fps (and beyond) for years and years now, I find the motion quality at 45 (variable refresh or not), much less 30, leaves much to be desired.
 

lobz

Platinum Member
Feb 10, 2017
2,057
2,856
136
AMD people used to mock AMD diehards...
I want to understand, then, how even Atoms from some years ago could play PS3 ports just fine?
If you would please read what is in the OP, you wouldn't ask this question, which I'm sure you intended as a very funny one.
 

Adonisds

Member
Oct 27, 2019
98
33
51
By far stronger? How is a 65nm Cell supposed to be a much better general-purpose CPU than a 14nm+++ 9900K with a similar die size that can use more power because it's in a PC?

Do these people believe in magic?
 

soresu

Platinum Member
Dec 19, 2014
2,727
1,931
136
By far stronger? How is a 65nm Cell supposed to be a much better general-purpose CPU than a 14nm+++ 9900K with a similar die size that can use more power because it's in a PC?

Do these people believe in magic?
I think it was referring to the SPE part of the "Cell Broadband Engine" architecture.

FLOPS wise I think it was ahead of its time, but not the easiest thing to program for.

I agree, though; it seems unlikely that the last shrink variant would compare well against the latest AVX-512-enabled cores. Has anyone got a FLOPS count for one of them?
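(Back-of-the-envelope, for what it's worth: a two-FMA AVX-512 core peaks at 2 units × 16 fp32 lanes × 2 ops = 64 flops per cycle, so a hypothetical 18-core part at ~3.5 GHz would come in around 4 TFLOPS fp32. Core count and clock there are illustrative assumptions, not a measured figure.)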
 

DrMrLordX

Lifer
Apr 27, 2000
21,710
10,986
136
Theoretical max fp32 performance for Cell was ~230 GFlops. In fp64, 15 GFlops. That's with 8 SPEs active and not being used by anything else.

A 3900x:

[attached AIDA64 GPGPU benchmark screenshot: aida64temp.png]

And that's just an AVX2 chip.
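For anyone who wants to sanity-check those peaks, here's a back-of-the-envelope sketch. The issue widths are the published ones; the 3900X all-core clock is my assumption:

Code:
#include <stdio.h>

int main(void) {
    /* Cell @ 3.2 GHz: each SPE issues one 4-wide fp32 FMA per cycle,
       i.e. 8 flops/cycle = 25.6 GFLOPS; the PPE's VMX unit adds
       roughly the same again. */
    double per_spe = 3.2e9 * 4 * 2;               /* 25.6 GFLOPS */
    double cell    = 8 * per_spe + 3.2e9 * 4 * 2; /* ~230 GFLOPS */

    /* Zen 2 (AVX2): 2 x 256-bit FMA pipes per core = 16 fp32 lanes
       x 2 ops = 32 flops/cycle; ~4 GHz all-core is an assumption. */
    double r3900x = 12 * 4.0e9 * 32;              /* ~1.5 TFLOPS */

    printf("Cell : %4.0f GFLOPS fp32\n", cell / 1e9);
    printf("3900X: %4.0f GFLOPS fp32\n", r3900x / 1e9);
    return 0;
}

Roughly 230 GFLOPS against ~1,500, before you even touch AVX-512.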
 
  • Like
Reactions: soresu

soresu

Platinum Member
Dec 19, 2014
2,727
1,931
136
Theoretical max fp32 performance for Cell was ~230 GFlops. In fp64, 15 GFlops. That's with 8 SPEs active and not being used by anything else.

A 3900x:

View attachment 14407

And that's just an AVX2 chip.
Not to mention that's only the 12-core model; EPYC and TR3 must be monstrous.

At a guess that would put TR3 32C at about 4 TFLOPS FP32, assuming that SP is double the DP figure.

Noice!

I wonder how well they would do with software OpenGL or Vulkan work....
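For what it's worth, the same per-core arithmetic lands right on that guess: 32 cores × 32 fp32 flops/cycle (2 × 256-bit FMA) × ~3.9 GHz ≈ 4.0 TFLOPS, with the all-core clock being an assumption on my part.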
 
Last edited:

Jorgp2

Junior Member
Dec 19, 2018
21
11
81
I don't get how there would even be a question of how weak the CELL was.

Its entire design philosophy was to be three things: cheap, low power, and good enough. Then there was the fact that Sony originally planned to use the CELL for both graphics and compute, like they had on both their previous consoles.

They achieved the cheap and low power requirements by going with an in-order core with the ability to use the SPEs for simple code, with the PPE handling the main game logic and the SPEs handling the rendering and vector math.

People like to compare the CELL to conventional CPUs of the time, when the designers of the CELL itself knew that they would never be able to compete with the big two in the desktop space, who could make large dies on their own process nodes at any cost. Yes, x86 CPUs didn't have the floating-point throughput to match the CELL, but you should be comparing the CELL to the GPUs of the time, which actually were designed to do the same tasks as the CELL.

But at the end of the day, even after Sony was forced to add an actual GPU to the final design, the 360 was a much better machine. Its CPU was more powerful, and its GPU was both more powerful and actually introduced new technology.
 

NTMBK

Lifer
Nov 14, 2011
10,248
5,045
136
I don't get how there would even be a question of how weak the CELL was.

Its entire design philosophy was to be three things: cheap, low power, and good enough. Then there was the fact that Sony originally planned to use the CELL for both graphics and compute, like they had on both their previous consoles.

They achieved the cheap and low power requirements by going with an in-order core with the ability to use the SPEs for simple code, with the PPE handling the main game logic and the SPEs handling the rendering and vector math.

People like to compare the CELL to conventional CPUs of the time, when the designers of the CELL itself knew that they would never be able to compete with the big two in the desktop space, who could make large dies on their own process nodes at any cost. Yes, x86 CPUs didn't have the floating-point throughput to match the CELL, but you should be comparing the CELL to the GPUs of the time, which actually were designed to do the same tasks as the CELL.

But at the end of the day, even after Sony was forced to add an actual GPU to the final design, the 360 was a much better machine. Its CPU was more powerful, and its GPU was both more powerful and actually introduced new technology.

No, the PS3 was always going to have a GPU, according to one of the designers of the Cell (David Shippy), in the book he wrote. But it was going to be an internal Sony design. When that team failed, Nvidia were brought in.
 
  • Like
Reactions: Adonisds

insertcarehere

Senior member
Jan 17, 2013
639
607
136
No, the PS3 was always going to have a GPU, according to one of the designers of the Cell (David Shippy), in the book he wrote. But it was going to be an internal Sony design. When that team failed, Nvidia were brought in.

The hubris of believing they could design a better GPU than the companies who design, make, and sell GPUs for a living.

PS3 would've been so much better as a more conventional CPU + G80-derived GPU than what we got in the end.
 

Jorgp2

Junior Member
Dec 19, 2018
21
11
81
The hubris of believing they could design a better GPU than the companies who design, make, and sell GPUs for a living.

PS3 would've been so much better as a more conventional CPU + G80-derived GPU than what we got in the end.

I believe it shouldn't have been conventional; it should have been a unified memory architecture like the 360's.

But by far the biggest drawback I see with the HD-generation consoles is their small memory allotment. It forced developers to either get creative or cut back their games.
Especially on the 360, which relied heavily on compressed textures.
 

lobz

Platinum Member
Feb 10, 2017
2,057
2,856
136
I believe it shouldn't have been conventional; it should have been a unified memory architecture like the 360's.

But by far the biggest drawback I see with the HD-generation consoles is their small memory allotment. It forced developers to either get creative or cut back their games.
Especially on the 360, which relied heavily on compressed textures.
If you find this place worse than reddit, why do you bother talking to us, lowly plebs?
 

Ottonomous

Senior member
May 15, 2014
559
292
136
If you find this place worse than reddit, why do you bother talking to us, lowly plebs?
Is this a misquote? But in all honesty, rejecting reddit and even anandtech is part of the new countercultural mainstream: pretending that you're enlightened, in the know, because you don't subscribe to something that has mass acceptance. Essentially hipsterism.
 
  • Like
Reactions: Adonisds and lobz

lobz

Platinum Member
Feb 10, 2017
2,057
2,856
136
  • Like
Reactions: Ottonomous

NTMBK

Lifer
Nov 14, 2011
10,248
5,045
136
The hubris of believing they could design a better GPU than the companies who design, make, and sell GPUs for a living.

PS3 would've been so much better as a more conventional CPU + G80-derived GPU than what we got in the end.

They designed their own custom graphics hardware for the PS1 and PS2.
 

Arkaign

Lifer
Oct 27, 2006
20,736
1,377
126
It bears repeating that Cell lacked both out-of-order execution and branch prediction entirely.

Essentially the only portion usable whatsoever for GP computing was the single PowerPC core w/512k cache.

Having it tied to Rambus was just a hilarious extra bit of fun.

I feel like we shouldn't point and laugh too much, but even an era-correct Athlon X2 is a better all-around CPU than the Cell ever was.
 

Nothingness

Platinum Member
Jul 3, 2013
2,503
901
136
It bears repeating that Cell lacked both out-of-order execution
The lack of OoOE in the SPEs was not a real issue. After all, these were always considered beefed-up DSPs, programmed in assembly language.

and branch prediction entirely.
The PPE had (limited) branch prediction. The SPE had branch instructions with explicit hints.
Ref: https://www.cc.gatech.edu/~hyesoon/spr11/lec_cell.pdf
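For the curious, the usual way to feed those hints from C was GCC's __builtin_expect; as I recall, the SPU toolchain lowered it into the SPE's hint-for-branch instructions. A minimal sketch (plain GCC shown, purely illustrative):

Code:
#include <stddef.h>

/* Sum the non-zero entries of a vector. On an in-order core with no
   branch predictor, telling the compiler which way a branch usually
   goes keeps the hot path free of pipeline stalls. */
double sum_nonzero(const double *v, size_t n) {
    double acc = 0.0;
    for (size_t i = 0; i < n; i++) {
        /* hint: the non-zero case is assumed to be the common one */
        if (__builtin_expect(v[i] != 0.0, 1))
            acc += v[i];
    }
    return acc;
}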

Too bad the PS3 only has 256 MB of RAM. If it had more, I would have tried SPEC 2006 just for fun :) That is, if my PS3 with Linux still works after all those years...
 

f2bnp

Member
May 25, 2015
156
93
101
I'm fairly certain that the PS3 was initially going to have a GPU designed by Toshiba, but it ended up having all sorts of issues, and the RSX was apparently introduced at a somewhat late stage of development. Weird design either way, but man, that article is just straight up weird.
 

soresu

Platinum Member
Dec 19, 2014
2,727
1,931
136
Theoretical max fp32 performance for Cell was ~230 GFlops. In fp64, 15 GFlops. That's with 8 SPEs active and not being used by anything else.

A 3900x:

View attachment 14407

And that's just an AVX2 chip.
Not to mention that's only the 12-core model; EPYC and TR3 must be monstrous.

At a guess that would put TR3 32C at about 4 TFLOPS FP32, assuming that SP is double the DP figure.

Noice!

I wonder how well they would do with software OpenGL or Vulkan work....
Confirmed Ryzen TR 3970X is 4 TFLOPS FP32:
[attached benchmark chart: 9268_08_amd-ryzen-threadripper-3970x-zen-2-processor-review.png]

Interesting that Intel has such a lead in 64-bit IOPS though, more than double with fewer cores at the top of the graph; perhaps that is another avenue being worked on for Zen 3 or 4.
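(A plausible explanation, assuming that chart is a vector integer throughput test: AVX-512 packs 8 int64 lanes per operation against AVX2's 4, so each Intel core can retire double the 64-bit integer work per instruction. That would make it a vector-width gap rather than a core-count or clock one.)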
 

Jorgp2

Junior Member
Dec 19, 2018
21
11
81
Is this a misquote? But in all honesty, rejecting reddit and even anandtech is part of the new countercultural mainstream, pretending that you're enlightened, in the know, because you don't subscribe to something that has mass acceptance. Essentially hipsterism
I came to check out any news about Cascade Lake's launch, found a thread of AMD shills bashing people looking to buy them.

Literally trying to find any excuse to call us idiots.


Insults and accusations (such as "shills")
are not allowed.

AT Mod Usandthem
 
Last edited by a moderator:

lobz

Platinum Member
Feb 10, 2017
2,057
2,856
136
I came to check out any news about Cascade Lake's launch, found a thread of AMD shills bashing people looking to buy them.

Literally trying to find any excuse to call us idiots.
While I can see how the objective performance, platform longevity and general usefulness of currently available hardware may have hurt your feelings, I don't really see how calling the majority of this forum and the vast majority of the tech press and reviewers worldwide AMD shills would ever change that. Thankfully, just as we don't need any excuses to call Cascade Lake X pointless, you don't need any excuses to buy it either. Of course, if you're looking for reassurance for listening to sentiment over facts, I'm sure you can find the proper echo chamber for that - possibly even on reddit :)
 

soresu

Platinum Member
Dec 19, 2014
2,727
1,931
136
I came to check out any news about Cascade Lake's launch, found a thread of AMD shills bashing people looking to buy them.

Literally trying to find any excuse to call us idiots.
I'd only call someone an idiot if they bought an inferior product without the expected price drops that a superior product tends to bring to the market.

For example, the Athlon X2 was inferior to the Core 2 Duo, but the release of the C2D caused huge price drops on AX2 models, which made them great value for money at the time.

The main point is that competition is good for the consumer.
 
  • Like
Reactions: scannall

MajinCry

Platinum Member
Jul 28, 2015
2,495
571
136
Pure single-thread performance is not the sole metric to go for. The Cell's SPUs are vector units, essentially dedicated AVX cores. They work great at crunching floats, but good luck making the workloads thread-safe.

Then there's the whole draw call performance thing. Consoles have had APIs as performant as Vulkan since their inception. Try running a game with 1,500 draw calls on a Windows PC back in the days of the GameCube: you wouldn't need an FPS counter, as you'd be able to just count the frames on your own. The GameCube, however, would do it just fine at 60 fps minimum, with ample processing time to spare. Here's a good wee read on the subject.

Theoretical maximums might be higher on the latest and greatest CPUs in certain scenarios, but consoles win by an order of magnitude when comparing draw calls in anything other than the Mantle-derived rendering APIs.
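To make the draw call point concrete, here's a hedged sketch in desktop GL (assuming a 3.3+ context and an extension loader such as GLEW are already set up). The instanced version pays the per-call driver overhead once instead of 1,500 times:

Code:
#include <GL/glew.h>

/* Naive: one API call per object. Every call crosses the driver,
   which is exactly the overhead the old PC APIs made expensive. */
void draw_naive(GLuint vao, int objects) {
    glBindVertexArray(vao);
    for (int i = 0; i < objects; i++)
        glDrawArrays(GL_TRIANGLES, 0, 36);   /* 1,500 calls... */
}

/* Batched: one instanced call; per-object transforms come from a
   per-instance vertex attribute instead of per-call state changes. */
void draw_instanced(GLuint vao, int objects) {
    glBindVertexArray(vao);
    glDrawArraysInstanced(GL_TRIANGLES, 0, 36, objects);
}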
 
  • Like
Reactions: NTMBK

Jorgp2

Junior Member
Dec 19, 2018
21
11
81
I'd only call someone an idiot if they bought an inferior product without the expected price drops that a superior product tends to bring to the market.

For example, the Athlon X2 was inferior to the Core 2 Duo, but the release of the C2D caused huge price drops on AX2 models, which made them great value for money at the time.

The main point is that competition is good for the consumer.

I've been considering moving to an HEDT platform ever since the original Skylake-X and TR platforms launched.

Right now Cascade Lake is a great product, better than 2nd gen TR, and cheaper than 3rd gen TR.

The Skylake architecture is still competitive; I do not understand why certain people have to bash others based on their preference.

This one guy keeps stating that people need to find excuses to buy Cascade Lake, except that you actually only have excuses to buy the 3950X. This same guy also seems to have followed me to another post on this forum, and is now attacking me here.

I've actually had people tell me I don't need an HEDT platform, and then recommend parts that explicitly require an HEDT platform.