Sony Moving Away From Cell Processor for Playstation 4?

Queasy

Moderator - Console Gaming
Aug 24, 2001
31,796
2
0
One of the common complaints about the Playstation 3 among developers is that it is difficult to program games for, due to the 'exotic' Cell processor Sony co-developed with IBM and Toshiba. According to Impress Watch technology writer Hiroshige Goto, Sony has listened to those complaints, and there are signs that instead of re-working the Cell processor into something more programmer-friendly, the company is moving towards a "PC-like multi-core setup."
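For a rough idea of what "PC-like" buys developers, here is a minimal sketch (a hypothetical workload using standard C++ threads, not any actual console SDK): on a symmetric multi-core CPU every core sees the same memory, so parallelising a job is mostly a matter of splitting an index range. On Cell, the same job would have to be hand-partitioned into chunks small enough for each SPE's 256KB local store and staged in and out with explicit DMA transfers.

Code:
#include <cstddef>
#include <functional>
#include <thread>
#include <vector>

// Halve every element of a shared buffer; each worker touches its own
// index range directly, because all cores share one address space.
static void scale_range(std::vector<float>& data, std::size_t begin, std::size_t end) {
    for (std::size_t i = begin; i < end; ++i)
        data[i] *= 0.5f;  // no DMA staging, no local-store size limit
}

int main() {
    std::vector<float> data(1 << 20, 1.0f);
    unsigned n = std::thread::hardware_concurrency();
    if (n == 0) n = 4;  // fallback if the core count can't be queried
    std::vector<std::thread> workers;
    const std::size_t chunk = data.size() / n;
    for (unsigned t = 0; t < n; ++t) {
        std::size_t begin = t * chunk;
        std::size_t end = (t == n - 1) ? data.size() : begin + chunk;
        workers.emplace_back(scale_range, std::ref(data), begin, end);
    }
    for (auto& w : workers) w.join();  // on Cell, this would instead wait
                                       // on SPE mailbox/DMA completion
}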



Goto also predicts that all three console manufacturers will release the next generation of their consoles in 2012, with the next generation of handhelds coming before that.
via andriasang
 

Kromis

Diamond Member
Mar 2, 2006
5,214
1
81
Wait so...if they're going to "re-work" the Cell processor, then it's still going to be a Cell processor, only easier to work with, right?
 

Queasy

Moderator - Console Gaming
Aug 24, 2001
31,796
2
0
Wait so...if they're going to "re-work" the Cell processor, then it's still going to be a Cell processor, only easier to work with, right?

According to Goto, that's what they were originally shooting for, but they've apparently scrapped that idea and are going with a "PC-like multi-core setup" instead.
 
Mar 11, 2004
23,432
5,833
146
Wait so...if they're going to "re-work" the Cell processor, then it's still going to be a Cell processor, only easier to work with, right?

No offense, but you're reading ability sucks. :D

It says they're not planning on reworking Cell to make it more dev-friendly, which makes sense: I believe IBM has said they're done with Cell, so Sony would have to do it themselves or pay IBM a bunch to do it, and since Sony no longer owns the fabs and no longer really has any reason to want Cell to become popular, I see no reason to bother. Instead, they would likely go with a CPU that's already more developer-friendly.

I believe there was an article that said Sony is strongly considering IBM's newest Power CPU, which can run something like 16 (or was it 32?) hardware threads.

I'm curious what Larrabee's failure is going to do; there were rumors that Intel and Microsoft were cozying up to using it in the next Xbox. Of course, there are rumors about everything, so who knows if it was anything beyond someone putting two and two together.

With the way things are looking now, I'd say they should go with a cheap CPU that can handle whatever they need a CPU to handle (just the OS?), and then stuff in as much GPU power as possible, as that's where they'll maximize gaming capability. And since GPUs can now be coded for general-purpose work, they would also be adept at handling audio and physics code.
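As a rough illustration of that last point (hypothetical function name, plain C++ standing in for a GPU kernel), this is the shape of audio work that ports well to a GPU: every output sample is independent and branch-free, so it could run as one GPU thread per sample.

Code:
#include <cstddef>

// Mix two audio buffers into one. Each sample is computed from the
// matching input samples only -- uniform, branch-free work that maps
// directly onto thousands of GPU threads, one sample per thread.
void mix_buffers(const float* a, const float* b, float* out, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        out[i] = 0.5f * (a[i] + b[i]);
}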

The big questions are how much power they're willing to go with (sounds like Microsoft might be aiming to go more low power/efficient/cheap), what level they're hoping to achieve (do they want to try to go for ray-tracing? push for physics? 3-D?), and cost. I think we'll see a more modest step, but maybe we'll see shorter console lifecycles (I think a 4 year one would be good for a $250-300 console, and if they don't do a major rework of the components with each new cycle, then it shouldn't take much for developers to take advantage of the new system).
 
Last edited:

erwos

Diamond Member
Apr 7, 2005
4,778
0
76
Arguably, the biggest limitation that consoles this generation had wasn't the CPU, it was the amount of RAM. You had better believe that Microsoft and Sony would _kill_ to get the next big MMORPG on their consoles, and lots of RAM would be key for that.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
Do you have the link you guys are talking about? I don't see them saying anything at all like the comments in this thread suggest. I see mention of multiple different plans being options at the moment, but that's it. I will say that the enhanced prior gen format seemed to work well for Nintendo this generation, and I'm sure that will be a consideration from Sony.

Arguably, the biggest limitation that consoles this generation had wasn't the CPU, it was the amount of RAM.

For what design goal? For every console ever released you can say RAM was the limiting factor, given a certain set of circumstances. I'm not saying more RAM would have hurt by any stretch of the imagination, but it entirely depends on what you are looking for. I understand you said arguably; I'm just pointing out that that can always be said.

With the way things are looking now, I'd say they should go with a cheap CPU that can handle whatever they need a CPU to handle (just the OS?), and then stuff in as much GPU power as possible, as that's where they'll maximize gaming capability

If they use the PC development mindset then that would be the best route, but why would they jump on a bandwagon that is on fire with all the wheels falling off and is already almost empty? I don't see that being a very wise direction for them to take (that puts them in the position of always being inferior to PC games).

I think we'll see a more modest step, but maybe we'll see shorter console lifecycles (I think a 4 year one would be good for a $250-300 console, and if they don't do a major rework of the components with each new cycle, then it shouldn't take much for developers to take advantage of the new system).

We are well into diminishing returns; check out the most advanced demos you can find for a 5870 and see what it can do that a 2900XT can't - it isn't much (although the speed difference is considerable). Shortening the life cycle of the consoles will fractionalize the market too much - both in terms of developer support and customer base. Right now the PS2 is selling 200K units per week - that is a business that all of the console makers would absolutely love to emulate. The ROI on the PS2 was absolutely staggering because of its longevity. I would expect console makers to start to head towards a 6-8 year life cycle ASAP; it appears MS is already planning that for this generation. I wouldn't be surprised to see them attempt to move this out to closer to a decade as soon as they reasonably can. The limiting factor on visuals is going to depend on development budget more than technology in the not too distant future.
 

arod

Diamond Member
Sep 26, 2000
4,236
0
76
It's going to be very interesting to see what the next-gen consoles have inside... I think both will look much different than they do now. I do think the 720/PS4 will have more in common than the 360/PS3 do.
 
Mar 11, 2004
23,432
5,833
146
If they use the PC development mindset then that would be the best route, but why would they jump on a bandwagon that is on fire with all the wheels falling off and is already almost empty? I don't see that being a very wise direction for them to take (that puts them in the position of always being inferior to PC games).

I'm not really sure what you mean. Correct me if I'm wrong, but GPUs are much better at processing games than CPUs, aren't they? Why wouldn't you make the most efficient use of what you're aiming for? What exactly is the alternative here?

We are well into diminishing returns; check out the most advanced demos you can find for a 5870 and see what it can do that a 2900XT can't - it isn't much (although the speed difference is considerable). Shortening the life cycle of the consoles will fractionalize the market too much - both in terms of developer support and customer base. Right now the PS2 is selling 200K units per week - that is a business that all of the console makers would absolutely love to emulate. The ROI on the PS2 was absolutely staggering because of its longevity. I would expect console makers to start to head towards a 6-8 year life cycle ASAP; it appears MS is already planning that for this generation. I wouldn't be surprised to see them attempt to move this out to closer to a decade as soon as they reasonably can. The limiting factor on visuals is going to depend on development budget more than technology in the not too distant future.

What exactly do the PS2's processors allow that other processors can't? Why can't they apply the same model with better processors? I have to be completely missing what you're arguing for here.

Why start out losing so much money when there's no sure bet that you're going to ever reach the numbers to break even? Start out lower, which helps both you and the consumer, and then instead of re-inventing the wheel (like Sony's done for the PS2 and PS3), make incremental improvements. You can argue the Sony way is obviously successful, but I'd say it's as much due to the DVD and Blu-ray playback as it is the games. And the fact that Sony decided not to go with some further implementation of what they did with the PS2 processors, and now would seem to be saying the PS3 architecture was not the best idea either, would tell me that Sony thinks there's definitely a better way to do things.

What I'm saying is more along the lines of the Gamecube-to-Wii transition. There was very little learning curve for developers as far as processing goes, which allowed them to focus on developing other aspects. I'd like the jump between generations to be bigger than the one Nintendo took (you could almost argue that Nintendo has gotten 8 years out of roughly the Gamecube hardware). There's no reason you can't combine the two.

I'm not really sure what you're talking about, as the whole issue of PC gaming declining is due to consoles' increased popularity. The fact is, console makers are still trying to find the best manner of transitioning between generations of hardware. From my viewpoint, the Gamecube-to-Wii transition has been the best one yet, and is close to what I'm suggesting. If you go 4 years between hardware, you can still achieve a large increase in power, keep the costs pretty low (without the big gamble on getting a return or reaching profitability years later), make it easy on developers, and if there's a change you can make (think a redesign of the controller, such as integrating newer motion sensing and/or wireless capabilities, among others) you can do so more easily. The key difference between 3 hardware versions in 8 years versus just 2 is that it lowers the risk for the makers, while keeping the rest mostly equal (developers get 8 solid years of pretty consistent development, costs will be roughly equal, etc.).

so does YOUR typing ability ^_^

Hah, wow, yeah, bad grammar.
 

Queasy

Moderator - Console Gaming
Aug 24, 2001
31,796
2
0
I'm not really sure what you mean. Correct me if I'm wrong, but GPUs are much better at processing games than CPUs, aren't they? Why wouldn't you make the most efficient use of what you're aiming for? What exactly is the alternative here?

GPUs handle the graphics for games. CPUs handle all other chores such as AI and physics.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
I'm not really sure what you mean. Correct me if I'm wrong, but GPUs are much better at processing games than CPUs, aren't they? Why wouldn't you make the most efficient use of what you're aiming for? What exactly is the alternative here?

GPUs are much better at processing graphics and physics; other tasks, not so much. AI and things such as advanced sound are much better off running on a CPU than a GPU - stalls on a GPU are too expensive to make it a good match for that type of code.
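To make the stall argument concrete, here's a minimal sketch with made-up agent fields: each agent takes a different path through the branches, so GPU threads executing in lockstep would diverge and serialise, while a CPU's branch predictor and caches handle this pattern cheaply.

Code:
// Hypothetical agent state; the point is the unpredictable branching,
// not the specific fields.
struct Agent { float dist_to_player; int ammo; bool fleeing; };

// Every agent can take a different branch, so GPU threads running this
// in lockstep would serialise; a CPU absorbs the divergence cheaply.
int choose_action(const Agent& a) {
    if (a.fleeing)                 return 0; // keep running away
    if (a.ammo == 0)               return 1; // go find ammo
    if (a.dist_to_player < 10.0f)  return 2; // attack
    return 3;                                // close the distance
}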

What exactly do the PS2's processors allow that other processors can't? Why can't they apply the same model with better processors? I have to be completely missing what you're arguing for here.

It isn't about what it can do anymore, although it was back when it first came out. What I'm talking about is the timeline of how long the platform was making money for Sony (it still is, a decade later). At this point, while the PS2 is obviously selling a fraction of the newer generation, its margins are extremely high and the R&D start-up costs on the platform have long since been covered. Having a platform selling six figures on a weekly basis after a decade on the market is something any console maker would be very pleased to have, I'm certain; it's just good business sense.

Why start out losing so much money when there's no sure bet that you're going to ever reach the numbers to break even?

You could make this same argument about any tech-related device. Obviously, the more you spend the greater your risk, but in some cases the higher the potential reward. In Sony's case, the decision to add BRD to the PS3 was obviously very expensive for them, both in dollar terms and in terms of pushing them out a year later than the 360, despite the fact that they were out a year before them in the previous generation. However, this choice led to a decisive victory for their format in the war with HD-DVD, assuring Sony royalties off the platform for many years.

And the fact that Sony decided not to go with some further implementation of what they did with the PS2 processors, and now would seem to be saying the PS3 architecture was not the best idea either, would tell me that Sony thinks there's definitely a better way to do things.

This had a lot to do with Ken Kutaragi, who is no longer with Sony. He always wanted to start the next big thing. His original goal, as he stated it, was to have no GPU in the PS3 and to use the machines' network connections to share processing power in a sort of distributed/cloud network, offering power beyond what was reasonable with a single machine. Obviously that was delusional at best, and a large portion of the idea was scrapped. Ken was always looking to bring some big revolution to the industry, and he would spend billions of dollars to do it; he is no longer a concern.

What I'm saying is more along the lines of the Gamecube-to-Wii transition. There was very little learning curve for developers as far as processing goes, which allowed them to focus on developing other aspects. I'd like the jump between generations to be bigger than the one Nintendo took (you could almost argue that Nintendo has gotten 8 years out of roughly the Gamecube hardware). There's no reason you can't combine the two.

This I absolutely agree with. This line of thinking would have them using a modified Cell and a newer-generation nV GPU, combined with more RAM, a faster BRD drive and a larger HD; to me that sounds like the best solution (and it would give us BC without having to worry about it being phased out). I'm sure this is one of the options on the table: the R&D would be lower than going with another new architecture, and it solves BC, which helps avoid part of the fractionalization of your market and eases concerns about the size of the launch library (not eliminating them, but reducing them).

I'm not really sure what you're talking about, as the whole issue of PC gaming declining is due to consoles' increased popularity.

What I'm talking about is: why would you emulate the platform you bested? After proving you have the better model as far as consumers are concerned, changing your approach to emulate the loser doesn't make a lot of sense.

From my viewpoint, the Gamecube-to-Wii transition has been the best one yet, and is close to what I'm suggesting.

From a business standpoint, the PS1 -> PS2 transition was far better. While the Wii has done exceptionally well, it isn't close to the dominance of the PS2. It has done a great job of attracting casual players, but it has been very far behind in attracting the far more lucrative (per capita) core gamers; Sony and MS both clearly dominate that particular segment without any real competition from Nin. The PS2 managed to grab both sides of the market, and that, I think, is every console maker's ideal goal.

The key difference between 3 hardware versions in 8 years versus just 2 is that it lowers the risk for the makers, while keeping the rest mostly equal (developers get 8 solid years of pretty consistent development, costs will be roughly equal, etc.).

How does it lower risks, though? On a realistic basis the Wii is in essence a GameCube - a failed platform - and after a redesign it takes off. What we are seeing at this point, though, is that the Wii is falling off much faster than the others. If Nin follows the same approach next generation, who is to say it won't end up like the GC again?

You also have typical sales trends to deal with. Sales peaks for hardware tend to happen in the year ~4 time frame and then gradually fall off. The back side of a console's life cycle is where you make your real money: R&D has been covered, the cost to produce is way down, and your tie rate is going up markedly (the ratio of games to hardware you are selling results in significantly higher profits than early in the console's life cycle). If they expand the life cycle out to 8 years, and can maintain interest throughout that amount of time, they significantly increase the amount of time the platform is making money for them.

I can think of one console maker who pushed for the short console cycle - Sega - and it helped kill them. Consumers didn't have faith that Sega would support their hardware for any length of time. If consumers get that impression of your platform, you are going to pay for it.

GPUs handle the graphics for games. CPUs handle all other chores such as AI and physics.

I'd wager heavily that physics will be on the GPUs next generation; GPUs are significantly faster than CPUs at it, and I'm sure that even Nintendo will be using a GPU capable of handling physics calcs by that point ($30 graphics cards can now, and we still have a while).
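A quick sketch of why physics is such a good fit (hypothetical particle type, plain C++ standing in for a GPU kernel): every particle gets the identical, branch-free update, so the work parallelises as one thread per particle.

Code:
#include <cstddef>

struct Particle { float px, py, pz, vx, vy, vz; };

// One Euler integration step under gravity. Identical arithmetic for
// every particle and no branches -- the workload shape GPUs are built for.
void integrate(Particle* p, std::size_t n, float dt) {
    const float g = -9.81f;  // m/s^2, straight down the y axis
    for (std::size_t i = 0; i < n; ++i) {
        p[i].vy += g * dt;
        p[i].px += p[i].vx * dt;
        p[i].py += p[i].vy * dt;
        p[i].pz += p[i].vz * dt;
    }
}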
 

Raduque

Lifer
Aug 22, 2004
13,140
138
106
So, does this mean no backwards compatibility with the PS3? Now we're going to have three Sony consoles: PS2, PS3 and PS4. Sony should figure out how to make them all interlock and use a single output dock or something.


What I'm saying is more along the lines of the Gamecube-to-Wii transition. There was very little learning curve for developers as far as processing goes, which allowed them to focus on developing other aspects.

Well, the Wii is basically an overclocked Gamecube in a different form factor. So, of course it's going to be easy to dev for if you can dev for the 'Cube.
 

DaveSimmons

Elite Member
Aug 12, 2001
40,730
670
126
It's hard to believe Sony would do this to developers, when using a Cell with more general-purpose cores would solve cross-platform development issues while also allowing full backwards compatibility and shared PS3 / PS4 development for many titles.

A modified Cell requires CPU design costs, but I'd guess switching to some other CPU requires higher system design costs and certainly much higher software design costs -- with a Cell 2.0 they'd start with fully mature libraries and then just enhance them to use the new cores.

But it's Sony, so who knows?
 

Dacalo

Diamond Member
Mar 31, 2000
8,778
3
76
So, does this mean no backwards compatibility with the PS3? Now we're going to have three Sony consoles: PS2, PS3 and PS4. Sony should figure out how to make them all interlock and use a single output dock or something.

I don't know why people are still bitching about this. Do you recall the SNES being able to play NES games, or the Genesis playing Master System games out of the box? How about the N64? Seriously?

A new console is just that, a new console for the new generation of games. If you want to play last gen or prior, go buy that console at a discounted price. If there is backward compatibility, it's a GREAT perk, but it's not a standard.