I'm not really sure what you mean. Correct me if I'm wrong, but GPUs are much better at processing games than CPUs, aren't they? Why wouldn't you make the most efficient use of the hardware you're aiming for? What exactly is the alternative here?
GPUs are much better at processing graphics and physics; other tasks, not so much. AI and things such as advanced sound are much better off running on a CPU than a GPU, since stalls on a GPU are too expensive to make it a good match for that type of code.
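To illustrate what I mean by stalls, here's a made-up kernel sketch (hypothetical names, just the kernel rather than a full program, not from any real engine): decision-heavy AI code makes neighbouring GPU threads take different branches, and the hardware has to run each branch path one after another while the other lanes sit idle.

```cuda
// Hypothetical sketch of branch-heavy "AI" style code on a GPU.
// Threads in a warp execute in lockstep; when each agent takes a
// different branch, the hardware serializes the paths and most lanes
// sit idle, which is the kind of stall that makes this a poor fit.
__global__ void update_agents(const float* threat, const float* health,
                              int* action, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    // Per-agent decision logic: neighbouring threads rarely agree on a
    // branch, so each path runs one after another within the warp.
    if (health[i] < 0.2f)      action[i] = 0;  // flee
    else if (threat[i] > 0.8f) action[i] = 1;  // attack
    else                       action[i] = 2;  // patrol
}
```

On a CPU, that same branchy logic runs fine because each core chews through one agent at a time with branch prediction and big caches behind it; on a GPU you pay for every divergent path.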
What exactly do the PS2's processors allow that other processors can't do? Why can't they apply the same model with better processors? I have to be completely missing what you're arguing for here.
It isn't about what it can do anymore, although it was back when it first came out. What I'm talking about is the timeline of how long the platform was making money for Sony (it still is, a decade later). At this point, while the PS2 is obviously selling a fraction of the newer generation, its margins are extremely high and the R&D start-up costs on the platform have long since been covered. Having a platform selling six figures on a weekly basis after a decade on the market is something any console maker would be very pleased to have, I'm certain; it's just good business sense.
Why start out losing so much money when there's no sure bet you'll ever reach the numbers needed to break even?
You could make this same argument about any tech-related device. Obviously the more you spend, the greater your risk, but in some cases the greater the potential reward. In Sony's case, the decision to add BRD to the PS3 was obviously very expensive for them, both in dollar terms and in pushing them out a year later than the 360, despite the fact that they were out a year before them in the previous generation. However, this choice led to a decisive victory for their format and ended the format war with HD-DVD quickly, assuring Sony royalties off the format for many years.
And the fact that Sony decided not to go with some further implementation of what they did with the PS2's processors, and now seems to be saying the PS3 architecture wasn't the best idea either, tells me Sony thinks there's definitely a better way to do things.
This had a lot to do with KK, who is no longer with Sony. He always wanted to start the next big thing. His original goal, as he stated it, was to have no GPU in the PS3 and to use the network connection on the machines to share processing power in a sort of distributed/cloud network, offering power beyond what was reasonable for a single machine. Obviously that was delusional at best, and a large portion of the idea was scrapped. Ken was always looking to bring some big revolution to the industry and would spend billions of dollars to do it; he is no longer a concern.
What I'm saying is more along the lines of the GameCube-to-Wii transition. There was very little learning curve for developers as far as processing goes, which allows them to focus on developing other aspects. I'd like the jump between generations to be bigger than the one Nintendo took (you could almost argue that Nintendo has gotten eight years out of roughly the GameCube hardware). There's no reason you can't combine the two.
This I absolutely agree with. This line of thinking would have them using a modified Cell and a newer-generation nV GPU, combined with more RAM, a faster BRD drive, and a larger HDD; to me that sounds like the best solution (and it would give us BC without having to worry about it being phased out). I'm sure this is one of the options on the table: the R&D would be lower than going with another new architecture, and it solves BC, which helps avoid part of the fragmentation of your market and eases concerns about the size of the launch library (not eliminates, but reduces).
I'm not really sure what you're talking about, as the whole issue of PC gaming declining is due to consoles' increased popularity.
What I'm talking about is: why would you emulate the platform you bested? After proving you have the better model as far as consumers are concerned, you change your approach and emulate the loser; it doesn't make a lot of sense.
From my viewpoint, the GameCube-to-Wii transition has been the best one yet, and is close to what I'm suggesting.
From a business standpoint the PS1-to-PS2 transition was far better. While the Wii has done exceptionally well, it isn't close to the dominance of the PS2. It has done a great job of attracting casual players, but it has fallen very far behind in attracting the far more lucrative (per capita) core gamers. Sony and MS both clearly dominate that particular segment without any real competition from Nintendo. The PS2 managed to grab both sides of the market, and I think you'll find that is every console maker's ideal goal.
The key difference between 3 hardware versions in 8 years versus just 2 is that it lowers the risk for the makers, while keeping the rest mostly equal (developers get 8 solid years of pretty consistent development, costs will be roughly equal, etc.).
How does it lower risk, though? Realistically, the Wii is in essence a GameCube, a failed platform, and after a redesign it takes off. What we are seeing at this point, though, is that the Wii is falling off much faster than the others. If Nintendo follows the same approach next generation, who is to say it won't end up like the GC again?
You also have typical sales trends to deal with. Sales peaks for hardware tend to happen around the year-four time frame and then gradually fall off. The back side of a console's life cycle is where you make your real money: R&D has been covered, the cost to produce is way down, and your tie ratio is going up markedly (the number of games you're selling relative to hardware results in significantly higher profits than early in the console's life cycle). If they stretch the life cycle out to eight years and can maintain interest throughout that time, they significantly increase the amount of time the platform is making money for them.
I can think of one console maker who pushed for the short console cycle, Sega, and it helped kill them. Consumers didn't have faith that they would support their hardware for any length of time. If consumers get that impression of your platform, you are going to pay for it.
GPUs handle the graphics for games. CPUs handle all other chores such as AI and physics.
I'd wager heavily that physics will be on the GPUs next generation. GPUs are significantly faster than CPUs at it, and I'm sure that even Nintendo will be using a GPU capable of handling physics calcs by that point ($30 graphics cards can now, and we still have a while).
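Just to show why physics fits so naturally on a GPU, here's a minimal made-up sketch (all names invented, nothing from an actual engine): every particle gets the exact same update, so millions of them can run in lockstep with nothing to stall on.

```cuda
#include <cuda_runtime.h>
#include <cstdio>

// Hypothetical sketch: a per-particle update where every thread runs the
// same math on its own data -- exactly the uniform, massively parallel
// workload GPUs are built for.
__global__ void step_particles(float* pos, float* vel, float dt,
                               float gravity, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    vel[i] += gravity * dt;   // same instruction for every particle
    pos[i] += vel[i] * dt;    // no divergence, so no stalls
}

int main()
{
    const int n = 1 << 20;    // ~1M particles
    float *pos, *vel;
    cudaMallocManaged(&pos, n * sizeof(float));
    cudaMallocManaged(&vel, n * sizeof(float));
    for (int i = 0; i < n; ++i) { pos[i] = 0.0f; vel[i] = 1.0f; }

    const int block = 256;
    const int grid  = (n + block - 1) / block;
    step_particles<<<grid, block>>>(pos, vel, 0.016f, -9.81f, n);
    cudaDeviceSynchronize();

    printf("pos[0] after one step: %f\n", pos[0]);
    cudaFree(pos);
    cudaFree(vel);
    return 0;
}
```

Built with nvcc it just runs one step over about a million particles; the point is that every thread does identical work, which is the opposite of the branchy AI case above.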