[ArsTechnica] Next-gen consoles and impact on VGA market

Page 16

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
:rolleyes: So would you say an Atom can be compared to Ivy Bridge on performance per MHz? No. Why not?

Not sure what you are arguing about. The specific language is pulled directly from a presentation by H. Peter Hofstee, Ph. D., Architect, Cell Synergistic Processor Element, IBM Systems and Technology Group, Austin, Texas.

The Cell and Xbox360 CPU are both based on the IBM PowerPC instruction set architecture. In other words, they are PowerPC-based CPUs, which is what I said, and you keep arguing about it for no reason. The tri-core Xenon CPU in the 360 has three cores, which are slightly modified versions of the PPE in the Cell processor used in the PlayStation 3.

The Xenon architecture / CPU speed is slower than the PowerPC G5 design; again, that is stated in the H. Peter Hofstee slides. That's not me making this stuff up; it's from the Cell presentation.

Secondly, I already said that the Cell was very power inefficient. This is an inherent limitation of being based on the PowerPC CPU architecture with 7 SPEs.

This is history:

"IBM exited the 32-bit embedded processor market by selling its line of PowerPC products to Applied Micro Circuits Corporation (AMCC) and focused on 64-bit chip designs, while maintaining its commitment of PowerPC CPUs toward game machine makers such as Nintendo's GameCube and Wii, Sony's PlayStation 3 and Microsoft's Xbox 360 both use 64-bit processors."

So yes both the Tri-Core Xenon and the Cell are based on a PowerPC CPU architecture with Xenon having 3 of those cores and the Cell having just one of them. The PowerPC CPU architecture that underlies those designs is inferior in overall performance to the PowerPC G5 CPU used in Apple products around that time.

Furthermore, the PowerPC CPU architecture of that time was very inefficient and slow:

"In 2005 Apple announced they would no longer use PowerPC processors in their Apple Macintosh computers, favoring Intel produced processors instead, citing the performance limitations of the chip for future personal computer hardware specifically related to heat generation and energy usage, as well as the inability of IBM to move the 970 (PowerPC G5) processor to the 3 GHz range."

These are all just facts you guys are trying to argue against.

And no, the G5 PowerPC CPU was not even the fastest CPU at that time. By extension, that means the underlying cores in the PS3's Cell and the 360's Xenon CPU are significantly weaker than any modern Intel Core i7 series CPU for games.
 

Ancalagon44

Diamond Member
Feb 17, 2010
3,274
202
106
$400 cheaper than the closest BluRay player that was still inferior due to it not having Cell. $400 cheaper.
$400 cheaper.
$400 cheaper.
$400 cheaper.
$400 cheaper.

Having a Cell doesn't automatically make it better, any more than having a Cell automatically makes the PS3 better than the 360 or (s******) a modern gaming PC.


Got news for you - Carmack says Sony made a mistake.

In Vector code, which is where Cell kills POWER.
I thought we were talking about Cell vs modern x86?

EDIT:

But wait, there's more!

The God Of Programming himself couldn't get Rage to run better on PS3! Oh noes! There goes your argument. And remember, Rage was released in what, 2010? 2011? So it's not like Carmack didn't have time to get to know the architecture.
http://www.techpowerup.com/forums/showthread.php?t=100584
 
Last edited:

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
That cost is wrong, covered this repeatedly.

No, you simply ignored all the manufacturing costs and all the ancillary costs which I listed, and as I expected you focused on development costs. $7, $7, $7. :sneaky: Ya, OK, thanks for proving you have no idea how the semiconductor industry works and that your knowledge is strictly from a programming perspective. I didn't know it cost $0 to pay for the wafers at a foundry, $0 to test them at the foundry, $0 to ship them and assemble them inside a PS3, $0 to test the working units inside a PS3, etc.

Maybe you should convince NV and AMD that it only costs them $7-8 of real cash flow per Tahiti XT/Kepler GK104 in each HD7970/GTX680 sold. Good one.

It had less RAM and a slower GPU.

Yes, thanks for proving further that the Cell was a waste of $ since the other components in the PS3 were more limiting. In other words, the Cell was too expensive and power hungry, didn't offer any performance advantage for games over a Core 2 Duo, and as a package couldn't outperform the tri-core Xenon + R500 in the Xbox360 even after 6 years of programmers learning how to optimize the Cell to its maximum potential.

Sony is an IP owner for Cell, they make the processor themselves. The R&D cost associated with that is less than $7 per chip.

Like I said, it goes in one ear and out the other. By your faulty logic, it costs NV $7-8 to manufacture a GK104 and sell it in a GTX680. What it actually cost to put the Cell CPU inside each PS3 is not just what it cost to develop the design. Those are basics you can't even get right. I suggest you enroll in some business courses to understand how manufacturing, technology and semiconductor companies make money and what underlying costs they carry. Understanding how companies function and generate cash flow is my line of work; I shouldn't have to explain the basic concept of cost of goods sold, or the ancillary costs outside of R&D, yet you can't grasp either. The cost to actually deliver the chip inside the PS3 was not $7. You are delusional if you believe that, or simply ignorant of how a technology/business company actually functions. Time to leave your programming cubicle and pick up some business books.

$400 cheaper than the closest BluRay player that was still inferior due to it not having Cell.

BluRay capability has nothing to do with the Cell processor. Completely irrelevant to the point being made that the Cell was too expensive, inefficient to code for and not much faster for games than a Tri-core Xbox360 CPU, much less a Core 2 Duo CPU.

That is wrong.

Nope. At low resolutions a system can become both CPU and GPU limited, exactly as it does for PCs. The only debate is which component is more limiting, depending on the game. Either way your argument has failed on both counts:

1) IF the consoles are primarily GPU limited, then the PS3's Cell didn't aid in graphics;

2) IF the consoles are both CPU and GPU limited at both resolutions, the Cell's superiority didn't allow it to post higher minimum framerates in many CPU limited situations, such as Blighttown in Dark Souls, etc.

So you have not proven how the Cell was beneficial overall given its expensive cost and power inefficiency.

Cell was going to be the graphics processor in the original PS3, I have noted that would have been moronic against 2005 GPUs.

It was moronic to use the Cell in the first place. Sony would have gotten a faster console had they gone with a more modern version of AMD's GPU and reused the exact same Tri-core CPU from Xbox360. Instead, they spent more $ on a barely faster CPU and paired it with a slow GPU. Fail 2/2.


Yup, he said the PS3 is horribly inefficient to code for and much slower than modern PCs. You fail to read those quotes, but keep posting your opinion. Hundreds of other professionals in the industry, including Carmack, agree that the PS3's processing power is exhausted. You say the Cell is faster than modern x86 CPUs. That contradicts the idea that the Cell's power is exhausted. Does not compute.

In Vector code, which is where Cell kills POWER.

Useless for the real world gaming performance of the PS3 console. This hardly showed up in real games against the 360. Thus, irrelevant.


A random person's research paper vs.

The person who designed the Cell: ~ 80W of power.

PS3 uses 240W of power vs. 180W in 360, yet doesn't have better graphics. GeForce 7950 Mobile used less than 60W of power, thus the difference is mainly attributable to the Cell = fail.

One Nehalem Core i7 core is faster in games than the entire tri-core PowerPC-based Xenon CPU. The quad-core i7 3770K has a 77W TDP, integrated GPU included. The Cell and tri-core Xenon PowerPC CPUs were slower overall than the PowerPC CPU used in Apple's G5. Apple abandoned the PowerPC architecture for Core 2 Duo in 2005, citing the superior performance of Intel's CPUs, superior performance/watt and lower power consumption.

Since,

Xbox360 Tri-Core and Cell in PS3 < PowerPC G5 processor < Core 2 Duo < one Nehalem Core i7 core, the i7 3770K is faster than all of these processors while consuming less power than the Cell did. :thumbsup:

Any chip must be produced. Sony owns the IP for Cell. The cost to acquire this IP was $7.

The cost to manufacture / produce a semiconductor chip is not a "sunk development cost" as you seem to imply. The cost of the Cell's IP is only one aspect of the real net cash outlay for each Cell chip that went inside a PS3. Is the cost to manufacture GK104s out of a 300mm wafer $7-8 per chip? I didn't think so.

That was an estimate based on what a comparable CPU would cost them if they bought it from another company. Sony owns Cell.

Incorrect. Someone has to pay for manufacturing: wafer costs, yields, testing the CPU chips, sorting them based on clock speeds and defects, shipping them to a factory, assembling them inside a PS3, and testing the console to make sure the CPU works inside it. All of these are costs Sony had to pay for each console sold ($230 at launch). In aggregate, these can be summarized as the manufacturing costs of the Cell.

The manufacturing cost per Cell chip had fallen to $37.73 by November-December 2009:

[Image: iSuppli PS3 teardown, component cost breakdown]


Those are real, direct costs to Sony for each Cell chip by the end of 2009. Your assessment that the Cell only cost Sony $7 in cash flow is incorrect and stems from a lack of understanding of how technology and semiconductor businesses work.
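
To make the shape of that math concrete, here is a rough per-chip cost roll-up (Python). The structure - wafer cost spread over good dies, plus test, assembly and the IP fee - is the standard way cost of goods gets built up; every number below is a made-up placeholder, not an actual Sony/IBM figure:

Code:
# Back-of-the-envelope cost per delivered chip. All inputs are illustrative
# placeholders, NOT actual Sony/IBM figures.
wafer_cost       = 4000.0  # $ per processed 300mm wafer (assumed)
gross_dies       = 400     # candidate dies per wafer (assumed)
yield_rate       = 0.60    # fraction that work and bin correctly (assumed)
test_sort_cost   = 2.50    # $ per good die for wafer test / speed binning (assumed)
package_assembly = 5.00    # $ per chip for packaging, shipping, board assembly (assumed)
ip_royalty       = 7.00    # the "$7" IP/R&D figure being argued about

silicon_cost = wafer_cost / (gross_dies * yield_rate)
per_chip = silicon_cost + test_sort_cost + package_assembly + ip_royalty
print(f"silicon alone : ${silicon_cost:6.2f}")  # ~$16.67 with these placeholders
print(f"delivered chip: ${per_chip:6.2f}")      # ~$31.17, far above the $7 IP figure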

The actual programmers you quoted said that POWER is better under ideal circumstances, not even the entire Cell processor, just the light alternative of it.

That's not what they said.

Oles Shishkovstov: You can calculate it like this: each 360 CPU core is approximately a quarter of the same-frequency Nehalem (i7) core. Add in approximately 1.5 times better performance because of the second, shared thread for 360 and around 1.3 times for Nehalem, multiply by three cores and you get around 70 to 85 per cent of a single modern CPU core on generic (but multi-threaded) code. Bear in mind though that the above calculation will not work in the case where the code is properly vectorised. In that case 360 can actually exceed PC on a per-thread per-clock basis.

1) He said on a per-thread per clock basis, not overall performance.
2) Ideal vectorized code is meaningless since it's again just theoretical performance. You love talking about theoretical performance. The only thing that counts is real world performance; theoretical performance is good for marketing and geeky discussions. In the real world, using "generic code" (translation: normal code that's practical to write and optimize), the tri-core 360 CPU is 70-85% of the speed of a single Core i7 core.

Now if 1000 people spend 1000 years writing ideal vectorized code for the Cell, it should outperform a Core i7 on a per-core, per-clock basis. Such ideals are meaningless since they are not realistic or cost effective, and have not been reproduced in real world games on the PS3.
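
For reference, Shishkovstov's back-of-the-envelope numbers can be plugged in directly (Python; reading his "approximately a quarter" as a 0.20-0.25 range is my assumption):

Code:
# Xenon (360 CPU) generic-code throughput relative to ONE Nehalem core,
# using Oles Shishkovstov's own factors from the quote above.
XENON_CORES = 3
XENON_SMT_GAIN = 1.5    # benefit of the second hardware thread per 360 core
NEHALEM_SMT_GAIN = 1.3  # benefit of Hyper-Threading on a Nehalem core

for per_core in (0.20, 0.25):  # "approximately a quarter" of a Nehalem core
    xenon_total = per_core * XENON_SMT_GAIN * XENON_CORES
    ratio = xenon_total / NEHALEM_SMT_GAIN
    print(f"per-core factor {per_core:.2f}: Xenon = {ratio:.0%} of one Nehalem core")
# prints roughly 69% and 87%, i.e. about the 70-85% range he quotes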

Your claims went from ludicrous to absurd: so now just a fraction of the Cell is better than an entire modern x86 processor under ideal circumstances? What are those ideal circumstances; does the programmer have to work on Mars for 10 years to achieve those results? They were never achieved in the real world, which means the theoretical ideal is meaningless. A PowerPC G5 has not outperformed a modern x86 Core 2 Duo processor in any real world game as far as I am aware. The Cell is computationally slower than a modern Core i7 CPU, and Intel makes the fastest CPUs on the planet for games. The Cell won't be in the PS4 because AMD's modern x86 CPUs are even faster than the Cell.

Do I think it would be a mistake for Sony to abandon Cell? Yes.

I am glad you don't work for Sony then, because their engineers and management team have moved on to the year 2012, while you are still stuck in the 2006 fairy-tale dreams of a programmer who can only think in theoretical FLOPs and ideal vectorized code.

The only reason people seem to think it will that I have been able to find is Kotaku.

No, it's because every major developer probably begged Sony to ditch the Cell and never use such a terrible design again. They were also very vocal against the Cell from the beginning. Hopefully, Sony's management wakes up. They can't really go wrong with any of the choices: an x86 AMD Fusion or an Intel Core i3 CPU would vastly outperform the Cell SPE setup for next generation games. No matter what Sony chooses, even a more modern PowerPC-derivative with actual cores and none of the SPE nonsense is going to be miles faster.

No, it wasn't. Cerb stated it wasn't a direct derivative, Cerb was 100% correct. It is based on the POWER architecture, it isn't a direct derivative.

What are you even arguing about? I said the tri-core CPU in the 360 and the Cell are based on the PowerPC core, with some modifications in the form of SPEs for the Cell and so on. What derivative did I say it's based on? I don't recall naming a specific PowerPC #?

Again, as simple as can be:

The Cell Processor contains 9 processors on a single chip. One is a conventional PowerPC processor, with standard level one (32+32K) and level two (512K) caches and transparent direct access to system memory. To conserve chip space and power this PowerPC is simpler than other common processors. It does not provide hardware support for branch prediction or out of order execution. This makes it perform worse than one would expect given its clock speed (3.2 Ghz in the Playstation 3). The expectation is that the PowerPC will be used in a supervisory role and the majority of processing will be delegated to the SPEs.


The main three computation cores in the 360 and the Cell's single core are both based on an IBM PowerPC CPU architecture of that time, which happens to be slower than the PowerPC architecture of Apple's G5 processor. You are putting words in my mouth that I said it was an exact derivative of "what"? All I said is that it's a derivative of a PowerPC CPU architecture of that time. That's just a fact. What you can't seem to grasp for 16 pages now is that the PowerPC CPU architecture of that time was vastly inferior in gaming performance to AMD's or Intel's x86 CPUs on a per core per thread basis in the real world (i.e., not ideal circumstances that only IBM/Sony engineers pushing Cell dreamed up). The main reason Sony went for this design is that the management team wasn't technically knowledgeable enough to understand the flaws in the Cell's CPU architecture/design. The same people probably would have chosen the Bulldozer FX-8150 8-core CPU over an Ivy Bridge Core i3 if you told them the FX-8150 is a real 8-core CPU vs. a dual-core-with-HT "slower" Intel i3 processor. You would have thrown up a bunch of marketing slides, showed the superior floating point performance of the FX-8150, and sold the same Sony execs the BS of how an 8-core CPU with immense floating point capabilities would beat a dual-core Intel CPU for games.


You didn't understand that article. The POWER architecture IBM developed is still very much alive on their server offerings for large businesses and continues to evolve, but the specific Cell architecture (i.e., PowerPC computational core supported by SPEs) for the consumer space is very much dead. So, no Cell in PS4! Don't worry, since you are so smart, maybe you can take the Cell out of the PS3 and mod it into the PS4 to get better graphics next generation. Maybe get a better after-market cooler so you can overclock the Cell since it uses so little power to begin with.
 
Last edited:

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
Not sure what you are arguing about. The specific language is pulled directly from a presentation by H. Peter Hofstee, Ph. D., Architect, Cell Synergistic Processor Element, IBM Systems and Technology Group, Austin, Texas.

The Cell and Xbox360 CPU are both based on the IBM PowerPC instruction set architecture. In other words, they are PowerPC-based CPUs, which is what I said, and you keep arguing about it for no reason.
No, the CPU portions of the Xenon and Cell were practically identical.

I was arguing the fact that the PPC970/G5 is not what's in those game consoles, and that the differences between them are 90% the same as the difference between an Atom and just about any high-performance x86 CPU (the other 10% being fast memory ops with the Atom, since x86 can't hide that, and being made to run slow for lower cost/power). The Atom is a good comparison CPU for the PPE, since it is also a narrow in-order design, made so to be small, cheap, and low power.

"That PowerPC core design is what was used in the Apple G5," is a lot more specific wording than mentioning an ISA.

Secondly, I already said that the Cell was very power inefficient. This is an inherent limitation of being based on the PowerPC CPU architecture with 7 SPEs.
But it's not at all an inherent limitation of that, for the PPE. It's a limitation of making it a very deeply-pipelined processor designed to reach very high speeds so that the SPEs can pull off very high FLOPs. It's the same sort of problem that made the P4, and now BD, lackluster, but taken to a greater extreme--to reach those clocks while consuming fairly little maximum power, and not costing a whole lot. Having to work with bigger binaries does matter, but that's not going to be a make-or-break problem (x86-64 faces the same sort of thing v. IA32). Aside from instruction size, the other negative I can think of is hash TLBs, but that tends to be one of those things OS people complain about, yet they make it work well enough for users and application devs that it's invisible in practice.

The PPE is limited by the choices of those that designed it. If they had made it for MIPS, ARM, or even x86, it still would have sucked. Since IBM was heavily involved, it happened to be their ISA that got used. PPC CPUs, such as the 7400 series, have been quite power efficient. The G4, FI, was an exceptionally efficient CPU, prior to Intel's Atom. Today, there's the PPC470, though its TDP varies by implementation, of course.

Now, the SPEs, yes. 256KB is anemic (that's putting a lot of pressure on memory, just like too few registers), the bus wasn't too fast at small SPE-to-SPE transfers (maybe a point-to-point network would have been better?), and not having a unified address space in the 21st century is just dumb (give the SPEs full MMUs, and map the local stores to a special virtual address range!). I can see 256KB as a 1st-gen space-saving thing, and the bus as a good idea on paper that would disappear in the 2nd gen, but separate memory spaces were just wrong, and were definitely one of the technical features that helped kill the Cell.

Furthermore, the PowerPC CPU architecture of that time was very inefficient and slow:

"In 2005 Apple announced they would no longer use PowerPC processors in their Apple Macintosh computers, favoring Intel produced processors instead, citing the performance limitations of the chip for future personal computer hardware specifically related to heat generation and energy usage, as well as the inability of IBM to move the 970 (PowerPC G5) processor to the 3 GHz range."

These are all just facts you guys are trying to argue against.
You're making a connection that doesn't exist. Because the P4 was hot and underperforming, you don't consider x86 inherently bad, do you? Of course not, because the problem wasn't x86 at all, but the specific CPU. What IBM did with the PPC970 would be akin to Intel shrinking Willamette and making it run faster, without the related improvements brought by Northwood and then Prescott, to reach higher speeds and not suck when running at them. And, it needed such improvements, for similar reasons, if it was going to keep on getting faster. Worse, the Cell showed that IBM knew how to do it--they weren't blind-sided, or anything, like Intel was when they wanted >=4GHz Prescotts.

Apple was expecting 3GHz by 2004, and who knows what by 2005. IBM didn't deliver. Apple was a small fry, and nobody else but them cared enough for it to be worth it to IBM to redesign the CPU to reach higher clock speeds. There weren't many generations of the core to keep up. They gave it a shrink, and then said, "screw this, we need to do stuff that makes us money."

That doesn't make it a bad CPU, and doesn't make it completely outdated (especially since it was either new at the time, or yet to be released, as it concerns the XB360). It does show how IBM has gotten its reputation as one of the coldest soulless corporations, even as viewed by other cold soulless corporations.

And no, the G5 PowerPC CPU was not even the fastest CPU at that time.
Good != fastest. It wasn't even too bad in 2005, despite not reaching the clock speeds Apple needed it to.
http://www.barefeats.com/macvpc.html
But, for mainstream use, x86 had already won, even during the G5's few months of glory, prior to the Athlon64's release. Apple just needed IBM to stiff them, so that they would be forced to come to terms with that reality.

Edit:
I don't recall naming a specific PowerPC #?
http://forums.anandtech.com/showpost.php?p=33915362&postcount=345
"That PowerPC core design is what was used in the Apple G5"
Apple G5 = PPC970 series.

http://forums.anandtech.com/showpost.php?p=33923255&postcount=370
"The core found in the Cell and Xenon was not a direct derivative of any particular Power- or PPC-series core" ... "Yes, it was." A particular core would be something like a PPC970, Cortex-A9, Nehalem, Sandy Bridge, Ivy bridge, Dothan, etc.. The two CPUs are instead similar in the same way all x86_64 CPUs are similar. The PPE seems to have more in common with what was at the time an in-the-works Power6, than the Power4, or PPC970/G5.

That's just a fact. What you can't seem to grasp for 16 pages now is that the PowerPC CPU architecture of that time was vastly inferior in gaming performance to AMD's or Intel's x86 CPUs on a per core per thread basis in the real world (i.e., not ideal circumstances that only IBM/Sony engineers pushing Cell dreamed up).
How many games were multithreaded, at the time? Not many, since you had to buy server hardware to do anything more than use the P4's HT, at the time. Maybe your, "at the time," is different. My, "at the time," is assuming around the latest time frame that MS could have changed their minds, without delaying it too much beyond Sony's original planned release. So, maybe 2003, when AMD had Bartons as their best, and Intel had Northwoods. The current performance-oriented PowerPC CPU of that time was quite a good performer.
 
Last edited:

cplusplus

Member
Apr 28, 2005
91
0
0
Having a Cell doesn't automatically make it better, any more than having a Cell automatically makes the PS3 better than the 360 or (s******) a modern gaming PC.

Actually, having the Cell did make it a better Blu-Ray player than anything that was out at the time by a lot (and arguably better than most Blu-Ray players today), because it has allowed it to receive every single spec update that Blu-Ray has undergone.

There's a very legitimate argument to be made that the Blu-Ray drive is what made the console as expensive as it was, not the Cell. Replace that drive with a DVD drive, and you shaved something like $100 off the manufacturing cost, if I remember correctly. Pass that $100 on to consumers, and you would have ended up with the low end system costing $400, exactly the same as the 360.
 

Ancalagon44

Diamond Member
Feb 17, 2010
3,274
202
106
Actually, having the Cell did make it a better Blu-Ray player than anything that was out at the time by a lot (and arguably better than most Blu-Ray players today), because it has allowed it to receive every single spec update that Blu-Ray has undergone.

There's a very legitimate argument to be made that the Blu-Ray drive is what made the console as expensive as it was, not the Cell. Replace that drive with a DVD drive, and you shaved something like $100 off the manufacturing cost, if I remember correctly. Pass that $100 on to consumers, and you would have ended up with the low end system costing $400, exactly the same as the 360.

Are you saying there doesn't, or couldn't, exist a single other CPU that could play games and receive updates to the Blu Ray spec?
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
You didn't understand that article.

The article explicitly stated they were working on the next Cell for the PS4. Whether that pans out to be true remains to be seen, but I didn't think that article was too technical for anyone....

and not having a unified address space in the 21st century is just dumb

It would have come with a fairly large increase in latency, any way I can think of doing it anyway; the LS's big edge over traditional cache is the latency advantage. Seems like just pushing for a 1MB LS would be better.

Are you saying there doesn't, or couldn't, exist a single other CPU that could play games and receive updates to the Blu Ray spec?

At the time?

The PC driving everything doesn't have to be a powerhouse, as long as you have a graphics card that can accelerate Blu-ray 3D's MVC codec. This means that if you're running one of the GeForce cards listed above, your PC will only require a modern dual-core CPU.

If you're going to try to use a GeForce card that's not on Nvidia's supported list (a card that isn't equipped with Blu-ray 3D MVC decode acceleration) you'll likely need a CPU with more power than a dual-core processor. As you will see in the benchmarks, a triple-core Athlon II at 3 GHz is highly stressed during Blu-ray 3D playback. We wouldn't recommend anything less than an Athlon II X3 440 for a software decoding solution.

http://www.tomshardware.com/reviews/blu-ray-3d-3d-vision-3d-home-theater,2636-7.html

The chip they say struggled, the A2 at 3GHz, was $459 in 2007. The PS3 came out in 2006.

Edit: Whoops, I was wrong; the processor that Tom's had struggling was a tri-core version, and that particular model didn't come out until 2010, although it was ~$100 at the time. The $459 2007 processor couldn't handle it; guess x86 is so good they can fail for significantly more money and still be the best.......

Edit 2: Went looking for the Intel side; Cyberlink says the minimum requirement is a Core 2 Duo E6750 - the PS3 would have only had to be delayed by a year. x86 pwnage, only a year late to the game :p

http://www.cyberlink.com/prog/support/cs/product-faq-content.do?id=2576
 
Last edited:

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
For the most part it is no use continuing this discussion. I know what Cerb's issues are with Cell, and he has valid points; the other people involved have no clue what we are even discussing. Couple of things real quick though-

Like I said, it goes in one ear and out the other. By your faulty logic, it costs NV $7-8 to manufacture a GK104 and sell it in a GTX680. What it actually cost to put the Cell CPU inside each PS3 is not just what it cost to develop the design. Those are basics you can't even get right.

The PS3 could have gone with no processor then?

Let's get real then. You should be able to handle some simple math.

Cell: 234 million transistors, 120mm² @ 65nm

Core 2 Duo: 291 million transistors, 145mm² @ 65nm

Chips don't appear from magic pixie dust. They have to be produced. The largest differentiating cost factor, all else being equal, is die size. Cell is *smaller* than the Core 2 Duo on the same build process.

You have two options: point out who was going to give Sony chips *below cost*, or explain how the $7 additional cost of acquiring the Cell IP would have allowed them to get a big upgrade on the GPU. The PS3 needed *SOMETHING* for a processor. As I covered in my last post, the Core 2 Duo that was available when the PS3 launched would have already failed at handling one of the trivial media tasks that Cell handles without issue (you know, all that manly x86 power can't be bothered with things like math). So the Cell has fewer transistors and a smaller die than the Core 2 Duo, and Sony could make it themselves. Unless you can explain exactly what business model Sony could have used to put in another processor that was remotely close in performance *and saved money* over Cell, just drop the subject; I did my research.
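
To put rough numbers on the die-size point, here is a quick sketch using the standard dies-per-wafer approximation and a simple Poisson yield model (Python; the defect density and wafer size are illustrative assumptions, the die areas are the ones quoted above):

Code:
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    # Standard approximation for usable dies on a round wafer
    r = wafer_diameter_mm / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def die_yield(die_area_mm2, defects_per_cm2=0.4):
    # Simple Poisson yield model; defect density is an assumed placeholder
    return math.exp(-defects_per_cm2 * die_area_mm2 / 100)

for name, area in (("Cell @ 65nm", 120), ("Core 2 Duo @ 65nm", 145)):
    good = dies_per_wafer(area) * die_yield(area)
    print(f"{name:18s}: ~{good:.0f} good dies per 300mm wafer")
# the smaller Cell die yields noticeably more good dies per wafer, i.e. a
# lower silicon cost per chip on the same process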

BluRay capability has nothing to do with the Cell processor.

Look at the post above. Top of the line 2006 Core 2 Duo for 3D BluRay playback without help: fail. Cell: no problem. If you did a hint of actual research into how these things actually work, you probably wouldn't trip up so much on the basics.

A PowerPC G5

Do yourself a favor, drop that line of discussion. You can try, as pathetic as the attempts are, to put forth the arguments Cerb is capable of. It is hard to even respond because honestly it is so stupid to read that it gives me a headache. But trying to say that these chips are G5 based is just making you look like you have never heard the expression PC before.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
What?

More Lies? I think I saw $76.

Besides, it already had a GeForce GPU.

Wow, if that processor had ever been mentioned that would have been a good point. If someone creates a time machine, goes back and changes Tom's article from the 440 to the 425, you will have that post waiting for them, good show :)

BTW- Read Tom's article, you need a GT2xx GeForce series part to accelerate 3D Blu Ray, the PS3 certainly doesn't have that.

Edit BTW- I pointed out Carmack didn't like Cell when I first brought up the fact that he said it was doing things the i7 couldn't. That was actually part of my point, he doesn't like the processor but still calls it like he sees it.
 
Last edited:

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
It would have come with a fairly large increase in latency, any way I can think of doing it anyway; the LS's big edge over traditional cache is the latency advantage. Seems like just pushing for a 1MB LS would be better.
Maybe, but I don't see where the latency would be much, if any. Local accesses would be local and direct, and possibly used pages should be manually loaded (optimized hot code would get as much indirection as possible removed). It wouldn't be making it a cache, and the only CAM involved would be an MMU that would only be needed for 'far' locations, which are already tens to hundreds of cycles out. The main benefits would be initial development and debugging time, which could lead to quicker high quality optimized loops running well on the SPEs, and programmer morale. A future iteration definitely would need larger memories, though.
 
Last edited:

Ancalagon44

Diamond Member
Feb 17, 2010
3,274
202
106
Wow, if that processor had ever been mentioned that would have been a good point. If someone creates a time machine, goes back and changes Tom's article from the 440 to the 425, you will have that post waiting for them, good show :)

BTW- Read Tom's article, you need a GT2xx GeForce series part to accelerate 3D Blu Ray, the PS3 certainly doesn't have that.

Edit BTW- I pointed out Carmack didn't like Cell when I first brought up the fact that he said it was doing things the i7 couldn't. That was actually part of my point, he doesn't like the processor but still calls it like he sees it.

An Athlon II X3 never cost anywhere near $459. My point is, it is a complete lie that it ever cost anywhere near that much. I had an Athlon II X4 - i.e., the quad core version - and that cost only $99. So, for $59, at consumer prices, you could buy a CPU capable of decoding Blu Ray.

Nevermind the fact that the more capable xbox GPU could probably do it because it possesses unified shaders.

And nevermind the fact that a modern Bobcat CPU - an 18W CPU! - can decode 1080p blu ray in real time.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
I mentioned the A2; the one that came out in 2007 was the A2 X2-

http://www.anandtech.com/show/2177

That cost $459 and wasn't out in time for the PS3 launch. I noted that in my follow-up edits; the parts you are talking about didn't come out until the 2009-2010 time frame, which would have had the PS3 delayed three to four years.

And nevermind the fact that a modern Bobcat CPU - an 18W CPU! - can decode 1080p blu ray in real time.

3D Blu Ray, not quite the same. A trivial task for Cell, one that none of the x86 CPUs from that era could handle on their own. I'm curious what the explanation for that is; I honestly should have thought of looking that up earlier. Such a small thing for the PS3 to handle, yet beyond the capabilities of 2006 x86 to cope with.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
Cerb- Don't have a lot of time, I'm wondering what you are thinking for on-die memory layout?

Something like a universal store that all cores had shared access to? Maybe 4MBish eDRAM that operates in the abstract like a shared L3?
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Do yourself a favor, drop that line of discussion. You can try, as pathetic as the attempts are, to put forth the arguments Cerb is capable of. It is hard to even respond because honestly it is so stupid to read that it gives me a headache. But trying to say that these chips are G5 based is just making you look like you have never heard the expression PC before.

The only person who should drop the discussion is you. You haven't disproven a single one of my points. I never once said these chips are based on the same architecture as the PowerPC G5, because I clearly mentioned multiple times that they are slower than the PowerPC G5 core. I am not going to spell out the exact codename for the PowerPC G5 since it's obvious if you look for it. What you are doing, as always, is attacking a small part of a sentence/discussion so that it appears you are intelligent when you nitpick it, but really all that does is waste 2-3 minutes of my time linking to the obvious information I speak of, and once that's settled, we always end up back at all of my points that you haven't disproved yet.

You can say PowerPC G5 processor because that's what normal people called it back then. Anyone in this thread knew exactly what I meant when I mentioned this processor:
http://support.apple.com/kb/sp96

Now that this nomenclature for the G5 processor is settled, we are back to the original statement that the Cell's processing power was inferior to that of the PowerPC processor in the Macintosh computers of that time. Apple abandoned the PowerPC G5 processor citing poor performance, performance/watt, and overall power consumption. Now the Cell, by virtue of using a single PowerPC core simpler than the PowerPC G5's, has somewhat solved some of the consumption issues, but at the expense of even worse performance. So we are back to the same spot: the Cell was a very slow CPU. Again, all you did here is waste my time, and we are back to the Cell presentation where it was already stated that the Cell was slower......

It's pretty obvious you have not provided any rebuttal to anything else I've said either. I'll add even more obvious points that weaken your argument even further.

1) The Cell has about 1/10th the transistor count of a modern $1000 Core i7 3960X CPU. You say that it outperforms modern Core i7 CPUs for games. That implies that the Cell is at least 10x as powerful per transistor as the world's fastest consumer CPU; all that from a 2006 CPU designed on a 90nm node? D: Oh really, and not a single company in the world has recognized this yet, while Intel is a $100B+ market cap company? That's all based on Intel's marketing, I suppose....

2) If the Cell cost $7, 2 Cells is $14 incremental cost according to you. If you say the Cell was extremely power efficient, then Sony could have easily put two Cells ($14) into the PS3. You also say that the Cell was 4-5 generations ahead of the Core 2 Duo, which means 2 Cells would be twice as fast, or 8-10 generations ahead of the Core 2 Duo. If it was so cheap to double the performance and be 8-10 generations ahead, why didn't Sony keep 2 Cells and just add a discrete GPU? Illogical argument.

The only thing that was 100% proven in this thread is that you are a huge PS3 fanboy (and by extension a Cell fanboy). It's pretty hard to have a reasonable, logical discussion with a person who hates PC gamers as a group, thinks the Cell is God's gift against the world's top CPU makers (AMD and Intel), and diverts the argument into a PowerPC G5 nomenclature derail instead of focusing on the actual topic and the main points being made. Notice how you did the exact same thing in the Dark Souls thread? You really don't know how to present your side in a coherent manner with actual facts; you just state your opinions as facts with no 3rd party support. That's not sufficient since you are not an expert in the matter.

BTW what makes you think that 3D Blu-Ray capability on the PS3 is directly related to the Cell and not the GeForce 7 GPU? Nvidia added support for Blu-ray disc TrueTheater 3D playback with a driver update. Actually, PS3's RSX ability to support 3D Blu-Ray acceleration was discussed as early as 2008. It appears 3D Blu-ray support was added later to the GeForce 7 series:

http://www.cyberlink.com/prog/support/cs/product-faq-content.do?id=2577
NVIDIA:
Minimum: GeForce 7600 GT series, GeForce 7800 GTX 512 series, GeForce 7900 GX2 series, GeForce 7900 GTX series, GeForce 7950 GX2 series.

Furthermore, you claimed that a Core 2 Duo would not be able to play 3D Blu-Ray content at all. This is again false:
Intel (Recommended):
Pentium EE 840 3.2 GHz, 955 3.4 GHz or 965 3.73 GHz, Pentium D 945 3.4 GHz, 950 3.4 GHz or 960 3.6 GHz, Core Duo T2500 2.0 GHz, T2600 2.16 GHz, or T2700 2.33 GHz, Core 2 Duo E6300 1.8 GHz, E6400 2.13 GHz, E6600 2.4 GHz, E6700 2.66 GHZ or X6800 2.93 GHz, T7200 2.00 GHz, T7400 2.16 GHz, T7600 2.33 GHz, Core 2 Quad Q6600 2.4 GHz, Core 2 Extreme QX6700 2.66 GHz, or X6800 2.93 GHz
http://www.cyberlink.com/prog/support/cs/product-faq-content.do?id=2576

See how you again stated an opinion, and then I had to go out of my way and waste time to prove the obvious mistakes, and then we are back to the exact same spot: the Cell's CPU gaming performance. You have provided nothing concrete on this topic still.

Regardless, a CPU's ability to play 3D Blu-Ray content has little to do with its ability to play videogames well. The FX8150 series can play 3D Blu-Ray content but is hardly a faster CPU for games than a Core i3. The two tasks are not interchangeable enough to imply a direct correlation. So again, 3D Blu-ray capability (besides not being proven to be a result of the Cell vs. the GF7 GPU) is simply a red herring in the discussion about the Cell's performance specifically for games against modern x86 CPUs. It also does not in any way prove that the Cell is faster for games than a modern x86 CPU, which itself can easily play 3D Blu-Ray movies.

Some of your support for the superiority of the Cell is unfounded because you are drawing direct correlations to aspects of the PS3's features that have nothing to do with gaming performance in the first place, only to try and come up with any argument you can out of thin air to support the notion of how amazing the Cell CPU is in general. It really shows the level of bias present in your argument when things are being made up just to prop up your side. That reveals that you are not impartial on the topic and have already made up your mind about the Cell, despite what anyone here tells you to the contrary.

Your other general claims regarding how CPUs work are so contrary to modern views on CPUs that most of us should have figured out from the beginning that you read the book on how the Cell was designed before you go to sleep at night and daydream about the mythical Cell 2.0 reincarnation. The moment you should have stopped all your technical contributions in this thread is when you claimed that OoO CPUs are worse than in-order CPUs, that theoretical (floating point) performance is directly correlated to real world gaming performance, that an over-complicated Cell CPU design does not contribute to the programming costs and inefficiencies that many developers have constantly complained about, that manufacturing and testing costs are not real costs to a technology/semiconductor company, that a 2006 CPU with 1/10th the transistor count has better overall performance and performance/watt than a modern 6-core Core i7 flagship from the world's leading consumer CPU maker, that the PS3's 60W load power consumption penalty vs. the 360 is made up, that you cannot incorporate weather and night effects in Forza 4 because of technical limitations of the console, and that x86 compiler code is the devil.
 
Last edited:

cplusplus

Member
Apr 28, 2005
91
0
0
And BTW, what allows 3D Blu-Ray on PS3 is not the Cell but GeForce 7 GPU. Nvidia added support for Blu-ray disc TrueTheater 3D playback with a driver update. This is why PS3 did not support 3D blu-ray from the beginning.

http://www.cyberlink.com/prog/support/cs/product-faq-content.do?id=2577
NVIDIA:
Minimum: GeForce 7600 GT series, GeForce 7800 GTX 512 series, GeForce 7900 GX2 series, GeForce 7900 GTX series, GeForce 7950 GX2 series.

PS3's RSX ability to support 3D Blu-Ray acceleration was discussed as early as 2008.

Some of your guys support for superiority of the Cell is so unfounded because you are drawing direct correlations to aspects of PS3's features that have nothing to do with the Cell in the first place, only to try and come up with any argument you can out of thin air to support the notion of how amazing the Cell CPU is. It really shows the level of bias that's present here when things are being made up just to prop up your side.

The PS3 didn't officially support it at the beginning because there was no official format for 3D Blu-Ray movies until December 2009.
 

Ancalagon44

Diamond Member
Feb 17, 2010
3,274
202
106
I mentioned the A2; the one that came out in 2007 was the A2 X2-

http://www.anandtech.com/show/2177

That cost $459 and wasn't out in time for the PS3 launch. I noted that in my follow-up edits; the parts you are talking about didn't come out until the 2009-2010 time frame, which would have had the PS3 delayed three to four years.

Do you even read the links you post?

The one from 2010 - 2010! - says that for 3D Blu-ray, you need at least an Athlon II 440. An Athlon II. The one you cite for pricing, from 2007, has an Athlon X2 6000+ for $459. Do you understand the difference between the two CPUs? Nevermind the time.

And in any case, this is pointless. Yes, the Cell can decode 3D Blu-ray content. Good for it. However, you have not proven that another PPC CPU, or similar CPU available at the time in 2005-2007, could not have done so. You have merely proven that you would have needed a good x86 CPU. What if they had chosen a different route? Maybe a dual core PPE with a specialized vector processor? Maybe asked Nvidia to add suitable logic to their GPU?

You make it sound as if the Cell was the only possible route - judging by the sheer number of modern devices capable of doing so, clearly it isn't. There are a lot of different design decisions they could have made that would have been better from a development point of view while still allowing for the playback of 3D Blu-ray content.
 

Jaydip

Diamond Member
Mar 29, 2010
3,691
21
81
I didn't read the whole thread, but this is not entirely correct, RS:
"Your own claims are so ridiculous, you should just stop posting entirely in this thread before you completely ruin it. The fact that you think that OoO CPUs are worse than in-order CPUs is when you should have stopped all your technical contributions on the matter."
OoO CPUs are not always superior to in-order CPUs; it all depends on the underlying program.
 

Protomize

Member
Jul 19, 2012
113
0
0
The bottom line, in terms of gaming performance, is that the GPU's performance is a more important factor than the CPU's. Argument over. Now, let's all have a large pizza pie on me?
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
The PS3 didn't officially support it at the beginning because there was no official format for 3D Blu-Ray movies until December 2009.

GeForce 7 GPUs supported 3D blu-ray later on;
Core 2 Duo supported 3D blu-ray playback;
3D Blu-Ray playback has nothing to do with gaming performance, so using it to show the Cell's superiority vs. modern x86 CPUs that can play Blu-Ray just as easily is a red herring (see FX8150 3D Blu-Ray playback vs. Core i3, and then compare their gaming performance). It's meaningless to compare 3D Blu-ray playback. If in 5 years from now some future CPU could run 100 separate 3D Blu-Ray videos on 100 displays and not slow down, does it mean it can run tessellation, HDAO, depth of field, contact hardening shadows, and a global illumination lighting model in games? There is no correlation.

===================================

This is the best part: What about all those Flops?
Ever since the announcement of the Xbox 360 and PS3 hardware, people have been set on comparing Microsoft's figure of 1 trillion floating point operations per second to Sony's figure of 2 trillion floating point operations per second (TFLOPs). Any AnandTech reader should know for a fact that these numbers are meaningless, but just in case you need some reasoning for why, let's look at the facts.
http://gprime.net/board/showpost.php?p=89258&postcount=3

The Full Anandtech article explaining why current consoles have slow CPUs and thoughts of developers on the CPU choices:
"Regardless of the reasoning, not a single developer we've spoken to thinks that it was the right decision."
http://gprime.net/board/showthread.php?t=5989

----

I am down for that, Protomize :)

Jaydip, good point on the above, but specifically for gaming performance you don't want to be stalling the pipeline of a CPU. This is why modern CPUs have branch prediction, OoO execution, HT, etc. The Cell has a 21-stage pipeline.
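
As a toy illustration of why stalls matter on a deep in-order pipeline, here is a sketch (Python; the branch statistics and flush penalty are assumed example values, not measured Cell numbers):

Code:
# Toy model: throughput lost to branch-misprediction flushes on a deep
# in-order pipeline. All numbers are illustrative assumptions, not PPE data.
base_ipc        = 1.0   # ideal instructions per cycle with no stalls (assumed)
branch_fraction = 0.15  # share of instructions that are branches (assumed)
mispredict_rate = 0.10  # mispredict rate without strong prediction HW (assumed)
flush_penalty   = 20    # cycles lost per mispredict, roughly the pipeline depth (assumed)

stalls_per_insn = branch_fraction * mispredict_rate * flush_penalty
effective_ipc = base_ipc / (1 + stalls_per_insn)
print(f"effective IPC: {effective_ipc:.2f}")  # ~0.77, i.e. ~23% of throughput gone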
 
Last edited:

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
Cerb- Don't have a lot of time, I'm wondering what you are thinking for on-die memory layout?

Something like a universal store that all cores had shared access to? Maybe 4MBish eDRAM that operates in the abstract like a shared L3?
No, just address x through x+2MB-1 accounting for the stores (no additional memory, just MMU/DMA tricks), where x is a known/fixed offset, so plain flat C pointers can be used for initial development, and loads and stores that aren't direct, which pass through the MMU, are cache-coherent (guarantee the offset in HW, and the OS could just protect segments of those address blocks as needed, likely not adding any meaningful translation latencies). A good compiler made for it ought to optimize away in-store operations as direct non-coherent ones (cache-coherency will often add latencies), as well, so much of the code won't need to be further worked with by humans (that part has actually worked out OK, for certain uses, from what I know). Patch it through virtual memory and using them inside a multitasking OS becomes much easier. Allow HW multitasking, copying that 2MB+registers out to that process' RAM, and then back, at any time (make certain things like loop starts or special barriers work as checkpoints), and you've then got a modern computer system.
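
A tiny sketch of the flat mapping described above, just to show the address math (Python; the 256KB local store size is the real Cell figure, the base offset and everything else is illustrative):

Code:
# Sketch of the idea above: expose every SPE local store at a fixed, known
# offset inside one flat address range, so plain pointers work during
# development. Only the 256KB store size is a real Cell figure.
LS_SIZE  = 256 * 1024   # 256KB local store per SPE
NUM_SPES = 8            # 8 stores -> the x .. x+2MB-1 range mentioned above
BASE     = 0x20000000   # "x", a known fixed offset (illustrative value)

def ls_address(spe, offset):
    # Effective address of byte `offset` inside SPE `spe`'s local store.
    assert 0 <= spe < NUM_SPES and 0 <= offset < LS_SIZE
    return BASE + spe * LS_SIZE + offset

# e.g. a pointer into SPE 3's store; local accesses stay direct, remote ones
# go through the MMU/DMA machinery but share the same flat pointer value.
print(hex(ls_address(3, 0x40)))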

It's hard to fault Hofstee for several aspects of the Cell, as an embedded processor with flexible DSPs. Nobody could rightfully predict that we'd hit clock speed and power walls like we did (the most important aspects of the Cell were pretty well set in stone by 2002). Some did, but it wasn't a sure thing, either way. If we had gotten kilocycle processors, FI, high speed with low TDP wouldn't be bad for embedded CPUs; since we didn't, a Cell 2 or 3 would certainly have a very different PPE. Decreasing the TDP each generation while increasing throughput would help make up for many other faults (basically the same kind of generational improvements we see in GPUs). Some things, like a narrow in-order CPU with minimal to no speculation, are actually fine for an embedded real-time system, which was often mentioned outside of Sony's marketing. Hofstee clearly had goals that weren't the PS3, and speculation and non-determinism can very much be things you don't want. But, best efforts and batch processing are quite good for modern games.

However, there was one huge fault, which often seems to be made by hardware guys that are a bit out of touch. It's happened over and over again. And that is requiring manual memory management, for a processor that has to run complicated programs. Yes, the best programmers can do their best code managing caches manually (3.1). But that misses the point that in writing millions of lines of code, only some thousands of those lines will actually warrant that kind of attention, and even the best programmers are not fast at creating such code. Requiring it will leave the compute units idle when they might be able to help, and require a great deal more development time, just to make sure they can be useful for a task. Also, the reality is that most programmers who will have to do low level work aren't the best, and hardware that can't handle that is itself broken. NVidia understands that with CUDA, FI, and has been only increasing its abstraction capabilities over the years. Such manual memory management is requiring premature optimization, basically.

The overhead looks terrible to hardware people, quite often, but it can be made up for in improved development time and processes so many times over that if it can be virtualized, the proper way to do it is to virtualize it, and then allow bypassing for better performance. If you have a prototype working well enough that you're sure it can do the job, then going in with all assembly/intrinsics and doing it all manually is definitely worth it. If you think it might, but can't find out a reasonable probability, without going all-out, you could be wasting many hours only to find an unexpected bottleneck, instead of just a few hours testing a few things out, using profiling results to gauge how much better a hand-optimized version should be.

P.S. I could really go for a good Sicilian, with plenty of olive oil in the sauce.
 
Last edited:

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
Do you even read the links you post?

Heh, before you even replied I had edited my post, realizing I had looked up the wrong part; it was noted before you even hit reply (I then went and checked the Intel part).

However, you have not proven that another PPC CPU, or similar CPU available at the time in 2005-2007, could not have done so.

2006 - the CPU had to be in mass production in 2006. All of x86 failed. PPC obviously could; that is what Cell is. Could SPARC or MIPS parts? I'm sure they could have. For the cost of what Sony was paying for Cell? No. MIPS makes low end embedded processors and high end supercomputer parts. SPARC only has high end server parts (think *WAY* more expensive than x86). IA64 parts from Intel almost certainly would have been able to, but they cost more than a couple of PS3s at the time (they are ~$1K and up chips).

GeForce 7 GPUs supported 3D blu-ray later on;

nVidia says you are wrong-

http://www.nvidia.com/object/3d-vision-system-requirements.html

No, the 7 series does not support 3D Blu Ray playback. Neither did the 8 series, nor the 9 series.

Actually, PS3's RSX ability to support 3D Blu-Ray acceleration was discussed as early as 2008. It appears 3D blu-ray was was added later to GeForce 7 series:

Wow, that link is discussing getting 3D hardware support working in Linux. It isn't even approaching technical; they aren't talking about the (at the time) imaginary 3D Blu Ray spec :)

If in 5 years from now some future CPU could run 100 separate 3D Blu-Ray videos on 100 displays and not slow down, does it mean it can run tessellation, HDAO, depth of field, contact hardening shadows, and a global illumination lighting model in games? There is no correlation.

There actually is a correlation, but it is too complicated for me to bother explaining it to you. I'll give you a hint, math isn't some magical ability.

That Anand article was pulled because it was laughably inaccurate. He also only asked PC developers what they thought of the new architectures.

Yes, the best programmers can do their best code managing caches manually (3.1).

I am not dismissing nor arguing with your general points, although I think you went a little overboard with how many transistors you want to add (SPUs are <7mm² at 28nm; I'm thinking you are doubling per functional unit, not including SRAM/cache obviously). This I think is a major point: obviously we are pushing thermal and fabrication limits pretty hard already, and we don't have much longer to go on the path we are taking on the hardware side. Yes, there could be a massive breakthrough, but our pace is already slowing considerably. The architectural changes that Cell brought, in the abstract, are where everyone that wants high performance computing is heading (Intel's highest performing part is now a tiny in-order core used in large quantities; of course AMD and nVidia are taking the same approach).

So here is my question: where do you think the line should be drawn? Obviously Cell was fairly far on one side of the equation, although not as bad as GPGPU was at the beginning, but looking at it from a hardware versus software perspective, how much die space should be spent making developers' lives easier? Obviously Intel focused on this for so long that they became a complete non-factor in the HPC space, and their solution to that was to create something far more like Cell than the i7.

Cell still allows the best code to be run faster than on any mainstream CPU, and the design is half a dozen years old. Obviously at this point a CPU like Cell can be avoided, as GPGPU has come a long way and offers even better performance. Perhaps that is an environment you would rather see? A dumbed down processor that's easy to deal with, like the i7, with a PITA GPGPU alternative for when you need more compute power? Or would a hypothetical Cell2, which honestly even with your design tweaks should be able to have 4x PPU and 16 SPUs and be comparable in size to the original (using 28nm obviously), with a GPU focused on graphics be more to your liking?

The way the SPUs are handled now, with a few tweaks, is already set up to allow the best developers to get the best performance possible. How far away from that should the bar be moved? Do we aim for mediocrity, or is that too high? I understand, Cell requires the best doing their best work, but how far over should we move things to compensate?

Edit- Thought this was funny: the rumored 3850 that people are talking about for the PS4?

http://www.hardwarecanucks.com/foru...-a8-3850-apu-review-llano-hits-desktop-9.html

Funny, that is supposed to be the next generation, and it fails a trivial task without help that the weak old Cell processor from years gone by handles without issue (the i3 also failed).
 
Last edited:

-Slacker-

Golden Member
Feb 24, 2010
1,563
0
76
BTW what makes you think that 3D Blu-Ray capability on the PS3 is directly related to the Cell and not the GeForce 7 GPU? Nvidia added support for Blu-ray disc TrueTheater 3D playback with a driver update. Actually, PS3's RSX ability to support 3D Blu-Ray acceleration was discussed as early as 2008. It appears 3D Blu-ray support was added later to the GeForce 7 series:

http://www.cyberlink.com/prog/support/cs/product-faq-content.do?id=2577
NVIDIA:
Minimum: GeForce 7600 GT series, GeForce 7800 GTX 512 series, GeForce 7900 GX2 series, GeForce 7900 GTX series, GeForce 7950 GX2 series.

Furthermore, you claimed that a Core 2 Duo would not be able to play 3D Blu-Ray content at all. This is again false:
Intel (Recommended):
Pentium EE 840 3.2 GHz, 955 3.4 GHz or 965 3.73 GHz, Pentium D 945 3.4 GHz, 950 3.4 GHz or 960 3.6 GHz, Core Duo T2500 2.0 GHz, T2600 2.16 GHz, or T2700 2.33 GHz, Core 2 Duo E6300 1.8 GHz, E6400 2.13 GHz, E6600 2.4 GHz, E6700 2.66 GHZ or X6800 2.93 GHz, T7200 2.00 GHz, T7400 2.16 GHz, T7600 2.33 GHz, Core 2 Quad Q6600 2.4 GHz, Core 2 Extreme QX6700 2.66 GHz, or X6800 2.93 GHz
http://www.cyberlink.com/prog/support/cs/product-faq-content.do?id=2576

See how you again stated an opinion, and then I had to go out of my way and waste time to prove the obvious mistakes, and then we are back to the exact same spot: the Cell's CPU gaming performance. You have provided nothing concrete on this topic still.


Considering the amount of kidney-busting ownage this benskywalker guy keeps coming back for, it's baffling how he keeps getting more and more arrogant and impatient with replies, like he's had enough of schooling us mere mortal peasants on the intricacies and greatness of a 6+ year old, 200-million-transistor CPU when compared to a lowly, modern, 2-billion-transistor x86 CPU.





Tell us more, ben.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
Tell us more, ben.

Why don't you try clicking on his links and actually reading them-

Blu-ray 3D disc without Hardware Acceleration
The playback of high definition DVD titles requires higher CPU speed. The following is a list of the recommended CPU requirements to play Blu-ray 3D Disc titles:

Intel (Minimum):
Intel Core 2 Duo E6750 2.66 GHz or above.

Intel (Recommended):
Intel Core i5 650 3.2GHz or above.

AMD (Minimum):
AMD Phenom X4 9450 2.10GHz or above.

AMD (Recommended):
AMD Phenom X4 945 3.0GHz or above

It isn't really a debate; the guy linked a forum thread about getting OpenGL support running under Linux as an example of BR3D - he has *no clue* what he is talking about :)

2) If the Cell cost $7, 2 Cells is $14 incremental cost according to you.

Was going to ignore most of your shockingly ignorant rants, but someone else spoke up and said you had reduced their IQ, so why not smack a few other points around.

A console needs a CPU.
A console needs a CPU.
A console needs a CPU.
A console needs a CPU.
A console needs a CPU.
A console needs a CPU.
A console needs a CPU.

You get that point now? After your linking of a thread on getting OpenGL running under Linux and saying it was about BluRay3D (watch out NASA!), I understand you have a hard time when things are stated plainly and explicitly only once. Much like you commenting about how the PS3 was only $400 cheaper, and how much it took to get that into your head.

Any CPU is going to have a cost.
Any CPU is going to have a cost.
Any CPU is going to have a cost.
Any CPU is going to have a cost.
Any CPU is going to have a cost.
Any CPU is going to have a cost.

That enough times? I hope so. CPUs aren't free. With Cell, Sony was making it themselves. That means they were getting the processor at cost. Therefore, the premium they paid for Cell, over cost, was less than $7 per chip. It took me less than two minutes to explain this to a seven year old and he got it, so maybe after a few more weeks we will succeed here too.

CPU cost to produce is mainly a function of die size.
CPU cost to produce is mainly a function of die size.
CPU cost to produce is mainly a function of die size.
CPU cost to produce is mainly a function of die size.
CPU cost to produce is mainly a function of die size.

You have stated, repeatedly, that Sony would have been better off going with an x86-based CPU, and that with the money they saved they could have gotten a better GPU. Cell is *smaller* than the Core 2 Duo, hence has a lower cost, but they are close enough that I'll even call it even. If Intel had given Sony Core 2 Duos *at cost*, that would have saved Sony $7 per console sold to spend on a faster GPU. There is the budget you are looking at for going with something besides Cell, assuming Intel would agree to sell Sony chips at cost (I believe that would be roughly four years after Hell hit absolute zero).
 
Last edited: