A game that crashes at all is considered a disgusting embarrassment to the console community. A new PC game that doesn't crash is called Carmack's last game (and that is pretty much it). PC games ship in a state that would be laughed at as an alpha by any console manufacturer (outside of MS in the odd case of Morrowind - that game was so horrifically badly coded it almost felt like a PC game after only a few patches). Console standards are leagues beyond anything PC games can approach outside of id.
I fully appreciate your point. I rarely pick up PC games as soon as they come out for just this reason: there is always a patch. Typically, ports are much more stable than straight PC games; I appreciate that too. What I meant is that if we had a game for the PC that ran 15-20 FPS all the time on the latest and greatest hardware, it wouldn't be acceptable. Even 15-20 FPS on minimum requirements is unacceptable. I had a really hard time playing through Halo 2 for just this reason, and I was really stunned by how many fans thought it was the greatest thing since sliced bread.
Also, I was referring more to the small die size of the Xenon than the Cell. Sony paid plenty for that baby, so we will just have to see how it performs. I don't like that they use simple DMA and seem to share the bus, but that is definitely their call. Also, when I referred to the cache being slow, I was referring to its latency and to how many ways set associative it is. For example, when the Pentium 4 went from 1 MB of L2 cache to 2 MB, the latency went up. This is because of how they deepened each set: they simply doubled the physical number of chunks inside them. Inside each "set" are a number of fully associative chunks of memory that can all be looked at simultaneously, and the number of those chunks is how many "ways" associative it is. Another example, one you will know quite well given your extensive background, is the cache on the P3/Celeron in the Xbox. The reason it would be called a Celeron is that it only has 128K of L2 cache. The reason it is called a P3 is that it still has the more expensive 4-way set associative cache instead of the 2-way cache a Celeron would normally enjoy. Associative memory is expensive, and it gets more so the bigger it gets. I am guessing that the caches are only 2-way set associative on the Cell and Xenon simply because, while they are small for the number of cores (particularly the Xenon), they are fairly large as far as total die size is concerned. Thus, a lot of money could be dumped on the cache... although the cache might be something like 8-way, which would be nice and competitive with current PC processors.
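To make the set/way thing concrete, here is a rough sketch in C of what a lookup in an N-way set associative cache boils down to. All the names and sizes here are made up purely for illustration - this is not how any of these specific chips actually implements it:

    #include <stdint.h>
    #include <stdbool.h>

    #define NUM_SETS  512   /* hypothetical: total size / (line size * ways) */
    #define NUM_WAYS  4     /* the "4-way" part: lines checked per set       */
    #define LINE_BITS 6     /* 64-byte cache lines                           */

    struct cache_line {
        bool     valid;
        uint32_t tag;
        /* ...data bytes would live here... */
    };

    static struct cache_line cache[NUM_SETS][NUM_WAYS];

    /* The address picks exactly one set, and only the NUM_WAYS lines in
       that set are compared against the tag. */
    bool cache_lookup(uint32_t addr)
    {
        uint32_t set = (addr >> LINE_BITS) % NUM_SETS;
        uint32_t tag = addr >> LINE_BITS;  /* simplified: whole line address as tag */

        for (int way = 0; way < NUM_WAYS; way++) {
            if (cache[set][way].valid && cache[set][way].tag == tag)
                return true;   /* hit in this way */
        }
        return false;          /* miss: off to the next level or main memory */
    }

In real hardware all of those way comparisons happen in parallel, which is exactly why wider associativity costs extra comparators, die area, and usually some latency.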
Calling the Pentium D the latest and greatest from Intel is a bit of a misnomer, as it was more of a band-aid fix so that AMD was not the only player in the dual-core market. This is indicated by their poor, from an engineering standpoint, method of accessing main memory, and by how much they assumed the market would not go dual-core for a long time and that single-core performance would carry them. Obviously, that bubble has burst. But it would be a mistake, imho, for the Xenon to do the same thing the D does and give only one processor at a time access to main memory (shared bus). Also, bandwidth really pertains to how much data can travel in and out of a processor at any one time, but the real gains come from lower latency. That much bandwidth does the processor little good if, after a wrong branch, it needs data or instructions from main memory. Once developers start making use of all the cores this will be especially important, as the cache will be split many ways, making L2 cache misses that much more frequent. Also, it is important to note that in-order code can take up a lot more space than out-of-order code: loops aren't four lines and 5-6 instructions, they are 4 lines/instructions * however many times the loop must execute, if I remember in-order right. No riding a pointer around, right? Right. This may make the somewhat low L1 cache (I am sure it's 16K instruction/16K data) that much tighter as instruction size grows. I could write machine code for an OoO core in about, well, 6-7 lines that had a loop that would go around adding one to an integer until that integer equalled the absolute value of the desired data. That same code written for in-order would quickly grow.... hmm.... the more I think about it, the less I like in-order altogether.
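Roughly what I mean, written in C instead of machine code - this is purely a made-up illustration of the size trade-off, not anything taken from a real compiler's output:

    /* The looped form is tiny but takes a conditional branch every pass;
       the laid-out form has no loop branch at all, but its size scales
       with the trip count. */

    int count_up_looped(int target)
    {
        int i = 0;
        while (i != target)   /* one conditional branch per iteration */
            i += 1;
        return i;
    }

    /* If the trip count is known at compile time (say, 8), the branch
       can be traded away for straight-line code: */
    int count_up_unrolled_8(void)
    {
        int i = 0;
        i += 1; i += 1; i += 1; i += 1;
        i += 1; i += 1; i += 1; i += 1;
        return i;   /* 8 "iterations", zero branches, 8x the instructions */
    }

The looped version stays tiny no matter the count; the laid-out version trades the branch for code size that grows with the trip count, which is where a small L1 instruction cache starts to feel tight.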

Sounds simple, but man, I can only imagine what that code would look like when you compiled/ran it and what the hardware would be doing. Thanks for pointing out how you might simply lay out all the code rather than branch in a deeply pipelined environment - I suppose if that is what gives you the best consistent performance, that is what you would do. You're right, I am younger; it just seems like a bass-ackwards way of doing it. I can see how it would work though, and with some branch prediction and some heavy hinting in the code you could probably eke out some decent performance, saving branches for when you really had to take them. In console land this will be doable, but I keep thinking of how this would work for PC game devs and I just shake my head in pity; they would go so long and be so over budget that if they didn't make the next The Sims they would all have to find new places to work.
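The kind of "heavy hinting" I am picturing is something along the lines of GCC's __builtin_expect - just as an example of the general idea; whatever the console compilers actually expose may look different:

    /* Sketch of source-level branch hinting: tell the compiler which way
       a branch almost always goes so the common path stays straight-line
       and the rare path gets laid out out-of-line. */

    #define likely(x)   __builtin_expect(!!(x), 1)
    #define unlikely(x) __builtin_expect(!!(x), 0)

    int process_packet(int len)
    {
        if (unlikely(len <= 0))   /* rare error path */
            return -1;

        /* common case falls straight through with no taken branch */
        return len * 2;           /* stand-in for the real work */
    }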
I realize that Blu-ray hasn't gone the same way as HD-DVD; I am only hoping they come up with a solution like DVD's, which let it go out to a TV but look really crappy if you tried to capture it with a VCR, etc. Let's just say that I am bitter, as none of the big screens I helped people purchase last year have HDMI, and I am going to look foolish for recommending them, even though HDMI hadn't even shown up on the mainstream radar at that point. At this point, I cannot recommend spending more than $300 on a TV without DRM-compliant input.
Code can be written in such a way as to minimize branching, but it is going to happen, especially in applications like physics and collision detection. I guess that is why devs are lucky to have another processor or processor array to feed that to, and why it is something that is easily threaded off.
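Just to sketch what "threaded off" might look like, in plain C with pthreads - the names and the work here are made up, and the real consoles would use their own threading/SPE APIs rather than this:

    #include <pthread.h>

    /* Hand a branch-heavy pass (a stand-in "collision pass" here) to
       another core while the main core keeps going.
       Build with: cc game.c -lpthread */

    struct frame_data {
        /* ...positions, velocities, etc... */
        int num_objects;
    };

    static void *collision_worker(void *arg)
    {
        struct frame_data *frame = arg;
        /* the branchy physics / collision tests would run here */
        (void)frame;
        return NULL;
    }

    void run_frame(struct frame_data *frame)
    {
        pthread_t worker;
        pthread_create(&worker, NULL, collision_worker, frame);

        /* ...main core keeps doing rendering / game logic... */

        pthread_join(worker, NULL);   /* sync up before using the results */
    }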
Economics does play in, and there is much money to be made in the console business, that is for sure. For me, it is hard to imagine how consoles manage to live for so long. It is hard for me to envision the masses flocking to their consoles and snapping up games, but then again I went from a Genesis to a decent computer and have never really looked back. Sure, I own a GameCube (just shipped it back to Nintendo yesterday, actually, to get one with digital out; today I have to order the cables) to see how 540P *edit, only 480P, sadly...* looks. I always liked the Cube's graphics, especially when it was considered the underdog of the consoles by many.
If you are really interested in learning the finer nuances of 3D then I would start with Computer Graphics: Principles and Practice (Foley and van Dam), and when you finish with that you will quickly figure out where to pick up the rest.
This isn't going to fly over my head, is it? I didn't fare well in linear, and was only in the middle of the pack in calc.

They are both still painfully fresh, though.
