dullard: While I completely agree that clustering of low-end machines (i.e., Xeon up until now) has become a new dominant market in high-performance computing, I think it's
way too early to predict the death of enterprise computing. "High-end" x86 has certainly stolen a lot of the 2- to 8-way market for workstations and small servers from high-end RISC, but clustering these systems is not a solution to everything. No offense, and with all due respect to x86's high performance at a low price, but I really don't see how you can say that you're going to get higher performance from a 2-way Xeon system than from an 8-way HP PA-8700 system...that may be true for you, but I can't say it's true in general. At the opposite end, I do most of my programming work on 1GHz P3 Linux boxes at the CS department (because of their large monitors), but ssh into the "antiquated" uniprocessor 450MHz US-II Sun boxes, which have recently offered twice the performance for some machine learning and neural network/digit recognition programs I've been writing. I know more than a few EEs who won't touch x86 for VLSI placement and routing, not only because the software isn't available for x86, but also because it can't offer the same level of floating-point performance.
After all, people have been predicting the demise of the mainframe for twenty years, and that's an even smaller niche market than enterprise computing...but last year half of IBM's revenues, amounting to around
$40 billion, came from mainframe sales and services. And the mainframe market has been
increasing by 10% per year since 1999. Even as clustering pushes higher-end enterprise systems into a smaller corner, it's a very lucrative market that will be filled. And IBM, HP, Compaq, and others are certainly able to lead the lower-end market using Xeons and Itaniums (and who knows, maybe Sledgehammer)...the Sun Fire 15K that you linked is overpriced (arguably like all Sun systems), considering that HP is building a custom McKinley-based supercomputer with nearly 20 times as many CPUs, 6.4 times as much memory, and over 400 times as much disk space for "only" 8 times the cost.
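To put those ratios in per-CPU terms, here's the back-of-the-envelope arithmetic (treating the Sun Fire 15K as the 1x baseline for CPU count and cost):

```python
# Per-CPU cost comparison from the ratios quoted above -- a rough sketch,
# using the Sun Fire 15K as the 1x baseline (assumption for illustration).
hp_cpus = 20    # HP machine has ~20x as many CPUs
hp_cost = 8     # ...for ~8x the cost

cost_per_cpu_ratio = hp_cost / hp_cpus
print(cost_per_cpu_ratio)  # 0.4
```

In other words, dollar for dollar, each CPU in the HP system costs about 40% of what a Sun CPU does, and that's before counting the memory and disk advantage.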
Regardless, even if clustering does eventually completely take over high-performance computing, it's not going to happen overnight, or even in a few years (again, consider how long mainframes have stuck around). If there's money to be made in the enterprise market in the foreseeable future, somebody is going to be there to fill it.
edit: Clustering has certainly been a godsend for the computationally-intensive environment. I do part-time work for a high-energy physics group that is designing a particle detection system for CERN's Large Hadron Collider. The collision simulation dataset tests that I run would take in excess of
2 to 4 months on a 1GHz P3 machine, and I often have 10 dataset tasks in the pipeline. We have a 100-CPU cluster (mostly 1GHz P3 Linux boxes) running a distributed computing environment. Because of the compute-intensive nature and low I/O requirements of the tasks, we get a linear speedup on our simulations.
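That linear speedup is exactly what Amdahl's law predicts when the serial (and I/O-bound) fraction of the work is essentially zero; a quick sketch with made-up serial fractions shows how sensitive the cluster's scaling is to that fraction:

```python
def amdahl_speedup(serial_fraction: float, n_cpus: int) -> float:
    """Amdahl's law: speedup on n_cpus when serial_fraction of the
    work cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_cpus)

# Nearly no serial work, like our simulations: close to linear on 100 CPUs.
print(round(amdahl_speedup(0.001, 100), 1))  # 91.0
# But even 5% serial work caps a 100-CPU cluster far below linear.
print(round(amdahl_speedup(0.05, 100), 1))   # 16.8
```

The serial fractions here are illustrative, not measured, but they make the point: our tasks scale because almost nothing in them is serialized, which is precisely what transaction-heavy enterprise workloads can't claim.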
But, correct me if I'm wrong, I don't see how clustering can deliver the same 10,000+ I/O transactions/sec that high-end enterprise and mainframe systems offer.