Apple is an epic IT bubble. Not exactly the first one of those we've had. People quickly forget after a burst.
It is also a long time since the dotcom bubble burst in 2000.
With Jobs gone, I predict that they'll be able to smoothly ride the ideas he had in the pipe for at least another 3 years. After 5 years, though, I expect it to look like MS w/ Ballmer.
Is either of those companies going to pay Intel to keep making Itanium? HP paid Intel hundreds of millions of dollars just to keep it moving.
If you can't run Oracle, Sybase, MS SQL (ha ha), etc. on Itanium, you're going to narrow the customer base incredibly quickly.
How many RAS features does the EX line really lack compared to Itanium?
That depends on how much legacy software you might need to support, and/or if you're stuck in an old mainframe ideology.
For fault tolerance in a small number of high-bandwidth servers (if the server count is going to be large anyway, this doesn't apply), when latency is also an important factor, you really need poisoning and mirroring support. The application currently has to be able to make some use of it, but several Oracle and IBM products can.
Beyond that, if you're building new, it's basically marketing. Computers are cheap. High-speed networking isn't cheap, but it's not too expensive, and software can do most of the logical redundancy work. Back when computers were expensive, hot-swapping CPUs, RAM, and cards made sense. Today, needing that is a sign you should think about a redesign, so that you can shut a whole server down when you need to, without system downtime.
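To make the "software can do most of the logical redundancy work" point concrete, here's a minimal sketch of client-side failover across a handful of cheap redundant servers, instead of relying on hot-swappable hardware in one box. The host names, port, and path are hypothetical placeholders, not anything from the post:

```python
# Minimal client-side failover sketch: try each replica in turn and return
# the first healthy response. Replica endpoints below are made up.
import http.client

REPLICAS = [("app1.example.com", 8080), ("app2.example.com", 8080)]

def fetch(path, timeout=2.0):
    last_err = None
    for host, port in REPLICAS:
        try:
            conn = http.client.HTTPConnection(host, port, timeout=timeout)
            conn.request("GET", path)
            resp = conn.getresponse()
            if resp.status == 200:
                return resp.read()          # first healthy replica wins
            last_err = RuntimeError(f"{host}: HTTP {resp.status}")
        except OSError as err:              # refused, timed out, unreachable...
            last_err = err                  # ...so fall through to the next box
    raise RuntimeError("all replicas failed") from last_err
```

The point isn't this particular snippet, just that the "what if a server dies" problem moves up into cheap, easily tested application logic, and any single box can then be powered off for service.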
Basically, there are two extremes of fault tolerance: (a) build it so it isn't likely to fail unexpectedly, or (b) assume it's going to fail, and prepare for the likely failure modes. Old big computer systems were close to (a), while communication systems have historically been closer to (b). Fault detection on buses and in the core can't be handled externally, so at a very low level there is no replacement for a quality chip (hardening, ECC everywhere, logic fault prevention where possible, etc.). Once beyond that basic level, though, it's all a matter of choosing what's more necessary, and what's more efficient. Today, computers are cheap, so redundancy by way of more servers, more PSUs, more drives, and bigger, badder SANs is generally the better way to go, except in a few cases where local bandwidth is simply the only way to get a lot of work done (i.e., when more cores and memory bandwidth > more servers).
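As a small illustration of that "ECC everywhere" base layer being the chip's job while the OS just reports on it: on a Linux box with the EDAC driver loaded, you can read the per-memory-controller corrected/uncorrected error counters from sysfs. This is a sketch under that assumption; on systems without EDAC these paths simply won't exist:

```python
# Read corrected (ce_count) and uncorrected (ue_count) ECC error counters
# exposed by the Linux EDAC driver, one pair per memory controller.
# Assumes /sys/devices/system/edac/mc exists (i.e., EDAC is loaded).
from pathlib import Path

def edac_counts(base="/sys/devices/system/edac/mc"):
    counts = {}
    for mc in sorted(Path(base).glob("mc[0-9]*")):
        ce = int((mc / "ce_count").read_text())   # errors ECC already fixed
        ue = int((mc / "ue_count").read_text())   # errors ECC could not fix
        counts[mc.name] = {"corrected": ce, "uncorrected": ue}
    return counts

if __name__ == "__main__":
    for mc, c in edac_counts().items():
        print(f"{mc}: {c['corrected']} corrected, {c['uncorrected']} uncorrected")
```

A climbing corrected count is the early warning; an uncorrected count is when the "assume it will fail, have another server ready" half of the strategy earns its keep.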