
What is the strong point of Itanium processors?

What type of applications would an Itanium setup be better suited for than x86, or other RISC-based CPUs?

I've never really learned much about the CPU.
 
Originally posted by: Stefan
What type of applications would an Itanium setup be better suited for than x86, or other RISC-based CPUs?

I've never really learned much about the CPU.
- Top-notch FP performance
- High-performance computing
- Big iron applications
- Running applications on HP-UX or OpenVMS

Really, in the high-end market, unless you absolutely need Sun, the choice comes down to POWER5 or Itanium 2.

 
Itanium 2 is a great CPU for "big data" number crunching, as are the custom CPUs found in Cray's X1. These sorts of machines often have dozens or even hundreds of CPUs per system (not a cluster). The downside is that they're harder to program for. Because of the funky VLIW design of the Itanium 2, it's hard to debug at the assembly level, and most of the performance gains come from careful data packing and heavy use of Intel's compiler optimizations.
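The key idea behind that "funky VLIW design" is that the compiler, not the hardware, has to find the parallelism: Itanium groups instructions into fixed bundles of three, and every slot in a bundle must be provably independent. Here's a toy sketch of that scheduling problem (made-up operation tuples, not real IA-64 assembly) that greedily packs independent ops into three-slot bundles:

```python
# Toy sketch of EPIC/VLIW-style instruction bundling (NOT real IA-64).
# Itanium issues 3-instruction bundles; the compiler must prove the
# slots are independent, because the hardware will not reorder them.

def bundle(ops, slots=3):
    """Greedily pack ops into bundles of up to `slots` independent ops.

    Each op is (dest, srcs); an op conflicts with the current bundle
    if an earlier op in that bundle wrote one of its sources or its dest.
    """
    bundles, current, written = [], [], set()
    for dest, srcs in ops:
        conflict = dest in written or any(s in written for s in srcs)
        if len(current) == slots or conflict:
            bundles.append(current)           # start a fresh bundle
            current, written = [], set()
        current.append((dest, srcs))
        written.add(dest)
    if current:
        bundles.append(current)
    return bundles

ops = [
    ("r1", ["a"]),         # load a
    ("r2", ["b"]),         # load b        -> independent, same bundle
    ("r3", ["r1", "r2"]),  # add r1, r2    -> depends on both, new bundle
    ("r4", ["c"]),         # load c        -> independent, packs with add
]
for b in bundle(ops):
    print([dest for dest, _ in b])
```

A dependency chain (each op reading the previous result) degrades to one op per bundle, which is exactly why Itanium code lives or dies by how well the compiler and the data layout expose independent work.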

SGI sells a line of machines called the Altix:
http://www.sgi.com/products/servers/altix/
They run SuSE Linux and can handle 512 CPUs per system (1024 and 2048 CPUs with an experimental kernel). NASA has a 10,240-processor Altix cluster; it's actually 20 Altix systems, each with 512 CPUs.

Sun still makes good equipment, but it's middle of the road: not powerful enough to really be a supercomputer like some of the SGI, IBM, and HP gear, but also overkill for just a database server. Sun has been moving to Opteron, so they may have a future in lower-end systems. Time will tell. Sun's servers are still very sweet and powerful, but their prices are pretty high. Buying Sun equipment today is like buying a Land Rover when all you really need is a Ford Expedition.
 
Originally posted by: halfadder
Itanium 2 is a great CPU for "big data" number crunching, as are the custom CPUs found in Cray's X1. These sorts of machines often have dozens or even hundreds of CPUs per system (not a cluster). The downside is that they're harder to program for. Because of the funky VLIW design of the Itanium 2, it's hard to debug at the assembly level, and most of the performance gains come from careful data packing and heavy use of Intel's compiler optimizations.

SGI sells a line of machines called the Altix:
http://www.sgi.com/products/servers/altix/
They run SuSE Linux and can handle 512 CPUs per system (1024 and 2048 CPUs with an experimental kernel). NASA has a 10,240-processor Altix cluster; it's actually 20 Altix systems, each with 512 CPUs.

Sun still makes good equipment, but it's middle of the road: not powerful enough to really be a supercomputer like some of the SGI, IBM, and HP gear, but also overkill for just a database server. Sun has been moving to Opteron, so they may have a future in lower-end systems. Time will tell.

What exactly is "big data"? Is it really large numbers, or is it floating-point numbers that have many digits after the decimal point and are used for very precise calculations? Or a combination of both?
 
Originally posted by: Stefan
What exactly is "big data"? Is it really large numbers, or is it floating-point numbers that have many digits after the decimal point and are used for very precise calculations? Or a combination of both?
Either or both. "Big data" is sort of a nickname in the high-performance computing world. One of the examples I've seen was atmospheric/weather simulation, where each run of the simulator starts with dozens of gigabytes of data, processes it into dozens of terabytes of intermediate data, and eventually ends up with several gigabytes of final data. Each run can take hours or days, often swamping a 128-node cluster or a 64-processor supercomputer.
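The shape of a run like that (modest input, huge intermediate state, small final answer) can be sketched as a chunked pipeline. This is a toy with made-up numbers, just to show the structure; real codes stream the intermediate terabytes to disk or across cluster nodes rather than ever holding them at once:

```python
# Toy sketch of a "big data" simulation pipeline: modest input data
# fans out into a much larger intermediate stream, which is then
# reduced back down to a small final result.

def expand(chunk):
    """Simulation step: each input value fans out into several
    intermediate values (a 4x blow-up stands in for GB -> TB)."""
    for x in chunk:
        for step in range(4):
            yield x + 0.1 * step

def reduce_chunks(chunks):
    """Final analysis: collapse the whole intermediate stream into
    one small summary, processing one chunk at a time."""
    total = count = 0
    for chunk in chunks:
        for value in expand(chunk):
            total += value
            count += 1
    return total / count

chunks = [[1.0, 2.0], [3.0, 4.0]]   # stand-in for dozens of GB of input
print(reduce_chunks(chunks))        # a few numbers out of a huge stream
```

Because the intermediate values are generated and consumed lazily, peak memory stays proportional to one chunk, not to the full intermediate dataset; that is the same discipline the real codes follow, just at a vastly larger scale.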

Another example that was explained to me was virtual vehicle crash-test simulation, where a detailed model is loaded into the supercomputer (maybe a few GB of data). The model is then run through thousands of different scenarios, each with thousands of different combinations of parts and modifications. The intermediate results are often millions or billions of large datasets. These are then sorted and examined, and the output shows the best results, worst results, and various combinations and tradeoffs. These sorts of runs can take days on large supercomputers.
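That kind of sweep is structurally simple even though the compute cost is enormous: enumerate every combination of parameters, score each one, then sort and report the extremes. A toy sketch (the scoring function and part names are invented stand-ins, not real crash physics):

```python
# Toy sketch of a crash-test-style parameter sweep: score every
# combination of parameters, then sort to find best/worst. Real sweeps
# score millions of combinations with hours-long finite-element runs;
# the surrounding structure is the same.
from itertools import product

def simulate(speed, bumper_score, frame_score):
    """Stand-in damage score (lower = better); a real run would be
    a full physics simulation, not one multiply-add."""
    return speed * 0.5 + bumper_score + frame_score

speeds  = [30, 50, 70]
bumpers = {"steel": 2.0, "aluminum": 3.5}
frames  = {"rigid": 1.0, "crumple": 0.2}

# Cartesian product over all parameter combinations.
results = [
    (simulate(s, bumpers[b], frames[f]), s, b, f)
    for s, b, f in product(speeds, bumpers, frames)
]
results.sort()  # ascending damage score
print("best: ", results[0])
print("worst:", results[-1])
```

On a real machine the `product(...)` loop is what gets sliced up across hundreds of processors, since every combination can be scored independently.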
 
Another example of "big data" that is easier to visualize is running the servers for a site like Anandtech. Thousands of people post messages and read articles, and Anandtech needs powerful servers to handle that. Itanium may turn out to be good in this case.

However, Itanium may also be more than necessary, and the budget may not stretch that far, so sites like that go for price/performance-balanced servers such as Opterons or Xeons.

Remember NASA's 10,240-processor 1.5GHz Itanium 2 system? NASA said they will use it to better simulate their spacecraft designs. That requires tremendous amounts of processing power, and that's the kind of work Itanium is used for.

Itaniums are also used for CFD, or computational fluid dynamics. I don't know the specifics, but you can say they are used to simulate wind tunnels for cars, modeling the physics needed to test a design.
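Under the hood, CFD and similar simulation codes mostly boil down to updating a huge grid of values from their neighbors, over and over. A minimal 1-D diffusion kernel (heat spreading along a rod, not a real wind tunnel) shows the shape of that work:

```python
# Toy 1-D diffusion solver: the neighbor-update inner loop at the
# heart of many CFD-style codes. Real solvers run this pattern over
# billions of 3-D cells, which is where the supercomputer comes in.

def diffuse(u, alpha=0.25, steps=100):
    """Explicit finite-difference update:
    new u[i] = u[i] + alpha * (u[i-1] - 2*u[i] + u[i+1]).
    Endpoints are held fixed as boundary conditions."""
    u = list(u)
    for _ in range(steps):
        nxt = u[:]
        for i in range(1, len(u) - 1):
            nxt[i] = u[i] + alpha * (u[i - 1] - 2 * u[i] + u[i + 1])
        u = nxt
    return u

# A hot spike in the middle of a cold rod smooths out over time.
rod = [0.0] * 5 + [100.0] + [0.0] * 5
print([round(x, 1) for x in diffuse(rod)])
```

Every cell update depends only on its immediate neighbors, so the grid can be split into slabs and updated in parallel across hundreds of CPUs; inside each CPU, it is exactly the kind of regular, predictable floating-point loop that Itanium's compiler-scheduled design was built for.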
 