
Purpose of AMD 64???

I think you're a little confused (or maybe I am).

The benefit you're saying IA64 RISC processors have over IA-32 x86 processors (and x86-64 processors) has nothing to do with bits. The benefit you're talking about is the difference between RISC and x86. There are just two things better about 64-bit CPUs than 32-bit CPUs (in the most stripped-down definition of 32- and 64-bit): 64-bit processors can natively address more memory, and they have a much larger dynamic range. That's ALL. Any other advantages you're talking about are purely related to the difference between RISC and x86, no matter how many bits wide their GPRs are.

So... if your point is that instead of extending the x86 architecture to 64 bits, AMD and Intel should have adopted a 64-bit RISC architecture... I won't argue with you about that, because a 64-bit RISC processor probably does perform better than a 64-bit x86 processor. But keep in mind... if they were to switch to IA64 you wouldn't be able to use any of your old software. I mean you could... but you'd take a HUGE performance hit to do so. Would it have been better to adapt the IA64 and Itanium to run 32-bit x86 code? Possibly... but x86-64 is here, and the transition is being made almost seamlessly. No doubt there will be some Windows bugs and driver issues to start with... but for the most part, it will be a seamless transition, and that's what consumers need. You can't come at them and tell them "if you want a new computer, you have to buy ALL new software, and you CAN'T use any of your old software."
 
Then if IA64 is better, why hasn't it taken the market by storm? Is it cost or what? Why the slow adoption in the server market?

I'm sure both Opteron and IA64 have their weaknesses, but x86-64 seems the more logical place to go.
 
Originally posted by: clarkey01
Then if IA64 is better, why hasn't it taken the market by storm? Is it cost or what? Why the slow adoption in the server market?

I'm sure both Opteron and IA64 have their weaknesses, but x86-64 seems the more logical place to go.

From everything I've heard, it's not cost effective. There are very few scenarios where IA64 is a necessity and a 32-bit x86 processor is useless.

Then again, RDRAM was miles better than DDR but it never took off. I've answered my own question, ah well ;-)

You're right... cost is an issue 98% of the time. There are some enthusiasts who absorbed the increased cost in order to get the maximum performance from their P4, and there are some companies who could justify the increased cost. But Average Joe doesn't need to spend 100% more on RAM for a 15% increase in performance.
 
Originally posted by: TheCadMan
Yes, I'm arguing RISC vs x86. I was referring to "True IA64" as being a 64-bit RISC processor.

It's a simple answer then. It's not cost effective to switch, even if it is better. The benefits just don't outweigh the expenses... which is why Intel has been rather unsuccessful at getting the Itanium into more servers.

Simply put, an Intel processor will not do the trick, because it does not have the necessary benefits of a 64-bit processor, and neither does the AMD Opteron 64.
This sentence specifically is what confused me... the Opteron IS a 64-bit processor. The most basic definition of a 64-bit processor is a processor with 64-bit-wide registers, and it definitely has 64-bit-wide registers.

I wasn't aware you were using "64-bit" and "RISC" interchangeably.
 
Originally posted by: clarkey01
So, all things on price being equal, on a level playing field, most users would opt for Itanium over AMD64?

It's not a level playing field... the Itanium can't run the software you're used to using worth a crap.
 
Yeah, I know, 32-bit emulation runs like a Celeron with a hangover.

I'm a little blind to the IBM side of things, but what's their take on it all? Where do they stand?

cheers
 
When I say IBM, I mean the direction of their CPUs. Plus, I keep hearing about a 64-bit CPU called Alpha or something. Can someone please give me an idea?

thanks
 
Originally posted by: clarkey01
Yeah, I know, 32-bit emulation runs like a Celeron with a hangover.

I'm a little blind to the IBM side of things, but what's their take on it all? Where do they stand?

cheers

I would assume they stand next to AMD seeing as how AMD and IBM worked very closely to perfect SOI for the Athlon-64.
 
Originally posted by: Acanthus
Originally posted by: Falloutboy525
Well, AMD64 is the future. Intel was planning on IA64 for desktops with Tejas, but we learned recently they're cancelling that program. My guess? Intel will rush into production an x86-64 dual-core Pentium M, while they develop something that can compete with whatever AMD has in the pipeline for the K9.

Although I have to admit a 2.5GHz dual-core Pentium M sounds mighty good to me. And considering the Pentium M is a laptop chip and they're keeping the power running through it low for mobile usage, if they beef the voltage up I bet Dothan could run at 2.5GHz now.

Tejas was NOT IA-64. It was a Pentium 4 on a 65nm process, with a 1200MHz FSB, 2MB of L2, and further improved Hyper-Threading, and it was to scale to 5GHz.

Well, according to this it was a lot more than that:
Text
 
A few points noted while scanning:

1. IA64 is the ISA for Itanium. IA-32e is the Intel version of AMD's x86-64/AMD64. Sun SPARC processors, which I think the OP is referencing when he mentions working with IA64 for years, are a 64-bit architecture but are not IA64.

2. x86-64, if I remember correctly, has a special mode for 32-bit code that uses the extra 8 GPRs added for 64-bit mode. A simple recompile should yield some improvement for 32-bit code, and more mature compilers could bring more.

3. AMD64 is not "required" for transitioning to 64-bit desktop processing. Emulation is another method, and both Apple and Intel have used that method to provide binary compatibility. AMD's choice of extending the x86 ISA is simply the easiest way, not the best, nor a prerequisite.

4. Adding 32-bit extensions to a 64-bit processor is like adding two more tires to your passenger car. If your CPU is the first of its evolutionary path, it's a universally stupid and redundant design. If your CPU is a direct evolution of a previous 32-bit version, then it already has "32-bit extensions" unless your engineers completely screwed up or pulled a fast one.

5. If I remember correctly, the K8 design is essentially a beefed-up Athlon core. The original pipeline, with 10 stages or so, is still in the K8, but tweaked. Two extra stages and a memory controller round out the major differences. Adding 64-bit functionality to the original pipeline seems to have been a simple extension of data paths and small modifications to the earlier stages. Given that scenario, the Opteron can't really help but run 32-bit code as well as 64-bit code.

6. Sun is having problems for the same reasons Apple is having problems, but magnified. Sun traditionally designs the entire computer system, from the CPU to the applications to the keyboard and monitor, as well as everything in between. They used to have the best and fastest systems, but the costs associated with development and support have slowed upgrade cycles. PCs have caught up not because they are fundamentally better but because the influx of money from millions of customers has allowed brute force to drive development.

7. Intel may try to rush a Pentium-M core with x86-64 extensions. However, I sincerely doubt those extensions are already in the Banias core, which means a minimum of about 2-3 years. Might as well just shove them into the next-generation core.

8. Relatively speaking, additional GPRs do nothing for hit rate. Hit rate is actually a figure you'd like as close to 100% as possible. More GPRs allow fewer loads and stores, which are essentially wasted clock cycles and memory transactions when an instruction block requires more registers than are available. x86 has a very small register file, which means excessive memory traffic.

9. Banias has between 14 and 20 pipeline stages. This is derived from the remark "between a Pentium 3 and a Pentium 4."

10. If anyone has a beef with long pipelines, please remember that the P5 had 5 stages, and even if the P6 had stuck to that 5-stage limit, it would never have reached 1 GHz until 90nm at the least. Extending pipelines has traditionally been a one step back, 1.5 steps forward philosophy.

11. Alpha was a Digital Equipment Corporation processor that ran extremely well when it first came out. The company was eventually bought out by Compaq, which continued development on the Alpha family. If I remember correctly, Intel eventually bought or licensed the Alpha family. AMD actually licensed the bus design of the Alpha EV6. The Alpha processor was a RISC design that did what Intel is doing now: extending the pipeline to a then-unheard-of 10 stages. It was the first design to reach 500MHz.
 
The only reason it is not cost effective is the quantity the chips are made in. If Intel made as many Itaniums as they do P4s, they would probably be comparable in price. The Itanium is a failure because its use relies on many code compilers, extensive knowledge of mathematics, and a strong programming background (Fortran is a plus). This eliminates close to 98% of mainstream computer users. SGI computers are typically best suited for DCC; they have realtime capabilities you can't even touch with a normal PC. Sun and IBM are best for MCAD and FEA. These architectures are extremely scalable. Just imagine a scalable gaming PC with multiple RISC processors and GPUs; the technology is there. I have heard Alienware is trying to bring this same technology, which has been around for years, into the mainstream market.
 
Ah, OK, x86 vs RISC (or other). It basically boils down to this: software developers who work down at the hardware level hate x86. It's very complex and contains old crap they need to deal with but would rather not. RISC and other architectures do not have these same issues.

OTOH, x86 is the architecture of choice for most Users. It has the largest Software base, Hardware base, it's very powerful, and it's cheap.

The advantages for AMD and users in going to x86-64 are that all the software made for x86-32 will function without any issue (seamlessly), and that porting that software to x86-64 is rather easy, which will cause the pool of 64-bit software to grow rather quickly for x86-64 (much more quickly than it will for other architectures). This doesn't mean that all software available on Itanium, Sun, IBM, or other corps' 64-bit architectures will also be made available on x86-64, though. Itanium and other architectures are much more highly specialized than x86, making them excellent for certain tasks but lacking for others. x86 is not specialized; it is very good for a very broad range of tasks, making it the choice of those who do a broad range of things.

I think you saw "64-bit", then looked at the price of an Opteron system, then concluded you could have your cake and eat it too. Unfortunately it doesn't work that way; certain tasks/software will always require certain hardware. That's why an Itanium costs so much and an Opteron so relatively little.
 
Originally posted by: TheCadMan
The only reason it is not cost effective is the quantity the chips are made in. If Intel made as many Itaniums as they do P4s, they would probably be comparable in price. The Itanium is a failure because its use relies on many code compilers, extensive knowledge of mathematics, and a strong programming background (Fortran is a plus). This eliminates close to 98% of mainstream computer users. SGI computers are typically best suited for DCC; they have realtime capabilities you can't even touch with a normal PC. Sun and IBM are best for MCAD and FEA. These architectures are extremely scalable. Just imagine a scalable gaming PC with multiple RISC processors and GPUs; the technology is there. I have heard Alienware is trying to bring this same technology, which has been around for years, into the mainstream market.

No, it is produced in low quantity because the market for it is too small to warrant higher production. It costs so much because the number of processors that can be sold is too small.
 
Originally posted by: clarkey01
Then again, RDRAM was miles better than DDR but it never took off. I've answered my own question, ah well ;-)

Where did you read that? RDRAM was only about 1-5% faster than DDR and cost nearly the same. The reason why it never took off was that a mobo supporting RDRAM cost way more than one that just supports DDR. Therefore, consumers found that the price to pay for an RDRAM mobo wasn't worth the load of money.
 
Originally posted by: sandorski
Originally posted by: TheCadMan
The only reason it is not cost effective is the quantity the chips are made in. If Intel made as many Itaniums as they do P4s, they would probably be comparable in price. The Itanium is a failure because its use relies on many code compilers, extensive knowledge of mathematics, and a strong programming background (Fortran is a plus). This eliminates close to 98% of mainstream computer users. SGI computers are typically best suited for DCC; they have realtime capabilities you can't even touch with a normal PC. Sun and IBM are best for MCAD and FEA. These architectures are extremely scalable. Just imagine a scalable gaming PC with multiple RISC processors and GPUs; the technology is there. I have heard Alienware is trying to bring this same technology, which has been around for years, into the mainstream market.

No, it is produced in low quantity because the market for it is too small to warrant higher production. It costs so much because the number of processors that can be sold is too small.

That, and have you ever seen the size of an Itanium? They're huge! It can't be cheap at all to manufacture, even if they were made in the volume of a P4.
 
Originally posted by: Mik3y
Originally posted by: clarkey01
Then again, RDRAM was miles better than DDR but it never took off. I've answered my own question, ah well ;-)

Where did you read that? RDRAM was only about 1-5% faster than DDR and cost nearly the same. The reason why it never took off was that a mobo supporting RDRAM cost way more than one that just supports DDR. Therefore, consumers found that the price to pay for an RDRAM mobo wasn't worth the load of money.

No no no no no no no. RDRAM cost more than twice as much as DDR back in the day. It probably still costs astronomically more today.
 
Originally posted by: SickBeast
Originally posted by: Mik3y
Originally posted by: clarkey01
Then again, RDRAM was miles better than DDR but it never took off. I've answered my own question, ah well ;-)

Where did you read that? RDRAM was only about 1-5% faster than DDR and cost nearly the same. The reason why it never took off was that a mobo supporting RDRAM cost way more than one that just supports DDR. Therefore, consumers found that the price to pay for an RDRAM mobo wasn't worth the load of money.

No no no no no no no. RDRAM cost more than twice as much as DDR back in the day. It probably still costs astronomically more today.

Much more than twice as much. I remember it being 10x as much! :Q
 
Hm... last I read at pcstats.com (which was probably a year ago) was that RDRAM now costs nearly the same as DDR; it was just that the mobo was way expensive. Well, the price probably went down because there was little performance increase over DDR. I'll try to find you guys the same article and benchmarks I saw before.
 
Originally posted by: clarkey01
Then again, RDRAM was miles better than DDR but it never took off. I've answered my own question, ah well ;-)

Actually, no. DDR is a parallel setup whereas RDRAM is serial. Having 4 DIMMs improves performance because more banks can be accessed at once; hence the creation of dual-channel memory setups.
Having 4 RIMMs installed would just kill the memory latency, because all data has to pass through all of the RIMMs.
 
I was going to point out that Sun does not use IA64 technology, but Sahakiel beat me to it.

But I do have issues with:
6. Sun is having problems for the same reasons Apple is having problems, but magnified. Sun traditionally designs the entire computer system, from the CPU to the applications to the keyboard and monitor, as well as everything in between. They used to have the best and fastest systems, but the costs associated with development and support have slowed upgrade cycles. PCs have caught up not because they are fundamentally better but because the influx of money from millions of customers has allowed brute force to drive development.

Sun's upgrade cycles are usually slower than those of x86-based PCs because there isn't as much reason to upgrade them. They're a solutions company, and when you spend $1.5 million USD on a machine, you don't want a newer one coming out the next week. 😉

But customers don't upgrade often, so Sun makes money on support contracts. I think Sun machines have a 5 year life.

unrelated:
sparc4u processors can run 32-bit applications natively. In fact, the sparc4u in the Ultra 1 defaulted to 32-bit instead of 64-bit. Apple's G5 also has no issues with 32-bit applications.
 
So essentially this thread is a rant about how one person doesn't like the fact that we didn't migrate to RISC-based processors to get our 64-bits. The market is by and large tied to x86. Get over it.
 
"Simply put, an Intel processor will not do the trick, because it does not have the necessary benefits of a 64-bit processor, and neither does the AMD Opteron 64"
Hmmm... Itanium is an Intel processor...

I guess that one of the reasons you prefer Itanium is floating-point operation speed, and the other is the capability to access large memory spaces. Well, one of them has nothing to do with the instruction architecture (the floating-point speed). The other is present on every other 64-bit processor.
As Intel embraces the x86-64 instruction architecture, applications will arrive for x86-64. But the possibility exists that by using a more expensive processor (Itanium) that might not be better overall (but certainly is better in the desired regards), the total cost is lower. When a deadline is fixed, more specific computing power can be the difference between success and failure. Opteron is not yet there (unfortunately for AMD).

Calin
 