Purpose of AMD 64???


Sahakiel

Golden Member
Oct 19, 2001
1,746
0
86
Itanium is priced as high as it is because of its target market: the chips are geared towards high-end applications, and CPUs designed for that market carry high premiums. The inclusion of massive L2 and L3 caches is likewise entirely due to the target market.

Itanium's lackluster adoption has more to do with its architectural philosophy. VLIW design is a relatively young approach, something of a mix of vector and superscalar philosophies, which makes Itanium a rather unusual architecture.

If you look closely at modern x86 processors, you will find a RISC processor coupled with a translator. Most designs use hardware, but Transmeta decided to go with a software compiler.

x86, as far as I know, is only moderately decent at running a broad range of software. The only reason x86 processors perform relatively well compared to processors with different ISAs is that Intel and AMD took a brute force approach and scaled clock speeds as high as possible. Intel and AMD have long histories in semiconductor fabrication and have been able to use that experience to tweak x86 processors faster than other designs.

RDRAM is technically superior to DDR DRAM in much the same way PowerPC architectures are superior to x86. And for the same reason, DDR DRAM ends up running as well or better: more investment. Market momentum and cost are very important factors in technology, as evidenced by the demise of Betamax. The majority of end users care only about the short term, and history is rife with examples of short-sighted views leading to pitfalls later on.

I'm not too familiar with Sparc designs, but I am quite sure the PowerPC specification started out with 64-bit code. If I remember correctly, PowerPC was developed as a software specification. It gave no guidelines for the actual hardware. The implementation of 32-bit data paths in the old PowerPC chips was probably due to market forces. 64-bit code was probably very rare and/or unneeded, so software emulation would have sufficed.
 

argion

Junior Member
May 19, 2004
3
0
0
This is my first post on these forums, so please be gentle...

Although a computer engineer by degree, I don't design CPUs for a living, but I do try to keep up. I have noticed often in this thread that the IA64 vs AMD64 comparison is being equated to a RISC vs CISC (or x86) debate. The question of which one is best has been going around since these two architectures were developed. Such a debate would make sense if there were a clear distinction between these architectures within modern processors. However, with the improvements in technology the lines between RISC and CISC have blurred, and one cannot argue that an improvement of the x86 instruction set sets us back 20 years.

Here are some points:

1. IA64 is NOT a RISC processor. As someone pointed out, it utilizes EPIC/VLIW. Even though EPIC takes the best of RISC and CISC, it is not a new iteration of RISC technology but a completely new CPU architecture. Up until the first Itanium came out, architecture and instruction set simulations made use of an "ideal" CPU as their testbed, meaning a CPU that had no flaws and was 100% efficient. Unfortunately, in the real world that is not the case, and this explains some of the early pains Intel had with the processor, as what they had on paper did not translate to real-world performance. Furthermore, to maintain x86 compatibility they decided to emulate the x86 instruction set within the processor. This implementation was lacking, however, and that is why x86 applications run slower on a 1GHz Itanium than on a native x86 Intel or AMD processor. (I've put a rough sketch of the EPIC/VLIW bundling idea after this list.)

2. x86 processors these days do not resemble the CISC architecture of the past. Actually, you could probably get away with classifying them as RISC processors instead. AMD CPUs since the K5 and K6 days (after the NexGen acquisition) have used, among other things, a micro-op approach to breaking down and reassembling instructions, and those internal micro-ops look very much like the instructions of a RISC-based CPU. Basically, a very efficient x86 emulation is taking place within a RISC environment. Almost every other RISC optimization can be found in today's processors as well. Given this fact, the RISC vs CISC (x86) argument is a moot point. (There is a rough micro-op sketch after this list as well.)

3. The x86-64 implementation (AMD64) is not just a 32-bit implementation with a couple of add-ons. It is a true 64-bit processor when you look at the architecture specifics, how the registers were widened and extended, and so on. It just has the benefit of being backwards compatible, since it includes the 32-bit instruction set. No translation/emulation needs to take place between the two, and that is why we see 32-bit apps running as efficiently or more so, which is not the case on the Itanium.

4. As another poster pointed out, the K8 generation of AMD processors is not just a CPU instruction set upgrade. A few other technologies were implemented as well, such as:

Alpha Roots - Based on one of the original 64-bit processors, it actually shares common elements with that architecture. Even though Alpha has not been adopted widely, everyone agrees that it is a good design (unlike IA64).

HyperTransport - A significant improvement over previous system bus architectures. It can be used not only to interconnect elements on the motherboard, for example, but can also be integrated within the CPU itself (as seen with the Opterons).

Integrated Memory Controller - AMD needed something to counter the increased FSB capabilities Intel processors had at the time. In doing so they skipped a step, bypassed the FSB completely, and tied it all into the processor. How can this not be moving ahead of the curve when Intel itself has said that it will be going this direction with future processors of its own?

Dual Core Capability - This line of processors was designed from the beginning to go dual core. As such, all the connectors are in place and the layout and architecture of the design are optimized for such operation. Intel, on the other hand, will have to retrofit the PIII core with these capabilities.

5. What does the PIII have to do with anything? With Tejas and the successors of the P4 core out of the way, Intel will be using the Pentium M as the basis for all future designs. These processors are based on the PIII, so in a sense one could make the argument that Intel is taking us at least 4-5 years back. I believe, however, that the PIII overall was a better implementation than the P4. A lot of work has been done on the original core with the advent of the M line of processors. It was not built, however, to support 64-bit or dual-core operation. All these things will still have to be fitted around it, and this poses a whole other set of problems. Intel has a big R&D department, however, and with all its resources focused on this project I believe that in time they will come out with a good solution.
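To make point 1 a little more concrete, here is a rough sketch of what "the compiler does the scheduling" means in an EPIC/VLIW design: independent operations get packed into fixed-size bundles ahead of time, instead of the hardware hunting for parallelism at run time. This is purely illustrative Python, not a real IA64 encoder; the three-slot bundle is the only part that reflects the actual Itanium format, and the dependence check is a made-up rule.

# Toy illustration of EPIC/VLIW-style bundling (not a real IA-64 encoder).
# Each Itanium bundle holds three instruction slots; which operations can
# share a bundle is decided by the compiler, not by out-of-order hardware.

BUNDLE_SLOTS = 3

def independent(op, earlier_ops):
    """Crude dependence check: an op may not read a register that an
    earlier op in the same bundle writes (made-up rule for illustration)."""
    written = {dst for dst, _src in earlier_ops}
    return not (set(op[1]) & written)

def schedule(ops):
    """Pack (dest, sources) ops, in program order, into bundles of up to
    three ops where no op reads a result produced earlier in the bundle."""
    bundles, current = [], []
    for op in ops:
        if len(current) == BUNDLE_SLOTS or not independent(op, current):
            bundles.append(current)
            current = []
        current.append(op)
    if current:
        bundles.append(current)
    return bundles

# a = b + c; d = e + f; g = a + d  -> first two fit together, third must wait
program = [("a", ("b", "c")), ("d", ("e", "f")), ("g", ("a", "d"))]
for i, bundle in enumerate(schedule(program)):
    print("bundle", i, bundle)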
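And for point 2, a similarly rough sketch of the micro-op idea: a single "CISC-looking" x86 instruction such as add [mem], ebx gets broken into a few simple RISC-like operations inside the core. The micro-op names and string format below are invented for illustration; real decoders are obviously far more involved.

# Toy illustration of x86 -> micro-op decoding (formats are made up).
# A memory-destination x86 instruction turns into simple load / ALU / store
# steps, which is what the RISC-like core actually executes.

def decode(instr):
    """Split one simplified x86 instruction string into micro-ops."""
    op, dst, src = instr.replace(",", "").split()
    uops = []
    if dst.startswith("["):                 # destination is memory
        uops.append(f"load  tmp, {dst}")    # read the old value from memory
        uops.append(f"{op:5} tmp, {src}")   # do the ALU work on registers
        uops.append(f"store {dst}, tmp")    # write the result back
    else:                                   # register destination: one uop
        uops.append(f"{op:5} {dst}, {src}")
    return uops

for instr in ("add [counter], ebx", "add eax, ebx"):
    print(instr, "->", decode(instr))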


I know I sidetracked a bit making all these comments, but I wanted to make one more that goes beyond the technical merits of one CPU or another. In the end the consumer decides what will make it or break it in the market, and we will push back if something is being forced upon us. Intel tried to do that with RDRAM, for example, and to some extent is doing it now with their BTX case design and all the socket changes they keep making. Trying to force a standard down our throats without proving the benefits of that standard, and asking us to pay 4x or more for something that performs the same as existing technology, will get the appropriate response from the majority of users. There are always going to be the ones that want the latest and greatest regardless, but those people are a small percentage of the whole, in my opinion.

If anything, Intel did not push hard enough with the Itanium series of processors. They made it clear from the beginning that this was a high-end processor that would not trickle down to the desktop for a long time. This left a gap, and AMD took advantage of it. Not only did AMD extend the current architecture, but in doing so it came out with a solution that threatens the purpose of IA64 itself in the higher-end server market. And it did this using existing standards and at "normal" prices, and that is why its K8 line of processors is getting the enthusiastic reception it has so far.

Not only did the AMD64 processor not take us back 20 years, it brought us forward to the point where Intel now has to copy it, because Intel knows that by not doing so it cannot compete. Even Intel's move to cut the P4 line will set them back temporarily, but it will provide much more gain in the future. If they had stayed the P4 course, they would still be going back in time.

If anything, the purpose of the AMD64 is the only one that has been clear so far: to provide high-end performance at a reasonable cost to the user while at the same time creating and expanding the technology envelope. That is far more than can be said for other CPU designs out there.

-J

If you have read this essay you are more patient than I thought :). I'm always looking for constructive criticism and corrections to any technical errors I might have made.
 

klah

Diamond Member
Aug 13, 2002
7,070
1
0
Originally posted by: TheCadMan
I was just wondering if anyone here knew of any uses of the AMD 64 technology that would persuade me to spend the extra $1000 over the dual xeon system i'm considering building as well.

Where are you getting these prices?

------------------

http://techreport.com/reviews/2004q2/opteron-x50/index.x?pg=1

The benchmarks speak volumes. For single-processor systems, the Opteron 150 looks like the fastest x86 CPU on the planet. In a multiprocessor configuration, the Opteron 250 scales up very well, even without the benefit of an optimal memory configuration, a NUMA-aware OS, or 64-bit extensions.

By contrast, Intel's dual Xeons are a little bit disappointing. They perform relatively well in CPU-bound apps like 3D rendering programs, which are also largely well optimized for SSE2. But in memory-bound applications where dual Xeons ought to do well, like video encoding, the Xeons' slow bus and RAM hold them back. One has to wonder what Intel is hoping to accomplish by saddling its workstation-class processors with older, slower technology. Even a single Pentium 4 benefits greatly from additional bus and memory bandwidth. Surely a pair of Xeons on shared bus ought to have this same advantage. Intel's apparent willingness to forego such enhancements in favor of adding ever-larger on-chip caches to the Xeon is puzzling.
 

SickBeast

Lifer
Jul 21, 2000
14,377
19
81
argion:

That had to be the most interesting "first post" that I've ever read on these forums. Welcome to AT, people like you make these boards great. :)
 

aka1nas

Diamond Member
Aug 30, 2001
4,335
1
0
Excellent first post, argion.

I would like to second a previous poster's statement about the Itanium being naturally expensive to produce. I am not sure of the die sizes of the current iteration vs the current P4s, but I remember it being significantly larger, which would mean correspondingly lower yield rates and thus higher cost.
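As a rough back-of-the-envelope illustration (all numbers here are hypothetical, not actual Intel die sizes or defect densities), a simple Poisson-style yield model shows why a much larger die costs disproportionately more per good chip, not just proportionally more:

# Hypothetical numbers only: illustrates why a bigger die means fewer
# good chips per wafer, using a simple Poisson yield model.
import math

WAFER_AREA = 70_000      # mm^2, roughly a 300 mm wafer (ignoring edge loss)
DEFECTS_PER_MM2 = 0.002  # assumed defect density
WAFER_COST = 5000        # assumed cost per processed wafer, in dollars

def cost_per_good_die(die_area_mm2):
    yield_rate = math.exp(-DEFECTS_PER_MM2 * die_area_mm2)  # Poisson model
    dies_per_wafer = WAFER_AREA / die_area_mm2              # ignores edge effects
    good_dies = dies_per_wafer * yield_rate
    return WAFER_COST / good_dies, yield_rate

for name, area in (("small P4-class die", 130), ("large Itanium-class die", 400)):
    cost, y = cost_per_good_die(area)
    print(f"{name}: {area} mm^2, yield {y:.0%}, ~${cost:.0f} per good die")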
 
TheCadMan

Apr 25, 2004
58
0
0
I knew posting this would bring out a good debate. Another question: what's the difference between the Athlon FX series and the equivalent Opteron? i.e. FX-53 vs Opteron 150
 

argion

Junior Member
May 19, 2004
3
0
0
Originally posted by: TheCadMan
I knew posting this would bring out a good debate. Another question: what's the difference between the Athlon FX series and the equivalent Opteron? i.e. FX-53 vs Opteron 150

The FX line of processors is supposed to be the crème de la crème of the AMD CPU line. That being said, the Opteron 150 and the FX-53 are the same CPU, with the exception of the multiplier being unlocked on the FX.

Of course, the FX-53 has been out for some time now as the enthusiast part, while AMD was probably stocking up on 150s. I would expect an FX-55 to be coming out sometime in Q3. By all accounts it will be the first AMD CPU to come out using the 90nm SOI technology. Also, Socket 939 will be announced during Computex in June, so I would not be surprised if the FX-55 were announced then as well. If not, the 939-pin version of the FX-53 will be released.

-J
 

n0cmonkey

Elite Member
Jun 10, 2001
42,936
1
0
Originally posted by: Sahakiel
I'm not too familiar with Sparc designs,

SPARC started out as 32-bit, I believe. At least, 3 of the 5 SPARCs I have (3x sparc4m, 2x sparc4u) are 32-bit only. ;)
 

chsh1ca

Golden Member
Feb 17, 2003
1,179
0
0
Originally posted by: Sahakiel
RDRAM is technically superior to DDR DRAM in much the same way PowerPC architectures are superior to x86. And for the same reason, DDR DRAM ends up running as well or better: more investment. Market momentum and cost are very important factors in technology, as evidenced by the demise of Betamax. The majority of end users care only about the short term, and history is rife with examples of short-sighted views leading to pitfalls later on.
RDRAM has higher latency; unless your memory application relies on raw bandwidth rather than latency, DDR is technically superior. I'm not sure I'd call either solution (RDRAM or DDR) superior, just different. A 128-bit 100MHz bus moves the same amount of data in a given second as a 64-bit 200MHz bus; they are merely different approaches. The same applies to PPC vs x86.
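Just to spell the arithmetic out (using the hypothetical bus widths and clocks from my example above):

# Peak bandwidth = (bus width in bytes) x (transfers per second).
# A wide slow bus and a narrow fast bus can move the same data per second.
def peak_bandwidth_mb_s(width_bits, clock_mhz):
    return (width_bits / 8) * clock_mhz  # MB/s, assuming one transfer per clock

print(peak_bandwidth_mb_s(128, 100))  # 1600.0 MB/s
print(peak_bandwidth_mb_s(64, 200))   # 1600.0 MB/s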

Originally posted by: klah
Originally posted by: TheCadMan
I was just wondering if anyone here knew of any uses of the AMD 64 technology that would persuade me to spend the extra $1000 over the dual xeon system i'm considering building as well.
Where are you getting these prices?
His active imagination, which appears to be disconnected from reality.
 
TheCadMan

Apr 25, 2004
58
0
0
The $1000 difference was between my configurations, not between the processor prices. Since the Opteron is relatively new, the old components I already have are not compatible, so I would have to invest in new ones. That's where the price difference came from. In fact, the Opteron processors themselves are actually cheaper. Some components don't work with others; I'm pretty sure I'm not imagining that.
 

chsh1ca

Golden Member
Feb 17, 2003
1,179
0
0
What did you have that won't work on the new system? As far as I'm aware, for server boards you might have to get DDR RAM instead of RDRAM or PC-133 (assuming you had either before), but $1000 of RAM is something like 4GB of ECC here, so it should buy even more if you're in the US.
 

Acanthus

Lifer
Aug 28, 2001
19,915
2
76
ostif.org
Originally posted by: Mik3y
Originally posted by: clarkey01
Then again, RDRAM was miles better than DDR but it never took off. I've answered my own question, ah well ;-)

Where did you read that? RDRAM was only about 1-5% faster than DDR and cost nearly the same. The reason it never took off was that a mobo supporting RDRAM cost waaay more than one that just supports DDR. Therefore, consumers found that the price of an RDRAM mobo wasn't worth it.

If Rambus had continued on its original roadmap, no one would be using DDR today (on Intel platforms, anyway).
 

clarkey01

Diamond Member
Feb 4, 2004
3,419
1
0
I'm hella confused; some say RDRAM was cheaper, the same, or ten times more expensive. Gee, I got the idea DDR was more affordable and that's why RDRAM never took off.
 

aka1nas

Diamond Member
Aug 30, 2001
4,335
1
0
RD-RAM started out slightly more expensive, then DDR prices dropped like crazy and RD-RAM stayed high. Now both are expensive again.
 

sandorski

No Lifer
Oct 10, 1999
70,678
6,250
126
Originally posted by: aka1nas
RD-RAM started out slightly more expensive, then DDR prices dropped like crazy and RD-RAM stayed high. Now both are expensive again.

"slightly", not even close. When it first came out it was 10x the cost(some people paid $2k for RDram where DDR was $200 for the same amount), only within the last year or so has it even come close to "slightly" more expensive.
 

Calin

Diamond Member
Apr 9, 2001
3,112
0
0
Originally posted by: sandorski
Originally posted by: aka1nas
RD-RAM started out slightly more expensive, then DDR prices dropped like crazy and RD-RAM stayed high. Now both are expensive again.

"slightly", not even close. When it first came out it was 10x the cost(some people paid $2k for RDram where DDR was $200 for the same amount), only within the last year or so has it even come close to "slightly" more expensive.

I remember RDRAM at one point being about 900 USD for 256MB. Not "slightly more expensive" at all...

Calin
 

DrMrLordX

Lifer
Apr 27, 2000
22,706
12,663
136
One thing I don't seem to have noticed in this thread (pardon me if I'm blind) is any mention of the heat produced by the Itanium/Itanium 2 due to power consumption. From what I have heard, these processors consume a lot of power and generate a lot of heat. It's difficult to justify setting up large server farms based on Itanium 2s simply because your performance/power ratio is rather low with an Itanium solution (as compared to using clusters of cheaper, lower-power CPUs).

Intel wanted Itanium to be the driving force that would promote industry-wide adoption of IA64, and it failed to do so for reasons that have little or nothing to do with the merits/flaws of IA64. Even if x86-64 might be technically inferior, and even if IA64 might well be able to serve as the instruction set for desktop/notebook/workstation processors, we'll probably never see it in anything other than the Itanium. x86-64 is championed by the Opteron/Athlon 64, which are innovative CPUs being widely adopted on their technical merits. As has been said before in this thread, the Opteron/Athlon 64 runs 32-bit code incredibly well. It's almost as if x86-64 is a pointless afterthought (at least at this point in time).

As far as RDRAM prices are concerned, please keep in mind that the original batches of RDRAM chips available for the PC market competed with SDR DRAM, not DDR DRAM. RDRAM was sold as the RAM solution of choice for the Pentium III in such stellar chipsets as the Intel i820 and i840 (SDRAM was hobbled on these platforms due to the MTH, VIA's offerings had poor memory controller performance, and the i815 wasn't available until later in the PIII cycle). RIMMs were expensive to start with for various reasons (probably due to a shortage of manufacturers) and were not adopted rapidly. The PIII was not the right CPU to use with RDRAM. The P4 was, at least for a while. That processor was what sold RIMMs, prompted the manufacture of more RIMMs, and eventually brought RDRAM prices down.

RDRAM cannot be said to have been technically superior to SDR or DDR DRAM, simply because it was pointless except during a short period of time when it was the RAM of choice for P4s. It only achieved that distinction because the P4 was designed, from the ground up, to need the massive amount of memory bandwidth that only RDRAM could provide (at the time). Dual-channel DDR configs (Granite Bay, SiS 655) killed it by offering similar performance with P4s at a significantly lower cost and with lower latencies.

I don't think you'll find many people here who have much good to say about RDRAM, and for good reason. Damn you, Rambus, and your JEDEC shenanigans!
 

VIAN

Diamond Member
Aug 22, 2003
6,575
1
0
I think: why wouldn't you want to continue the x86 architecture? x86-64 makes so much sense to me. Use the existing x86 and put some crap on top for 64-bit while fixing previous issues that x86 had. I don't see what the problem is; I think the A64 is great.