
If AMD had never introduced a 64-bit instruction set . . .

tynopik

Diamond Member
how different would things have turned out?

Intel surely would have introduced 64-bit at some point . . . right?

I know that after AMD announced AMD64, Intel went to Microsoft about their own 64-bit instruction set, but MS told them they had already ported Windows to a 64-bit system for them (Itanium) and weren't interested in doing it again, which killed Intel's effort.

Basically,
1. how much longer would we have had to wait for 64-bit on the consumer side
2. how different would Intel's instruction set have been? Would there have been any tangible benefits (cleaner, easier to compile against, better performance)? Were there any major deficiencies in AMD's instruction set because they weren't as good as Intel/were rushed/etc?
 
My guess is Nehalem would have been IA64, leaving Penryn/Wolfdale as the legacy x86 CPU to compete with AMD. AMD would get MS to remove the 4GB limit in Vista/7, then try to convince the world that x86+AWE > IA64 until the '09 settlement, at which point AMD would get an IA64 license, and today we would be wondering whether AMD's upcoming IA64 CPU would make them competitive again. Someone at HP would be like, "we could've taken x86's spot during that transition if we hadn't killed Alpha."
 
Isn't AMD's whole Athlon core based off of Alpha tech anyway, which then turned into A64, Bulldozer, etc.?

Not AFAIR. More like the Athlon developer team (mostly ex-NexGen, I believe) included some Alpha people, and they used the Alpha's point-to-point bus etc., but not the microarchitecture. Not sure whether HyperTransport is based on that tech too.
 
Not AFAIR. More like the Athlon developer team (mostly ex-NexGen, I believe) included some Alpha people, and they used the Alpha's point-to-point bus etc., but not the microarchitecture. Not sure whether HyperTransport is based on that tech too.

Well, the top engineer from DEC who worked on the Alpha architecture was brought over when DEC went under, and he led the Athlon team with his co-workers from Alpha.

That pretty much means DEC's work continued under AMD after they were let go from DEC.

The sad part is that Intel bought most of their tech, never used any of it, and locked it all up. I'm pretty sure the Athlon 64 was what DEC would have had if they'd never gone under.


from wiki

In August 1999, AMD released the Athlon (K7) processor. Notably, the design team was led by Dirk Meyer, who had worked as a lead engineer on multiple Alpha microprocessors during his employment at DEC. Jerry Sanders had approached many of the engineering staff to work for AMD as DEC wound down their semiconductor business, and brought in a near-complete team of engineering experts
 
Without AMD, intel would have steered the pc industry into one too many Rambus type potholes and as a result there would be many more macs and many, many more ARM devices being used as primary pc's. So ironically, I think Intel has to thank AMD for helping it not get totally blindsided and washed out of its proprietary uA's entirely.
 
Ah ok, so whereas Intel never used any of that tech, AMD did for a while but since the original K7 and Athlon64, they seem to have haemorrhaged most of those people they took on. Same with the ex-ATI'ers it seems.

I know it's hard to be competitive with Intel on a tiny fraction of their R&D budget (and at times AMD did very well) but for the last few years AMD has not been doing too well. Don't think the merger was good for them or ATI.

Certainly Trinity had better pay dividends. Looking at the direction of both Bulldozer and GCN, both of which were obviously designed for Fusion (Bulldozer's modular design, GCN's focus on gCompute) and both of those decisions are hurting AMD. I'd estimate that something like 30-50% of GCN's transistor budget is focused on gCompute (depending on how much the improved tessellation costs) and yet the leaner Kepler designs will mostly outperform and outsell them while costing less to make. And Bulldozer costs more to make than Stars while performing worse.

But will even future Fusion designs be able to benefit fully from this? Because unless they can eliminate something, a Bulldozer-type module plus a GCN-type part will most likely use more transistors and have duplicated parts like x87 FPU plus gCompute.
 
how different would things have turned out?

Intel surely would have introduced 64-bit at some point . . . right?
They would have had to. Consumer software would not have put up with PAE (we had enough of that crap with 64K windows in DOS, thankyouverymuch), and IA64 was not going to cut it. Intel could very well have screwed themselves over big time if AMD had not been competitive. It's not just that MS and AMD agreed about x86_64, but that their solution to 64-bit was a good one, and it also satisfied the rest of the x86 universe, including Linux and the Unixes.
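To make the PAE pain concrete: a 32-bit process can only ever see a sliding window of memory beyond 4GB, much like the 64K windows in DOS mentioned above. A toy Python sketch of the idea (all names invented for illustration; this is not the real Win32 AWE API):

```python
# Toy model of AWE-style windowing: a 32-bit process can only map a
# small "window" of a larger-than-address-space buffer at a time.
# Everything here is illustrative, not a real API.

WINDOW_SIZE = 4  # pages visible at once (tiny, for illustration)

class WindowedBuffer:
    def __init__(self, total_pages):
        self.physical = [0] * total_pages  # pretend physical pages
        self.base = 0                      # page where the window starts

    def map_window(self, base_page):
        """Remap the window; any access outside it needs a remap first."""
        self.base = base_page

    def read(self, page):
        # The process can only touch pages currently inside the window.
        if not (self.base <= page < self.base + WINDOW_SIZE):
            raise RuntimeError("page not mapped; call map_window() first")
        return self.physical[page]

buf = WindowedBuffer(total_pages=16)   # "larger than the address space"
buf.map_window(8)
buf.physical[9] = 42
assert buf.read(9) == 42               # inside the window: OK
try:
    buf.read(0)                        # outside: must remap first
    remap_needed = False
except RuntimeError:
    remap_needed = True
assert remap_needed
```

The remap-before-every-distant-access dance is exactly what made PAE/AWE painful next to a flat 64-bit address space.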

Basically,
1. how much longer would we have had to wait for 64-bit on the consumer side
I would guess until 4-8GB was affordable (2007-8), because Intel wanted to push IA64, which nobody with any common sense wanted. That assumes, however, that (a) AMD died or gave up on x86, and (b) no one else would try to pick up the shiny pot of gold in the middle of the room that was IA32.

2. how different would Intel's instruction set have been? Would there have been any tangible benefits (cleaner, easier to compile against, better performance)? Were there any major deficiencies in AMD's instruction set because they weren't as good as Intel/were rushed/etc?
Good question. Hard to answer, without knowing what Intel had been cooking up. If Intel had, even w/o AMD competing, realized their hubris was harming them, they would have come up with something other than IA64, and likely had something else in the works, anyway.

My guess is Nehalem would have been IA64, leaving Penryn/Wolfdale as the legacy x86 CPU to compete with AMD. AMD would get MS to remove the 4GB limit in Vista/7, then try to convince the world that x86+AWE > IA64 until the '09 settlement, at which point AMD would get an IA64 license, and today we would be wondering whether AMD's upcoming IA64 CPU would make them competitive again. Someone at HP would be like, "we could've taken x86's spot during that transition if we hadn't killed Alpha."
There are just two problems with that: Itanium and Opteron.

- Too many registers: check. Too many architectural registers make renaming effectively impractical. Register renaming has been an excellent performance-enhancing feature, making good use of small register sets, since 1967. Not only that, but you usually don't even need more than a handful of IA32's GPRs, and x86_64's added GPRs take care of almost every case but hot, software-unrollable compute loops.
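To illustrate the renaming point: a toy renamer (purely illustrative, not how any real core works) that maps a small architectural register set onto a larger physical file, so only true data dependencies survive:

```python
# Toy register renamer: a small architectural set (eax..edx) is mapped
# onto a larger physical register file.  Each *write* gets a fresh
# physical register, so write-after-write and write-after-read false
# dependencies disappear; only true read-after-write links remain.

def rename(instructions, arch_regs=("eax", "ebx", "ecx", "edx")):
    mapping = {r: i for i, r in enumerate(arch_regs)}  # initial map
    next_phys = len(arch_regs)
    renamed = []
    for op, dst, srcs in instructions:
        phys_srcs = [mapping[s] for s in srcs]  # read *current* mapping
        mapping[dst] = next_phys                # fresh register per write
        renamed.append((op, mapping[dst], phys_srcs))
        next_phys += 1
    return renamed

prog = [
    ("add", "eax", ["ebx", "ecx"]),  # eax = ebx + ecx
    ("mul", "eax", ["eax", "edx"]),  # reuses eax: true dependency kept
    ("mov", "eax", ["ebx"]),         # overwrites eax: WAW hazard removed
]
out = rename(prog)
assert out[1][2][0] == out[0][1]   # mul really reads the add's result
assert out[2][1] != out[1][1]      # the final write got its own register
```

The payoff is that a handful of architectural names is enough; the hardware quietly supplies as many physical registers as the pipeline needs.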

- Software MMU: check. Software page walks are slow, and way too much time is spent in code doing what the HW should be doing instead. Alpha had this too, but Alpha had enough good things going for it to make up for it.

- Absurdly large code: check. Bigger code means worse I$ performance, all else being equal; and making a bigger I$ to compensate increases the penalty of each cache miss.
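A rough back-of-envelope on the code-size point: IA-64 packs three 41-bit instructions plus a 5-bit template into each 128-bit bundle, so even perfectly packed code costs about 5.3 bytes per instruction, versus a ballpark 3-4 bytes for average x86 (the x86 figure is an assumption, not a measurement):

```python
# Back-of-envelope comparison of code footprint per instruction.
# IA-64: 128-bit bundle = 3 x 41-bit instruction slots + 5-bit template.
ia64_bytes_per_insn = (128 / 8) / 3           # ~5.33 bytes, best case
x86_bytes_per_insn = 3.5                       # assumed typical average

# Footprint of a hypothetical 1M-instruction hot path:
insns = 1_000_000
ia64_kb = insns * ia64_bytes_per_insn / 1024
x86_kb = insns * x86_bytes_per_insn / 1024
assert ia64_kb > x86_kb * 1.5   # >50% more I$ pressure, best case
```

And that's the best case: bundle slots that can't be filled get NOPs, inflating real IA-64 code further.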

- Software scheduling of in-order pipelined superscalar execution: check. Software scheduling makes for even bigger code, with one or more profile-guided paths needed for decent performance (not much PC and server software gets PGO, unless its performance is unpredictably bad), and it necessarily creates more and longer stalls in branch-filled code than fully hardware branch prediction and OOOE do, because to handle all combinations of currently-available data you need ever more distinct code paths. Hardware data prefetching, which complements OOOE quite well, was also just starting to get good around the time IA64 was in the works.

VLIW should stay far away from general purpose CPUs. IA64 isn't VLIW in a strict sense, but the software scheduling and instruction bundling give it enough bad features of VLIW to lump it in with more pure VLIW.

- Poor IA32 performance: check. Itaniums have never had good IA32 performance, and that performance matters vastly more than native performance, even today. Do you think they could have made a quad-core x86 coprocessor that used less power than SB, delivered similar IA32 performance, and still left enough physical room and TDP for IA64? I doubt it.

The reason IA64 didn't completely fail is that HP supported it, and only it. So if you ran Alpha, PA-RISC, Tandem gear, etc., you needed to either move to Itanium or start over from scratch whenever you needed to upgrade. On the desktop, IA64 would have caused nothing but consumer backlash. Even today, IA32 performance matters. Why would you want a new ISA whose chips perform poorly without PGO, and whose x86 performance sucks? Screw that. Big bad Itanium server CPUs still have bad performance.

- Finally, why would Intel have brought out the Pentium-M, made the Core, the Core 2, Nehalem, and so on, if there had never been an Opteron? We wouldn't even have Nehalem. We don't even know if Intel would have gotten a fire lit under their asses, instead of trying to push more Netburst and IA64. The K8 was probably the best thing that could have happened to Intel in the early 00s.

We could have moved to ARM, MIPS, PPC, etc., and as long as there was good JIT x86 support in firmware, been all right. If Intel had sufficiently pushed IA64, we likely would have. A nice 64-bit PPC with JIT x86 that could run at least 2/3 the speed of native code, for instance, would have easily beaten out Itanium-like desktop chips and further Netburst derivatives (not unreasonable, especially if the ISA got extensions for JIT in general, or for x86-specific features that were slow when translated to PPC).
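A minimal sketch of the "JIT x86 in firmware" idea: translate each guest block once, cache the host version, and reuse it on every later execution (everything here is invented for illustration, with a closure standing in for generated host code; real translators like FX!32 are vastly more involved):

```python
# Toy dynamic binary translator: guest "x86-ish" ops are translated once
# per block into a host-callable closure and cached, so repeated
# execution runs the cached host code instead of re-decoding the guest.

GUEST_BLOCK = [("mov", "a", 2), ("add", "a", 3), ("mul", "a", 7)]

def translate(block):
    """Build one host function for a whole guest block (done once)."""
    def host_code(regs):
        for op, reg, imm in block:
            if op == "mov":
                regs[reg] = imm
            elif op == "add":
                regs[reg] += imm
            elif op == "mul":
                regs[reg] *= imm
        return regs
    return host_code

translation_cache = {}

def run_block(block_id, block, regs):
    if block_id not in translation_cache:          # translate once...
        translation_cache[block_id] = translate(block)
    return translation_cache[block_id](regs)       # ...then reuse

regs = run_block("entry", GUEST_BLOCK, {"a": 0})
assert regs["a"] == (2 + 3) * 7                    # mov 2, add 3, mul 7
assert "entry" in translation_cache                # cached for next time
```

The whole bet is amortization: translation cost is paid once per hot block, so steady-state speed depends only on the quality of the generated host code.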
 
lol @ ia64 nehalem. "EPIC" fail

also

Finally, why would Intel have brought out the Pentium-M, made the Core, the Core 2, Nehalem, and so on, if there had never been an Opteron? We wouldn't even have Nehalem. We don't even know if Intel would have gotten a fire lit under their asses, instead of trying to push more Netburst and IA64. The K8 was probably the best thing that could have happened to Intel in the early 00s.

I don't mean to quibble, but why does everyone think integrating the memory controller was AMD's gift to the world rather than a simple inevitability of VLSI industrial philosophy? If the K8 wins a Nobel Prize, it should be for keeping a sane pipeline depth rather than trying to play Intel's outrageous game, which must have been tempting at first. Once it turned out to be a goddamn rabbit hole, they were probably feeling pretty great.

Of course, this also was around the time that laptop shipments eclipsed desktop shipments and the role of pipeline depth in power efficiency became quite clear. Intel would've eventually changed their course on their own, with or without K8, but it probably would've been a node later and without any of the amusement. Netburst existed at 65nm and the physics lessons were clear. Not a laptop chip.
 
I don't mean to quibble, but why does everyone think integrating the memory controller was AMD's gift to the world rather than a simple inevitability of VLSI industrial philosophy?
I dunno. You should probably quote a message by someone mentioning the IMC, if you want an answer 😛.
 
You said we would have no "Nehalem" without K8, and I interpreted that to be a wide, integrated device from intel inspired by AMD.

Prior to that you were discussing instruction sets. Not much difference between Nehalem and Cedar Mill in that regard.
 
You said we would have no "Nehalem" without K8, and I interpreted that to be a wide, integrated device from intel inspired by AMD.
No. It was a CPU that was far superior to any Netburst-series CPU, especially in servers, had the RAS and corporate backing to get into servers, was perceived as good enough that AMD was supply-constrained for quite some time, and was the first commercial hardware implementation of a [mostly] backwards-compatible 64-bit extension to IA32, x86-64. Adding an IMC to shore up a weak spot in their performance, while Intel had no high-performance IMC plans, was a nice coup for AMD, but the feature itself was an inevitability.

Without AMD coming out with a product that was obviously far more desirable, Intel would have had no reason to get its head out of the sand. They had a shake-up, changed management around, agreed that x86 would rule the world (again), and moved to an aggressive incremental development plan (tick-tock) that helps keep them from getting stuck in a rut. If the K8 had stayed vaporware (it was awfully late), x86-64 would have been just another interesting specification. If the K8 had come out but sucked, who knows.

Prior to that you were discussing instruction sets. Not much difference between Nehalem and Cedar Mill in that regard.
Not just the instruction sets, but also the implementations. I'd rather run a program compiled for an i386 on a Core i than on a P4. The K8's technical and commercial success really showed Intel that they needed to change direction; following demand, instead of trying to create it, was one of the major changes.
 
Was there really any intention to abandon x86 for IA64 in the general space, and did they make or unmake any plans that would suggest it?

Whether K8 was slow or fast, they had the elegant solution, and I think someone would have eventually gotten there if AMD hadn't. When you say "just another interesting spec," you make it sound like Intel was deliberately looking for something that wasn't backwards-compatible with x86, and later changed their minds when AMD tipped their hand. Craig said (at IDF 2004) they were working on em64t for "years," and when you take that into account along with K8's tardiness, some big decisions had to have been made before they ever saw the real-world performance of K8.

When you say their head was in the sand, are you referring to instruction set extensions or architectures? They didn't truly and permanently pull their heads out until they realized Tejas was going to need 200 watts and still be slower than Dothan.
 
Was there really any intention to abandon x86 for IA64 in the general space, and did they make or unmake any plans that would suggest it?
From the very beginning IA64 was their eventual goal for all computing. This was back in 97 or so. They were planning on having it enter the consumer market in 6-8 years as a technology that they didn't have to share with any company.
Whether K8 was slow or fast, they had the elegant solution, and I think someone would have eventually gotten there if AMD hadn't. When you say "just another interesting spec," you make it sound like Intel was deliberately looking for something that wasn't backwards-compatible with x86, and later changed their minds when AMD tipped their hand. Craig said they were working on em64t for "years," and when you take that into account along with K8's tardiness, some big decisions had to have been made before they saw the real-world performance of K8.
The big point is that it was 100% compatible, a direct translation of AMD64. Whether it was pushed by Microsoft or just part of the cross-licensing agreement with AMD, it was AMD that developed it; Intel was never interested in developing it themselves. But if the Opteron hadn't been a quick chip and Intel hadn't needed EM64T, at least as a checkbox feature to compete with it, they wouldn't have introduced it, and it would have remained a disabled feature on the next couple of processors before being cleared out of the design. Heck, Microsoft wasn't even sold on it; they "delayed" XP-64 until almost the release of the P4 with EM64T. Both would have been happy if the Opteron had been a failure. Microsoft didn't want to worry about a true consumer 64-bit OS until Vista came out.
When you say their head was in the sand, are you referring to instruction set extensions or architectures? They didn't truly and permanently pull their heads out until they realized Tejas was going to need 200 watts and still be slower than Dothan.
Probably both. AMD getting the agreement with Microsoft and the success of the Athlon meant Intel had to cover their bases, and AMD's ability to clock the Opteron/Athlon 64 up to decent speeds, along with other issues (Itanium not picking up in servers, the trouble keeping a competitive 2U-compatible CPU), made them make a move. Really, they are very lucky they gave up on trying to make Netburst a laptop CPU, or they would have been in real trouble. Tejas was a last-ditch effort to keep Netburst viable, but it had really been in development for years. What Intel was worried about was not having a viable performance CPU between Prescott and Conroe. It's not that they wanted Tejas to work and to continue using Netburst; it's that they felt they had to. Really, they did: their worst period for giving up market share to AMD was during that time.

Northwood and Prescott would have had substantial leads on the K7. If the K8 had struggled, Intel would have killed EM64T, probably would have just continued to release Dothan and its future siblings as mobile Pentiums, delayed or sat on Tejas until they could make it viable, and probably spent most of their development time trying to figure out how to raise clocks on Netburst.
 
Was there really any intention to abandon x86 for IA64 in the general space, and did they make or unmake any plans that would suggest it?
Some higher-ups at Intel (and HP) clearly did believe that tried-and-true x86 and RISC were headed for a dead end, and they at least put on a face like the whole company simply knew it to be true. After IA64 had been in servers for a while, it would come to the desktop, with no pesky AMD/IBM/VIA/NatSemi competition. I seriously doubt such views permeated the company, though. EPIC was too obviously a pipe dream, even if the 1-IPC brick wall had been a reality, but quite a few bright minds said it must come true, and believed it would.

"The thought was it would cascade down to low-end computers because it was believed the 32-bit x86 architecture would run out of gas. (...) The x86 Xeon servers and Opteron grew up faster into the server space than anyone thought they would. Itanium is profitable today. It’s growing market share in big iron architecture. But it certainly didn’t move down into the PC market as we had anticipated in the early 1990s."
http://venturebeat.com/2009/05/08/e...barrett-on-the-industrys-unfinished-business/

"After publicly denying for years that it had any plans to offer 64-bit extensions for its 32-bit processors, Intel made an about-face at IDF (the Intel Developer Forum) in San Francisco last month."
http://www.processor.com/editorial/article.asp?article=articles/p2610/38p10/38p10.asp

Finding forward-looking affirmations from that long ago is tough. Doubly so with the management-speak, where everything 64-bit is Itanium and everything 32-bit is x86, with no mixing.

When you say their head was in the sand, are you referring to instruction set extensions or architectures? They didn't truly and permanently pull their heads out until they realized Tejas was going to need 200 watts and still be slower than Dothan.
Both. Intel hasn't made many ISA extensions not directly tied to the architecture that will run them. They are two things that can be decoupled, but generally aren't, when it comes to Intel.

Craig said (at IDF 2004) they were working on em64t for "years," and when you take that into account along with K8's tardiness, some big decisions had to have been made before they ever saw the real-world performance of K8.
AMD announced it in 1999, and the full details were out about three years prior to the Opteron. I suspect Intel had something in the works ever since then, which would definitely qualify as, "years," and certainly would be enough time to implement it.
 
Ok. But they did implement x64 long before they abandoned Netburst... and it was well within their power to introduce Yonah, a faster 32-bit device, on socket 775, rather than Cedar Mill, a slower, high-TDP 64-bit device. Yet they chose Cedar Mill... a 65-95 watt Yonah would've been plenty fast.
 
I believe Itanium was the planned successor, and the limitations of 32-bit were going to help them move everybody over.

Kinda wish it had happened that way, then they could have spent years and years of PhD time figuring out how to optimize compilers for superscalar architectures. They never really could figure out how to make it superscalar except in a subset of application profiles...wish we had seen resolution of that.
 
It has certainly been an interesting 15-ish years. I remember when Intel put the kibosh on non-Intel CPUs on Intel chipsets (post-Socket 7) and all the drama about how that was going to kill off all competition.

AMD decided to extend Socket 7 to Super Socket 7, and from that desperate move to survive it went from being a lowly also-ran to a major player and innovator in the x86 world. In retrospect, I'm not entirely convinced that Intel meant to kill off the competition; perhaps it was helping it survive. Meh, I kinda doubt that's the case. Regardless, that's how it turned out.
 
Heck, Microsoft wasn't even sold on it; they "delayed" XP-64 until almost the release of the P4 with EM64T.

I remember that; it was a very sleazy move by Microsoft to intentionally hold up the release of software that could have benefited Intel's competitor in the market until Intel was ready to release their own 64-bit x86 silicon.
 
It has certainly been an interesting 15-ish years. I remember when Intel put the kibosh on non-Intel CPUs on Intel chipsets (post-Socket 7) and all the drama about how that was going to kill off all competition.

AMD decided to extend Socket 7 to Super Socket 7, and from that desperate move to survive it went from being a lowly also-ran to a major player and innovator in the x86 world. In retrospect, I'm not entirely convinced that Intel meant to kill off the competition; perhaps it was helping it survive. Meh, I kinda doubt that's the case. Regardless, that's how it turned out.

So you're saying Intel is like the god emperor of dune, keeping humanity down for 1500 years in order to save it?
 
Without AMD, intel would have steered the pc industry into one too many Rambus type potholes and as a result there would be many more macs and many, many more ARM devices being used as primary pc's. So ironically, I think Intel has to thank AMD for helping it not get totally blindsided and washed out of its proprietary uA's entirely.

I agree. I think AMD helped keep Intel from getting too comfortable.
 
The article cerb linked about Craig was pretty inspiring, and Intel's relationship with human development is quite measurable.

Their relationship with AMD is more ironic.
 
So you're saying Intel is like the god emperor of dune, keeping humanity down for 1500 years in order to save it?

I think that you are underestimating Leto II's reign. Could have sworn that it was more like 5k years.

Wikipedia answers the question. 3509 years. Man that would be awesome.
 
The question being about the AMD64 instruction set, and not K8 as a whole...

Sometime after Core 2, Intel would have developed a consumer-oriented Itanium, with Microsoft's support, that would have existed in parallel with x86 for a while, relegating the latter to the lower end of the market, where it would compete with AMD's gradually better products, which, although better than Intel's x86 competition, never gained sufficient traction. Eventually Intel would have managed to shift the market toward IA64, forcing AMD to focus on graphics and mobile parts. Meanwhile the desktop market would start seeing the benefits of leaving x86 behind, with unforeseen performance at the cost of higher prices due to Intel's monopoly, a monopoly the authorities would end, forcing Intel to license IA64 to a competitor which, to everyone's surprise, would be Qualcomm instead of AMD. At that point an asteroid would be on a trajectory toward Earth, leaving our future in the hands of an oil rig engineer and his team. I'm sorry, but I really felt the need to add this.

(verbs are probably all wrong but I ain't English and cba proof reading...)
 