AMD after x86

AMDrulZ

Member
Jul 9, 2005
199
12
81
I have been thinking about what AMD will do after x86 is no more... Surely Intel will not allow AMD to make Itanium clones. However, IBM and AMD are very close; can anyone say PPC? Can you imagine AMD making PPC chips? Then it would be a battle between PPC and Intel Itanium. Which do you guys think will become the market standard?
 

F1shF4t

Golden Member
Oct 18, 2005
1,583
1
71
Lol, x86 will not be replaced anytime soon, at least not by those Itanics. And no, Intel tried to change over to Itanium so that AMD couldn't follow, which just resulted in the AMD64 x86 CPUs.
 

Stumps

Diamond Member
Jun 18, 2001
7,125
0
0
x86 will be around for many years to come... and I'm sure that if it did "disappear", AMD would come up with a design that could compete with Intel's.
 

dexvx

Diamond Member
Feb 2, 2000
3,899
0
0
Oh god, x86. So bloated with legacy crap it's not even funny. As for IA-64, it would've been opened up; I mean, technically x86 was called IA-32, so Intel would probably allow AMD to make IA-64 clones (due to cross-licensing).

IA-64 removed many of the redundancies and useless legacy crap of the aging x86. The only downside, as many people saw, was that it wasn't really backwards compatible with older systems.

That said, even x86-64 isn't completely compatible with some legacy x86 code, so they're slooooowly and painfully removing legacy support instead of making sweeping changes. Oh well.
 

Viditor

Diamond Member
Oct 25, 1999
3,290
0
0
Originally posted by: dexvx
Oh god, x86. So bloated with legacy crap it's not even funny. As for IA-64, it would've been opened up; I mean, technically x86 was called IA-32, so Intel would probably allow AMD to make IA-64 clones (due to cross-licensing).

IA-64 removed many of the redundancies and useless legacy crap of the aging x86. The only downside, as many people saw, was that it wasn't really backwards compatible with older systems.

That said, even x86-64 isn't completely compatible with some legacy x86 code, so they're slooooowly and painfully removing legacy support instead of making sweeping changes. Oh well.

I agree that IA-64 is a better architecture; the problem is that it demands much better code (and much smarter compilers) to be effective. It's really going to take a much more concerted effort from software developers, with many more man-hours per project, and I don't see that happening for a LOOOOOOOOONNG time (if ever).
 

tatteredpotato

Diamond Member
Jul 23, 2006
3,934
0
76
We're talking PCs here, not Macs; any changes will be evolutionary, like AMD64, since it makes no sense to introduce a new architecture that has no software support. I don't care how fast it is: if I can't run anything on it, the chip is useless.
 

dexvx

Diamond Member
Feb 2, 2000
3,899
0
0
Originally posted by: BlameCanada
We're talking PCs here, not Macs; any changes will be evolutionary, like AMD64, since it makes no sense to introduce a new architecture that has no software support. I don't care how fast it is: if I can't run anything on it, the chip is useless.

You really think x86-64 was revolutionary? I'm not going to say it was a cakewalk to produce, but in the ISA world it's just a minor patch. You run into the chicken-and-egg problem with hardware and software. x86 has an unusually high consumer base, and therefore unusually high support. If anything, IMO x86 is a monopoly in the computing world that's consistently held back superior ISAs and, therefore, performance. A 90nm Montecito can smack around a 65nm Woodcrest if both are operating natively in FP situations. Imagine what a 65nm Montecito could do.

People want a new ISA to get rid of legacy redundancies, and yet they are not willing to part ways with their existing solutions. You really can't have it both ways. Unfortunately for the computing world, I foresee x86 surviving a long time while the architects slowly and painfully figure out how to remove the legacy crap that's been stalling us for years.
 

Griswold

Senior member
Dec 24, 2004
630
0
0
Originally posted by: dexvx
Originally posted by: BlameCanada
We're talking PCs here, not Macs; any changes will be evolutionary, like AMD64, since it makes no sense to introduce a new architecture that has no software support. I don't care how fast it is: if I can't run anything on it, the chip is useless.

You really think x86-64 was revolutionary?

He wrote evolutionary, not revolutionary.

 

Fox5

Diamond Member
Jan 31, 2005
5,957
7
81
Originally posted by: dexvx
Oh god, x86. So bloated with legacy crap it's not even funny. As for IA-64, it would've been opened up; I mean, technically x86 was called IA-32, so Intel would probably allow AMD to make IA-64 clones (due to cross-licensing).

IA-64 removed many of the redundancies and useless legacy crap of the aging x86. The only downside, as many people saw, was that it wasn't really backwards compatible with older systems.

That said, even x86-64 isn't completely compatible with some legacy x86 code, so they're slooooowly and painfully removing legacy support instead of making sweeping changes. Oh well.

X86 - bloated, or anemic?

A 90nm Montecito can smack around a 65nm Woodcrest if both are operating natively in FP situations. Imagine what a 65nm Montecito could do.

And switch to integer tests and Woodcrest smacks around Montecito, not to mention being much, much cheaper to produce. Do you think Montecito would perform well if its cache had to be cut down to reasonable levels to make it suitable for mass consumer production?
 

sandorski

No Lifer
Oct 10, 1999
70,753
6,320
126
Originally posted by: dexvx
Originally posted by: BlameCanada
We're talking PCs here, not Macs; any changes will be evolutionary, like AMD64, since it makes no sense to introduce a new architecture that has no software support. I don't care how fast it is: if I can't run anything on it, the chip is useless.

You really think x86-64 was revolutionary? I'm not going to say it was a cakewalk to produce, but in the ISA world it's just a minor patch. You run into the chicken-and-egg problem with hardware and software. x86 has an unusually high consumer base, and therefore unusually high support. If anything, IMO x86 is a monopoly in the computing world that's consistently held back superior ISAs and, therefore, performance. A 90nm Montecito can smack around a 65nm Woodcrest if both are operating natively in FP situations. Imagine what a 65nm Montecito could do.

People want a new ISA to get rid of legacy redundancies, and yet they are not willing to part ways with their existing solutions. You really can't have it both ways. Unfortunately for the computing world, I foresee x86 surviving a long time while the architects slowly and painfully figure out how to remove the legacy crap that's been stalling us for years.

Very few "people" want a new ISA. On one hand you have software geeks complaining about it. OTOH, you have billions of people, and your whole source of income, demanding that their current software keep working fine. Who's gonna win?
 

dexvx

Diamond Member
Feb 2, 2000
3,899
0
0
Originally posted by: Fox5
X86 - bloated, or anemic?

Both.

Originally posted by: Fox5
And switch to integer tests and Woodcrest smacks around Montecito, not to mention being much, much cheaper to produce. Do you think Montecito would perform well if its cache had to be cut down to reasonable levels to make it suitable for mass consumer production?

Yes. Going from 9MB to 3MB of cache, Itanium didn't lose much in the performance department. It depends entirely on the application, and the target application for large caches is servers, where the performance differential is largest. FP-wise, I don't think it'll be anything like what the 2MB Allendale is to the 4MB Conroe. The reason it's expensive to produce, as noted, is the massive cache and the 90nm process. Regardless, you're comparing a previous generation (gee, a two-year delay on Montecito) to the new Intel NGMA.

Originally posted by: sandorski
Very few "people" want a new ISA. On one hand you have software geeks complaining about it. OTOH, you have billions of people, and your whole source of income, demanding that their current software keep working fine. Who's gonna win?

Same reason people in the USA won't go metric. FFS, you've got to take the leap at some point, otherwise you're going to end up using legacy stuff forever, far longer than you have to. In the short run it's going to hurt. In the long run it's just progress.
 

dmens

Platinum Member
Mar 18, 2005
2,275
965
136
Please give an example of x86 actually crippling performance in the targeted segment, one where there is no alternative platform. Also, as for the bloat and/or anemia, please give examples of how it makes life difficult for software engineers. Thanks.
 

AMDrulZ

Member
Jul 9, 2005
199
12
81
Pretty much what I was saying is that when x86 cannot be improved any more, I think AMD will go with a PPC architecture developed jointly with IBM, and I also think IBM will be fabricating it right alongside AMD. Remember, AMD is building a fab or two in New York state that will let AMD and IBM develop manufacturing processes faster, but will also allow easier collaboration on PPC in the future. PPC didn't catch on because Apple only sold it in low volumes, just 5% of the PC market. Now, if AMD were to take on PPC, it would by that time have at least 40% or maybe even 50% of the global PC market. Not to mention Linux already supports PPC, so Microsoft would only have to make a PPC-based Windows and there would be a reasonable x86 replacement...
 

Furen

Golden Member
Oct 21, 2004
1,567
0
0
Originally posted by: AMDrulZ
Pretty much what I was saying is that when x86 cannot be improved any more, I think AMD will go with a PPC architecture developed jointly with IBM, and I also think IBM will be fabricating it right alongside AMD. Remember, AMD is building a fab or two in New York state that will let AMD and IBM develop manufacturing processes faster, but will also allow easier collaboration on PPC in the future. PPC didn't catch on because Apple only sold it in low volumes, just 5% of the PC market. Now, if AMD were to take on PPC, it would by that time have at least 40% or maybe even 50% of the global PC market. Not to mention Linux already supports PPC, so Microsoft would only have to make a PPC-based Windows and there would be a reasonable x86 replacement...

x86 can be improved indefinitely, though the rate of improvement is somewhat slow because chips with improvements have to make up a sizable part of the market before software support comes. PPC didn't catch on because it had no software support besides Apple's. Now that Apple has stopped supporting PPC, we'll only see POWER on the high end and in consoles. AMD is not supported because it's AMD; it's supported because it is x86. Remove the x86 part and you'll have an Itanium redux without all the cash that Intel has thrown into it. Remember that Intel had something like 90% of the market when it tried to move us onto Itanium, and that didn't make much of a difference.
 

zsdersw

Lifer
Oct 29, 2003
10,505
2
0
I don't know what's coming next, but everything that's currently available (IA-64, x86-64, etc.) isn't going to be it. What's next may include elements or design philosophies of current standards, but it will be an amalgam designed to address whatever limitations and roadblocks are encountered in advancing current standards. The basic principle is that a standard isn't replaced until its advancement and enhancement cannot be continued. All of this will be tempered by the ever-annoying (from a pushing-the-envelope standpoint) demands of legacy functionality.
 

dexvx

Diamond Member
Feb 2, 2000
3,899
0
0
Originally posted by: dmens
Please give an example of x86 actually crippling performance in the targeted segment, one where there is no alternative platform. Also, as for the bloat and/or anemia, please give examples of how it makes life difficult for software engineers. Thanks.

Well, let's look briefly at the differences between x86 and x86-64:

64-bit: no MMX or 3DNow!, 16 64-bit general-purpose registers plus a core set of 16 128-bit SSE/SSE2 registers, along with the elimination of the x87 FPU.
32-bit: a collage of MMX, 3DNow!, SSE, SSE2 and the x87 FPU (all of which are tacked on, not strictly necessary, AND overlapping), with only 8 32-bit GPRs, 8 FP and 8 SIMD registers, and no core SSE/SSE2 register set.

So in order to make a fully backwards-compatible x86-64 CPU, you need to waste die space decoding all of the legacy stuff, which has very limited use. Eventually the 32-bit portion will be eliminated altogether (which will take a long time, seeing how even 16-bit legacy mode still exists), but you will still need to make processors that are fully compatible for at least the next decade.

Moreover, assembly written for 16 64-bit GPRs plus a core set of 16 128-bit SSE/SSE2 registers is quite different from your mundane 8/8/8-register 32-bit set. There will be issues translating 64-bit code backwards for your 32-bit clients, usually resulting in massive performance penalties as well as width issues. I won't even get into older MMX/3DNow!/x87 code running on a native 64-bit system and how that will translate.
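
To make the register difference concrete, here's a toy C loop (purely my own illustration, not from any real codebase; the file name regs.c and the exact code the compiler emits are assumptions that depend on your compiler version). Build it with -m32 and -m64 at -O2 and compare the assembly: the 32-bit build only has 8 GPRs and, on typical older GCC targets, defaults to the 8-deep x87 stack, while the 64-bit build gets 16 GPRs and 16 XMM registers with SSE2 as the baseline, so it spills less.

/* Toy example: a dot product with several live accumulators.
 * A plain 32-bit build (8 GPRs, x87 by default on many older
 * compilers) tends to spill more to the stack, while a 64-bit
 * build (16 GPRs, SSE2 guaranteed) can keep more values in
 * registers. Compare for yourself, e.g.:
 *   gcc -O2 -m32 -S regs.c -o regs32.s
 *   gcc -O2 -m64 -S regs.c -o regs64.s
 */
#include <stdio.h>

double dot(const double *a, const double *b, int n)
{
    /* Four independent accumulators: comfortable with 16 XMM
     * registers, tighter on the 8-deep x87 stack. */
    double s0 = 0.0, s1 = 0.0, s2 = 0.0, s3 = 0.0;
    int i;
    for (i = 0; i + 3 < n; i += 4) {
        s0 += a[i]     * b[i];
        s1 += a[i + 1] * b[i + 1];
        s2 += a[i + 2] * b[i + 2];
        s3 += a[i + 3] * b[i + 3];
    }
    for (; i < n; i++)
        s0 += a[i] * b[i];
    return s0 + s1 + s2 + s3;
}

int main(void)
{
    double a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    double b[8] = {8, 7, 6, 5, 4, 3, 2, 1};
    printf("%f\n", dot(a, b, 8));
    return 0;
}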


Edit: I would like to add that there are *some* programs that are optimized for x86-64 using the full 16 + 16 register set. These programs see upwards of 50-100% speed increases, obviously depending on what you're doing. I'm curious what the real difference is between that and IA-64, which generally runs emulated legacy 32-bit code at half the speed of native IA-64. Once more programs move to x86-64, I foresee larger performance discrepancies between it and 32-bit.
 

dmens

Platinum Member
Mar 18, 2005
2,275
965
136
Don't worry about the die space and hardware cost; it isn't even expensive at all to maintain legacy support, in the whole scope of things. Also, the whole register-space issue is bunk, because registers get renamed and support for all the register spaces is munged together anyway. It's just a little bit more logic to figure out the addressing, that is all.

What do you mean by "translate" in your last paragraph? Please elaborate.
 

dexvx

Diamond Member
Feb 2, 2000
3,899
0
0
Originally posted by: dmens
Don't worry about the die space and hardware cost; it isn't even expensive at all to maintain legacy support, in the whole scope of things. Also, the whole register-space issue is bunk, because registers get renamed and support for all the register spaces is munged together anyway. It's just a little bit more logic to figure out the addressing, that is all.

What do you mean by "translate" in your last paragraph? Please elaborate.

Die space that could be used for other resources?

The P4 has what, 128 physical registers? Yet the compiler only sees the 8 GPRs. So basically you have no control over which register does what beyond those 8, and you let the magic of the CPU try to manage a set of 128 behind what the compiler thinks is 8? From that description, it seems like a total waste not being able to manage what you actually have versus what you can do.

I meant, what happens when your legacy software is running MMX or some other outdated code on x86-64 in 64-bit native mode? Doesn't that mean a performance penalty for the decoder trying to run something that is non-native?
 

dmens

Platinum Member
Mar 18, 2005
2,275
965
136
Given the option of ripping out legacy support to regain that dinky little bit of space and running the risk of losing all backwards compatibility, the choice is obvious.

As for register management, making the physical register file non-visible is not a waste. In fact, the hardware will do a much better job than any compiler, excluding the use of predicates and static hints (which are not out of the question for x86). For basic assignments there is nothing better than dynamic renaming, which, by the way, can take into account any optimizations done in the front end, such as macro-fusion, idioms, etc.
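
If it helps, here's a toy sketch in C of what renaming buys you (purely conceptual; the table sizes, the bump allocator, and the output format are made up for illustration and look nothing like real rename hardware):

/* Toy rename table: 8 architectural registers mapped onto a larger
 * physical register file. Every write allocates a fresh physical
 * register, so instructions that reuse the same architectural name
 * no longer create false WAW/WAR dependences. Real cores also track
 * free lists, checkpoints and retirement, none of which is shown.
 */
#include <stdio.h>

#define ARCH_REGS 8
#define PHYS_REGS 128

static int rename_map[ARCH_REGS];   /* arch reg -> current phys reg */
static int next_free = ARCH_REGS;   /* naive bump allocator */

/* Rename one instruction "dst = src1 op src2" and print the mapping. */
static void rename(int dst, int src1, int src2)
{
    int p1 = rename_map[src1];          /* sources read current mappings */
    int p2 = rename_map[src2];
    int pd = next_free++ % PHYS_REGS;   /* destination gets a fresh phys reg */
    rename_map[dst] = pd;
    printf("r%d = r%d op r%d   ->   p%d = p%d op p%d\n",
           dst, src1, src2, pd, p1, p2);
}

int main(void)
{
    for (int i = 0; i < ARCH_REGS; i++)
        rename_map[i] = i;              /* start with an identity mapping */

    /* Both of these write r1, but after renaming they target different
     * physical registers, so they no longer serialize on the name. */
    rename(1, 2, 3);
    rename(1, 4, 5);
    rename(6, 1, 1);                    /* reads the newest mapping of r1 */
    return 0;
}

The point is just that two writes to the same architectural register land in different physical registers, so the name reuse the compiler is stuck with gets resolved by the hardware rather than by the 8-register ISA.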

In regard to decoder "nativity": there is no such thing as native mode; all code is treated the same. The only caveat is how much complexity is added to the front-end decoders to support new instructions. But given the importance of x86-64 performance in the near future, there is no reason to back off from an aggressive implementation.
 

dexvx

Diamond Member
Feb 2, 2000
3,899
0
0

Thanks for posting 6-month-old news.

Originally posted by: dmens
Given the option of ripping out legacy support to regain that dinky little bit of space and running the risk of losing all backwards compatibility, the choice is obvious.


I think Microsoft must've missed the memo when it comes to Vista and DirectX 10. They've made legacy audio/video kinda dead (at least without a seemingly large driver rewrite):

http://forums.creative.com/creativelabs....id=1694&view=by_date_ascending&page=1

And BTW, I didn't say strip out legacy support. Software emulation is where it's at, until a better rewrite is possible. The question is how long and how much.