
Forbes: AMD is the gadfly that still bothers Intel

Intel won't let AMD go out of business; if things start looking really bad, they'll secretly prop them up so that the FTC doesn't come after them for being a monopoly.
 
From following the news, Intel seems very committed to a two-year schedule on process shrinks, and from the news stream 22 nm seems to be on track.
 
From following the news, Intel seems very committed to a two-year schedule on process shrinks, and from the news stream 22 nm seems to be on track.

and who told you this?

22nm will be delayed unless Bulldozer poses a threat to Intel.
(very unlikely)

So expect massive delays.

This was what I heard from my sponsor when I was asking when I could hope and pray for 22nm samples.

Intel won't let AMD go out of business; if things start looking really bad, they'll secretly prop them up so that the FTC doesn't come after them for being a monopoly.

Yup.. they won't do anything that will get AMD pissed off at them for unfair practices.

Which is why they're keeping Gulftown expensive.
Also why they delayed bringing 32nm to the consumer side, yet it's been out there for a while now on the enterprise side.

Intel will not do something that will piss off AMD, unless AMD pisses off Intel.
And lately AMD hasn't done anything to piss Intel off.
 
2. The article also claims progress in manufacturing processes is slowing down and AMD may catch up, but is this really true? Intel is more than a year ahead of AMD in getting to 32nm. Or maybe they are referring to other manufacturing techniques besides the process node?

Advancement in manufacturing should be more logarithmic than linear, so yes, even with less money you would expect AMD to make gains in that area over time. We see it as linear, since we are still very early into the process. Advancements should slow down, and as they do, it will be easier to catch up.

Obviously this is my opinion, and I don't want to make it seem like it is some sort of fact. It just seems obvious to me that this is an area (chip manufacturing) where we should expect slower progress as time goes on.
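To put some very rough numbers on that (my own back-of-the-envelope sketch, nothing from the article): the classic ~0.7x linear shrink per node keeps the relative density gain roughly constant, but the absolute reduction in feature size gets smaller every generation.

```python
# Back-of-the-envelope sketch (illustrative numbers only): each node shrinks
# linear dimensions by ~0.7x, so the relative (area/density) gain per node stays
# roughly constant, while the absolute nm reduction keeps getting smaller --
# which is why progress plotted on a linear scale looks like it is flattening out.
node = 350.0  # nm, roughly the mid-90s starting point
for _ in range(8):
    nxt = node * 0.7
    print(f"{node:6.1f} nm -> {nxt:6.1f} nm   (absolute shrink: {node - nxt:5.1f} nm)")
    node = nxt
```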
 
Intel will not do something that will piss off AMD, unless AMD pisses off Intel.
And lately AMD hasn't done anything to piss Intel off.

Yep, until AMD is able to get 35% of market share for 4 quarters Intel is bound by the settlement to behave. 🙂

Although being very competitive isn't forbidden by the settlement.
 
There are several reasons micro-ops are preferable to a fixed instruction set (and why we don't program in micro-ops). One of the biggest is that micro-ops can be broken into pieces and processed out of order, or pipelined more easily (the original reason for micro-ops). Beyond that, it makes it easy to update and fix bugs in the processor without having to replace the whole thing.

I think you are confusing micro-code with micro-ops. I'm not saying we should drop micro-ops... On the contrary... Micro-ops are the 'native internal format' of the CPU I was referring to. It's the x86-to-micro-op translation step that we need to drop.
On RISC machines you're programming in micro-ops, as they map 1:1 to the machine code (yes, with out-of-order execution and pipelining and the lot).
x86 has some instructions that map directly to micro-ops, others that translate to a set of micro-ops...
And in some cases, multiple x86 instructions are actually fused into a single op.
All these things could be eliminated in the hardware if you'd just do it as a preprocessing step in the compiler.
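To make that concrete, here's a toy sketch of the kind of translation the decoder does today (entirely made-up encodings, not real x86 or real micro-op formats), which could just as well be a compiler pass:

```python
# Toy sketch (hypothetical encodings, not real micro-op formats): how a CISC-style
# memory-operand instruction decomposes into simpler micro-ops, while a
# register-only instruction maps 1:1. This is the translation step that could be
# moved out of the hardware decoder and into an ahead-of-time or JIT compiler.
def decode_x86(instr):
    """Split a (made-up) x86-like instruction into RISC-like micro-ops."""
    op, dst, src = instr
    if op == "add" and dst.startswith("["):        # add [mem], reg -> load/add/store
        addr = dst.strip("[]")
        return [("load", "tmp", addr),
                ("add",  "tmp", src),
                ("store", addr, "tmp")]
    return [instr]                                  # register-only ops map 1:1

print(decode_x86(("add", "[rsi]", "rax")))   # three micro-ops
print(decode_x86(("add", "rbx", "rax")))     # one micro-op
```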

Predefined entry points. That's what an API is.

Yea, and? API functions just translate to the driver (which is why a single GPU can support multiple APIs, such as D3D9/10/11 and OpenGL).
That has little to do with the actual code the GPU executes. Nearly everything is done with programmable shaders these days... Most legacy API functions are just translated to shader code by the driver.

Besides, you're always running an OS on a CPU, which has APIs anyway, and drivers underneath, so I don't see how it would be different for a CPU. I've already mentioned at least three examples that work with dynamic (re)compilation: Java, .NET and Windows running x86 code on non-x86 hardware.

What is an instruction set if not a "universal byte code"? The principles you are advocating are those of "Let's make an instruction set, but let's not call it an instruction set."

An instruction set is what the CPU understands. It is specific to that CPU.
Universal byte code can be seen as an instruction set, but for a virtual machine. One that is a level higher than actual hardware, and is designed for optimal dynamic compilation to the underlying native instruction set, regardless of what that instruction set is.
(Much like how C/C++ code can be compiled and optimized for virtually every CPU architecture in the world).
This gives you the freedom to redefine and optimize the underlying native hardware. That is one of the reasons why GPUs have evolved so much quicker than CPUs. Only the compilers need to be rewritten, and a completely new GPU can still run all legacy software.

I think that is the ideal future for CPUs as well. But until x86 is abandoned in favour of .NET or some other universal language (an entire open-source universe built up from portable code would work as well, such as Linux/BSD), it will be difficult to make the transition.
Once the transition is made, however, CPU development should progress considerably quicker than it has in the last 20 years or so.

If you want byte code applications, then that is independent of the CPU. Hardware shouldn't be in charge of the software's job (unless it is in extremely embedded situations, where we see hardware Java bytecode interpreters). That is something that should be done at the OS level, not the hardware level.

This statement makes absolutely no sense.

Why not? It's the truth.
Microsoft's JIT compiler for x86 code was considerably more efficient than Intel's hardware implementation.
You want another statement that makes absolutely no sense?
Try googling for "HP Dynamo". A project from HP where they ran binary code (from an optimizing compiler) through a dynamic recompiler running on the same hardware. It optimized on-the-fly, resulting in performance improvements up to around 20% over running the binary code directly.

This isn't a big issue for phones as the hardware is generally fixed. But for a PC? Impossible.

Difficult, not impossible.
Microsoft has set up the windows driver development environment so that you usually don't need any CPU-specific code. Just standard C code, which can be compiled for x86, x64 and IA64.
It doesn't have to be in universal bytecode. Device drivers can still be distributed in native format. How many different architectures will we have at the same time anyway? Probably just Intel and AMD each having one architecture at a time... Perhaps refreshing that every 5-10 years.
As long as it takes little more than driver developers to recompile for a few extra architectures, I don't see any problems really.

The way MS and Apple are set up, that isn't going to happen anytime soon.

Apple is actually the worst example you could give. They started out on 68k, then moved to PowerPC, and now they are on x86. They already made the whole transition twice; they are actually pretty much showing how it should be done.
 
22nm will be delayed unless Bulldozer poses a threat to Intel.
(very unlikely)

So expect massive delays.

This was what I heard from my sponsor when I was asking when I could hope and pray for 22nm samples.

Oh yea, because AMD would be the sole competitor to Intel, riiight. It really doesn't matter whether AMD becomes competitive or not. Better process technology offers economic benefits. Design-wise, some slack is plausible, but not on process.

Every sign indicates that 22nm is on track for Intel. Actually I'd say that overall the death of process technology scaling has been greatly exaggerated. Last time, it was at 0.35µm. Now they are doing 45nm regular/32nm cutting-edge with 193nm lasers. I doubt the industry as a whole will grind to a halt either. Some people love disaster scenarios to the point that it almost seems masochistic.
 
Yea, and? API functions just translate to the driver (which is why a single GPU can support multiple APIs, such as D3D9/10/11 and OpenGL).
That has little to do with the actual code the GPU executes. Nearly everything is done with programmable shaders these days... Most legacy API functions are just translated to shader code by the driver.

Besides, you're always running an OS on a CPU, which has APIs anyway, and drivers underneath, so I don't see how it would be different for a CPU. I've already mentioned at least three examples that work with dynamic (re)compilation: Java, .NET and Windows running x86 code on non-x86 hardware.
The CPU doesn't control the .NET bytecode interpretation. The OS does. That's all I'm trying to say. If a mainstream OS moves to only allowing bytecode, then you will see the (possible) death of the x86 architecture. Until that happens, x86 is going to be around for a long time. Hardware isn't going to be developed that magically reads .NET bytecode or Java bytecode (ok, it does actually exist for the Java example. However, that kind of limits the ability of the hardware to adapt to newer versions of Java. Sort of screws up some of the benefits of bytecode in the first place).

An instruction set is what the CPU understands. It is specific to that CPU.
Universal byte code can be seen as an instruction set, but for a virtual machine. One that is a level higher than actual hardware, and is designed for optimal dynamic compilation to the underlying native instruction set, regardless of what that instruction set is.
K, you are right here.

This gives you the freedom to redefine and optimize the underlying native hardware. That is one of the reasons why GPUs have evolved so much quicker than CPUs. Only the compilers need to be rewritten, and a completely new GPU can still run all legacy software.
And wrong here. The GPU still has an instruction set. That doesn't go away. What changes is the way the instructions are delivered. That is on the software level, not the hardware level.

I think that is the ideal future for CPUs as well. But until x86 is abandoned in favour of .NET or some other universal language (an entire open-source universe built up from portable code would work as well, such as Linux/BSD), it will be difficult to make the transition.
Once the transition is made, however, CPU development should progress considerably quicker than it has in the last 20 years or so.
.NET isn't something that is going to replace instruction sets. It is something that allows easy communication between architectures. Saying it will replace x86 is somewhat silly. It won't replace it, but it may facilitate the death of the x86 architecture. You'll still have SOME sort of instruction set in the background. That is the idea of .NET or Java: recompile it to the native architecture.

Why not? It's the truth.
Microsoft's JIT compiler for x86 code was considerably more efficient than Intel's hardware implementation.
You want another statement that makes absolutely no sense?
Try googling for "HP Dynamo". A project from HP where they ran binary code (from an optimizing compiler) through a dynamic recompiler running on the same hardware. It optimized on-the-fly, resulting in performance improvements up to around 20% over running the binary code directly.
Are you saying that MS made essentially a JIT compiler to translate x86 code to IA64 code? That is what is confusing; it sounds like you are saying they made a JIT compiler to change x86 code to x86 code.

Difficult, not impossible.
Microsoft has set up the windows driver development environment so that you usually don't need any CPU-specific code. Just standard C code, which can be compiled for x86, x64 and IA64.
It doesn't have to be in universal bytecode. Device drivers can still be distributed in native format. How many different architectures will we have at the same time anyway? Probably just Intel and AMD each having one architecture at a time... Perhaps refreshing that every 5-10 years.
As long as it takes little more than driver developers to recompile for a few extra architectures, I don't see any problems really.
Ok, impossible is an exaggeration, I accept that. However, it isn't likely to happen, as there is just a boatload of legacy code and MS doesn't want to tick people off by saying "Hey, every program before this release will not work anymore. Sorry." Maybe in the Win95 days that might have been done.

As for the driver recompiling... Well, I have only dealt with WinXP driver development and not anything more recent. I know they have invested more in a HAL. I just don't know how easy things will transition.


Apple is actually the worst example you could give. They started out on 68k, then moved to PowerPC, and now they are on x86. They already made the whole transition twice; they are actually pretty much showing how it should be done.

Apple is not a good example of what you and I would like to see achieved. They are an example of "a very tight fist over the hardware your software runs on makes transitioning easier", not of how easy it is to change. Just because they made the switch more than once doesn't mean it is easy, or that their OS is magically able to run on any hardware system. For the most part, they ignored legacy and went from there.
 
It just seems obvious to me that this is an area (chip manufacturing) where we should expect slower progress as time goes on.

Yep. This is making me curious about which company will go first to 3D chip manufacturing.

Apparently stacking ICs on top of one another has certain advantages (shorter paths, etc) compared to a large flat "single layer" chip, but then apparently heat dissipation can become a problem because of the decreased surface-area-to-transistor ratio.

Large flat die with faster, but longer pathways vs small and tall 3D die with slower speeds but shorter pathways? Is that what is happening with 3D vs conventional?
 
The CPU doesn't control the .NET bytecode interpretation. The OS does. That's all I'm trying to say.

I never said otherwise, did I?

If a mainstream OS moves to only allowing bytecode, then you will see the (possible) death of the x86 architecture. Until that happens, x86 is going to be around for a long time. Hardware isn't going to be developed that magically reads .NET bytecode or Java bytecode (ok, it does actually exist for the Java example. However, that kind of limits the ability of the hardware to adapt to newer versions of Java. Sort of screws up some of the benefits of bytecode in the first place).

That's not the idea at all.
What we're currently doing is this:
We use x86 code as our 'universal' format (as in: supported on all machines), and have it translated by the CPU to its internal micro-op format.

What I'm proposing is this:
1) Instead of x86, let's design an instruction set (bytecode) that is optimized for translation to a CPU's internal format (much like how Java and .NET bytecode are designed to be easily recompilable to x86 or other CPUs).
2) Instead of having a JIT compiler translate Java/.NET/other universal bytecode to x86, and then having the CPU translate it, let's cut out the middle man. Translate the bytecode directly to the CPU's internal format in the JIT compiler, and we can do away with x86.
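Here's a toy sketch of step 2 (all formats are made up for illustration): a stack-style bytecode translated straight to simple three-address "internal" ops, with no x86 in between.

```python
# Toy sketch of the "cut out the middle man" idea (made-up bytecode and made-up
# internal op format): instead of bytecode -> x86 -> micro-ops, translate a
# stack-style bytecode straight to the simple ops a core would execute.
def jit(bytecode):
    stack, ops, tmp = [], [], 0
    for instr in bytecode:
        if instr[0] == "push":                  # push a constant onto the eval stack
            stack.append(("const", instr[1]))
        elif instr[0] == "add":                 # pop two values, emit one internal add
            b, a = stack.pop(), stack.pop()
            dst = f"t{tmp}"; tmp += 1
            ops.append(("add", dst, a, b))
            stack.append(("reg", dst))
    return ops

print(jit([("push", 2), ("push", 3), ("add",)]))
# [('add', 't0', ('const', 2), ('const', 3))]
```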

In fact, Transmeta already showed how a software approach can be interesting even for x86. They used software to translate the x86 code to their custom instruction set.
This made their CPUs extremely small and power-efficient, because a lot of the complexity was moved to software (and they could also get around the x86 licensing this way).

I'm just saying that although Transmeta's idea is interesting, it would work even better if we used an optimized bytecode rather than x86.

And wrong here. The GPU still has an instruction set. That doesn't go away. What changes is the way the instructions are delivered. That is on the software level, not the hardware level.

No, I'm not wrong.
Read some of AMD's and nVidia's documents. You'll see that their GPU architectures and instruction sets are completely different.
In OpenGL's case, you just pass the GLSL source code directly to the driver. In D3D's case, an intermediate bytecode format is used, which is passed to the driver for final compilation to the native hardware.
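For anyone who wants to see it, this is roughly what the OpenGL path looks like from the application side. A minimal sketch in Python, assuming the PyOpenGL and glfw packages and a working GL driver; the shader itself is just a trivial example.

```python
# Minimal sketch of the OpenGL path described above: GLSL source text is handed to
# the driver at runtime, and the driver compiles it for whatever GPU is installed.
# Assumes PyOpenGL and the glfw bindings are available; context setup is bare-bones.
import glfw
from OpenGL.GL import (glCreateShader, glShaderSource, glCompileShader,
                       glGetShaderiv, glGetShaderInfoLog,
                       GL_FRAGMENT_SHADER, GL_COMPILE_STATUS)

src = """
#version 120
void main() { gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); }
"""

glfw.init()
glfw.window_hint(glfw.VISIBLE, glfw.FALSE)           # an offscreen context is enough
win = glfw.create_window(64, 64, "shader", None, None)
glfw.make_context_current(win)

shader = glCreateShader(GL_FRAGMENT_SHADER)
glShaderSource(shader, src)                           # source goes to the driver...
glCompileShader(shader)                               # ...which compiles it for this GPU
print("compiled:", bool(glGetShaderiv(shader, GL_COMPILE_STATUS)))
print(glGetShaderInfoLog(shader))
glfw.terminate()
```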

.NET isn't something that is going to replace instruction sets. It is something that allows easy communication between architectures. Saying it will replace x86 is somewhat silly. It won't replace it, but it may facilitate the death of the x86 architecture. You'll still have SOME sort of instruction set in the background. That is the idea of .NET or Java: recompile it to the native architecture.

I said it needs to replace x86; that's a SPECIFIC instruction set. I want newer, better instruction sets instead. Obviously still instruction sets, but not x86.
In the software world, I want .NET bytecode to replace x86 code, so we don't need x86 compatibility in our hardware anymore.

Are you saying that MS made essentially a JIT compiler to translate x86 code to IA64 code?

Yes, that's what I said, isn't it?
You could google it, you know. Why do you want me to argue about that for 3 posts, when I clearly mentioned IA64 and XP SP2? You had all the info to look up what I was talking about (although considering how interesting you appear to find this subject, I find it rather strange that you didn't already know about it).

That is what is confusing; it sounds like you are saying they made a JIT compiler to change x86 code to x86 code.

No, but the HP Dynamo is... except it's not x86, but PA-RISC to PA-RISC.

Ok, impossible is an exaggeration, I accept that. However, it isn't likely to happen, as there is just a boatload of legacy code and MS doesn't want to tick people off by saying "Hey, every program before this release will not work anymore. Sorry." Maybe in the Win95 days that might have been done.

I know it's not going to be easy, but that doesn't mean we shouldn't want it. It's a long-term investment. Sadly most people only want short-term benefits.

As for the driver recompiling... Well, I have only dealt with WinXP driver development and not anything more recent. I know they have invested more in a HAL. I just don't know how easy things will transition.

Since drivers run in kernel mode, there is no advantage to the x86 compatibility in x64, so as far as driver development goes, the transition from x86 to x64 is no easier than it would have been from x86 to any other architecture.

Apple is not a good example of what you and I would like to see achieved. They are an example of "a very tight fist over the hardware your software runs on makes transitioning easier", not of how easy it is to change. Just because they made the switch more than once doesn't mean it is easy, or that their OS is magically able to run on any hardware system. For the most part, they ignored legacy and went from there.

I never said it was easy. You just said it was impossible, and then you wanted to point to Apple. So I pointed out that Apple already did it twice. And no, they didn't actually ignore legacy. The PowerPC Mac came with a 68k VM built-in, so most 68k software just continued working on PPC Macs.
I don't think they built a PPC emulator for the x86 Macs, but they introduced universal binaries, which also solved the problem seamlessly for the end-user. The Sysinternals tools actually do something similar for Windows. They pack an x86, x64 and IA64 binary in a single executable, and start the correct code at runtime automatically.
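The runtime-dispatch idea is simple enough to sketch (purely illustrative, with made-up file names; this is just the concept, not how the Sysinternals tools are actually implemented internally):

```python
# Illustrative sketch only (hypothetical file names): the general idea of shipping
# several native binaries together and launching the one that matches the machine
# it lands on, so the end-user never has to pick the right architecture.
import platform

BINARIES = {                      # hypothetical bundled payloads
    "AMD64": "tool_x64.exe",
    "x86":   "tool_x86.exe",
    "IA64":  "tool_ia64.exe",
}

arch = platform.machine()
binary = BINARIES.get(arch, "tool_x86.exe")   # fall back to the lowest common denominator
print(f"detected {arch}, would launch {binary}")
```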
 
In fact, Transmeta already showed how a software approach can be interesting even for x86. They used software to translate the x86 code to their custom instruction set.
This made their CPUs extremely small and power-efficient, because a lot of the complexity was moved to software (and they could also get around the x86 licensing this way).
Yeah, and Intel tried something similar with Itanium: keep the hardware simple and let the compilers handle the complicated stuff, and we all know how that turned out.

MS built their whole empire on backwards compatibility; they won't abandon that for some technical advantages, not to speak of all the consumers. What do you think most consumers would want to use.. the architecture where they can reuse all the applications they've been using for years, or that new cool thing for which only a handful of applications exist?
Yes, you could always build a compatibility mode that somehow enables you to run x86 code, but that's the first ugly compromise and would probably have a big performance penalty.. nobody buys a new computer to have their applications run as fast as the old one.

So while everyone would love to get rid of x86, I don't think that's going to happen any time soon - at least I don't see how the proposal circumvents all those problems and wouldn't end up like Itanium (let's see how long it can stay in the high-end server market between RISC on the one side and their own x86-64 processors on the other, with MS giving up on it). After all, the success of x86-64 instead of Itanium, despite all of Itanium's advantages, shows perfectly well how important backwards compatibility is.
 
Yeah, and Intel tried something similar with Itanium: keep the hardware simple and let the compilers handle the complicated stuff, and we all know how that turned out.

So? Just because Itanium wasn't successful doesn't mean that the underlying ideas aren't valid.

MS built their whole empire on backwards compatibility; they won't abandon that for some technical advantages, not to speak of all the consumers. What do you think most consumers would want to use.. the architecture where they can reuse all the applications they've been using for years, or that new cool thing for which only a handful of applications exist?

The thing is, if we move to something like .NET first, this problem is pre-empted. Java or .NET applications don't care whether you run them on x86, ARM, Itanium or whatever.
And hey look, .NET is actually developed by Microsoft!

So while everyone would love to get rid of x86, I don't think that's going to happen any time soon

I never said that I see it happening soon; I'm just saying it would be best for the long term.
I mean, you sound a bit like Captain Obvious here... we all know that the masses chose x64, not IA64. So it's not like we need to speculate on whether or not people will choose the short-term solution. We KNOW they will.
I'm just saying it's the wrong choice. We can only hope that something like .NET will make the long-term solution the people's choice.
 
Oh yea, because AMD would be the sole competitor to Intel, riiight. It really doesn't matter whether AMD becomes competitive or not. Better process technology offers economic benefits. Design-wise, some slack is plausible, but not on process.

Every sign indicates that 22nm is on track for Intel. Actually I'd say that overall the death of process technology scaling has been greatly exaggerated. Last time, it was at 0.35µm. Now they are doing 45nm regular/32nm cutting-edge with 193nm lasers. I doubt the industry as a whole will grind to a halt either. Some people love disaster scenarios to the point that it almost seems masochistic.

He's not saying that process technology isn't going to keep advancing. He's saying that Intel has no need to rush things right now... I mean really, if AMD was as competitive now as they were back in the P4 days, don't you think Intel would have 32nm quads and such, instead of arbitrarily waiting on that? Intel is holding back imho.
 
He's not saying that process technology isn't going to keep advancing. He's saying that Intel has no need to rush things right now... I mean really, if AMD was as competitive now as they were back in the P4 days, don't you think Intel would have 32nm quads and such, instead of arbitrarily waiting on that? Intel is holding back imho.

(Quoting you but not because I disagree)

Imagine Intel rushed things - imagine Intel got so much of an advantage that they could be selling i7 980 performance for $60.

AMD would go down.

But how can Intel be so good to deliver such a performance for such a price in the first place?

Because they have tremendous budgets (yes they have skills, but they probably don't have more on a relative basis - they just have so much budget they can pursue loads of different ideas, so many that a few are bound to be awesome).

But how much budget would they have if their margin was $10 on that processor instead of $990? If their margins were low enough to kill AMD, how much budget would they have for R&D?

Their innovation rate would stall even more in the long term. And since they would innovate less, why would people buy a newer CPU?

But then the R&D budgets would be so low it would be a lot easier for new players to enter the market. Entering the market today from scratch is throwing money away; it is suicide.

So yeah, Intel will only speed up/lower prices if AMD can get close enough to scare them.
 
So? Just because Itanium wasn't successful doesn't mean that the underlying ideas aren't valid.
No, it just means that the proposed solution had better learn from past mistakes if it wants to succeed. I don't say it's impossible, but backwards compatibility is a problem - though it would certainly be a good idea to get rid of that abomination of x86 (what's the largest allowed opcode size.. 15 bytes? ugh)

The thing is, if we move to something like .NET first, this problem is pre-empted. Java or .NET applications don't care whether you run them on x86, ARM, Itanium or whatever.
And hey look, .NET is actually developed by Microsoft!
Yeah but "1) Instead of x86, let's design an instructionset (bytecode) that is optimized for translation to a CPU's internal format (much like how Java and .NET bytecode are designed to be easily recompilable to x86 or other CPUs)." is something different than "lets just write everything for the JVM/CLR and only write JIT compilers for different ISAs".
Though the second option means you could replace a ISA without any problems, if you just provide the adequate JIT compiler.. that's actually something that could really work, yes - especially considering the fact that several languages besides Java can already be compiled to something useable by the JVM (not aware of any projects that do the same for the CLR).
But there are still lots of C/C++ apps on the market, but hopefully that's a shrinking number..
 
1) and 2) weren't separate options... they were a sequence of steps to be taken to get where I want to be.
Step 1) is indeed very similar to "Let's write everything in Java/.NET", but keeping the option open for designing a new-and-improved VM... after all, Java is now about 15 years old, and .NET is 10 years old. Java was the first of its kind, and .NET was basically a new-and-improved version of the JVM, based on what we learnt from Java. Perhaps it's a good idea to revisit the CLR and see if we can improve it 10 years after its original release.
After all, Java and .NET didn't exactly take the world by storm, so that may be an indication that there's still room for improvement.

Once you have such an ecosystem in place, be it Java, .NET or something new, then it needs to attain critical mass, so that x86 can be replaced by something new.
Having an x86 JIT 'just in case' for better legacy compatibility wouldn't be a bad idea. Heck, Windows 7 uses a special version of VirtualPC to support legacy XP applications. That doesn't give stellar performance either, but it's good to have for emergencies.
 
Every sign indicates that 22nm is on track for Intel. Actually I'd say that overall the death of process technology scaling has been greatly exaggerated. Last time, it was at 0.35µm. Now they are doing 45nm regular/32nm cutting-edge with 193nm lasers. I doubt the industry as a whole will grind to a halt either. Some people love disaster scenarios to the point that it almost seems masochistic.

and IDC learned the hard way how accurate my timelines are vs. the media. 😀

Do you not remember him officially saying "Hats off to Aigo"?

We won't see 22nm on schedule.
Intel would much rather be tweaking fab processes even further while they give their competitors some breathing room.
So when the competitors catch up, Intel can release a matured die at high yields and ride on that for more quarters.

What's the point in bringing out new tech when it's not needed?
You say you need it... is that really so? What you need and what's profitable to a company are completely different things.
Intel says they'd rather play with SSDs before they bring out 22nm; that is solely on Intel, not on the consumer wanting new-gen tech. (Personally I think it's a smarter move to play with SSDs for a year.)

Intel is going to "take a break" and go on SSD's.
This is what my friend told me... so you wont see 22nm because intel will be pushing SSD's for about a year.

This is what happened with 32nm.
Do you know how long 32nm was around before they dropped the ball?
Let's see, March was the last day of the NDA. I was allowed to sneak a peek at you guys in January thanks to an EOL stepping.
My first ES sample to play with, from my sponsor, was in November.
My friend's first ES sample was in October.

The media stated Gulftown would arrive in 2009.
WRONG... the NDA expired March 2010, exactly like my friend stated.

Sorry, I trust my friend more than the media.
Time and time again he has proven himself correct.

We will get Sandy Bridge.
Sandy Bridge, from what I heard, is awesome... (at stock).
Overclocking... no one will tell me with a straight face without laughing.
Which means.. overclocking is an absolute joke right now.

We won't get 22nm on CPUs, unless you're looking at ULV and LV CPUs on the enterprise side, or a new Atom.


IntelUser, I don't mean any harshness; if I sound a bit rude I apologize in advance..
 
The fact that your source is conflating the 25nm process for the IMFT joint venture with Intel's own CPU process technology tells how much he knows.

Availability of ES samples has little connection to the maturity of the process technology.

Yes, I do know the "Media" stated late 2009/early 2010 release on Gulftown. They also showed that when the 45nm dual core Nehalem parts aka Auburndale/Havendale was cancelled they pulled 32nm dual core parts forward and 32nm enthusiast parts to "little later". Is the "Media" still wrong? Dates always change.

You were talking about profitability. There's NO way they can get the initial costs of 22nm back with such niche products as LV/ULV variants (which are merely binned parts, so it doesn't make sense anyway) and the much smaller Atom.

Regarding this delay, how long are we talking about anyway? "Delay" as in 12+ months, or "delay" Arrandale/Gulftown style (cause some might say they were off their late 2009 release)?

So far, the closest competitor isn't any worse off in terms of process generation at all. With newly revamped Global Foundries/Samsung and their expansions, Intel will have to do more to keep their lead.
 
We're gonna have to wait and see how Sandy Bridge goes before a real number can be associated with it.

:T
 
You were talking about profitability. There's NO way they can get the initial costs of 22nm back with such niche products as LV/ULV variants (which are merely binned parts, so it doesn't make sense anyway) and the much smaller Atom.

So far, the closest competitor isn't any worse off in terms of process generation at all. With newly revamped Global Foundries/Samsung and their expansions, Intel will have to do more to keep their lead.

Hmmm, agree and disagree. I do think that at some point you are correct: Intel will have to do more to keep their lead. Everyone is going to have to do more and more; costs for fabs are in-friggin-sane. Like building an aircraft carrier. Pretty mind-boggling imho.

Now, as for the first part there...hmmmm... you are probably right, but I dunno. Think about this... Intel is pretty friggin smart. A ton of the growth in processor sales in the coming decade is going to be mobile devices such as phones, tablets, mp3 players, GPS, all that. I could totally see the first 22nm chips being a low-power Atom-type "ARM killer". If it was good enough, Intel could make serious money off of that. With Intel's manufacturing expertise it makes me wonder if they could, at 22nm, make a chip with an Atom-type CPU core, a couple of Larrabee-type cores for media, their SSD controller, and like 1GB of flash. CPU+GPU+storage all on one ultra-low-power chip. Just random musings, anyway. 🙂
 
AMD is a survivor. Not just hanging on though, but slowly making gains for short periods, then plateauing for a few years until it can make a market share move again. Their ATI acquisition will soon result in another shift in market share gains; they have the potential of reaching 30% within the next couple of years.

I seriously doubt that prediction, even with perfect execution between them and their partners.

You can't beat that mountain of money, especially the manufacturing advantage.
 
and who told you this?

22nm will be delayed unless Bulldozer poses a threat to Intel.
(very unlikely)

So expect massive delays.

This was what I heard from my sponsor when I was asking when I could hope and pray for 22nm samples.



Yup.. they won't do anything that will get AMD pissed off at them for unfair practices.

Which is why they're keeping Gulftown expensive.
Also why they delayed bringing 32nm to the consumer side, yet it's been out there for a while now on the enterprise side.

Intel will not do something that will piss off AMD, unless AMD pisses off Intel.
And lately AMD hasn't done anything to piss Intel off.

No one told me; it was the impression I got from reading news sites like EETimes or Tech-On.

Various conferences where Intel harps on their "silicon cadence", info on equipment manufacturers' progress, etc..
 
Intel put a CPU die and a GPU die in the same package.... but they were distinct, separate elements. AMD is combining both a CPU and a GPU into a single die, hence "Fusion".

I'll be curious to see how well their Fusion processors work for notebooks and low-end desktops.

So "fusion" is the new "Native quad Core"? 😀

We all know how that turned out...PR-fluff.
 
Various conferences where Intel harps on their "silicon cadence", info on equipment manufacturers' progress, etc..

AMD & Via could exit the x86 business today and Intel would still find themselves pursuing 22nm and 16nm at their current ravenous pace, for the simple fact that their entire business model hinges on expediting the obsolescence of the very CPUs they sold 2-4 yrs ago.

If they don't provide a psychological boost to enhance the perception that my Q6600 is obsolete, then they will find themselves in a situation where people start hanging on to those CPUs until they really do die after 10-15 yrs.

Having an effective monopoly does enable the option of reducing the rate of price reduction (which is what we have been observing), i.e. at the business balance-sheet level a monopoly enables gross margin enhancement, but in a mature market segment such as PCs and servers the volume sales come from replacement purchases, not first-time buyers.

So...unless Intel management wants to reduce Intel's revenues to roughly that of the first-time-buyer TAM, they will keep signing off on those R&D budget outlays in pursuit of obsoleting the CPUs they sold yesterday and today, so all us lemmings rush out to buy the newest widget tomorrow.
 
AMD & Via could exit the x86 business today and Intel would still find themselves pursuing 22nm and 16nm at their current ravenous pace, for the simple fact that their entire business model hinges on expediting the obsolescence of the very CPUs they sold 2-4 yrs ago.

If they don't provide a psychological boost to enhance the perception that my Q6600 is obsolete, then they will find themselves in a situation where people start hanging on to those CPUs until they really do die after 10-15 yrs.

Having an effective monopoly does enable the option of reducing the rate of price reduction (which is what we have been observing), i.e. at the business balance-sheet level a monopoly enables gross margin enhancement, but in a mature market segment such as PCs and servers the volume sales come from replacement purchases, not first-time buyers.

So...unless Intel management wants to reduce Intel's revenues to roughly that of the first-time-buyer TAM, they will keep signing off on those R&D budget outlays in pursuit of obsoleting the CPUs they sold yesterday and today, so all us lemmings rush out to buy the newest widget tomorrow.

I went from a 3.2GHz Q6600 to a 3.5GHz i7 920... you don't need "psychology" in order to appreciate the added performance; just everyday use will tell that story.
 