From following the news, Intel seems very committed to a two-year schedule on process shrinks, and from the news stream 22nm seems to be on track.
Intel won't let AMD go out of business; if things start looking really bad, they'll quietly prop them up so that the FTC doesn't come after them for being a monopoly.
2. The article also claims progress in manufacturing processes is slowing down and AMD may catch up, but is this really true? Intel is more than a year ahead of AMD in getting to 32nm. Or maybe they are referring to other manufacturing techniques besides the process node?
Intel will not do something that will piss off AMD, unless AMD pisses off Intel.
And lately AMD hasn't done anything to piss Intel off.
There are several reasons micro-ops are preferable to a hard-set external instruction set (and why we don't program in micro-ops directly). One of the biggest is that instructions broken down into micro-ops can be processed out of order, or pipelined more easily (the original reason for micro-ops). Beyond that, it makes it easier to update and fix bugs in the processor without having to replace the whole thing.
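To make that concrete, here is a toy sketch in Java of the decomposition idea (the opcode names, class name and structure are invented for illustration; a real decoder works nothing like a list of enum values): one architectural instruction with a memory operand gets split into three small micro-ops that a core can then pipeline or reorder independently.

```java
import java.util.List;

// Toy sketch of instruction -> micro-op decomposition.
// A single memory-operand "ADD [addr], reg" is split into simpler
// micro-ops that a scheduler could handle independently.
public class MicroOpDemo {
    enum MicroOp { LOAD_TEMP_FROM_MEM, ADD_REG_TO_TEMP, STORE_TEMP_TO_MEM }

    // Decode one complex architectural instruction into micro-ops.
    static List<MicroOp> decodeAddMemReg() {
        return List.of(
            MicroOp.LOAD_TEMP_FROM_MEM,   // read the memory operand
            MicroOp.ADD_REG_TO_TEMP,      // do the actual arithmetic
            MicroOp.STORE_TEMP_TO_MEM     // write the result back
        );
    }

    public static void main(String[] args) {
        // The scheduler sees three small steps rather than one monolithic
        // instruction, which is what enables pipelining and out-of-order
        // execution in the first place.
        decodeAddMemReg().forEach(System.out::println);
    }
}
```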
Predefined entry points. That's what an API is.
What is an instruction set if not a "universal bytecode"? The principles you are advocating amount to "Let's make an instruction set, but let's not call it an instruction set."
This statement makes absolutely no sense.
This isn't a big issue for phones as the hardware is generally fixed. But for a PC? Impossible.
The way MS and Apple are set up, that isn't going to happen anytime soon.
22nm will be delayed unless Bulldozer poses a threat to Intel.
(Very unlikely.)
So expect massive delays.
This is what I heard from my sponsor when I asked when I could hope and pray for 22nm samples.
The CPU doesn't control the .NET bytecode interpretation. The OS does. That's all I'm trying to say. If a mainstream OS moves to only allowing bytecode, then you will see the (possible) death of the x86 architecture. Until that happens, x86 is going to be around for a long time. Hardware isn't going to be developed that magically reads .NET bytecode or Java bytecode (OK, such hardware does actually exist in the Java case, but it limits the hardware's ability to adapt to newer versions of Java, which defeats some of the benefits of bytecode in the first place).

Yeah, and? API functions just translate to the driver (which is why a single GPU can support multiple APIs, such as D3D9/10/11 and OpenGL).
That has little to do with the actual code the GPU executes. Nearly everything is done with programmable shaders these days... Most legacy API functions are just translated to shader code by the driver.
Besides, you're always running an OS on a CPU, which has APIs and underlying drivers anyway, so I don't see how it would be different for a CPU. I've already mentioned at least three examples that work with dynamic (re)compilation: Java, .NET, and Windows running x86 code on non-x86 hardware.
K, you are right here. An instruction set is what the CPU understands. It is specific to that CPU.
Universal bytecode can be seen as an instruction set, but for a virtual machine: one that is a level higher than the actual hardware, and is designed for optimal dynamic compilation to the underlying native instruction set, regardless of what that instruction set is.
And wrong here. The GPU still has an instruction set. That doesn't go away. What changes is the way the instructions are delivered. That is on the software level, not the hardware level.

This gives you the freedom to redefine and optimize the underlying native hardware. That is one of the reasons why GPUs have evolved so much more quickly than CPUs: only the compilers need to be rewritten, and a completely new GPU can still run all legacy code.
.NET isn't something that is going to replace instruction sets. It is something that allows easy communication between architectures. Saying it will replace x86 is somewhat silly. It won't replace it, but it may facilitate the death of the x86 architecture. You'll still have SOME sort of instruction set in the background. That is the idea of .NET or Java: recompile it to the native architecture.

I think that is the ideal future for CPUs as well. But until x86 is abandoned in favour of .NET or some other universal language (an entire open-source universe built up from portable code would work as well, such as Linux/BSD), it will be difficult to make the transition.
Once the transition is made, however, CPU development should progress considerably more quickly than it has in the last 20 years or so.
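To illustrate what a "universal bytecode" one level above the hardware looks like, here is a minimal sketch in Java (the opcodes, encoding and class name are all made up for the example; this is not any real VM's format). The program is written against the virtual instruction set; whether the host underneath is x86, ARM or anything else only matters to the interpreter or JIT, not to the program.

```java
// Minimal sketch of a virtual "universal bytecode": a stack machine
// whose instructions know nothing about the underlying CPU.
// Opcodes and encoding are invented for illustration only.
public class TinyVM {
    // Hypothetical opcodes for the virtual instruction set.
    static final int PUSH = 0, ADD = 1, MUL = 2, PRINT = 3, HALT = 4;

    static void run(int[] program) {
        int[] stack = new int[64];
        int sp = 0;          // stack pointer
        int pc = 0;          // program counter
        while (true) {
            int op = program[pc++];
            switch (op) {
                case PUSH:  stack[sp++] = program[pc++]; break;
                case ADD:   stack[sp - 2] += stack[sp - 1]; sp--; break;
                case MUL:   stack[sp - 2] *= stack[sp - 1]; sp--; break;
                case PRINT: System.out.println(stack[sp - 1]); break;
                case HALT:  return;
            }
        }
    }

    public static void main(String[] args) {
        // (2 + 3) * 4 expressed in the virtual ISA; the same program
        // runs unchanged on any host the interpreter runs on.
        run(new int[] { PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL, PRINT, HALT });
    }
}
```

A real implementation would JIT-compile these virtual instructions to native code instead of interpreting them, which is exactly where the "only the compilers need to be rewritten" argument comes from.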
Are you saying that MS made essentially a JIT compiler to translate x86 code to IA64 code? That is what is confusing; it sounds like you are saying they made a JIT compiler to change x86 code to x86 code.

Why not? It's the truth.
Microsoft's JIT compiler for x86 code was considerably more efficient than Intel's hardware implementation.
You want another statement that makes absolutely no sense?
Try googling for "HP Dynamo": a project from HP where they ran binary code (from an optimizing compiler) through a dynamic recompiler running on the same hardware. It optimized on the fly, resulting in performance improvements of up to around 20% over running the binary directly.
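To give a feel for the dynamic-optimization idea, here is a toy sketch in Java (this is not how Dynamo actually worked; the threshold, opcodes and class name are invented): a runtime interprets operations generically while counting executions, and once something turns out to be hot, it swaps in a specialized version on the fly.

```java
import java.util.function.IntBinaryOperator;

// Toy illustration of dynamic optimization: interpret a decoded operation
// generically, count how often it runs, and swap in a specialized version
// once it turns out to be hot. All names and numbers are made up.
public class HotPathDemo {
    static final int OP_ADD = 0, OP_MUL = 1;
    static final int HOT_THRESHOLD = 1_000;

    static int opcode = OP_ADD;   // pretend this was decoded from a binary
    static int executions = 0;

    // Generic "interpreter" path: re-dispatches on the opcode every call.
    static IntBinaryOperator dispatch = (a, b) -> {
        if (++executions == HOT_THRESHOLD && opcode == OP_ADD) {
            // "Recompile": replace dispatch with a direct specialized form
            // (stands in for emitting an optimized native trace).
            dispatch = (x, y) -> x + y;
        }
        return opcode == OP_ADD ? a + b : a * b;
    };

    public static void main(String[] args) {
        long sum = 0;
        for (int i = 0; i < 1_000_000; i++) {
            sum += dispatch.applyAsInt(i, 1);   // hot loop, optimized mid-run
        }
        System.out.println(sum);
    }
}
```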
OK, "impossible" is an exaggeration, I accept that. However, it isn't likely to happen, as there is just a boatload of legacy code and MS doesn't want to tick people off by saying "Hey, every program before this release will not work anymore. Sorry." Maybe back in the Win95 days that might have been done.

Difficult, not impossible.
Microsoft has set up the Windows driver development environment so that you usually don't need any CPU-specific code: just standard C code, which can be compiled for x86, x64 and IA64.
It doesn't have to be in universal bytecode. Device drivers can still be distributed in native format. How many different architectures will we have at the same time anyway? Probably just Intel and AMD each having one architecture at a time... Perhaps refreshing that every 5-10 years.
As long as it takes little more than driver developers to recompile for a few extra architectures, I don't see any problems really.
Apple is actually the worst example you could give. They started out on 68k, then moved to PowerPC, and now they are on x86. They have already made the whole transition twice; they are actually pretty much showing how it should be done.
It just seems obvious to me that this is an area (chip manufacturing) that we should expect slower progress as time goes on.
The CPU doesn't control the .NET bytecode interpretation. The OS does. That's all I'm trying to say.
If a mainstream OS moves to only allowing bytecode, then you will see the (possible) death of the x86 architecture. Until that happens, x86 is going to be around for a long time. Hardware isn't going to be developed that magically reads .NET bytecode or Java bytecode (OK, such hardware does actually exist in the Java case, but it limits the hardware's ability to adapt to newer versions of Java, which defeats some of the benefits of bytecode in the first place).
And wrong here. The GPU still has an instruction set. That doesn't go away. What changes is the way the instructions are delivered. That is on the software level, not the hardware level.
.NET isn't something that is going to replace instruction sets. It is something that allows easy communication between architectures. Saying it will replace x86 is somewhat silly. It won't replace it, but it may facilitate the death of the x86 architecture. You'll still have SOME sort of instruction set in the background. That is the idea of .NET or Java: recompile it to the native architecture.
Are you saying that MS made essentially a JIT compiler to translate x86 code to IA64 code?
That is what is confusing; it sounds like you are saying they made a JIT compiler to change x86 code to x86 code.
OK, "impossible" is an exaggeration, I accept that. However, it isn't likely to happen, as there is just a boatload of legacy code and MS doesn't want to tick people off by saying "Hey, every program before this release will not work anymore. Sorry." Maybe back in the Win95 days that might have been done.
As for the driver recompiling... well, I have only dealt with WinXP driver development and not anything more recent. I know they have invested more in the HAL; I just don't know how easily things will transition.
Apple is not a good example of what you and I would like to see achieved. They are an example of "a very tight fist over the hardware your software runs on makes transitioning easier", not of how easy it is to change. Just because they made the switch more than once doesn't mean it is easy, or that their OS is magically able to run on any hardware. For the most part, they ignored legacy and went from there.
Yeah, and Intel tried something similar with Itanium: keep the hardware simple and let the compilers handle the complicated stuff, and we all know how that turned out.

In fact, Transmeta already showed how software can be interesting even for x86. They used software to translate the x86 code to their custom instruction set.
This made their CPUs extremely small and power-efficient, because a lot of the complexity was moved to software (and they could also get around the x86 licensing this way).
Yeah, and Intel tried something similar with Itanium: keep the hardware simple and let the compilers handle the complicated stuff, and we all know how that turned out.
MS built their whole empire on backwards compatibility; they won't abandon that for some technical advantages, to say nothing of all the consumers. What do you think most consumers would want to use: the architecture where they can reuse all the applications they've been using for years, or that new cool thing for which only a handful of applications exist?
So while everyone would love to get rid of x86, I don't think that's going to happen any time soon.
Oh yeah, because AMD would be the sole competitor to Intel, riiight. It really doesn't matter whether AMD becomes competitive or not; better process technology offers economic benefits on its own. Design-wise, some slack is plausible, but not on process.
Every sign indicates that 22nm is on track for Intel. Actually, I'd say that overall the death of process technology scaling has been greatly exaggerated. Last time, it was at 350nm; now they are doing 45nm in the mainstream and 32nm at the cutting edge with 193nm lasers. I doubt the industry as a whole will stop, or even slow down much, either. Some people love disaster scenarios to the point that it almost seems masochistic.
He's not saying that process technology isn't going to keep advancing. He's saying that Intel has no need to rush things right now... I mean really, if AMD were as competitive now as they were back in the P4 days, don't you think Intel would have 32nm quads and such, instead of arbitrarily waiting on that? Intel is holding back, IMHO.
No, it just means that the proposed solution had better learn from past mistakes if it wants to succeed. I don't say it's impossible, but the backwards compatibility is a problem. It would certainly be a good idea to get rid of that abomination of x86, though (what's the largest allowed opcode size... 15 bytes? ugh).

So? Just because Itanium wasn't successful doesn't mean that the underlying ideas aren't valid.
Yeah but "1) Instead of x86, let's design an instructionset (bytecode) that is optimized for translation to a CPU's internal format (much like how Java and .NET bytecode are designed to be easily recompilable to x86 or other CPUs)." is something different than "lets just write everything for the JVM/CLR and only write JIT compilers for different ISAs".The thing is, if we move to something like .NET first, this problem is pre-empted. Java or .NET applications don't care whether you run them on x86, ARM, Itanium or whatever.
And hey look, .NET is actually developed by Microsoft!
Every sign indicates that 22nm is on track for Intel. Actually, I'd say that overall the death of process technology scaling has been greatly exaggerated. Last time, it was at 350nm; now they are doing 45nm in the mainstream and 32nm at the cutting edge with 193nm lasers. I doubt the industry as a whole will stop, or even slow down much, either. Some people love disaster scenarios to the point that it almost seems masochistic.
You were talking about profitability. There's NO way they can get the initial costs of 22nm back with such niche products as LV/ULV variants (which are merely binned parts, so it doesn't make sense anyway) and the much smaller Atom.
So far, the closest competitor isn't any worse off in terms of process generation at all. With the newly revamped GlobalFoundries/Samsung and their expansions, Intel will have to do more to keep their lead.
AMD is a survivor. Not just hanging on, though, but slowly making gains for short periods, then plateauing for a few years until it can make a market-share move again. Their ATI acquisition will soon result in another shift in market-share gains; they have the potential to reach 30% within the next couple of years.
And who told you this?
22nm will be delayed unless Bulldozer poses a threat to Intel.
(Very unlikely.)
So expect massive delays.
This is what I heard from my sponsor when I asked when I could hope and pray for 22nm samples.
Yup... they won't do anything that will get AMD pissed off at them over unfair practices.
Which is why they're keeping Gulftown expensive.
It's also why they delayed bringing 32nm to the consumer side, yet it's been out there for a while now on the enterprise side.
Intel will not do something that will piss off AMD, unless AMD pisses off Intel.
And lately AMD hasn't done anything to piss Intel off.
Intel put a CPU die and a GPU die in the same package... but they were distinct, separate chips. AMD is combining both a CPU and a GPU into a single die, hence "Fusion".
I'll be curious to see how well their Fusion processors work for notebooks and low-end desktops.
Various conferences where Intel harps on their "silicon cadence", info on equipment manufacturers' progress, etc.
AMD & Via could exit the x86 business today and Intel would still find themselves pursuing 22nm and 16nm at their current ravenous pace for simple fact that their entire business model hinges on expediting the obsolescence of the very CPU's they sold 2-4yrs ago.
If they don't provide a psychological boost to enhance the perception that my Q6600 is obsolete, then they will find themselves in a situation where people start hanging on to those CPUs until they really do die after 10-15 years.
Having an effective monopoly does enable the option of reducing the rate of price reduction (which is what we have been observing), i.e. at the balance-sheet level a monopoly enables gross-margin enhancement, but in a mature market segment such as PCs and servers the volume sales come from replacement purchases, not from first-time buyers.
So... unless Intel management wants to reduce Intel's revenues to roughly the first-time-buyer TAM, they will keep signing off on those R&D budget outlays in pursuit of obsoleting the CPUs they sold yesterday and today, so all us lemmings rush out to buy the newest widget tomorrow.