I need to go on a somewhat verbose rant for a moment, so I apologize up front if anyone thinks I'm thread trashing.
If you take a few steps back for a moment and try to look at the big picture, I think most will agree with me that there is an approaching 'war' between Intel, Nvidia and AMD, a war over what the next 'winning' architecture for hardware-assisted rendering will be.
I think 'it' (the current situation) is evolving into a war because the winning architecture will lay the groundwork for what will eventually, after a couple of generations, become the architecture for hardware-assisted real-time photorealism. That is the future of hardware-assisted real-time rendering.
As economies of scale have proven time and time again, even if the market IS big enough for several players/companies, it will eventually consolidate around a single one. AMD, Nvidia and Intel all want to be that company! Every single one of them is pushing, hard, to reach that goal. We are seeing just the start of it.
While it is completely understandable that every company will approach this endeavor differently, I foresee problems, big ones.
First of all, many software companies, the majority of them in fact, revolve around the "time to market" concept. From every programmer I talk to, what I hear the most is how crazy their bosses are for promising the client that the project will be finished in an unreasonable time frame, and about the 12 (sometimes 16) hour hauls they have to pull to meet the deadlines. Meeting those deadlines at all is possible mainly due to one very important aspect of programming: reusability.
Software companies rely on reusable code; they depend on it, but most importantly they accumulate it. I really cannot stress this enough. IP (intellectual property) in the software industry is a huge deal, so huge that some of the bigger names that were once considered big-iron companies are now considered IP companies, names like Cisco and IBM. IBM, for example, gets a lot of press for its supercomputers, but from a financial point of view its software assets are a bigger part of its technology-assets pie. If IBM were to start selling off its technology assets, it would make more money from its software assets than from its CPU or server assets; hardware assets aren't as valuable as they used to be (with the exception of fabrication facilities), and IBM didn't get that much for its laptop business. Back when DEC started going under and its CEO began selling the company off piece by piece, they got the most for their software and their IP.
Medium and smaller software companies live or die by their IP. I have seen some go under for not adapting to current technologies fast enough, i.e. not developing enough IP for the newer technologies/markets, and I have seen other companies go through an explosion (in the good sense) for the single reason that they had the right IP.
For example, I have seen a small DB software company, 5 employees total, grow to 500 in 2 years just because they concentrated on corporate DB software, things like ERP, CRM and ERM; that company is now worth a few dozen million USD. They own, maintain, and continually develop their core IP, which they reuse with every new project they undertake, just changing the interfaces and customizing it to the client's specific needs. The client usually gets the client-specific customized code, at some additional cost, but the core IP is licensed for use only and the code remains closed (the project ships with the core IP as binaries only), so only the software company can use/reuse it. That way they can take on many, many projects at the same time, while each and every one of these projects earns them a very nice amount.
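Just to illustrate the kind of split I mean, here is a minimal, purely hypothetical sketch (the names CrmCore and AcmeAdapter are made up, not that company's actual code): the core IP ships as a closed, binary-only package that is reused everywhere, and each client project is only a thin customization layer on top of it.

    # Core IP: shipped closed/binary-only and reused across every project
    # (hypothetical class name: CrmCore)
    class CrmCore:
        def create_customer(self, name, fields):
            # ...proprietary logic the client never sees as source...
            return {"name": name, **fields}

    # Per-client customization layer: the only part written (and billed) per project
    class AcmeAdapter:
        def __init__(self, core):
            self.core = core

        def import_legacy_record(self, record):
            # Map the client's legacy field names onto the core's customer model
            return self.core.create_customer(
                name=record["CUST_NAME"],
                fields={"region": record["SALES_REGION"]},
            )

    adapter = AcmeAdapter(CrmCore())
    print(adapter.import_legacy_record({"CUST_NAME": "Acme Ltd", "SALES_REGION": "EMEA"}))

The point is that only the adapter changes from client to client; the core stays one codebase that keeps accumulating value.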
Going back to the subject at hand, as I mentioned in the beginning, the new offerings that Intel, Nvidia and AMD are trying to push are not targeted at replacing the current discrete GPU; they are targeted toward a future architecture that will eventually be able to achieve hardware-assisted real-time photorealism. I personally don't think that the 3D architecture in current GPUs can or will easily attain that level of performance. I think the architecture should be changed, but it should be changed in measured steps.
The further away from the current x86 architecture these solutions stray, the harder it will be and the longer it will take for them to materialize. x86 has one of the biggest repositories of code in this segment, probably more than any other architecture. If next-generation G/CPUs, or whatever they are, stray too far from existing solutions, it will shake the entire 3D software industry.
If the article in the first post is correct, then Intel's approach to this is the equivalent of a U-turn with a triple somersault. Documentation, specification guidelines, libraries, tools, utilities, all will have to be rewritten; some will need to be rewritten from scratch to make use of the new architecture. It will make just too much existing code obsolete.
In short, I don't like this approach; it is just too radical.
Reading through the article again, it seems very unlikely that a mini-core can accept both VLIW and x86 (or bare-bones x86). I know Intel has an impressive track record when it comes to decoders, but come on, it's a MINI core, where will it fit?
Going for a partial x86 or a pseudo-x86 (with the appropriate additions) architecture just makes a lot more sense; Intel's move doesn't. Maybe they can produce a magical compiler that will consume slightly modified existing code and turn it into binaries native to their new architecture? I hear they have an extremely capable software team, but this would not be, by any means, a small feat. Maybe Intel has some kind of rabbit in its hat? Maybe they are just relying on their ability to push their new architecture through by sheer force?
I don't know, but I do know this: I have a BAD feeling about this one.
edit for spelling