How well has breaking backwards compatibility (which that would do) worked for Intel in the past?
How can AMD even start to compete with a teraflop chip?
Where has compatibility been compromised?
Is Ferzerp talking about heterogeneous computing? You linked that right before his post.
Heterogeneous computing is a microarchitecture matter, not an ISA.
Compatibility is an ISA thing, not a microarchitecture issue.
Hence my confusion at what appears to be a conflation of the two.
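To put the distinction in code: compatibility questions are asked and answered at the ISA level, never at the microarchitecture level. Here's a minimal sketch using GCC/Clang's CPUID-backed builtins (a generic illustration, not specific to any chip in this thread):

```cpp
// Minimal sketch: software probes for ISA features, not for core designs.
#include <cstdio>

int main() {
    __builtin_cpu_init();  // populate the CPU feature cache (GCC/Clang builtin)
    // These are ISA-level questions ("is SSE4.2/AVX implemented?"), not
    // microarchitecture questions ("is this Sandy Bridge or Bulldozer?").
    // The program never needs to know which core design it is running on.
    std::printf("SSE4.2: %s\n", __builtin_cpu_supports("sse4.2") ? "yes" : "no");
    std::printf("AVX:    %s\n", __builtin_cpu_supports("avx") ? "yes" : "no");
}
```

The same binary runs on any microarchitecture that implements the ISA features it actually uses; which core design executes it is invisible to the program.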
I meant to go back and edit; my point is more that it wouldn't be totally transparent and would likely wreak havoc on legacy apps. I misworded it and walked away, and I'm not sure why it came out that way.
Yes, I get that in an ideal world it wouldn't, but...
You can always trust Hiroshige Goto for insightful, albeit speculative and cutting-edge, CPU roadmaps and diagrams that no one else seems to be able to deliver. I'm impressed (pun somewhat intended).

There is speculation along the lines that Haswell or the next architecture (the tock after 14nm Rockwell's tick) will be some manner of heterogeneous architecture, with a bevy of these little cores present to handle exceedingly well-threaded applications (like VMs and so on).
I'm not sure what your concern is, but what you would need to run DirectX and OpenGL applications on Larrabee cores is a software renderer for those APIs. Work on such a renderer is, as I understand it, still under way by Tom Forsyth, Michael Abrash and others.

That's what I am confused about too. Going this route would mean an entirely different path for graphics on the desktop side, and keeping the two in sync would take at the very least some software optimization.
But that doesn't mean it breaks backward compatibility.
Work on such a renderer is, as I understand it, still under way by Tom Forsyth, Michael Abrash and others.
The basic problem is that even with ever-increasing GPU power, we have talented and very experienced game designers like John Carmack and Tim Sweeney saying that they don't really know what to do with it because they're limited by the APIs. We've had a development toward more programmable GPUs, most of all with DirectX 11 (and, on the other side of the coin, CUDA and OpenCL), but that inevitably costs some raw graphics performance, just as Larrabee sacrifices some of the performance that could be attained with fixed-function hardware.

If you recall, the reason GPUs came into existence in the 1990s was that the performance penalty of doing everything on the CPU, as opposed to fixed-function hardware, was not justified by the flexibility of software renderers. Now we have the opposite problem, which is why I believe the future of PC gaming lies with software renderers (but this time accelerated by massively parallel CPU architectures). Whether that happens through Larrabee, further widened AVX units or something else is another story, but it will happen.
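To make the "software renderer on a massively parallel CPU" idea concrete, here's a minimal sketch of tile-based parallel rasterization, the general structure described in the Larrabee papers. This is my own illustration, not Intel's actual renderer; the triangle, tile size and thread assignment are all arbitrary:

```cpp
// Illustrative sketch only -- not Intel's renderer. Shows the tile-parallel
// structure of a Larrabee-style software rasterizer: split the frame buffer
// into tiles and let each hardware thread rasterize its tiles independently.
#include <cstdint>
#include <thread>
#include <vector>

constexpr int W = 1024, H = 1024, TILE = 64;

// Edge function: positive when point (px,py) lies on the inner side of the
// directed edge (ax,ay)->(bx,by); the basis of half-space rasterization.
static float edge(float ax, float ay, float bx, float by, float px, float py) {
    return (px - ax) * (by - ay) - (py - ay) * (bx - ax);
}

static void rasterize_tile(std::vector<uint32_t>& fb, int tx, int ty) {
    // One hard-coded triangle; a real renderer would bin triangles per tile.
    const float x0 = 100, y0 = 100, x1 = 400, y1 = 850, x2 = 900, y2 = 200;
    for (int y = ty; y < ty + TILE; ++y)
        for (int x = tx; x < tx + TILE; ++x) {
            float px = x + 0.5f, py = y + 0.5f;  // sample at pixel center
            bool inside = edge(x0, y0, x1, y1, px, py) >= 0 &&
                          edge(x1, y1, x2, y2, px, py) >= 0 &&
                          edge(x2, y2, x0, y0, px, py) >= 0;
            if (inside) fb[y * W + x] = 0xFFFFFFFF;  // flat white "shading"
        }
}

int main() {
    std::vector<uint32_t> fb(W * H, 0);
    unsigned n = std::thread::hardware_concurrency();
    if (n == 0) n = 4;  // the standard allows 0 ("unknown"); pick a fallback
    std::vector<std::thread> workers;
    // Static assignment: worker i takes every n-th row of tiles. No locks are
    // needed because tiles never overlap -- the key to scaling across cores.
    for (unsigned i = 0; i < n; ++i)
        workers.emplace_back([i, n, &fb] {
            for (int ty = int(i) * TILE; ty < H; ty += int(n) * TILE)
                for (int tx = 0; tx < W; tx += TILE)
                    rasterize_tile(fb, tx, ty);
        });
    for (auto& t : workers) t.join();
}
```

The point of the tile split is that no two threads ever touch the same pixels, so the cores never need to synchronize within a frame; that's what lets the approach scale across dozens of small cores.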
(RIP Lords of the Realm II, I can't install it on my Win7 x64 OS and I am loath to deal with the subtle peculiarities that come with multi-boot solutions)
So do the individual cores on these have integer units, or is it just floating point? This is definitely Intel's response to GPUs in HPC; we could see that with Larrabee. AMD is implementing GCN partly for better merging of GPU and CPU at the ISA level.
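They'd need integer hardware either way. Here's a small generic illustration (plain AVX, nothing specific to Knights Corner) of why even a throughput-oriented x86 core keeps scalar integer units next to its wide FP vectors:

```cpp
// Minimal sketch, assuming an AVX-capable x86 CPU. The floating-point payload
// rides in the 256-bit vector unit, but the loop counter, bounds check and
// address arithmetic below are all scalar integer work.
#include <immintrin.h>
#include <cstddef>

// y[i] += a * x[i] -- eight floats per iteration in the vector FP unit.
void saxpy(float a, const float* x, float* y, std::size_t n) {
    __m256 va = _mm256_set1_ps(a);          // broadcast a to all 8 lanes
    std::size_t i = 0;                      // integer loop counter
    for (; i + 8 <= n; i += 8) {            // integer compare and increment
        __m256 vx = _mm256_loadu_ps(x + i); // integer address math feeds loads
        __m256 vy = _mm256_loadu_ps(y + i);
        _mm256_storeu_ps(y + i, _mm256_add_ps(vy, _mm256_mul_ps(va, vx)));
    }
    for (; i < n; ++i)                      // scalar tail for leftover elements
        y[i] += a * x[i];
}
```

All the control flow and addressing in there is integer work even though the payload is floating point, which is why "FP-only" cores don't really exist on the CPU side.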
Completely OT, but there is a workaround to get LotRII running on Win7 x64. I play it all the time still. Incredible game that was way ahead of its time.
😛