
Intel unveils Knights Corner - 1 teraflop chip

Where has compatibility been compromised?

Is Ferzerp talking about heterogeneous computing? Because this chip has the full x86 instruction set, and I believe it can even boot an OS. Or is he talking about having to reprogram for graphics?

I doubt Intel will replace the GenX cores and go Larrabee-anything in the future. Perhaps both will exist simultaneously. If it's going to get into the PC, it needs to do graphics well.

About the form factor: what makes a CPU form factor a drawback is that it no longer has the massive memory bandwidth it would have in a video-card form factor. Although, I guess a CPU form factor would allow significantly larger memory capacities and a system that doesn't need regular Xeons?
 
Is Ferzerp talking about heterogeneous computing? You linked that right before his post.

Heterogeneous computing is a microarchitecture, not an ISA.

Compatibility is an ISA thing, not a microarchitecture issue.

Hence my confusion at what appears to be a conflation between the two.
 
Heterogeneous computing is a microarchitecture, not an ISA.

Compatibility is an ISA thing, not a microarchitecture issue.

Hence my confusion at what appears to be a conflation between the two.

That's what I am confused about too. Going this route would mean an entirely different path for graphics on the desktop side, and synchronization between the two would need at least some software optimization.

But that doesn't mean it breaks backward compatibility.
 
I meant to go back and edit, but my point is more that it wouldn't be totally transparent and would likely wreak havoc on legacy apps. I misworded what I meant and walked away. I'm not sure why it came out that way.

Yes, I get that in an ideal world it wouldn't, but.......
 
I meant to go back and edit, but my point is more that it wouldn't be totally transparent and would likely wreak havoc on legacy apps. I misworded what I meant and walked away. I'm not sure why it came out that way.

Yes, I get that in an ideal world it wouldn't, but.......

To be sure it has the potential to be poorly implemented and poorly supported, but that is more a factor of the project management than it is an intrinsic unavoidable tradeoff per se.

The havoc to be wreaked on legacy apps is more likely to be limited to performance than compatibility.

But even in the case where true incompatibility might arise, it would likely be no different from the havoc wreaked by legacy programs that use 16-bit installers and their incompatibility with 64-bit environments like Win7 x64.

(RIP Lords of the Realm II, I can't install it on my Win7 x64 OS and I am loath to deal with the subtle peculiarities that come with multi-boot solutions)

As for true ISA incompatibilities, though, consider the Transmeta CPUs. They were ISA compatible solely through emulation; the underlying hardware (architecture) had little to do with processing x86 instructions. Unless Intel really intends to break compatibility, they will likely implement some form of an emulator for the deprecated instructions, and legacy apps will just take a penalty hit versus the performance that would have been possible running on metal.

(but the end-user may not perceive this penalty hit if the clockspeeds and IPC are such that the overall performance is still comparable to traditional homogeneous solutions)
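
To make the trap-and-emulate idea concrete, here is a minimal, hypothetical sketch (Linux x86-64 with glibc assumed). It is not Transmeta's code-morphing software or any real Intel emulator, just the general mechanism: an unsupported opcode faults, a handler produces the result in software, and execution resumes after the faulting instruction.

```cpp
// Hypothetical trap-and-emulate sketch (Linux x86-64, glibc). UD2 stands in
// for a "deprecated" instruction the hardware no longer implements.
#ifndef _GNU_SOURCE
#define _GNU_SOURCE 1
#endif
#include <signal.h>
#include <stdio.h>
#include <ucontext.h>

static volatile int emulated_result = 0;

static void on_sigill(int, siginfo_t*, void* ctx)
{
    ucontext_t* uc = static_cast<ucontext_t*>(ctx);

    // Pretend we decoded the faulting opcode and computed its result in software.
    emulated_result = 42;

    // UD2 is two bytes (0F 0B); skip it so execution continues afterwards.
    uc->uc_mcontext.gregs[REG_RIP] += 2;
}

int main()
{
    struct sigaction sa {};
    sa.sa_sigaction = on_sigill;
    sa.sa_flags = SA_SIGINFO;
    sigaction(SIGILL, &sa, nullptr);

    __asm__ volatile("ud2");   // faults, gets "emulated", execution continues

    printf("emulated result: %d\n", emulated_result);
    return 0;
}
```

A real emulator would actually decode the faulting instruction and update the architectural registers, but the cost of the fault plus the software path is exactly the penalty hit versus running on metal that I mean.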
 
There is speculation of sorts along the lines of thinking that Haswell or the next architecture (the tock after 14nm Rockwell's tick) will be some manner of heterogeneous architecture, with a bevy of these little cores present to handle exceedingly well-threaded applications (like VMs and so on).

[Images: Hiroshige Goto's speculative heterogeneous-CPU roadmap diagrams: kaigai-02.jpg, kaigai1.jpg, kaigai5.jpg]
You can always trust Hiroshige Goto for insightful, albeit speculative and cutting edge, CPU roadmaps and diagrams that no one else seems to be able to deliver. I'm impressed (pun somewhat intended).

That's what I am confused about too. Going this route would mean an entirely different path for graphics on the desktop side, and synchronization between the two would need at least some software optimization.

But that doesn't mean it breaks backward compatibility.
I'm not sure what your concern is, but what you would need to run DirectX and OpenGL applications on Larrabee cores is a software renderer for those APIs. Work on such a renderer is, as I understand it, still under way by Tom Forsyth, Michael Abrash and others.

There's already an x86 DirectX software renderer included in Windows 7, called WARP, but as it runs on the computer's CPU it's naturally not very efficient.
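
For the curious, an application can explicitly ask for WARP when it creates its Direct3D 11 device; a minimal sketch (Windows SDK assumed, error handling mostly omitted, link against d3d11.lib):

```cpp
// Minimal sketch: requesting the WARP software rasterizer instead of a
// hardware GPU when creating a Direct3D 11 device.
#include <d3d11.h>
#include <cstdio>

int main()
{
    ID3D11Device*        device  = nullptr;
    ID3D11DeviceContext* context = nullptr;
    D3D_FEATURE_LEVEL    level   = {};

    HRESULT hr = D3D11CreateDevice(
        nullptr,                  // default adapter
        D3D_DRIVER_TYPE_WARP,     // WARP: rasterization runs on the CPU
        nullptr, 0,
        nullptr, 0,               // let the runtime pick a feature level
        D3D11_SDK_VERSION,
        &device, &level, &context);

    if (SUCCEEDED(hr)) {
        std::printf("WARP device created, feature level 0x%x\n", (unsigned)level);
        context->Release();
        device->Release();
    }
    return 0;
}
```

The same call with D3D_DRIVER_TYPE_HARDWARE uses the GPU, so switching to the software rasterizer is literally a one-argument change, which is part of what makes it handy as a reference renderer despite the speed.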
 
Work on such a renderer is, as I understand it, still under way by Tom Forsyth, Michael Abrash and others.

Michael Abrash does not work at Intel anymore. He moved to Valve this year, I assume because the original graphics architecture was cancelled.

Also, the pre-Sandy Bridge GPUs were more "general purpose" than GPUs of today. Too many operations were done in software, like T&L and vertex shading. Maybe at that time they thought the Larrabee concept would work well everywhere, including integrated graphics, and that they would gradually transition GenX to a completely software renderer like Larrabee.
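
For anyone unfamiliar, "T&L in software" just means the CPU does the per-vertex matrix math that dedicated hardware would otherwise do. A minimal illustrative sketch (not Intel's actual driver code, just the idea, row-vector convention):

```cpp
// Software transform step: multiply each vertex by a 4x4 matrix on the CPU.
#include <cstdio>

struct Vec4 { float x, y, z, w; };

// out = v * M  (row vector times 4x4 matrix)
static Vec4 transform(const Vec4& v, const float M[4][4])
{
    Vec4 r;
    r.x = v.x * M[0][0] + v.y * M[1][0] + v.z * M[2][0] + v.w * M[3][0];
    r.y = v.x * M[0][1] + v.y * M[1][1] + v.z * M[2][1] + v.w * M[3][1];
    r.z = v.x * M[0][2] + v.y * M[1][2] + v.z * M[2][2] + v.w * M[3][2];
    r.w = v.x * M[0][3] + v.y * M[1][3] + v.z * M[2][3] + v.w * M[3][3];
    return r;
}

int main()
{
    // Simple translation by (1, 2, 3)
    const float M[4][4] = {
        {1, 0, 0, 0},
        {0, 1, 0, 0},
        {0, 0, 1, 0},
        {1, 2, 3, 1},
    };
    Vec4 v{1.0f, 1.0f, 1.0f, 1.0f};
    Vec4 t = transform(v, M);
    std::printf("(%g, %g, %g, %g)\n", t.x, t.y, t.z, t.w);  // (2, 3, 4, 1)
    return 0;
}
```

Doing that for every vertex of every frame on the CPU is exactly the load that hardware T&L was added to take over.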

But Knights Corner does the opposite. Things like the texture units and whatever else was graphics-specific are being taken out of that chip. It's hard to believe they'll then go back to a software-rendered graphics approach only a generation or two after that.

That article from PC Watch is from well before Intel announced the cancellation of graphics Larrabee. Fixed-function units are always faster than software renderers: if you can make a very fast software renderer, you can make an even faster special-purpose one. The key is balance, and the way it looks, Larrabee for graphics is not the way to go.
 
The basic problem is that even with ever increasing GPU power, we have talented and very experienced game designers like John Carmack and Tim Sweeney saying that they don't really know what to do with it because they're limited by the APIs. We've had a development toward more programmable GPUs, most of all with DirectX 11 (and, on the other side of the coin, CUDA and OpenCL), but that will inevitably affect raw graphics performance, just as Larrabee, for example, sacrifices some of the performance that could be attained with fixed-function hardware. If you recall, the reason that GPUs came into existence in the 1990s was that the performance penalty of doing everything on the CPU, as opposed to fixed-function hardware, was not justified by the flexibility of software renderers. Now we have the opposite problem, which is why I believe the future of PC gaming likely lies with software renderers (but this time accelerated by massively parallel CPU architectures). Whether that happens through the use of Larrabee, further widened AVX units, or something else is another story, but it will happen.
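
To illustrate what "accelerated by massively parallel CPU architectures" looks like at the instruction level, here is a toy sketch that blends eight pixels per iteration with AVX intrinsics. This is plain AVX picked for illustration, not Larrabee's own vector ISA; compile with something like g++ -mavx.

```cpp
// Toy software-rendering kernel: dst = src*alpha + dst*(1-alpha),
// eight float channels per iteration using 256-bit AVX registers.
#include <immintrin.h>
#include <cstdio>

static void blend_scanline(float* dst, const float* src, float alpha, int n)
{
    const __m256 va  = _mm256_set1_ps(alpha);
    const __m256 via = _mm256_set1_ps(1.0f - alpha);

    int i = 0;
    for (; i + 8 <= n; i += 8) {
        __m256 s = _mm256_loadu_ps(src + i);
        __m256 d = _mm256_loadu_ps(dst + i);
        __m256 r = _mm256_add_ps(_mm256_mul_ps(s, va),
                                 _mm256_mul_ps(d, via));
        _mm256_storeu_ps(dst + i, r);
    }
    for (; i < n; ++i)   // scalar tail for leftover pixels
        dst[i] = src[i] * alpha + dst[i] * (1.0f - alpha);
}

int main()
{
    float dst[16], src[16];
    for (int i = 0; i < 16; ++i) { dst[i] = 0.0f; src[i] = 1.0f; }
    blend_scanline(dst, src, 0.25f, 16);
    std::printf("dst[0] = %.2f\n", dst[0]);  // 0.25
    return 0;
}
```

A real software renderer would run kernels like this across many cores and for every pipeline stage, which is exactly where wide vector units and high core counts earn their keep.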
 
The basic problem is that even with ever increasing GPU power, we have talented and very experienced game designers like John Carmack and Tim Sweeney saying that they don't really know what to do with it because they're limited by the APIs. We've had a development toward more programmable GPUs, most of all with DirectX 11 (and, on the other side of the coin, CUDA and OpenCL), but that will inevitably affect raw graphics performance, just as Larrabee, for example, sacrifices some of the performance that could be attained with fixed-function hardware. If you recall, the reason that GPUs came into existence in the 1990s was that the performance penalty of doing everything on the CPU, as opposed to fixed-function hardware, was not justified by the flexibility of software renderers. Now we have the opposite problem, which is why I believe the future of PC gaming likely lies with software renderers (but this time accelerated by massively parallel CPU architectures). Whether that happens through the use of Larrabee, further widened AVX units, or something else is another story, but it will happen.

No way it will ever go all software. I do agree we are on the verge of more software, but fixed functions will never die, nor would we want them to die completely. That said, I still agree that we are all in drastic need of a change. And this is exactly why our GPUs are morphing.

Speaking of Sweeney, I remember an old interview between him and an ugly fellow named Charlie (lol). It's still very relevant today. It's Andrew Richards and Sweeney in a debate that covers everything we have been talking about lately. It will also shed light on why ARM CPUs just may find their way into consoles: the power constraints!

But I want to say up front, I don't agree with Sweeney. I stand by the position that fixed functions are too important to do away with completely. A hybrid is the future. And once we establish neat techniques, you will see more fixed functions for the new ways of rendering. Fixed function is a double-edged sword, but it's too powerful to do away with completely. We need a balance of both to move forward at a good pace.

To those who haven't seen this, it's kind of old but still highly relevant:

Part 1: http://www.youtube.com/watch?v=tnogwO84O0Q&feature=player_embedded

Part 2: http://www.youtube.com/watch?v=oABs_HCnMBg&feature=player_embedded

Part 3: http://www.youtube.com/watch?v=8yoHsVfoCH0&feature=player_embedded

Part 4: http://www.youtube.com/watch?v=UIByXVphoM0&feature=player_embedded

Part 5: http://www.youtube.com/watch?v=GmwzrHmDwSQ

Anyone who hasn't seen these is in for a treat; it deserves its own thread. It's neat looking back at these and seeing where we currently are and where we're going!

check it!
 
(RIP Lords of the Realm II, I can't install it on my Win7 x64 OS and I am loath to deal with the subtle peculiarities that come with multi-boot solutions)

Completely OT but there is a workaround to get LotRII running on Win7 x64. I play it all the time still. Incredible game that was way ahead of its time.

😛
 
So do the individual cores on these have integer units or is it just floating point? This is definitely Intel's response to GPUs in HPC, we could see that with Larrabee. AMD is implementing GCN partly for better merging of GPU and CPU on an ISA level.
 
So do the individual cores on these have integer units or is it just floating point? This is definitely Intel's response to GPUs in HPC, we could see that with Larrabee. AMD is implementing GCN partly for better merging of GPU and CPU on an ISA level.

It has everything an x86 CPU needs: branch prediction, ALUs, decoders. The closest-resembling chip is the Pentium P54C, but with a 512-bit vector unit.
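
To put the 512-bit vector unit in perspective: one register holds sixteen 32-bit floats. The sketch below uses AVX-512 intrinsics purely for illustration (compile with something like g++ -mavx512f); Knights Corner's actual vector ISA (IMCI/LRBni) predates AVX-512 and is not binary compatible, but the 16-lanes-per-instruction idea is the same.

```cpp
// One 512-bit register = 16 floats; a single FMA updates all 16 lanes.
#include <immintrin.h>
#include <cstdio>

int main()
{
    alignas(64) float a[16], b[16], c[16];
    for (int i = 0; i < 16; ++i) { a[i] = float(i); b[i] = 1.0f; }

    __m512 va = _mm512_load_ps(a);            // one 512-bit load = 16 floats
    __m512 vb = _mm512_load_ps(b);
    __m512 vc = _mm512_fmadd_ps(va, vb, vb);  // a*b + b, 16 lanes at once
    _mm512_store_ps(c, vc);

    std::printf("c[15] = %.1f\n", c[15]);     // 15*1 + 1 = 16
    return 0;
}
```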
 
Completely OT but there is a workaround to get LotRII running on Win7 x64. I play it all the time still. Incredible game that was way ahead of its time.

😛

Thanks for that bit of inspiration :thumbsup: I googled for a bit and found out what you meant; it's running perfectly now 🙂
 