nVidia scientist on Larrabee


BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
DreamWorks Animation said that Larrabee allows them to increase what they can do by 20X.

Versus the old Athlon machines they are upgrading, I believe that easily.

If Larrabee really sucked, Sony would not want to put it into a PlayStation 4 console that they will be losing money on to begin with.

The only site that even tries to claim it will be is the Inquirer. If they reported the sun was coming up tomorrow I'd be worried :)

Intel has said that all of their driver programmers are focused on their current IGP solutions. They have usurped the 3DLabs people to work on Larrabee.

3DLabs never wrote a high-performance driver. They wrote extremely robust ones, but their hardware never came close to its peak theoretical performance.

- Larrabee will be a monster when it comes to 3D rendering (hence the DreamWorks comments)

You are assuming that the DX11 GPUs won't be, why?

Just think of all the time saved in eliminating DX and OpenGL.

Been there, done that. Time saved? You mean the exponential increase in development time? Try this: create a sphere and apply a cube map in x86 code. I can do it using D3D in less time than it takes to write this post; you will spend hours hand-coding it in x86 even if you are very good. I was present for the days before the 3D APIs, and I never want to go back, thanks :)

Games will be able to run just as fast (or faster) in software mode compared to running in OpenGL/DX.

No, they won't even be close. Intel has already backed away from both that claim and the claim that Larrabee will keep up with current GPGPUs, despite a process-node edge and the fact that the comparison is against what will by then be outdated, last-generation GPUs. This is based on what Intel is claiming as of now, not my assessment.

You seem really concerned about this chip, but when credible sources like DreamWorks and Anand himself are excited

You think a P4 1.4GHz is faster than the i7 Intel chips? That is a serious question, and it relates to how much faith you should put into some of your sources :) One of them embarrassed himself quite badly trying to give analysis on a non-x86 part, so badly he had to pull the article and never reposted it. He claimed Cell at best could compete with a P4 1.4GHz chip, while Cell is still thoroughly trouncing the latest and greatest from the x86 world at the tasks it was built for.

You know, Intel isn't the first company that has tried this. Sony thought they were going to do the same thing with Cell, then realized later in development how stupid it was to expect a general-purpose-style architecture to compete with what is, in essence, a DSP. There just isn't any comparison.

You have my apologies for my initial comments.

Heh, anytime you have any questions about me, just shoot me a PM. As for the person you were concerned about: you put enough monkeys banging away on enough keyboards..... ;)
 

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
Originally posted by: SickBeast
If Larrabee really sucked, Sony would not want to put it into a PlayStation 4 console that they will be losing money on to begin with.

I don't think you should put rumors and facts in the same category.

So are you suggesting that the 3D/gaming industry should migrate back to software rendering when hardware rendering has provided something much more than the former for years?


 

SickBeast

Lifer
Jul 21, 2000
14,377
19
81
Originally posted by: Cookie Monster
Originally posted by: SickBeast
If Larrabee really sucked, Sony would not want to put it into a PlayStation 4 console that they will be losing money on to begin with.

I don't think you should put rumors and facts in the same category.

So are you suggesting that the 3D/gaming industry should migrate back to software rendering when hardware rendering has provided something much more than the former for years?
I think that the best programmers will extract the best performance from the chip with the most transistors, regardless of how 'bad' it is or what its capabilities are. Due to the programmability of Larrabee, I say that anything is possible, especially if John Carmack gets involved.
 

SickBeast

Lifer
Jul 21, 2000
14,377
19
81
Ben, I'm not going to argue with your post because it makes sense.

The one thing I will reiterate is the fact that you really have no idea what Larrabee can do until you get your hands on one. You may have to eat crow on this one. What will you do if Larrabee scores higher in 3DMark than the best cards that NV and AMD have at the time? It could get messy. You are indeed bold with your criticisms.

BTW, in response to your set of "cube" comments: if Intel creates the right tools, it will be faster to do what you described on Larrabee. I understand your reluctance to learn something new, but sometimes you have to. If I knew what you know, I would probably bitch about Larrabee as well. I do AutoCAD at an elite level and it would take me years to re-learn a new program to do what I can do today in CAD.
 

Keysplayr

Elite Member
Jan 16, 2003
21,219
55
91
Originally posted by: SickBeast
Meh, you caught a typo at the end. I meant to say CPU vs. GPU. I edited it already.

I'm surprised you wouldn't want an 80-core CPU in your box to help render 3D Max scenes. I know I would. It was taking my Athlon XP 3200+ over two hours to render each frame. It took a week for my computer to render my university thesis. I wanted to make a video but didn't have time to hive my university's computers together.

You can say that Larrabee is "nothing", yet it is impressive. How much would 80 computers with 80 cheap CPUs in them cost? Really, Larrabee is going to be like having an entire university computer lab in your computer. It's incredible.

You point out that 3DLabs and such didn't fare well, yet I know for a fact that one of the firms I worked at paid huge $$$ for the rendering workstations. They were capable of photo-realistically creating interior renderings of hospitals, including the elaborate equipment. I was completely blown away by it, and that was 5 years ago running on a quad-CPU box with a now outdated GPU.

You keep talking about die space, but really, if Intel is able to put Larrabee onto a single die and sell it for $300, why do you care so much? When NV pushed the envelope with G80 I bought one. It's tech like this that pushes the boundaries and moves things forward.

We already have 80+ cores (stream processors) in ATI and Nvidia GPUs. Actually, check out Rubycon, who is installing four (4) GTX 295s in a rig for crunching/encoding. That is 1920 stream processors available. More and more people are setting up these systems with multiple Nvidia GPUs for a mini mainframe in a desktop.

5 years ago really cannot be compared with what is available for computing today. 3DLabs, as BenSkywalker had stated, wrote some very robust software. But it still cannot compare to what we have today in the form of the still largely untapped power of GPUs.

Die space is relative to how much each wafer costs Intel to produce. Maybe we will see Larrabee for $300, or maybe we will see it for $1,000. Slapping a price tag on a product that is not even out yet doesn't make sense. It's all theoretical, without any basis.

And why do you think Ben cares so much? It doesn't look like that to me. That is a conclusion you drew from his posts. I don't think he gives a damn what happens in the end. He is just looking at the currently available data (not much, by the way) and making an intelligent guess on what will come to pass.

Larrabee isn't going to compete with ATI or Nvidia in the graphics department. And we are not saying this because we DON'T want it to happen, which is probably how your mind translates comments like that when you read them. It's because ATI and Nvidia have been light years ahead for years and years.

Picture Nvidia suddenly announcing an x86 CPU for desktops (pretend licensing wasn't an issue). How do you think it would fare against the likes of i7 or even C2Duo? How much faith would you have that it would compete with them? Especially in its first incarnation out the door? You would be one of the most skeptical members on this forum, no?

Which is why I am having some difficulty figuring out why you have so MUCH faith in Larrabee. Blindly.
 

Keysplayr

Elite Member
Jan 16, 2003
21,219
55
91
Originally posted by: SickBeast
Ben, I'm not going to argue with your post because it makes sense.

The one thing I will reiterate is the fact that you really have no idea what Larrabee can do until you get your hands on one. You may have to eat crow on this one. What will you do if Larrabee scores higher in 3DMark than the best cards that NV and AMD have at the time? It could get messy. You are indeed bold with your criticisms.

BTW, in response to your set of "cube" comments: if Intel creates the right tools, it will be faster to do what you described on Larrabee. I understand your reluctance to learn something new, but sometimes you have to. If I knew what you know, I would probably bitch about Larrabee as well. I do AutoCAD at an elite level and it would take me years to re-learn a new program to do what I can do today in CAD.

Exactly. You don't have your hands on one either. Yet you are talking about it as if you had already seen benchies of it blowing away GPGPUs in similar tasks. We have little faith in Intel as a 3D graphics company, because they are a CPU company first and foremost. I'd like to hear your point of view if, say, ATI (had it not been bought out by AMD) or Nvidia announced and were designing their own x86 CPUs for desktop/server/notebook markets. How do you think they would do against Intel or AMD CPUs?

Cube comments (actually a sphere with a cube map applied): "If Intel creates the right tools, it will be faster to do what you described on Larrabee." <-- How do you know this? You're stating this as factual information. Why?
 

dguy6789

Diamond Member
Dec 9, 2002
8,558
3
76
I have no doubt in Intel's ability to create a really powerful GPU. People who doubt Intel's ability to make one do so because of the strange idea that, since previous Intel graphics products were slow, any product they make will probably be slow. Those people don't understand that Intel never designed those products to compete with anything performance-wise.

Making a good product is more about the money you put into making it than it is about the experience you have doing it. This is why Intel can make Larrabee competitive; they have more money to throw at it than AMD, ATi, or nVidia. Take a look at Microsoft with the Xbox 360 compared with Sony and their consoles. The newcomer is completely wiping the floor with the veteran. The reason? Microsoft has the money to force success.
 

Keysplayr

Elite Member
Jan 16, 2003
21,219
55
91
But they aren't making a GPU. They are making a cluster of CPUs, older ones at that, to try and do what GPUs were designed to do by throwing massive parallelism at the problem. Whether it works well or not is a big mystery right now. I highly doubt its capabilities will be anything near NV/ATI's. I do, however, have little doubt that Larrabee will perform better than Intel's IGPs.
 

Extelleron

Diamond Member
Dec 26, 2005
3,127
0
71
Originally posted by: Keysplayr
But they aren't making a GPU. They are making a cluster of CPUs, older ones at that, to try and do what GPUs were designed to do by throwing massive parallelism at the problem. Whether it works well or not is a big mystery right now. I highly doubt its capabilities will be anything near NV/ATI's. I do, however, have little doubt that Larrabee will perform better than Intel's IGPs.

The Larrabee cores are far from a cluster of old CPU cores... they have been redesigned for significantly increased FP performance and to handle far more specific workloads than the old P54C core they are based on. It's not like Intel is grouping together a bunch of old in-order Pentium cores and thinking that with 32 of them they are going to compete with modern GPUs via software rendering. The Larrabee cores have 512-bit vector units, meaning each is capable of 4x the FP performance per clock of a modern Intel CPU (512-bit versus 128-bit SSE).

Also remember that Larrabee does have an element of fixed-function hardware: it has dedicated texture units. The Larrabee cores have to handle shading and rasterization when running games, and Larrabee should have plenty of power for that; 2 TFLOPs+ performance is what I would expect from the launch unit (2 TFLOPs = 32 cores @ 2.0GHz). And given the rumored die size from the wafer photo (600-700mm^2!), the projections of 32 cores seem low. I would expect 48-64 cores to fit into that kind of die space on 45nm.
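As a rough sanity check on that figure, here is the arithmetic spelled out (a sketch using the assumptions in this post - 32 cores, a 16-wide single-precision vector unit per core, one fused multiply-add per lane per clock, 2.0GHz - none of which are confirmed Larrabee specs):

    #include <stdio.h>

    int main(void) {
        /* Assumed, not confirmed: 32 cores, 16 single-precision lanes per core,
           2 FLOPs per lane per clock (fused multiply-add), 2.0 GHz. */
        double cores = 32.0, lanes = 16.0, flops_per_lane = 2.0, ghz = 2.0;
        double gflops = cores * lanes * flops_per_lane * ghz;
        printf("Peak: %.0f GFLOPs (~%.1f TFLOPs)\n", gflops, gflops / 1000.0);
        return 0;
    }

That works out to 2048 GFLOPs, i.e. right at the 2 TFLOPs quoted above.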

Overall I doubt that Larrabee is going to be the fastest gaming processor at release - if it is, that would be extremely impressive. I think it will probably provide similar or better performance compared to current GPUs like RV770/GT200, based on its FP performance - but it will probably not compete with the top-of-the-line AMD/NV GPUs in 2010.

What Larrabee will do is redefine the GPGPU. I think that it will lead to widespread adoption of GPGPU thanks to the x86 architecture which will enable developers to support it without any significant change to their programs. The kind of power that is going to be available with Larrabee - likely 2TFLOPs+ peak FP, 32 cores / 128 threads..... is going to be very impressive.
 

Wreckage

Banned
Jul 1, 2005
5,529
0
0
Originally posted by: SickBeast


DreamWorks Animation said that Larrabee allows them to increase what they can do by 20X. (!)
Larrabee is not out yet. Besides, DreamWorks already uses Intel hardware and merely made a PR statement that they will upgrade to the new Intel hardware when it comes out.

If Larrabee really sucked, Sony would not want to put it into a PlayStation 4 console that they will be losing money on to begin with.
Sony already shot down this rumor.

- Larrabee will not need drivers
LOL
- Larrabee will be a monster when it comes to 3D rendering (hence the DreamWorks comments)
Commenting on a product that does not exist yet.

Just think of all the time saved in eliminating DX and OpenGL. The Mac and Linux suddenly become gaming platforms. Games will be able to run just as fast (or faster) in software mode compared to running in OpenGL/DX.
It's clear you have no idea what you are talking about.


 

SickBeast

Lifer
Jul 21, 2000
14,377
19
81
Keys, if you look at what NV did right out of the gate with the TNT and GeForce cards, I think it shows that a good company can get it right the first time. In this business you sort of have to. If you can't recoup your R&D money with your first effort, you need a money tree and a ton of alcohol.

If you want to know what I think Larrabee will be, it is this: a 40-core CPU that can run as a GPU and compete at the midrange level with AMD and NV. If that is what it is capable of, I will be interested in one. Midrange graphics is fine, and there is a ton that you can do with 40 processors running general code that is available today.

With NV you're waiting for everyone to reprogram everything for CUDA, whereas with Intel you're not; you're simply stuck with inferior graphics until they can cram more cores onto the thing.

As for my "cube" comments - the right software tools will make it easy and fast to program for anything.
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: SickBeast
I just don't know how a scientist who claims to have a passion for parallel computing can bash a product like Larrabee, considering what companies like Pixar are doing with it.

The entire article reeks of bullshit and marketing.
Did you actually read the article? Dally states very clearly the reasoning behind his comments, without getting overly technical. While he certainly solidified his credentials in the area of academia, I somehow doubt he spent much, if any, time teaching bullshit or marketing at Stanford's B-school. Oh that's right, he was only the chair of their CS department and a pioneer of parallel computing before taking his current position at Nvidia.

As for his comments, he's stating why he thinks Intel's approach to parallel computing isn't as good as Nvidia's (Ben breaks down many of the finer points nicely in this thread), which in no way diminishes his outlook of parallel computing:
  • NY Times article:
    "Intel's chip is lugging along this x86 instruction set, and there is a tax you have to pay for that," Mr. Dally said.

    Intel says that staying with x86 makes life easier on software developers familiar with such an architecture. Mr. Dally rejects this by saying Intel will need to take up valuable real estate on the chip to cater to the x86 instructions.
As a scientist, and not a marketer, he's probably wondering why Intel is dedicating so much die space to functional units you'd find on your typical desktop x86 CPU when execution units (Vector processing units in Larrabee's case) should be the primary focus in improving parallel processing performance. Of course he's not the only person who has wondered this out loud, as you can see in pretty much any preview of Larrabee, including the write-up done by Anand and Derek here on AT.

He also goes on to say he decided to take the position at Nvidia over another competitor, like Intel, because of these design decisions and the ability to impact decisions going forward: "Intel just didn't seem like a place where I could effect very much change," Dally said. "It's so large and bureaucratic."



 

ArchAngel777

Diamond Member
Dec 24, 2000
5,223
61
91
Let's just wait and see... It might flop, it might not. Intel has some brilliant people working for them. Let's give it a chance before we decide that it is going to suck.

BTW - I don't particularly have much faith in it, but I am more than willing to be pleasantly surprised by its release if it turns out that way.
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: SickBeast
He points out that Larrabee's x86 cores are wasteful in terms of die size. This may hold true for graphics performance, but he fails to mention the benefit of having that many x86 cores in your computer. Any video encoding app would benefit without having to be patched or re-coded, for example.

For most people, it's better to have more general-processing power than it is to have a ton of graphics power. If you look at most laptop computers, it illustrates this quite clearly.

Originally posted by: Extelleron
What Larrabee will do is redefine the GPGPU. I think that it will lead to widespread adoption of GPGPU thanks to the x86 architecture which will enable developers to support it without any significant change to their programs. The kind of power that is going to be available with Larrabee - likely 2TFLOPs+ peak FP, 32 cores / 128 threads..... is going to be very impressive.
You guys are placing a lot of faith in Intel's Larrabee compiler to effectively make single- or few-threaded applications run efficiently on Larrabee's vector execution units without any additional help, especially given that many of these apps don't scale particularly well on existing x86 architectures. You really don't need to look any further than a current example of scalar vs. vector design when comparing Nvidia vs. ATI stream processing units, where ATI's 5-wide vector design depends heavily on optimization for scaling and efficiency.

Larrabee's design will be even more dependent on application or compiler optimizations with 16-wide vector execution units per core. I'm also not sure where you get the impression that current apps will automatically accelerate on Larrabee without being recompiled or without any application optimization, just because they share the same base x86 ISA. I think the main concern about Larrabee is that not only do you have potentially less efficient vector units per Larrabee core with an x16 design, you also have all this additional redundant x86 overhead before you can even access those execution units.

  • AT's Larrabee Preview by Anand and Derek

    NVIDIA's SPs work on a single operation, AMD's can work on five, and Larrabee's vector unit can work on sixteen. NVIDIA has a couple hundred of these SPs in its high end GPUs, AMD has 160 and Intel is expected to have anywhere from 16 - 32 of these cores in Larrabee. If NVIDIA is on the tons-of-simple-hardware end of the spectrum, Intel is on the exact opposite end of the scale.

    We've already shown that AMD's architecture requires a lot of help from the compiler to properly schedule and maximize the utilization of its execution resources within one of its 5-wide SPs, with Larrabee the importance of the compiler is tremendous. Luckily for Larrabee, some of the best (if not the best) compilers are made by Intel. If anyone could get away with this sort of an architecture, it's Intel.

    At the same time, while we don't have a full understanding of the details yet, we get the idea that Larrabee's vector unit is sort of a chameleon. From the information we have, these vector units could execute atomic 16-wide ops for a single thread of a running program and can handle register swizzling across all 16 execution units. This implies something very AMD-like and wide. But it also looks like each of the 16 vector execution units, using the mask registers, can branch independently (looking very much more like NVIDIA's solution).

    We've already seen how AMD and NVIDIA architectural differences show distinct advantages and disadvantages against each other in different games. If Intel is able to adapt the way the vector unit is used to suit specific situations, they could have something huge on their hands. Again, we don't have enough detail to tell what's going to happen, but things do look very interesting.

Here's a good indication that existing code compiled for x86 isn't going to leverage Larrabee's additional vector unit functionality and additional extensions: Intel outlines CT parallel programming language at IDF. I'm sure it comes with its own highly specialized x86 compiler needed to fully extract performance out of those vector units and additional LRBni extensions.
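To make the recompile point concrete, here is a minimal sketch of the gap between plain scalar x86 code and code rewritten for a 16-wide vector unit. It uses generic GCC/Clang vector extensions purely as an illustration - this is not Larrabee/LRBni code - and the 16-float type is just a stand-in for a 512-bit register:

    #include <stddef.h>

    /* Plain scalar loop, the way existing x86 code expresses it: one
       multiply-add per iteration, which at best touches a single vector lane
       unless a vectorizing compiler rewrites it. */
    void saxpy_scalar(float *y, const float *x, float a, size_t n) {
        for (size_t i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }

    /* The same work expressed 16 elements at a time. Real Larrabee code would
       use its own intrinsics or rely on the compiler to produce this form. */
    typedef float v16sf __attribute__((vector_size(64)));

    void saxpy_vec16(v16sf *y, const v16sf *x, float a, size_t nvec) {
        v16sf av;
        for (int k = 0; k < 16; k++)    /* broadcast the scalar to all lanes */
            av[k] = a;
        for (size_t i = 0; i < nvec; i++)
            y[i] = av * x[i] + y[i];
    }

Nothing about sharing the x86 ISA turns the first form into the second automatically; that is the compiler's (or the programmer's) job.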

The fact that they're now pushing their own parallel computing language certainly flies in the face of some comments they made about GPGPU and CUDA becoming a footnote in the annals of history. I'd say Larrabee's existence alone would lend credence to some of Nvidia's predictions about the future of computing, but Intel's focus on using Larrabee for highly parallel computing rather than their existing multicore desktop architectures certainly solidifies those claims.

At this point it seems the inclusion of x86 on Larrabee was more of an attempt by Intel to ensure x86 doesn't become "a footnote in the annals of computing history." Because we all know how much Intel covets and protects that x86 license don't we? ;)
 

alyarb

Platinum Member
Jan 25, 2009
2,425
0
76
Ben and Dally are exactly right on this. And that is the thing: you can make fast x86 CPUs, but they're designed to please everybody and run anything, so it's great to have a separate processor in the system that specializes only in SIMD/FPU work, and a specialized architecture is best for this. Intel knows this because they demonstrated a processor years back that was basically composed of 160 FPU units and 80 little 5-way routers. It was made with only 100 million transistors, was 275 sq mm, and did 1 TFLOP at 62 watts and 2 TFLOPs at 265 watts. And that was on 65nm. It wasn't x86.

If you wanted to make a chip composed of 80 Silverthorne cores, you would have a die area of 20 square centimeters! That's more than three square inches. 80 Silverthornes can only do 320 gigaflops, so I just don't know where you guys were headed with that example. Larrabee is supposed to do 2 TFLOPs with 300 watts. That isn't much better than the flops-per-watt nVIDIA is doing right now, and AMD is far ahead in this regard. My hope is that the compute shader in DirectX 11 and OpenCL will help bring more attention and widespread adoption of compute-intensive SIMD work to GPUs. That is when we can see how a Larrabee core fares against a GeForce SP or Radeon SP.
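For what it's worth, running the numbers quoted in this post (all of them rough or rumored, so treat the output the same way):

    #include <stdio.h>

    int main(void) {
        /* Figures from the post above: the 80-core research chip at its low- and
           high-power points, and the rumored Larrabee target. */
        printf("research chip @ 62 W:  %.1f GFLOPs/W\n", 1000.0 / 62.0);
        printf("research chip @ 265 W: %.1f GFLOPs/W\n", 2000.0 / 265.0);
        printf("Larrabee (rumored):    %.1f GFLOPs/W\n", 2000.0 / 300.0);
        return 0;
    }

That is roughly 16.1, 7.5 and 6.7 GFLOPs per watt respectively, which is the comparison being made here.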

My understanding, anyway, is that these future shader models are newer environments that allow general-purpose computing on the shaders. It's going to be more widespread than CUDA and Brook+ anyway, right? And if that is the case, then I'm not sure why Intel chose x86 for Larrabee.
 

SickBeast

Lifer
Jul 21, 2000
14,377
19
81
Chizow, Intel has always made the best compilers. I have good reason to have faith in what they can do in that regard.

WRT your other comments, in the AT article you linked to, they said that in the future it was possible that Larrabee would show up in Windows Task Manager as 40 processors or whatever. If the processor can be seen by Windows and is x86, I see no reason why it could not run any current application you wanted it to run.

Oh, and video encoding apps tend to scale linearly with the number of processors you have. The same goes for 3D rendering apps. You simply have each processor render its own frame, or else its own portion (tile) of a frame. I know this because I used to do it for a living.
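A minimal sketch of that frame-parallel split, assuming fully independent frames (the worker count and the dummy render_frame are placeholders; a real renderer adds shared scene data and load balancing):

    #include <pthread.h>
    #include <stdio.h>

    #define NUM_WORKERS 4    /* stand-in for "one worker per core" */
    #define NUM_FRAMES  16

    /* Dummy renderer: a real one would rasterize or ray-trace the frame. */
    static void render_frame(int frame, int worker) {
        printf("worker %d rendered frame %d\n", worker, frame);
    }

    static void *worker_main(void *arg) {
        long id = (long)arg;
        /* Static interleaved split: worker k renders frames k, k+N, k+2N, ... */
        for (int f = (int)id; f < NUM_FRAMES; f += NUM_WORKERS)
            render_frame(f, (int)id);
        return NULL;
    }

    int main(void) {
        pthread_t workers[NUM_WORKERS];
        for (long i = 0; i < NUM_WORKERS; i++)
            pthread_create(&workers[i], NULL, worker_main, (void *)i);
        for (int i = 0; i < NUM_WORKERS; i++)
            pthread_join(workers[i], NULL);
        return 0;
    }

Tile-parallel rendering works the same way, just splitting one frame into regions instead of splitting the frame list.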
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: SickBeast
Chizow, Intel has always made the best compilers. I have good reason to have faith in what they can do in that regard.
While it's true their x86 compilers have been the best in the past, that does nothing to refute Dally's claims that Intel is taking an inferior design approach to parallel computing with Larrabee. But of course the absolute measure of design will be actual performance, which we'll see late this year or early next.

WRT your other comments, in the AT article you linked to, they said that in the future it was possible that Larrabee would show up in Windows Task Manager as 40 processors or whatever. If the processor can be seen by Windows and is x86, I see no reason why it could not run any current application you wanted it to run.
And again, why would anyone want to spend money on Larrabee, with the equivalent of 40 Pentium cores, to run their Windows apps when a $100 two-to-four-core CPU runs typical desktop apps faster? If Intel is taking a multi-core approach at the OS and application level, I'm not sure how they're going to avoid similar threading and scaling problems seen with their current desktop x86 processors. But again, I doubt they'll take this approach; I'm guessing they'll let the compiler and driver handle all the scheduling and Larrabee will look like one big logical processor.

Oh, and video encoding apps tend to scale linearly with the number of processors you have. The same goes for 3D rendering apps. You simply have each processor render its own frame, or else its own portion (tile) of a frame. I know this because I used to do it for a living.
Except it's not that simple when applied to Larrabee, or even to other CPUs in the past, as there's always going to be scaling inefficiency and overhead from multithreading. You can look over past benches where multiple cores don't always translate into linear performance gains, or where multiple cores don't result in the same gains as linear clock speed increases.

With Larrabee you have to ask whether 40 Pentium cores will encode faster than 8 logical Nehalem cores. Based on current CPU performance compared to older parts, I'd say no. Sure, there's the potential for scaling beyond 40 cores with the x16 vector units, but that's like additional multi-threading on top of multiple threads. Again, given how much difficulty existing apps and devs are having with multi-threading across physical cores, expecting anything close to 100% efficiency based on theoreticals is optimistic at best.
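A back-of-the-envelope Amdahl's-law comparison shows why (the 1/4 per-core speed ratio is an assumption for illustration, not a measured number):

    #include <stdio.h>

    /* Time to run a job with parallel fraction p on n cores, each running at
       'rel' times the speed of a reference core (one reference core = 1.0). */
    static double runtime(double p, double n, double rel) {
        return (1.0 - p) / rel + p / (n * rel);
    }

    int main(void) {
        /* Assumption for illustration only: each simple in-order core runs
           code at ~1/4 the speed of a modern out-of-order core. */
        double slow = 0.25, fast = 1.0;
        for (double p = 0.80; p < 1.0; p += 0.05)
            printf("p=%.2f  40 slow cores: %.2fx   8 fast cores: %.2fx\n",
                   p, 1.0 / runtime(p, 40.0, slow), 1.0 / runtime(p, 8.0, fast));
        return 0;
    }

Under that assumption the 8 fast cores stay ahead even at a 95% parallel fraction; the 40 simple cores only pull ahead as the workload gets almost perfectly parallel.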
 

Keysplayr

Elite Member
Jan 16, 2003
21,219
55
91
Originally posted by: SickBeast
Keys, if you look at what NV did right out of the gate with the TNT and GeForce cards, I think it shows that a good company can get it right the first time. In this business you sort of have to. If you can't recoup your R&D money with your first effort, you need a money tree and a ton of alcohol.

If you want to know what I think Larrabee will be, it is this: a 40-core CPU that can run as a GPU and compete at the midrange level with AMD and NV. If that is what it is capable of, I will be interested in one. Midrange graphics is fine, and there is a ton that you can do with 40 processors running general code that is available today.

With NV you're waiting for everyone to reprogram everything for CUDA, whereas with Intel you're not; you're simply stuck with inferior graphics until they can cram more cores onto the thing.

As for my "cube" comments - the right software tools will make it easy and fast to program for anything.

The TNT wasn't Nvidia's first graphics chip. It's no surprise you haven't heard of the earlier ones; they weren't that good: NV1, NV2, NV3 (Riva 128 and Riva 128ZX). Then the TNT came out.
Linky to story

So the TNT wasn't Nvidia's right-out-of-the-gate graphics venture.
 

ShawnD1

Lifer
May 24, 2003
15,987
2
81
Originally posted by: SickBeast
Intel has said that all of their driver programmers are focused on their current IGP solutions. They have usurped the 3DLabs people to work on Larrabee. This tells me two things:

- Larrabee will not need drivers
- Larrabee will be a monster when it comes to 3D rendering (hence the DreamWorks comments)

Just think of all the time saved in eliminating DX and OpenGL. The Mac and Linux suddenly become gaming platforms. Games will be able to run just as fast (or faster) in software mode compared to running in OpenGL/DX.

I don't quite understand what Larrabee is. Is this a graphics card or an old-school math coprocessor?

Here's a quote from that wiki page:
The original IBM PC included a socket for the Intel 8087 floating point coprocessor (aka FPU) which was a popular option for people using the PC for CAD or mathematics-intensive calculations. In that architecture, the coprocessor sped up floating-point arithmetic on the order of fiftyfold.
[.....]
The 8087 was tightly integrated with the 8088 and responded to floating-point machine code operation codes inserted in the 8088 instruction stream. An 8088 processor without an 8087 would interpret these instructions as an internal interrupt, which could be directed to trap an error or to trigger emulation of the 8087 instructions in software.

Then here's a quote from wiki's Larrabee page
However, Larrabee's hybrid of CPU and GPU features should be suitable for general purpose GPU (GPGPU) or stream processing tasks.[1] For example, Larrabee might perform ray tracing or physics processing,[3] in real time for games or offline for scientific research as a component of a supercomputer.

Really...
Havok Physics is a physics engine developed by Irish company Havok. It is designed for computer and video games by allowing interaction between objects or other characters in real-time and by giving objects physics-based qualities in three dimensions. By using dynamical simulation, Havok allows for more lifelike worlds and animation, such as ragdoll physics or intelligence in massive falling things. The company has also released a Havok Animation. Havok was purchased by Intel in 2007.
[......]
Havok can also be found in Autodesk Media & Entertainment's 3ds max as a bundled plug-in called reactor. A plugin for Autodesk Media & Entertainment's Maya animation software and an xtra for Adobe Director's Shockwave are also available.


Does it seem like we're going in circles? My 80386 had a second slot on it for a math coprocessor. Like wiki states, it's for things like CAD and other math-intensive applications. Coprocessors died out and my next computer, a Pentium 1, did not have such a slot. After that we had graphics cards for CAD/Maya to do essentially the same thing as the coprocessor did. Now Intel is making an x86-based "graphics card" that is designed to do floating point math. If this Larrabee card is closely tied to the CPU and can take over some (or all) floating point instructions directed at the CPU, we'll be exactly where we were 20 years ago.

Based on what I know, which is almost nothing, I'm going to guess that Larrabee's performance will suck ass compared to a graphics card. The biggest potential for catching on would be if the card is capable of doing general computing without making any software changes. If this thing is like having 2 central processors without needing some stupid $300 motherboard or slow ass "buffered" memory, I will buy one. Maybe offload some of the Havok calculations in Fallout 3.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Originally posted by: BenSkywalker
Any video encoding app would benefit without having to be patched or re-coded, for example.

That would ignore what a non-x86-based alternative with a comparable layout could do. Some here may demand ignorance to enter into a discussion, but in all seriousness x86 is about the poorest architecture you could imagine for this type of design. Spending die space on decode hardware, when tiny die space per core is the essence of your design goal, is rather foolish. What makes this worse, far worse, is that applications will still require a recompile in order to run on Larrabee; it isn't an OoO architecture - default x86 code would roll over and die running on it (it wouldn't be surprising to see a normal processor be faster on anything with decent amounts of branching).

With several hundred thousand transistors per core wasted on decode hardware, more transistors used to give each core full I/O functionality, and a memory setup that is considerably more complex than any of the other vector-style processor choices available, Larrabee is making an awful lot of compromises to potential performance in order to be more Intel-like than it needs to be.

Everyone seems to be taking the stance that Larrabee must have a lot going for it because of how much Intel is putting into it. Itanium, anyone? Everyone with so much as an extremely small dose of understanding knew that Itanium was going to be a huge failure in the timeframe it hit. Sadly, a VLIW setup for something like Larrabee would end up being a much better option than where they are headed.

I guess the best way to think of it is that Intel clearly sees a major movement in computing power, as does everyone else. The problem is, Intel wants to take as much lousy, outdated, broken-down crap with them as they can. We already have x86 as our main CPUs to handle that garbage; why do we need more of the same wasted die space on our GPUs? To make it so that lousy existing x86 code that isn't well suited for extreme levels of parallelization can be recompiled in an easier fashion? So let's prop up our outdated, poorly structured code base for a short-term gain and hold back everything else in the long term? Just doesn't make sense to me.

First accurate post of the thread... Larrabee is set to waste over 40% of its total space on redundant x86 decode hardware (one decoder for each core), and for what? No SSE, no out-of-order... NOTHING is going to run on it without a serious recompile and recode... so why bother with it in the first place? It's wasting space on a gimmick.

The way I see it, Intel is banking on taking a loss (by wasting nearly half the die on NOTHING) for the chance to get x86 to become the standard; if that happens, then they are granted legal monopoly status and no one may compete with them. It seems as clear as day that this could be their only course of action; Intel engineers are not stupid.

I think this is why the professor in question is joining the fight... Nvidia is the one company that stands a chance at breaking the x86 stranglehold and potentially getting us heterogeneous computing, although I wouldn't be surprised if they would just opt to displace Intel as the only legal monopoly backed by stupid misapplied patent laws.
 

SickBeast

Lifer
Jul 21, 2000
14,377
19
81
Taltamir I have some choice words for you but can't be bothered getting a vacation over it. Your first statement is both blatantly inaccurate and inflammatory. P&N is in another area here.
 

ShawnD1

Lifer
May 24, 2003
15,987
2
81
Originally posted by: BenSkywalker
Any video encoding app would benefit without having to be patched or re-coded, for example.
Everyone seems to be taking the stance that Larrabee must have a lot going for it because of how much Intel is putting into it. Itanium anyone? Everyone with so much as an extremely small dose of understanding knew that Itanium was going to be a huge failure in the timeframe it hit.
?????
http://en.wikipedia.org/wiki/Itanium
From 2001 through 2007, IDC reports that a total of 184,000 Itanium-based systems have been sold. For the combined POWER/SPARC/Itanium systems market, IDC reports that POWER captured 42% and SPARC captured 32%, while Itanium-based system revenue reached 26% in the second quarter of 2008.
[...]
An Itanium-based computer first appeared on list of the TOP500 supercomputers in November 2001.[25] The best position ever achieved by an Itanium 2 based system in the list was #2, achieved in June 2004, when Thunder (LLNL) entered the list with an Rmax of 19.94 Teraflops. In November 2004, Columbia entered the list at #2 with 51.8 Teraflops, and there was at least one Itanium-based computer in the top 10 from then until June 2007. The peak number of Itanium-based machines on the list occurred in the November 2004 list, at 84 systems (16.8%); by November 2008, this had dropped to nine systems (1.8%).[55]
The reason IA-64 is in decline is that x86 is on the rise. 6 of the top 10 computers are Opteron, 1 is a Xeon, 1 is a mix of Opteron/Cell. The other 2 are POWER.

Intel is quite good at making stuff. Their x86 processors beat Motorola/IBM's POWER processors; Apple computers were regularly outclassed while they still had PowerPC G3/G4/G5 processors. Intel is currently beating AMD at making x86 desktop, laptop, and netbook processors. Intel has always had excellent chipsets for their motherboards. I'm not sure if it's still true, but Intel made the best SSD a few months ago (Anandtech has a great article about it).

If Intel has a seemingly ridiculous design for their "graphics card", it seems reasonable to assume it's that way for a reason.

The way I see it, Intel is banking on taking a loss (by wasting nearly half the die on NOTHING) for the chance to get x86 to become the standard; if that happens, then they are granted legal monopoly status and no one may compete with them. It seems as clear as day that this could be their only course of action; Intel engineers are not stupid.

I think this is why the professor in question is joining the fight... Nvidia is the one company that stands a chance at breaking the x86 stranglehold and potentially getting us heterogeneous computing, although I wouldn't be surprised if they would just opt to displace Intel as the only legal monopoly backed by stupid misapplied patent laws.
This seems a bit paranoid. While Intel obviously does have a lot to gain/hold with x86, x86 is winning because it's simply better for most tasks. Intel doesn't even control the top 10 supercomputers in the world, but their architecture, in AMD's Opteron processors, does.

Nvidia and ATI's push towards GPU computing does absolutely nothing to stop the x86 stranglehold since they still run on x86 machines. Furthermore, Intel is one of the companies supporting OpenCL. It seems like Intel simply wants to have their own GPGPU product since they can't possibly stop GPGPU from happening.
 

Keysplayr

Elite Member
Jan 16, 2003
21,219
55
91
Originally posted by: SickBeast
Taltamir I have some choice words for you but can't be bothered getting a vacation over it. Your first statement is both blatantly inaccurate and inflammatory. P&N is in another area here.

What for? He is as sure Larrabee won't work as you are that it will. Is your opinion the only one that is allowed? If his post above is enough to generate "choice words" on your part, then you don't belong here. In fact, your saying you have vacationable "choice words" for him was more inflammatory than ANYTHING taltamir posted, which wasn't inflammatory at all.