nVidia scientist on Larrabee


SickBeast

Lifer
Jul 21, 2000
14,377
19
81
Originally posted by: Keysplayr
Originally posted by: SickBeast
Taltamir, I have some choice words for you, but I can't be bothered getting a vacation over it. Your first statement is both blatantly inaccurate and inflammatory. P&N is in another area here.

What for? He is as sure Larrabee won't work as you are that it will. Is your opinion the only one that is allowed? If his post above is enough to generate "choice words" on your part, then you don't belong here. In fact, you saying you have vacationable "choice words" toward him was more inflammatory than ANYTHING taltamir posted, which wasn't inflammatory at all.
Keep baiting me; it's not going to work. If anything, I'll send you a PM or tell you what needs to be said over at ABT. :light:
 

Keysplayr

Elite Member
Jan 16, 2003
21,219
55
91
Unless Intel pulls a total miracle, Larrabee, at least in its first incarnation, will not live up to its hype. IMHO. For some apps, its performance will be breathtaking. It has to; there is something out there that it can run better than anything else on the planet. But not in the market it is seemingly targeting. Another player is always welcome in my book, and I don't want to see Larrabee tank. But at this time, I just can't get behind it.
 

SickBeast

Lifer
Jul 21, 2000
14,377
19
81
Originally posted by: Keysplayr
Unless Intel pulls a total miracle, Larrabee, at least in its first incarnation, will not live up to its hype. IMHO. For some apps, its performance will be breathtaking. It has to; there is something out there that it can run better than anything else on the planet. But not in the market it is seemingly targeting. Another player is always welcome in my book, and I don't want to see Larrabee tank. But at this time, I just can't get behind it.

Well, for NV, Larrabee represents death, seeing as it could make CUDA redundant. Without an x86 license, NV will be SOL and pigeon-holed. I can see why you would want to refute my arguments so vehemently and even go so far as to call for me to be banned.

On another website that sort of crap might work out for you, Keysplayr. :thumbsdown:
 

Wreckage

Banned
Jul 1, 2005
5,529
0
0
Originally posted by: SickBeast

Well, for NV, Larrabee represents death

NVIDIA owns a part of Larrabee. They no more fear Larrabee than they do DirectX 10.1 :laugh:
 

SickBeast

Lifer
Jul 21, 2000
14,377
19
81
Originally posted by: Wreckage
Originally posted by: SickBeast

Well, for NV, Larrabee represents death

NVIDIA owns a part of Larrabee. They no more fear Larrabee than they do DirectX 10.1 :laugh:

One day you will come to terms with the fact that NV could die, Wreckage. The rest of us have. I know it's hard for you.
:rose:
 

Keysplayr

Elite Member
Jan 16, 2003
21,219
55
91
Originally posted by: SickBeast
Originally posted by: Keysplayr
Unless Intel pulls a total miracle, Larrabee, at least in its first incarnation, will not live up to its hype. IMHO. For some apps, its performance will be breathtaking. It has to; there is something out there that it can run better than anything else on the planet. But not in the market it is seemingly targeting. Another player is always welcome in my book, and I don't want to see Larrabee tank. But at this time, I just can't get behind it.

Well, for NV, Larrabee represents death, seeing as it could make CUDA redundant. Without an x86 license, NV will be SOL and pigeon-holed. I can see why you would want to refute my arguments so vehemently and even go so far as to call for me to be banned.

On another website that sort of crap might work out for you, Keysplayr. :thumbsdown:

What are you talking about?
 

ViRGE

Elite Member, Moderator Emeritus
Oct 9, 1999
31,516
167
106
Originally posted by: SickBeast
Well, for NV, Larrabee represents death, seeing as it could make CUDA redundant. Without an x86 license, NV will be SOL and pigeon-holed.
Considering that in either case you need to write embarrassingly parallel code to feed a GPU no matter who makes it, I'm not sure why any of this would make CUDA redundant overnight. There's no innate advantage to the x86 instruction set here, since there are no backward-compatibility needs to speak of. Intel could produce an amazing chip and toolset, establishing x86 as a de facto standard in the GPU space, but it's awfully early to call that.

Plus we'd also be talking about OpenCL failing (otherwise why not write everything in OpenCL and be sure it's compatible everywhere?), which seems unlikely given who's backing it and the lack of competition in that space (MS hasn't been pushing DXCS outside of game developers, from what I've seen).

CUDA (and Stream), as front-end programming interfaces, will be made redundant (so you're right on that bit), but I just don't see why it would be by x86. OpenCL is a far more likely candidate for the killer.
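
To illustrate (purely a throwaway sketch, the kernel and names are made up): the embarrassingly parallel kernel you feed a GPU looks essentially the same whether the front end is OpenCL or CUDA, which is why the underlying ISA, x86 or otherwise, doesn't buy Intel much here.

/* OpenCL C kernel: scale a vector in place; each work-item handles one element. */
__kernel void scale(__global float *data, float factor)
{
    size_t i = get_global_id(0);
    data[i] *= factor;
}

/* The CUDA version is nearly identical; only the launch plumbing differs:
 *
 * __global__ void scale(float *data, float factor)
 * {
 *     int i = blockIdx.x * blockDim.x + threadIdx.x;
 *     data[i] *= factor;
 * }
 */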

And SickBeast, go take a nap or get some fresh air or something. You're getting unruly and annoying - it's certainly not productive.
 

Keysplayr

Elite Member
Jan 16, 2003
21,219
55
91
Originally posted by: SickBeast
Originally posted by: Keysplayr
Unless Intel pulls a total miracle, Larrabee, at least in its first incarnation, will not live up to its hype. IMHO. For some apps, its performance will be breathtaking. It has to; there is something out there that it can run better than anything else on the planet. But not in the market it is seemingly targeting. Another player is always welcome in my book, and I don't want to see Larrabee tank. But at this time, I just can't get behind it.

Well, for NV, Larrabee represents death, seeing as it could make CUDA redundant. Without an x86 license, NV will be SOL and pigeon-holed.

You kind of have that reversed. Intel is too huge to entertain any thought of them going out of business, so I won't even go there. However, Intel DID see Nvidia as a threat starting with G80 and CUDA. They saw a GPU obliterate a CPU's performance in a few choice apps at first, then another, and another. They glimpsed their cluster server sales potentially tanking. Three GTX 280s (a $2,200 rig) in a single desktop PC outperforming a cluster of over 120 Intel Xeon CPUs (tens of thousands of dollars) taking up 40 rack spaces in a climate-controlled computer room was enough to make them order extra toilet paper, I can assure you of that. So Larrabee is Intel's first attempt at nipping their problem in the bud. And let's not forget, Intel's Larrabee won't be going up against G200/RV790 when it launches. GT300 and R8xx will be here, and who knows what they'll be capable of in their evolution. But you keep track of the news, so you should know all this already.

 

SickBeast

Lifer
Jul 21, 2000
14,377
19
81
Originally posted by: Keysplayr
Originally posted by: SickBeast
Originally posted by: Keysplayr
Unless Intel pulls a total miracle, Larrabee, at least it's first incarnation, will not live up to it's hype. IMHO. For some apps, it's performance will be breathtaking. It has to. There is something out there that it can run better than anything else on the planet. But not the market it is seemingly targeting. Another player is always welcome in my book, and I don't want to see Larrabee tank. But at this time, I just can't get behind it.

Well, for NV, Larabee represents death, seeing as it could make CUDA redundant. Without an x86 license, NV will be SOL and pigeon-holed.

You kind of have that reversed. Intel is too huge to entertain any thought of them going out of business, so I won't even go there. However, Intel DID see Nvidia as a threat starting with G80 and CUDA. They saw a GPU obliterate a CPU's performance in a few choice apps at first, then another, and another. They glimpsed their cluster server sales potentially tanking. Three GTX 280s (a $2,200 rig) in a single desktop PC outperforming a cluster of over 120 Intel Xeon CPUs (tens of thousands of dollars) taking up 40 rack spaces in a climate-controlled computer room was enough to make them order extra toilet paper, I can assure you of that. So Larrabee is Intel's first attempt at nipping their problem in the bud. And let's not forget, Intel's Larrabee won't be going up against G200/RV790 when it launches. GT300 and R8xx will be here, and who knows what they'll be capable of in their evolution. But you keep track of the news, so you should know all this already.

The problem is that CUDA requires a lot of support from the industry in order to implement it. If you look at the control that Intel and Microsoft have had over things for the past 20+ years, I don't think NV has much of a chance of their standard taking over.

I'm not saying this is good; I'm simply being realistic.
 

Keysplayr

Elite Member
Jan 16, 2003
21,219
55
91
OK, let's be realistic.

Scenario 1: Larrabee is released and it offers solid performance equivalent to, or even better than, then-available GPUs. Intel will definitely have the upper hand and will place enormous pressure on GPGPUs. The industry rapidly adopts Larrabee and can compile software for it easily with Intel's state-of-the-art compilers. Does not look good for AMD or Nvidia.

Scenario 2: Larrabee is released and it does OK against then-available GPGPUs, but not quite well enough. Not convincing enough to get current CUDA adopters to switch over from their mini mainframes. Intel would have to work very hard to show something worthy in Larrabee.

Scenario 3: We have another Pentium 4 on our hands and it's a nightmare to write proper code for. Intel is now committed to this for a few years, while AMD and Nvidia continue to improve their already impressive hardware.

I'm keeping this pretty brief; this could be discussed forever, and there will be other scenarios in between and beyond what I loosely listed here. Bottom line, though: if Intel doesn't get Larrabee right, they have a major problem. The first sign of Larrabee being a stinker would be a few delays of its launch. Look out for those. If there are none, it's probably a good sign for it.
 

SickBeast

Lifer
Jul 21, 2000
14,377
19
81
The worst case for Intel is that they lose out on some of their server business for the time being.

The worst case for NV is that they go out of business. That chance exists for Intel as well; however, the odds are stacked enormously in their favor.

I do agree with your scenarios, however, Keys.
 

Keysplayr

Elite Member
Jan 16, 2003
21,219
55
91
Originally posted by: SickBeast
BTW Virge, were you posting as a member or as a mod? :confused:

You know the difference by now between a mod posting as a moderator and posting as a member. Mod comments are always in bold and signed by said moderator.
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: SickBeast
The worst case for Intel is that they lose out on some of their server business for the time being.

The worst case for NV is that they go out of business. That chance exists for Intel as well; however, the odds are stacked enormously in their favor.

I do agree with your scenarios, however, Keys.
Out of business? Given that 75% of Nvidia's business is still solely focused on their GPUs, how would Larrabee threaten their existence? Are you now claiming Larrabee is going to be a viable alternative as a real-time graphics rasterizer? As has already been alluded to, even Intel backed off that stance long ago. They're now heavily pushing the HPC angle because there is at least a chance for them to compete there.

Originally posted by: SickBeast
BTW Virge, were you posting as a member or as a mod? :confused:
Dally could probably help you out here, as ViRGE's title is one often used in academia. It's an honorary title signifying retirement, probably due in large part to threads like these.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Originally posted by: Keysplayr
OK, let's be realistic.

Scenario 1: Larrabee is released and it offers solid performance equivalent to, or even better than, then-available GPUs. Intel will definitely have the upper hand and will place enormous pressure on GPGPUs. The industry rapidly adopts Larrabee and can compile software for it easily with Intel's state-of-the-art compilers. Does not look good for AMD or Nvidia.

Scenario 2: Larrabee is released and it does OK against then-available GPGPUs, but not quite well enough. Not convincing enough to get current CUDA adopters to switch over from their mini mainframes. Intel would have to work very hard to show something worthy in Larrabee.

Scenario 3: We have another Pentium 4 on our hands and it's a nightmare to write proper code for. Intel is now committed to this for a few years, while AMD and Nvidia continue to improve their already impressive hardware.

I'm keeping this pretty brief; this could be discussed forever, and there will be other scenarios in between and beyond what I loosely listed here. Bottom line, though: if Intel doesn't get Larrabee right, they have a major problem. The first sign of Larrabee being a stinker would be a few delays of its launch. Look out for those. If there are none, it's probably a good sign for it.

This is a nice summation of the branches in the timeline tree.

I would add that it needs to be taken against the backdrop of Intel's pocketbook and ever-growing process-technology advantage - meaning that even if Scenario 3 were the time-zero reality, it would simply be a matter of time for Intel to iterate Larrabee through die shrinks and ISA/architecture improvements and evolve the situation into scenario (2) and eventually (1).

Itanium serves as an example of such an evolution. As does the P4 -> Core -> i7 evolution. Persistence and money have a habit of doing this. Competitive advantage is rarely maintained by hope and morale alone.

The only way to argue that scenario (1) will not come to pass (eventually) is to require that Intel abandons the effort in some capacity (always a possibility) and pulls engineers off future iterations of the product, or to make the questionable argument that NV's engineers and management are somehow more crafty and clever, in critically defining ways, than Intel's engineers and management (i.e. NV and Intel do an AMD/Intel A64/Prescott thing).

In a capex-intensive industry such as semiconductors, it is essentially a consequence of the math that tomorrow's victor will be whoever has the money (and desire) to invest today toward owning a marketspace tomorrow. There are exceptions to the rule (A64/Prescott), but in an evolving marketspace time smooths out the exceptions and the end conclusion is just about inevitable.

The USSR eventually went bankrupt and the West eventually won the Cold War. With the money and technology advantages that Intel has over NV and AMD, it takes a rather peculiar set of boundary conditions to argue in good faith that the outcome isn't something of a foregone conclusion.
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: Idontcare
In a capex-intensive industry such as semiconductors, it is essentially a consequence of the math that tomorrow's victor will be whoever has the money (and desire) to invest today toward owning a marketspace tomorrow. There are exceptions to the rule (A64/Prescott), but in an evolving marketspace time smooths out the exceptions and the end conclusion is just about inevitable.
I'm not sure how the capex comparison directly applies to Nvidia, given that they're fabless and always have been. Sure, that places them at the mercy of the prevailing technologies made available by their fab partners, but it also greatly reduces their capex and risk in both the short and long term. Their recent quarterly financials clearly reflect this: despite the terrible economy, they were able to manage losses reasonably well with relatively low overhead and fixed costs.

Spending on R&D and leading minds clearly isn't a foreign concept to Nvidia, as the OP of this thread indicates with the hiring of Dally. Nvidia is a body shop and think tank more than anything, and while Intel also hires the best talent available, the capex concerns you mention are also a huge part of their business.

I'd also say Nvidia's efforts and resources have been directed in the right areas, as some of the key technologies they've been endorsing and developing, like hardware physics and HPC, were heavily criticized by their competitors in the past. The fact that those very same competitors are now singing a very different tune and pushing their own competing technologies is certainly a good indication that Nvidia was doing something right. ;)
 

akugami

Diamond Member
Feb 14, 2005
6,210
2,552
136
I think Keysplayr's scenario #2 is the most likely, and I agree with Idontcare that with Intel's bank account they can easily keep plugging away at Larrabee until it turns into scenario #1. I do not think Larrabee is going to be another Itanium. The reason Itanium failed was due to bad x86 emulation more than anything. Not being a programmer, my understanding of computer architectures is limited, but it seemed like Itanium had everything going for it BUT x86 performance. Intel's x86 architecture was simply so good it killed off IA64. AMD's x86-64 extensions worked because they added to x86 and didn't try to compete with it.

Developers familiar with x86 should have an easier time moving to Larrabee than to CUDA. Any developer currently contemplating coding part of their app to support CUDA, such as movie encoding or transcoding, has to seriously consider moving to Larrabee instead because of familiarity, as well as the likelihood of Intel improving the design enough to eventually match anything AMD or nVidia can do on the GPGPU front while offering decent traditional GPU performance. This is definitely a case where Intel's monopoly helps them in the long run.
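
As a rough sketch of what that familiarity buys (assuming, as Intel was claiming at the time, that Larrabee can be targeted with standard x86 compilers; the function here is made up for illustration): the many-core x86 path is just an ordinary C loop with something like OpenMP, while the CUDA path needs a separate __global__ kernel plus host-side memory copies and a kernel launch.

#include <stddef.h>

/* Scale a buffer in place. The OpenMP pragma spreads the iterations across
 * however many x86 cores the chip exposes, and the compiler is free to
 * vectorize the loop body; build with an OpenMP-enabled compiler. */
void scale(float *data, size_t n, float factor)
{
    #pragma omp parallel for
    for (long i = 0; i < (long)n; i++)
        data[i] *= factor;
}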

One thing I've not seen mentioned is a CPU/GPU marriage in something like Havok and game artificial intelligence. While new games are making better use of multiple CPU cores, it is also true that we'll be moving to more cores as we get process shrinks. In a couple of years it is not out of the question that future Havok APIs will marry CPU and GPU performance for AI routines as well as physics acceleration.

This is something that can only be an advantage for Intel and AMD, and while nVidia certainly can do the same with a reworked PhysX API, they are definitely going to be half a step behind both AMD and Intel. That half step is why AMD is never going to support PhysX on CUDA, though PhysX on OpenCL is another matter, since that levels the playing field. This CPU/GPU marriage would also be of tremendous help to those coding apps such as audio encoding, video encoding or photo retouching, such as the Adobe CS bundle.

While I do not doubt that nVidia can rework CUDA for similar uses, I'm sure there are many developers who would prefer working with one familiar architecture across the board (which can save time) rather than with different architectures to achieve the same thing.

This is only the rambling of an end user; any inaccuracies can be attributed to me making guesses rather than actually being in the know about anything.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
As does the P4 -> Core -> i7 evolution
The P4 design was literally thrown in the trash. The mobile Pentium, a P3 derivative made by the Israel team, was adapted to the desktop and turned into the Core design, combined with changes to the manufacturing process (high-k metal gate), and the i7, while based on Core, takes some revolutionary leaps with several major changes of its own.
This isn't an evolution that came from sticking with the same thing, because NetBurst was literally thrown away and is unused.

This seems a bit paranoid. While Intel obviously does have a lot to gain/hold with x86, x86 is winning because it's simply better for most tasks. Intel doesn't even control the top 10 supercomputers in the world, but their architecture, in AMD's Opteron processors, does.
Well, there are two issues here: the core design and the x86 instruction set. The instruction set is not "better for some tasks"; it is an instruction set. It makes it easier to write a compiler, at the cost of die space: a total waste on a GPU, but quite useful in a single-core CPU.
The die design IS pretty useful, though... for certain very specific tasks... but for most parallel tasks it is not...

Someone here posted an excellent link to a truly parallel design where there are simply no threads; the software doesn't know or care how many processors there are, and it scales perfectly and infinitely... and this is not something ANY of these companies are looking at right now.
 

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
Originally posted by: taltamir
As does the P4 -> Core -> i7 evolution
The P4 design was literally thrown in the trash. The mobile Pentium, a P3 derivative made by the Israel team, was adapted to the desktop and turned into the Core design, combined with changes to the manufacturing process (high-k metal gate), and the i7, while based on Core, takes some revolutionary leaps with several major changes of its own.

Actually, the indirect branch predictor used on the Pentium M (Dothan/Banias) and the Core Duo is the same one used on the Prescott processor, further enhanced on the Core 2 architecture and the Core i7; it's the only useful thing that came from the NetBurst architecture.
 

aka1nas

Diamond Member
Aug 30, 2001
4,335
1
0
Originally posted by: SickBeast


The problem is that CUDA requires a lot of support from the industry in order to implement it. If you look at the control that Intel and Microsoft have had over things for the past 20+ years, I don't think NV has much of a chance of their standard taking over.

I'm not saying this is good; I'm simply being realistic.

I would imagine that Larrabee will be going up against OpenCL more than CUDA by the time it is available. CUDA's market penetration is still low enough that many of those developers will likely just rewrite in OpenCL to pick up the other hardware platforms (I'm mostly thinking about HPC here).

There really doesn't seem to be a lot of reason to hand-code an application just for Larrabee in a world where OpenCL has already been adopted.
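
For what it's worth, the portability argument shows up right in the OpenCL host API: the application asks the runtime which platforms are installed (NVIDIA, AMD, Intel, whoever) and builds the same kernel source for whatever it finds. A minimal sketch, assuming an OpenCL SDK/ICD is installed, with error handling trimmed:

#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platforms[8];
    cl_uint num_platforms = 0;

    /* Enumerate every installed OpenCL implementation. */
    clGetPlatformIDs(8, platforms, &num_platforms);

    for (cl_uint i = 0; i < num_platforms; i++) {
        char name[256];
        clGetPlatformInfo(platforms[i], CL_PLATFORM_NAME,
                          sizeof(name), name, NULL);

        cl_device_id device;
        cl_uint num_devices = 0;
        /* The same kernel source can be built for any device found here. */
        clGetDeviceIDs(platforms[i], CL_DEVICE_TYPE_ALL, 1,
                       &device, &num_devices);

        printf("Platform %u: %s (%u device(s))\n", i, name, num_devices);
    }
    return 0;
}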
 

alyarb

Platinum Member
Jan 25, 2009
2,425
0
76
I can't find any news sites that say one way or another whether Larrabee supports OpenCL. I see a lot of "ifs"; Fudzilla says it does support OpenCL, but they don't cite their source.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Originally posted by: evolucion8
Originally posted by: taltamir
As does the P4 -> Core -> i7 evolution
The P4 design was literally thrown in the trash. The mobile Pentium, a P3 derivative made by the Israel team, was adapted to the desktop and turned into the Core design, combined with changes to the manufacturing process (high-k metal gate), and the i7, while based on Core, takes some revolutionary leaps with several major changes of its own.

Actually, the indirect branch predictor used on the Pentium M (Dothan/Banias) and the Core Duo is the same one used on the Prescott processor, further enhanced on the Core 2 architecture and the Core i7; it's the only useful thing that came from the NetBurst architecture.

Point taken; they kept the branch predictor, but the rest was thrown away...

And yes, with all three going to do OpenCL, I don't see why companies would opt for x86, or CUDA, or whatever AMD has exclusively, when they can support all three with OpenCL or DX11... and it's not like the first versions of Larrabee will even make the cores accessible to Windows directly (as CPUs); that is slated for later versions.