
Final Word on Larrabee

ATI-Crossfire
Nvidia-SLI
Intel-Servers

One of them is doing it wrong. 😀

Heh, one of them is doing it right if they convince gamers to pay for four servers packed with knights ferry cards 😉

Selling ice-boxes to eskimos is the saying.

(what are Nvidia's and AMD's trailing 4Q net profit for graphics divisions?)
 

lol, that's a big "if", but you're right, if they managed it, it would be the biggest Marketing Win ever.
 

AMD's graphics revenue in Q4 '09 was ~35% of its full-year graphics revenue ($427 million in Q4 vs. $1.206 billion for FY2009). Their graphics operating income in Q4 was $68 million.
I don't think Nvidia breaks out those numbers (or I just couldn't find them...).
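For anyone who wants to sanity-check the ~35% figure, it falls straight out of the two numbers quoted above (a quick sketch; the dollar figures are just the ones from the comment):

```python
# Figures as quoted above: AMD graphics segment, Q4 2009 vs. full-year 2009.
q4_revenue_usd = 427e6     # Q4'09 graphics revenue
fy_revenue_usd = 1.206e9   # FY2009 graphics revenue

share = q4_revenue_usd / fy_revenue_usd
print(f"Q4 share of FY2009 graphics revenue: {share:.1%}")  # -> 35.4%
```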
 
They seem to gravitate towards the odd when it comes to video. Like the i740 (IIRC that's what it was called) and its low-RAM idea. I suspect these are just really smart people (I'm not always convinced of this, though) seeing a problem and addressing it, but not really understanding the fundamentals of the thing they're trying to fix.

I highly suspect that both ATI and NVidia were kinda biting their tongues during the whole attempt, both knowing numerous reasons why it was going to fail but not wanting to help.
I think it was pretty fricken obvious that the attempt would fail, and I said as much in one of my old posts. Specialized silicon has a HUGE performance advantage over software rendering. Just try using a software renderer and see how slow it is. I'm talking 2-3 orders of magnitude. Making a more computationally dense part like Larrabee can close the gap by about one order of magnitude, but that's still not enough to close it completely. The idea was foolhardy to begin with, and Intel was foolish to try it.
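To put the "orders of magnitude" point in perspective, here's a rough back-of-the-envelope sketch of the raw pixel throughput software rendering has to sustain (the per-pixel op count is an illustrative assumption, not a measurement):

```python
# Back-of-the-envelope: shading cost of a 1080p/60 stream in pure software.
# OPS_PER_PIXEL is an assumed figure for a simply shaded, textured pixel.
WIDTH, HEIGHT, FPS = 1920, 1080, 60
OPS_PER_PIXEL = 50

pixels_per_second = WIDTH * HEIGHT * FPS
ops_per_second = pixels_per_second * OPS_PER_PIXEL
print(f"{pixels_per_second:,} pixels/s -> {ops_per_second / 1e9:.1f} G ops/s")
# -> 124,416,000 pixels/s -> 6.2 G ops/s
```

And that's before geometry, overdraw, filtering, or blending; fixed-function GPU hardware absorbs that workload in dedicated units, which is where the 2-3 orders of magnitude come from.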
 
The idea was foolhardy to begin with, and Intel was foolish to try it.

Using terms like "foolish" is a delicate matter. Just because a project never made it to a product doesn't make it a complete failure; I'm sure much knowledge was gained along the way. Yes, it cost Intel's CTO his job, but was that his own fault for over-promising?

While I'm sure Intel would rather have a sellable product, a negative result is still valuable. After all, knowing what doesn't work is just as important as knowing what does.
 
Actually, I think that's exactly what he did, and that's what kept the deal from happening. JHH's company might be down right now, but he's a bad mofo. It'll be interesting to see whether, sometime down the road, Intel regrets that decision.
 

Intel could always initiate a hostile takeover. I'd bet a lot of shareholders would tender their shares for a 40% premium. I'm not aware of Nvidia having adopted any poison-pill measures.
 
If Intel extends their fixed-function units a bit over a few generations (less direct mapping of DX, but still fairly simple computational units, something halfway between SB's IGP and current Radeons), they'll have the flexibility to tackle some corner cases better, and the many-core CPU can serve as leverage for branchy code. For all of its ills, Hyperthreading ought to be excellent for hiding the small wait times between CPU and GPU when dealing with fine-grained work.

Combine that with having the expensive many-weak-core devices for the HPC market, and nVidia will end up even more screwed. They're in no short-term danger, IMO, but they're going to have to push down some of that pride and ego, and learn from their competitors.

As part of that, they will need to stop with the black boxes (same goes for you, Imagination!). Hide the secret sauce behind well-documented interfaces, because the ability of Linux kernel and Xorg hackers to fix bugs will come to matter more than all the performance you can bring. Also, outside of Windows at least, I doubt we're all that far away from FOSS compilers doing well with GPGPU (when/if Gallium3D becomes stable and supports a few FOSS-friendly GPUs, the capability will be there to exploit).
 