
BSN: Intel Larrabee finally hits 1TFLOPS - 2.7x faster than nVidia GT200!


IntelUser2000

Elite Member
Oct 14, 2003
7,596
2,457
136
Wait so BSN is only just now getting around to reporting the stuff you already posted about weeks ago!?

I assumed it was additional public benchmarking, not a rehash of the same old benchmark run.

My bad :oops:

Do you know if this 48-core "Single-chip Cloud Computer (SCC)" is a Larrabee derivative or a wholly different design/architecture?

http://news.cnet.com/8301-1001_3-10407818-92.html
Don't worry, last time the report was done there was no video. :)


The 48-core chip is unrelated to Larrabee. It's a test chip built to experiment with future multi-processor scaling and software optimization. They'll make a few hundred to send to developers.

So here's what BSN also says:

- EVGA/XFX are known launch partners (this seems very likely)
- Late Q2 release for HPC parts, and even later for consumer parts (that seems far off, but OK)
- But Intel says the performance will be good :D (unknown)
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
I hope I'm not dumbing it down too much (simply due to my lack of experience and knowledge in this field), but if Larrabee is that far behind in ray-tracing capabilities, is Intel just trying to sell mini-supercomputers (that hopefully scale very well)? What exactly would be the benefit to consumers? Encoding/compiling boosts? Or are they trying to go for broke and capture the scientific sector as well (are there any reports of double-precision performance)?
Larrabee should be a beast for GPGPU tasks, not so much on the graphics front. I think Intel was planning Larrabee as a defensive measure against nV/ATi moving into that sector; having the GPU market offset the R&D costs for such a part makes for a very interesting business case. The issue I see with their hardware at this point is that they don't have remotely enough power to be competitive with ATi's or nV's much older hardware. Their entire chip offers performance comparable to just the shader hardware on the other parts, and that ignores the fact that every bit of Larrabee's performance sits in one giant pool that must also handle all of the basic ops required for rasterization. Worse, due to the nature of the chip, even those simple operations become considerably more complex, because an emulation layer is needed to make the part act as a rasterizer.

What it looks like at this point is that Larrabee will be comparable to Fermi in the GPGPU space and to a ~9400GT in the graphics space, if they are lucky.
 

IntelUser2000

Elite Member
Oct 14, 2003
7,596
2,457
136
http://www.semiaccurate.com/2009/11/16/fermi-massively-misses-clock-targets/

The numbers we have been hearing for GT300/Fermi for the past few months were 768 GigaFLOPS for double-precision floating-point and 1.5 TeraFLOPS in single-precision floating-point.
What Nvidia said in its presentation, however, was 520 to 630 GigaFLOPS in double-precision floating-point. A quick trip to the calculator says 630/768 = 0.82, or nearly a 20% clock miss.
That's no better than Larrabee if the first quote is true, and slower if the second is. But since they are massively delayed, and the rule of delay (call it IntelUser's rule :) ) says it'll end up slower, the second is more likely. It's not just Charlie's numbers; I have seen that figure quoted somewhere else too.

Of course FLOPS aren't everything, especially in HPC. Larrabee is more likely to reach its peak. Games are a different story, but we know nothing there.
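If anyone wants to poke at the napkin math, here's a minimal sketch of it in Python. All of the figures are the rumored/presented numbers quoted above, not confirmed specs:

```python
# Back-of-the-envelope check of the "clock miss" arithmetic quoted above.
# All figures are rumored/presented numbers, not confirmed specs.

rumored_dp_gflops = 768.0     # what the rumor mill had for GT300/Fermi double precision
presented_dp_gflops = 630.0   # upper end of the range from Nvidia's own presentation

ratio = presented_dp_gflops / rumored_dp_gflops
print(f"{presented_dp_gflops:.0f} / {rumored_dp_gflops:.0f} = {ratio:.2f} "
      f"-> roughly {(1 - ratio) * 100:.0f}% below the rumored target")
# For context, the Larrabee number being thrown around in this thread is the
# ~1 TFLOPS single-precision SGEMM demo, which is not directly comparable to
# a double-precision figure.
```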
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
Well, back to the broken record. You can't really talk about Nehalem without considering Larrabee. As I have said for three years now, they will work better together than any combo on the planet. Gaming will be slow to come around because of software development, but Intel is buying companies and spending on development. Gaming software will hold Larrabee back for a time. As I said, you really won't know Nehalem until you're introduced to Larrabee. Then there is that third part. Now what was that? Oh yes, software. Intel Ct is going to kick ass on a Nehalem/Larrabee setup. I literally mean kick ass. Even single-threaded programs get broken up and run as multi-threaded if the programmer wishes it to be so. So yes, I want to see Intel Nehalem / Intel Larrabee / Intel Ct vs. NV ???? / NV Fermi / NV CUDA. Let the battle stations begin.

May the Best tech WIN!!
 

Genx87

Lifer
Apr 8, 2002
41,063
495
126
Larrabee should be a beast for GPGPU tasks, not so much on the graphics front. I think Intel was planning Larrabee as a defensive measure against nV/ATi moving into that sector; having the GPU market offset the R&D costs for such a part makes for a very interesting business case. The issue I see with their hardware at this point is that they don't have remotely enough power to be competitive with ATi's or nV's much older hardware. Their entire chip offers performance comparable to just the shader hardware on the other parts, and that ignores the fact that every bit of Larrabee's performance sits in one giant pool that must also handle all of the basic ops required for rasterization. Worse, due to the nature of the chip, even those simple operations become considerably more complex, because an emulation layer is needed to make the part act as a rasterizer.

What it looks like at this point is that Larrabee will be comparable to Fermi in the GPGPU space and to a ~9400GT in the graphics space, if they are lucky.
Which honestly is good enough for the integrated desktop market. I will be honest and admit that if they only hit 9400GT performance it will be a disappointment. One of the biggest drawbacks in the desktop market in terms of gaming is that 70% of the machines shipped can't play a game from 2001. Granted, 9400GT performance is better than that, but game devs will still be reluctant to move back to the huge PC market with so many shitty systems shipping by the hour.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
That's no better than Larrabee if the first quote is true, and slower if the second is. But since they are massively delayed, and the rule of delay (call it IntelUser's rule) says it'll end up slower, the second is more likely. It's not just Charlie's numbers; I have seen that figure quoted somewhere else too.
So the entire Larrabee chip may be able to outperform the shader hardware on Fermi. In a discussion on the CPU forum that would be an excellent point; in the video forum I would point out that Larrabee has to worry about the truly monstrous beast it needs to compete with: ATi's and nV's rasterizer hardware (which is far beyond what anyone else has ever done).

Games are a different story, but we know nothing there.
Yes, we know quite a lot actually. Abrash has filled in how the render engine in Larrabee is going to work, and it is going to be utterly demolished if it doesn't get a quick order-of-magnitude increase in power by the time mid-range parts arrive. Due to the vector layout on Larrabee and the base P54C core it uses, a pixel-block write (single-pixel writes are going to be far too expensive) is going to be a nigh perfectly linear 1:1 takeaway from its raw FLOPS performance. If we task Larrabee with a paltry 163 MPixels, it is down to 837 GFLOPS, but those pixels also need texture samples taken and blended. Using the simplest operation anyone would remotely consider, bilinear filtering, you are looking at 5 ops per pixel. Oops: we don't have any ops left to handle geometry transformation, lighting, or any shader effects at all. That is the reality Larrabee is dealing with. 1 TFLOP of shader power on top of a decent rasterizer is quite decent. 1 TFLOP of total power available to a GPU emulator? Catastrophic.
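To make the budget argument concrete, here's a minimal sketch of it in Python. The per-stage costs are assumptions for the sake of argument (the same ones used above), not measured Larrabee figures:

```python
# Rough sketch of the single-pool FLOPS budget described above.
# The numbers are assumptions for the sake of argument, not measured figures.

TOTAL_GFLOPS = 1000.0   # ~1 TFLOPS for the entire chip

def spend(budget_gflops: float, cost_gflops: float, stage: str) -> float:
    """Every stage draws from the same pool, so each one is a straight subtraction."""
    remaining = budget_gflops - cost_gflops
    print(f"{stage:<24} -{cost_gflops:>5.0f} GFLOPS -> {remaining:>5.0f} GFLOPS left")
    return remaining

budget = TOTAL_GFLOPS
budget = spend(budget, 163.0, "pixel-block writes")   # the 163 MPixel workload above
# Texture sampling/blending (bilinear at ~5 ops/pixel), geometry transform,
# lighting and all shader effects still have to come out of this same pool,
# whereas a conventional GPU handles rasterization in dedicated hardware.
```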
 

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
61
Still though, if you can sell me an x86 ISA-compatible processor with compiler support that hits 1TFlops for <$500 you will get my attention.
Aren't Stream and CUDA able to hit nearly 80% of their theoretical speeds on their respective GPUs? It isn't x86, but depending on your type of code, that may not mean much.
I downloaded Sandra 2010 and, using the GPGPU Stream benchmark, it scored 1.12 TFLOPS, close to the theoretical speed of my GPU.
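For reference, here is a minimal sketch of how a measured number compares against theoretical peak. The specs below describe an HD 4870-class card purely as an example; plug in your own GPU's ALU count and clock:

```python
# Minimal sketch: theoretical peak vs. a measured benchmark number.
# The specs below describe an HD 4870-class card and are only an example;
# substitute your own GPU's figures.

alus = 800                    # stream processors
flops_per_alu_per_clock = 2   # a multiply-add counts as 2 FLOPs
clock_ghz = 0.75

peak_tflops = alus * flops_per_alu_per_clock * clock_ghz / 1000.0   # 1.2 TFLOPS
measured_tflops = 1.12        # e.g. a Sandra 2010 GPGPU Stream result

print(f"Theoretical peak: {peak_tflops:.2f} TFLOPS")
print(f"Measured / peak:  {measured_tflops / peak_tflops:.0%}")
# Dense, regular kernels like this can get close to peak; most real code will not.
```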

Due to the vector layout on Larrabee and the base P54C core it uses, a pixel-block write (single-pixel writes are going to be far too expensive) is going to be a nigh perfectly linear 1:1 takeaway from its raw FLOPS performance. If we task Larrabee with a paltry 163 MPixels, it is down to 837 GFLOPS, but those pixels also need texture samples taken and blended. Using the simplest operation anyone would remotely consider, bilinear filtering, you are looking at 5 ops per pixel. Oops: we don't have any ops left to handle geometry transformation, lighting, or any shader effects at all. That is the reality Larrabee is dealing with. 1 TFLOP of shader power on top of a decent rasterizer is quite decent. 1 TFLOP of total power available to a GPU emulator? Catastrophic.
But isn't Larrabee a superscalar architecture? Whether it is or isn't, I doubt it will be fast enough for decent gaming.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
But isn't Larrabee a superscalar architecture?
Yep, and I was figuring on using every bit of that, including full optimization of the vector units; obviously Larrabee isn't going to hit 1 THz clock speeds ;)
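Just to spell out why nobody needs terahertz clocks to get there, here's a minimal peak-FLOPS sketch. The core count and clock are assumptions (Intel never confirmed a shipping configuration); only the 16-wide vector unit is a known part of the design:

```python
# Sketch of where ~1 TFLOPS comes from: many cores and wide vectors, not clock speed.
# Core count and clock are assumptions; the 16-wide SIMD unit is part of the design.

cores = 32            # assumed core count
simd_lanes = 16       # 512-bit vectors = 16 single-precision lanes
flops_per_lane = 2    # a fused multiply-add counts as 2 FLOPs
clock_ghz = 1.0       # assumed clock

peak_gflops = cores * simd_lanes * flops_per_lane * clock_ghz
print(f"{cores} cores x {simd_lanes} lanes x {flops_per_lane} FLOPs x {clock_ghz} GHz "
      f"= {peak_gflops:.0f} GFLOPS")   # ~1 TFLOPS without exotic clocks
```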
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
Hey, you got your wish.
It's not my wish; I'd much rather have a wider selection of GPUs to choose from. What we were looking at was a dump truck showing up for a Formula 1 race. I have been laughing at it all along, while quite a few people bought the utterly absurd hype that it was going to be competitive. I'd love to see Intel take the GPU market seriously; so far they most certainly haven't.
 
