Intel Larrabee is capable of 2 TFLOPS


RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Originally posted by: Aberforth

1. The GTX 280 is only 10% faster than the HD 4870, and that's with an oversized 1.4 BILLION TRANSISTOR GPU; the problem lies in their bloated architecture.

I'm not talking about their architecture, just saying a TFLOP is a theoretical floating-point figure. You wouldn't blindly compare power supplies by looking at wattage alone.

2. Come on, there are issues with drivers for every piece of hardware. 30% of Vista crashes are due to NV drivers, not Intel. Take a look at the Nvidia driver forums; there are at least a dozen people complaining every day about TDR errors and SLI issues.

Only Intel took years to get hardware T&L enabled on their onboard graphics, not to mention tons of game incompatibilities, such as with Halo.
 

Schadenfroh

Elite Member
Mar 8, 2003
38,416
4
0
Originally posted by: schneiderguy

Well, it sure is faster at increasing your electricity bill :Q

Hopefully you will be able to plug the GPU into the wall like the Voodoo5 6000.
 

extra

Golden Member
Dec 18, 1999
1,947
7
81
Hehehe. I'll believe it's competitive with Nvidia's and AMD/ATI's parts when I see it.

TFLOPS... okay... that has absolutely nothing to do with how well it will play games, or whether the drivers will even be good enough that the games you want to play will work. And let's be realistic here--of the great video card driver writers out there, at least at the moment, none of them appear to work at Intel, lol.

What we can reliably guarantee:

This thing will be great for general-purpose number-crunching type stuff, stuffed in great numbers onto cards in a rack.
 

geoffry

Senior member
Sep 3, 2007
599
0
76
Originally posted by: SilentRunning

I believe nRollo is referring to Intel's i740 aka StarFighter graphics.

You are right, I misread that part and thought he said IGP.
 

ajaidevsingh

Senior member
Mar 7, 2008
563
0
0
I was thinking that 2.5GHz+ is a lot of speed, and you need to break that for the 2 TFLOP SP figure. Then it hit me like a pile of bricks... THE ATOM... It's 2GHz, gives out very little heat, and it's only 25mm^2. Our best RV770 is 250mm^2, which is way too huge.

Just think: you could have 10 Atoms in place of a single RV770, and that's before Intel strips the in-order chip of anything it doesn't need and makes it even smaller. How about 20mm^2? That's 15 Atoms instead of an RV770.

Scary, I know, but the Atom platform is the best: it's small, cheap to make (in-order chips), not hot like my Prescott was, and best of all, energy efficient.

Single Atom = 2.5W TDP and 25mm^2
32 Atoms = 80W TDP and 800mm^2

32 Atom-derived Larrabee cores (speculation) = 70W TDP & 600mm^2

Now the R700 will have a total of 500mm^2, and the GTX 280 already has a huge die size. I do know that AMD may move along to 45nm, but then again that chip plant does not think so and is delaying the shift to 45nm!!!
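A minimal sketch (Python) of the totals above, using the post's own assumed per-core figures rather than any published specs, just to show they are straight linear scaling:

```python
# Naive linear scaling of the assumed per-core Atom figures from the post above.
# The 2.5 W and 25 mm^2 inputs are the post's assumptions, not official specs.
atom_tdp_w = 2.5
atom_area_mm2 = 25.0
cores = 32

total_tdp_w = cores * atom_tdp_w        # 80.0 W
total_area_mm2 = cores * atom_area_mm2  # 800.0 mm^2

print(f"{cores} cores -> {total_tdp_w:.0f} W TDP, {total_area_mm2:.0f} mm^2")
# The 70 W / 600 mm^2 "Larrabee core" guess further assumes Intel strips out
# logic the design doesn't need; that reduction is pure speculation.
```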
 

Keysplayr

Elite Member
Jan 16, 2003
21,219
55
91
Originally posted by: AmberClad
Originally posted by: geoffry
Originally posted by: keysplayr2003

Who's talking about market share here? Straight-up performance is what I think is the center of discussion here. Intel could sell a billion Larrabees, but they could all suck just as badly as their IGPs.

While it may not be important to you for some reason, if they sell a decent amount of the initial cards, even if it isn't a monster, it would allow Intel to viably continue development of future cards and become even more competitive in the long term.

I would think you would be happier to see a third firm enter the discrete GPU market.
Are you unaware that keys is an Nvidia Focus Group member and that he receives free hardware from NV? Based on your comment, it would seem that you aren't. I don't mean that as a flame to anyone, but your comment seems odd and out of place, given the situation.

Situation? OK, this is getting ridiculous. Something needs to be done here. You guys cannot keep doing this over and over again in the middle of every discussion we have.

OT: Intel and I go waaaaay back, and they're still my favorite. Fearing that Intel would best Nvidia or ATI never even crossed my mind. But you both certainly thought so without truly knowing.
I'll ask you to stop doing my thinking for me and just carry on the discussion normally. Thanks in advance.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
If everything we are reading here is accurate, Intel is going to be utterly humiliated, and badly.

Let's take every performance metric as absolute. 2 TFLOPS sounds like a great number when you compare it to a current rasterizer. The architecture should lend itself nicely to executing graphics code in terms of extracting parallelism, but why aren't we seeing an IPS rating anywhere? Long before FLOPS mattered in the graphics industry, it was all about IPS (instructions per second), the area where the majority of your actual rasterization comes into play. Furthermore, where is the enormous amount of eDRAM going to be on this chip, and why no more details on that? On a system level they aren't going to have remotely close to enough bandwidth to handle anything resembling rasterization by today's standards, which puts them into a place we already know they are trying to go: ray tracing. Ray tracing uses a staggeringly smaller amount of read/write bandwidth than rasterization and would lend itself nicely to this general layout. However, it seems to me that they are several orders of magnitude short of where they need to be for real-time ray tracing to be remotely comparable to rasterization. 2 TFLOPS placed on top of a monster rasterizer is a lot. 2 TFLOPS as a full rendering engine is good for a nice chuckle.
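To put rough numbers on that last point, here is a back-of-envelope budget sketch (Python). Every input -- resolution, frame rate, rays per pixel, FLOPs per ray -- is an illustrative assumption, not a Larrabee spec, and it counts arithmetic only, ignoring bandwidth and the gap between peak and sustained throughput:

```python
# Back-of-envelope FLOP budget for real-time ray tracing on a 2 TFLOPS part.
# All inputs below are illustrative assumptions, not published figures.
PEAK_FLOPS = 2e12           # claimed theoretical peak
WIDTH, HEIGHT = 1920, 1200  # assumed output resolution
FPS = 60                    # assumed target frame rate
RAYS_PER_PIXEL = 10         # primary plus shadow/secondary rays (assumption)
FLOPS_PER_RAY = 1_000       # traversal + intersection + shading cost (assumption)

pixels_per_second = WIDTH * HEIGHT * FPS
budget_per_pixel = PEAK_FLOPS / pixels_per_second
required_flops = pixels_per_second * RAYS_PER_PIXEL * FLOPS_PER_RAY

print(f"budget: ~{budget_per_pixel:,.0f} FLOPs per pixel per frame")
print(f"needed at these assumptions: {required_flops / 1e12:.2f} of 2.00 TFLOPS peak")
```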

Then we get to the other end of the spectrum they are going after: HPC. Intel has tried approaches like this in the past, and they have run into the same problem every other CPU maker has when trying to extract the level of parallelism they are shooting for: figuring out a way to get their compilers to work with their processor architecture. This seems to be the fundamental difference between the direction Intel is headed and where nV is going. nVidia seems to be taking their compiler and code base and then building their processors around what the compiler needs to operate most effectively. Don't get me wrong, I think Intel has the most talented compiler coders in the world (and I say that without exception), but anyone who has tried to deal with the problem knows how incredibly daunting it is. I know, a lot of people will rightly point out the incredible resources at Intel's disposal and the fact that they can devote nigh-infinite resources to the problem, at least compared to their competition. That is a well-reasoned and well-informed point, but I would have to counter with a single word.

Itanium

If you don't know what it is, look it up ;)

Intel has failed in catastrophic fashion twice before: once when trying to enter the discrete graphics market, and once when trying to enter HPC with their explicitly parallel instruction computing. Intel is obviously a very smart company, but one can't help but wonder how they expect to conquer the two markets that soundly rejected them in the past with the same part.

Edit- Just want to point out for the more well-read members that I am grossly oversimplifying, as I don't think this is the ideal space to explain exactly what the difference is between IPS and FLOPS in terms of execution units and resources required, and how that relates. Yes, it was intentional; just trying to get the idea across ;)
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
The reason 2 TFLOPS is important is that it is almost double everything else on the market: the RV770 is 1.2 TFLOPS, and the GTX 280 is 0.9 TFLOPS.
A C2Q quad core? 30 gigaflops (0.03 TFLOPS).

So CUDA and the AMD equivalent are great for physics, great for scientific stuff, great for video encoding, great for Folding@home... they are over 20 times faster than the fastest quad-core CPU when it comes to those...
If Intel delivers a board that gives twice that FLOP power for the apps that need it, with x86 extensions to boot, you suddenly have what is essentially a "math extension board", which will annihilate CUDA and AMD's equivalent (whatever it is called, where code is run on the GPU). Being able to do some graphics, well, that is just a bonus.

Originally posted by: Intelman07
Originally posted by: RussianSensation
1. You can't measure performance through TFLOPS alone, as the GTX 280 is faster than the HD 4870.
2. Even if Intel can bring you the fastest graphics card, their driver support has no proven track record whatsoever.

What does a TFLOP mean, then? If the HD 4870 has more, why shouldn't it be quicker? Is it drivers? Code tweaking?

FLOPS = Floating Point Operations Per Second.
A floating point operation is a mathematical calculation involving a decimal number (vs. an integer operation, in which no decimals/fractions are allowed).
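Since the thread keeps coming back to these headline numbers, a minimal sketch of where they come from may help: theoretical peak is just execution units × FLOPs issued per unit per clock × clock speed. The unit counts and clocks below are the commonly quoted figures for these parts and should be treated as approximate:

```python
# Theoretical peak FLOPS = execution units x FLOPs per unit per clock x clock.
# Unit counts and clocks are the commonly quoted figures; treat as approximate.
def peak_tflops(units: int, flops_per_clock: int, clock_ghz: float) -> float:
    return units * flops_per_clock * clock_ghz / 1000.0

print(peak_tflops(800, 2, 0.750))  # RV770: 800 SPs, MAD = 2 FLOPs    -> ~1.2 TFLOPS
print(peak_tflops(240, 3, 1.296))  # GTX 280: 240 SPs, MAD + MUL = 3  -> ~0.93 TFLOPS
print(peak_tflops(4, 4, 2.4))      # quad-core CPU, 4 DP FLOPs/clock  -> ~0.04 TFLOPS
# None of this says anything about sustained throughput, memory bandwidth,
# or how well real game code maps onto the hardware.
```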
 

runawayprisoner

Platinum Member
Apr 2, 2008
2,496
0
76
Originally posted by: Intelman07
Originally posted by: RussianSensation
1. You can't measure performance through TFlops alone as GTX 280 is faster than HD4870.
2. Even if intel can bring you the fastest graphics card, their driver support has no proven track record whatsoever.

What does a TFlop mean then. If the HD 4870 has more, why shouldn't it be quicker. Is it drivers? Code tweaking?

FLOPS means FLoating point Operations Per Second. Otherwise known as... precise operations per second. What this actually means is that it'll be extremely precise... and fast at that as well.

What it doesn't mean... is that it will be faster at graphics rendering. Graphics rendering is not about FLOPS but rather, about getting more processed in less time. Not all graphics elements will use intense floating point operations to render, thus... it then depends on how fast the card calculates in general.

Let me put it in a simpler way: FLOPS only means very precise calculations. Current 3D rendering in games don't do very precise calculations (thus why we have clipping problems, glitches, and such... even jumping textures), so FLOPS is pretty much useless, since non-very precise calculations can be done much faster than very precise calculations. As for how fast, it all depends on the processor.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
What? FLOPS aren't about "precise" calculations, and "imprecise calculations" can still benefit from more FLOPS. And glitches have nothing to do with precision.
The reason FLOPS do not directly correlate with performance is that not all operations are done in the SP units.
 

runawayprisoner

Platinum Member
Apr 2, 2008
2,496
0
76
It is, sadly. Imprecise calculations are where the battle truly begins, and it's not exactly dependent on more FLOPS. Why? Because it depends on the way the algorithms are written. Strictly speaking... say... for example, squaring 55. You can either multiply 50 by 55 and 5 by 55 then total them, or you can multiply 60 by 50 and then total them with the square of 5. It "can" benefit from more FLOPS, but more FLOPS only means that the algorithms for floating point calculations are more optimized. It doesn't mean normal calculations (imprecise) are optimized in the same way, as there are many ways to do math. And that's the key difference.

As for the shaders, let's just say those are designed to calculate completely different things (a.k.a. it's also possible to offload works on them, but they might have algorithms optimized for different calculations, thus... it might be slower using them). IF they all conform to the same algorithms, and the same way to calculate, then it just means that more SPs calculate faster at the same clock frequency (a.k.a. same rate of instructions per clock cycle, since they are using the same algorithms). It's not the case here.
 

Viditor

Diamond Member
Oct 25, 1999
3,290
0
0
Originally posted by: RussianSensation
Originally posted by: Intelman07
Originally posted by: RussianSensation
1. You can't measure performance through TFlops alone as GTX 280 is faster than HD4870.
2. Even if intel can bring you the fastest graphics card, their driver support has no proven track record whatsoever.

What does a TFlop mean then. If the HD 4870 has more, why shouldn't it be quicker. Is it drivers? Code tweaking?

Just a theoretical computational power number related to floating point calculations. While it might manifest itself in heavy scientific apps, it just represents a "theoretical peak" throughput. There are many other factors, such as shader complexity, AA efficiency, memory bus architecture, memory subsystem performance, etc. By this account the 2 TFLOPS PS3 should be faster than the GTX 280 in Folding@home, which it isn't, and even faster than the R700, yet it only has a 7900GT OC onboard...

TFLOPS tell us nothing about AA performance either, where the 4870 shines at 8xAA.

(taps microphone) HELLOOOOOOO....IS THIS THING ON??
Larrabee is to be a CTM only GPU...

No Shaders!!!
No AA!!
Not built to play ANY game!!!
From everything they've stated so far, Larrabee is not a desktop part at all!!!
 

VirtualLarry

No Lifer
Aug 25, 2001
56,587
10,225
126
Originally posted by: ajaidevsingh
I was thinking that 2.5GHz+ is a lot of speed, and you need to break that for the 2 TFLOP SP figure. Then it hit me like a pile of bricks... THE ATOM... It's 2GHz, gives out very little heat, and it's only 25mm^2. Our best RV770 is 250mm^2, which is way too huge.

Just think: you could have 10 Atoms in place of a single RV770, and that's before Intel strips the in-order chip of anything it doesn't need and makes it even smaller. How about 20mm^2? That's 15 Atoms instead of an RV770.

Scary, I know, but the Atom platform is the best: it's small, cheap to make (in-order chips), not hot like my Prescott was, and best of all, energy efficient.

Single Atom = 2.5W TDP and 25mm^2
32 Atoms = 80W TDP and 800mm^2

32 Atom-derived Larrabee cores (speculation) = 70W TDP & 600mm^2

Now the R700 will have a total of 500mm^2, and the GTX 280 already has a huge die size. I do know that AMD may move along to 45nm, but then again that chip plant does not think so and is delaying the shift to 45nm!!!

Yeah, I was thinking "why didn't they use Atom cores" too. At least just for the power consumption issue. 300W is a LOT of power consumption. I think this thing is going to flop as badly as the FX5800 from NVidia.
 

Genx87

Lifer
Apr 8, 2002
41,091
513
126
Originally posted by: geoffry
Originally posted by: nRollo
Originally posted by: shangshang
Originally posted by: Bateluer
Larrabee is over a year away?

While Intel definitely has marketing muscle and the R&D facilities and fab facilities to make a good chip, time will tell the performance and capabilities of the card.

When a gorilla such as Intel has R&D, fabs, and marketing, it can't be discounted so casually. Intel can afford to make a mistake and still easily survive to improve their product.

Remember AMD and its K7 heyday? The K7 was spanking Intel's ass all over, but AMD today is in much, much worse shape as a company than Intel. For Intel to compete with Larrabee, they simply need to make it cheap and widely available, and beat their competition via a price war.

I'm willing to bet that AMD investors (and NV investors) are worried about Intel.

It's hard to say.

Intel's discrete graphics effort was a huge flop, and there are other things they've tried outside their core market that flopped as well.

The other thing is that Larrabee is so far off (and so little is known) that it's impossible to speculate. OK, R700 performance at what? Vantage? Crysis? Doom 1? With AA/AF? HDR?

There used to be a company called Bitboys that was always "this close" to the new supercard.

While in performance terms you can call the Intel IGP a flop, you definitely can't call it one in terms of sales.

Which I think was Shang's point: even if the performance isn't killer, Intel might still be able to gain share through wickedly low prices, thanks to its fab process and experience, or through some other killer app or feature of Larrabee.

Then again it might be terrible at everything and fade into history, you never know.

Intel can continue to produce what they produce and retain a ridiculous market share, because the biggest customers of Intel IGPs are businesses: people who don't give a shat about graphics power. I suspect that Intel wants to raise the bar and move into the mid and higher-end market with Larrabee. We will have to wait and see. I am confused by the information pointing to it containing 32 Pentium 1 cores. That seems odd to me.

 

AmberClad

Diamond Member
Jul 23, 2005
4,914
0
0
Originally posted by: keysplayr2003
Situation? OK, this is getting ridiculous. Something needs to be done here. You guys cannot keep doing this over and over again in the middle of every discussion we have.

OT: Intel and I go waaaaay back, and they're still my favorite. Fearing that Intel would best Nvidia or ATI never even crossed my mind. But you both certainly thought so without truly knowing.
I'll ask you to stop doing my thinking for me and just carry on the discussion normally. Thanks in advance.
If I misunderstood your thoughts on Larrabee, then my apologies.

I've gone on record here at AT with my feeling that it'd be a "minor miracle" if Larrabee turned out to be more than epic fail, given Intel's track record. But unlike some people, I wouldn't mind if Intel pulled it off. ATI manages to get it right once in a while, but they've proven to be...unreliable...as far as providing Nvidia with close competition on a consistent basis.

So alluding to what geoffry mentioned -- the more the merrier. Even if it sucks, maybe it'll be a good enough value at the low end to put some pressure on. It just seems like some people around here are hoping for them to fail. Sorry if I'm wrongly lumping you together with those people.
 

Keysplayr

Elite Member
Jan 16, 2003
21,219
55
91
Originally posted by: AmberClad
Originally posted by: keysplayr2003
Situation? OK, this is getting ridiculous. Something needs to be done here. You guys cannot keep doing this over and over again in the middle of every discussion we have.

OT: Intel and I go waaaaay back, and they're still my favorite. Fearing that Intel would best Nvidia or ATI never even crossed my mind. But you both certainly thought so without truly knowing.
I'll ask you to stop doing my thinking for me and just carry on the discussion normally. Thanks in advance.
If I misunderstood your thoughts on Larrabee, then my apologies.

I've gone on record here at AT with my feeling that it'd be a "minor miracle" if Larrabee turned out to be more than epic fail, given Intel's track record. But unlike some people, I wouldn't mind if Intel pulled it off. ATI manages to get it right once in a while, but they've proven to be...unreliable...as far as providing Nvidia with close competition on a consistent basis.

So alluding to what geoffry mentioned -- the more the merrier. Even if it sucks, maybe it'll be a good enough value at the low end to put some pressure on. It just seems like some people around here are hoping for them to fail. Sorry if I'm wrongly lumping you together with those people.

Alright then. Thanks.
A lot of people always want the big guy to fall. They want the underdogs to pull one out of their hats. This does happen every once in a while, like a David and Goliath story.
They want the big guy to fail even more than they want the technology to advance, for no real reason. Strange, eh?

Once in a while, AMD/ATI plays the role of David. Most of the time however, they take Chuck's role. :) (Chuck is usually pried out from between Goliath's toes with the handle of a club.)
 

Viditor

Diamond Member
Oct 25, 1999
3,290
0
0
Originally posted by: AmberClad
Originally posted by: keysplayr2003
Situation? OK, this is getting ridiculous. Something needs to be done here. You guys cannot keep doing this over and over again in the middle of every discussion we have.

OT: Intel and I go waaaaay back, and they're still my favorite. Fearing that Intel would best Nvidia or ATI never even crossed my mind. But you both certainly thought so without truly knowing.
I'll ask you to stop doing my thinking for me and just carry on the discussion normally. Thanks in advance.
If I misunderstood your thoughts on Larrabee, then my apologies.

I've gone on record here at AT with my feeling that it'd be a "minor miracle" if Larrabee turned out to be more than epic fail, given Intel's track record. But unlike some people, I wouldn't mind if Intel pulled it off. ATI manages to get it right once in a while, but they've proven to be...unreliable...as far as providing Nvidia with close competition on a consistent basis.

So alluding to what geoffry mentioned -- the more the merrier. Even if it sucks, maybe it'll be a good enough value at the low end to put some pressure on. It just seems like some people around here are hoping for them to fail. Sorry if I'm wrongly lumping you together with those people.


Again, there will be no low end, mid end or other end...

Larrabee is to be for HPC computers and used as a CTM GPU only...
So to answer the inevitable questions, it will never even be able to play Crysis, let alone at good frame rates...

Edit: Let me try and explain better...
It's like everyone is a hot car enthusiast and is very excited about Intel's new very powerful type of engine.
Unfortunately, what most don't get is that the name of this new engine is Saturn V, and it doesn't really work very well in a car...
 

bunnyfubbles

Lifer
Sep 3, 2001
12,248
3
0
Originally posted by: VirtualLarry
Originally posted by: ajaidevsingh
I was thinking that 2.5GHz+ is a lot of speed, and you need to break that for the 2 TFLOP SP figure. Then it hit me like a pile of bricks... THE ATOM... It's 2GHz, gives out very little heat, and it's only 25mm^2. Our best RV770 is 250mm^2, which is way too huge.

Just think: you could have 10 Atoms in place of a single RV770, and that's before Intel strips the in-order chip of anything it doesn't need and makes it even smaller. How about 20mm^2? That's 15 Atoms instead of an RV770.

Scary, I know, but the Atom platform is the best: it's small, cheap to make (in-order chips), not hot like my Prescott was, and best of all, energy efficient.

Single Atom = 2.5W TDP and 25mm^2
32 Atoms = 80W TDP and 800mm^2

32 Atom-derived Larrabee cores (speculation) = 70W TDP & 600mm^2

Now the R700 will have a total of 500mm^2, and the GTX 280 already has a huge die size. I do know that AMD may move along to 45nm, but then again that chip plant does not think so and is delaying the shift to 45nm!!!

Yeah, I was thinking "why didn't they use Atom cores" too. At least just for the power consumption issue. 300W is a LOT of power consumption. I think this thing is going to flop as badly as the FX5800 from NVidia.

Probably because Atom wouldn't work as well in such a capacity; gotta remember that Atom wasn't exactly designed to be a performance powerhouse. Also have to keep in mind that 300W would be maximum load; when it's not being pushed so hard, I'd bet it could scale back the number of cores in use to cut consumption to a fraction of that.


Originally posted by: Quiksilver
-Entirely made of x86 Pentium P54C cores
-Has 32 processing cores

This is just silly. Why not use newer processors, ones that are more efficient, so you won't need so many cores to put out the same processing power, won't need 300W of power draw, and will likely run cooler as well.

"The x86 processor cores in Larrabee will be different in several ways from the cores in current Intel CPUs such as the Core 2 Duo, taking lessons from Intel's ongoing "Tera-scale" research, as exemplified by their "Polaris" 80-core processor demonstrator. Larrabee's x86 cores will be much simpler than those on a Core 2 processor, not using out-of-order execution. This will allow them to be much smaller, so more can fit on a single chip. Other differences include the addition of a new set of extended SIMD instructions similar to SSE but more focused on graphics applications, and 4-way simultaneous multithreading for each core."

http://en.wikipedia.org/wiki/Larrabee_(GPU)

It seems quite clear to me that they aren't slapping just any x86 architecture onto a card and calling it a day... It may be the Pentium 4 team that is working on Larrabee, but I'd have to say I'd trust them to come up with something better than the average AT naysayer with no real engineering background :p
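If the widely reported configuration quoted above is roughly right (32 simple in-order x86 cores, each with a 16-lane single-precision vector unit doing a multiply-add per lane per clock), the 2 TFLOPS headline falls out of the same kind of peak calculation; the clock speed in this sketch is an assumption, not an announced spec:

```python
# Rough origin of the "2 TFLOPS" headline, assuming the widely reported
# configuration: 32 in-order cores, each with a 16-lane SP vector unit.
# The 2.0 GHz clock is an assumption, not an announced spec.
cores = 32
simd_lanes = 16      # 512-bit vector unit, 16 single-precision lanes
flops_per_lane = 2   # multiply-add counted as two FLOPs
clock_ghz = 2.0      # assumed

peak_tflops = cores * simd_lanes * flops_per_lane * clock_ghz / 1000.0
print(f"~{peak_tflops:.1f} TFLOPS theoretical peak")  # ~2.0 TFLOPS
```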
 

magreen

Golden Member
Dec 27, 2006
1,309
1
81
Originally posted by: runawayprisoner
Originally posted by: Intelman07
Originally posted by: RussianSensation
1. You can't measure performance through TFLOPS alone, as the GTX 280 is faster than the HD 4870.
2. Even if Intel can bring you the fastest graphics card, their driver support has no proven track record whatsoever.

What does a TFLOP mean, then? If the HD 4870 has more, why shouldn't it be quicker? Is it drivers? Code tweaking?

FLOPS means FLoating point Operations Per Second. Otherwise known as... precise operations per second. What this actually means is that it'll be extremely precise... and fast at that as well.

What it doesn't mean... is that it will be faster at graphics rendering. Graphics rendering is not about FLOPS but rather, about getting more processed in less time. Not all graphics elements will use intense floating point operations to render, thus... it then depends on how fast the card calculates in general.

Let me put it in a simpler way: FLOPS only means very precise calculations. Current 3D rendering in games don't do very precise calculations (thus why we have clipping problems, glitches, and such... even jumping textures), so FLOPS is pretty much useless, since non-very precise calculations can be done much faster than very precise calculations. As for how fast, it all depends on the processor.
Originally posted by: runawayprisoner
It is, sadly. Imprecise calculations are where the battle truly begins, and it's not exactly dependent on more FLOPS. Why? Because it depends on the way the algorithms are written. Strictly speaking... say... for example, squaring 55. You can either multiply 50 by 55 and 5 by 55 then total them, or you can multiply 60 by 50 and then total them with the square of 5. It "can" benefit from more FLOPS, but more FLOPS only means that the algorithms for floating point calculations are more optimized. It doesn't mean normal calculations (imprecise) are optimized in the same way, as there are many ways to do math. And that's the key difference.

As for the shaders, let's just say those are designed to calculate completely different things (a.k.a. it's also possible to offload works on them, but they might have algorithms optimized for different calculations, thus... it might be slower using them). IF they all conform to the same algorithms, and the same way to calculate, then it just means that more SPs calculate faster at the same clock frequency (a.k.a. same rate of instructions per clock cycle, since they are using the same algorithms). It's not the case here.

"FLOPS means FLoating point Operations Per Second. Otherwise known as... precise operations per second. What this actually means is that it'll be extremely precise... and fast at that as well."
With that statement you showed that you have absolutely no idea what you're talking about. It only went down from there.
 

lopri

Elite Member
Jul 27, 2002
13,314
690
126
Originally posted by: Viditor
Again, there will be no low end, mid end or other end...

Larrabee is to be for HPC computers and used as a CTM GPU only...
So to answer the inevitable questions, it will never even be able to play Crysis, let alone at good frame rates...

Edit: Let me try and explain better...
It's like everyone is a hot car enthusiast and is very excited about Intel's new very powerful type of engine.
Unfortunately, what most don't get is that the name of this new engine is Saturn V, and it doesn't really work very well in a car...
I hear what you say loud and clear. :)

But then why is Intel spreading FUD like the doom and gloom of the current graphics market, the wonderfulness of ray tracing vs. rasterization, etc.?
 

runawayprisoner

Platinum Member
Apr 2, 2008
2,496
0
76
Originally posted by: magreen
"FLOPS means FLoating point Operations Per Second. Otherwise known as... precise operations per second. What this actually means is that it'll be extremely precise... and fast at that as well."
With that statement you showed that you have absolutely no idea what you're talking about. It only went down from there.

Thank you for your kind remarks.

So I should have pointed to an article somewhere on the internet instead of trying to explain what exactly FLOPS means? Then I'll throw in some for you to consider, without showing much more of my idiocy on the subject (it saves you the trouble of having to correct me every time, too):

http://en.wikipedia.org/wiki/FLOPS

Should that one be too long to read, here is one with fewer words:

http://www.webopedia.com/TERM/F/FLOPS.html

And I admit to not being able to explain exactly what that term means, and what kind of impact it has on the performance of the processor the figure was measured from, so to speak.
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,004
126
FLOPS is all well and good, but the questions are:

  • How fast does it run current and past games?
  • What AA/AF modes does it bring to the table, how's the IQ, and how fast do they run?
  • How's the driver compatibility?
In order for me to move to Intel they would have to offer me something substantially better than the current vendors do, and a FLOPS figure doesn't tell us that.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
The reason 2 TFLOPS is important is that it is almost double everything else on the market: the RV770 is 1.2 TFLOPS, and the GTX 280 is 0.9 TFLOPS.

The RV770 is 0 TFLOPS in anything resembling 754r compliance; this may change with their next-generation part though (pointing that out as it removes them from a LOT of viable HPC applications).

The problem with a 2 TFLOPS theoretical peak is: how are they going to extract that much parallelism from x86 code? They are moving away from OoO support with this architecture, so even the existing library of code that can operate in such a manner on x86 architectures is likely going to require a recompile at the very least (more than likely some hand-tweaking of the existing code first, as a minimum).

This has always been Intel's problem with this market, and this seems to do nothing to help that situation in the least.
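As a loose analogy for that last point, per-core peak on a wide in-order vector machine is only reachable when the work is expressed in a data-parallel form the compiler (or programmer) can map onto the vector lanes; scalar code run as-is leaves most of the hardware idle. A toy sketch of the same idea using NumPy (purely illustrative, nothing Larrabee-specific):

```python
# Toy illustration: the same arithmetic written element-at-a-time vs. whole-array.
# Nothing here is Larrabee-specific; it only shows why code generally has to be
# restructured (or at least re-vectorized) before wide SIMD units pay off.
import time
import numpy as np

n = 1_000_000
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)

t0 = time.perf_counter()
scalar_result = [a[i] * b[i] + 1.0 for i in range(n)]  # one element at a time
t1 = time.perf_counter()
vector_result = a * b + 1.0                            # whole-array, SIMD-friendly
t2 = time.perf_counter()

print(f"scalar loop: {t1 - t0:.3f} s, vectorized: {t2 - t1:.3f} s")
```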