Larrabee die shot shown at Visual Computing Institute presentation


Keysplayr

Elite Member
Jan 16, 2003
21,219
54
91
What incentive does the industry have to "come to ATI" exactly? Is it their superior GPGPU architecture? Is it the almost impossible to code for shaders? Or at least the almost total absence of decent tools that should have been provided by ATI? Or is it their gaming performance, which at best, and severely overclocked, is equal to their competition's best?
Yeah, I can see the industry beating down their door.
 

TidusZ

Golden Member
Nov 13, 2007
1,765
2
81
Originally posted by: Keysplayr
What incentive does the industry have to "come to ATI" exactly? Is it their superior GPGPU architecture? Is it the almost impossible to code for shaders? Or at least the almost total absence of decent tools that should have been provided by ATI? Or is it their gaming performance, which at best, and severely overclocked, is equal to their competition's best?
Yeah, I can see the industry beating down their door.

I read his post and was like wtf is this guy talking about. Then I read yours, and things make sense again. Ty.
 

ilkhan

Golden Member
Jul 21, 2006
1,117
1
0
Originally posted by: TidusZ
I read his post and was like wtf is this guy talking about. Then I read yours, and things make sense again. Ty.
It's Nemesis. Sometimes his brain knows what it's talking about, sometimes his ass knows what it's talking about. ;)
 

SickBeast

Lifer
Jul 21, 2000
14,377
19
81
Originally posted by: Keysplayr
Originally posted by: SickBeast
Originally posted by: OCguy
Originally posted by: SickBeast
There was an article at the Inquirer the other day trashing NV and their new G300 GPU. In it, Charlie said that DX11 is actually going to favor the Larrabee when it comes to shader code. He also said that AMD has essentially supported DX11 since the 2900XT and that NV is completely screwed with CUDA, PhysX, DX11, and the G300.

And I bet you actually believed it. :laugh:

Some of it made sense, yes. I personally don't see CUDA or PhysX as a complete waste of time, however, and I'm not going to write off the GT300 before it even comes out. I can see where they are coming from in that it may do very poorly in terms of performance per transistor; however, I would not write off NV given the fact that they have had the fastest overall GPU for the past several generations.

The problem is, NV has not executed in terms of midrange derivatives of the GT200, and AMD will surely make more money on their next gen part right off the bat due to their superior strategy.

Not if it's the same architecture they won't. IMHO, people will want the new core on GT300, simply because it's a complete change. New tech. MIMD. If AMD changes their architecture, then they have a good shot, but I don't think they're doing much more than doubling the shaders and adding ROPs. Sure, it'll perform great in games, but how will it perform in OpenCL, DirectX Compute in Windows 7, or Snow Leopard? Ah, but I'm getting ahead of things here. I know. Wait and see. ;)

The thing is, apparently there have been a bunch of features in AMD's GPUs that have not been utilized by DX10 in its current form. AMD thought that the DX10 spec would go much further than it actually did. Therefore, in their current form, AMD GPUs are more or less DX11 GPUs. In essence, it would probably be wasteful for AMD to re-invent the wheel at this point just for the sake of doing so.

In terms of GPGPU performance, I'm going to reserve judgment for quite some time. In all likelihood, GPGPU performance is not going to matter for a long time because we will probably not have great applications that benefit from it for at least the next two years. Hopefully I will be proven wrong. Of course, there will be scientific and server applications that use it. I'm talking about killer apps for the consumer.
 

Hacp

Lifer
Jun 8, 2005
13,923
2
81
Originally posted by: SickBeast
Originally posted by: Keysplayr
Originally posted by: SickBeast
Originally posted by: OCguy
Originally posted by: SickBeast
There was an article at the Inquirer the other day trashing NV and their new G300 GPU. In it, Charlie said that DX11 is actually going to favor the Larrabee when it comes to shader code. He also said that AMD has essentially supported DX11 since the 2900XT and that NV is completely screwed with CUDA, PhysX, DX11, and the G300.

And I bet you actually believed it. :laugh:

Some of it made sense, yes. I personally don't see CUDA or PhysX as a complete waste of time, however, and I'm not going to write off the GT300 before it even comes out. I can see where they are coming from in that it may do very poorly in terms of performance per transistor; however, I would not write off NV given the fact that they have had the fastest overall GPU for the past several generations.

The problem is, NV has not executed in terms of midrange derivatives of the GT200, and AMD will surely make more money on their next gen part right off the bat due to their superior strategy.

Not if it's the same architecture they won't. IMHO, people will want the new core on GT300, simply because it's a complete change. New tech. MIMD. If AMD changes their architecture, then they have a good shot, but I don't think they're doing much more than doubling the shaders and adding ROPs. Sure, it'll perform great in games, but how will it perform in OpenCL, DirectX Compute in Windows 7, or Snow Leopard? Ah, but I'm getting ahead of things here. I know. Wait and see. ;)

The thing is, apparently there have been a bunch of features in AMD's GPUs that have not been utilized by DX10 in its current form. AMD thought that the DX10 spec would go much further than it actually did. Therefore, in their current form, AMD GPUs are more or less DX11 GPUs. In essence, it would probably be wasteful for AMD to re-invent the wheel at this point just for the sake of doing so.

In terms of GPGPU performance, I'm going to reserve judgment for quite some time. In all likelihood, GPGPU performance is not going to matter for a long time because we will probably not have great applications that benefit from it for at least the next two years. Hopefully I will be proven wrong. Of course, there will be scientific and server applications that use it. I'm talking about killer apps for the consumer.

Like H264 encoding, which takes a bazillion years?
 

SickBeast

Lifer
Jul 21, 2000
14,377
19
81
Originally posted by: Hacp
Like H264 encoding, which takes a bazillion years?

Badaboom is a step in the right direction, but it lacks the full control of a program like Handbrake.

Like I said, it will take time for good programs that do stuff like that to come out, especially if it's for OpenCL or DX11.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
According to an early PC Watch report, vendors say Intel was planning a 32-core variant and a 24-core variant at 45nm, and a 48-core version with a shrink. The 24-core part would be a 32-core die with some cores disabled.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Originally posted by: IntelUser2000
According to an early PC Watch report, vendors say Intel was planning a 32-core variant and a 24-core variant at 45nm, and a 48-core version with a shrink. The 24-core part would be a 32-core die with some cores disabled.

Interesting. So product differentiation will come by way of harvesting partially defective die, disabling the bad cores, and selling them as 24-core chips.

So the 32-core part must deliver reasonable performance if Intel feels that a reduced core-count SKU is marketable.

The cores at 8-9mm^2 make for about a 300mm^2 die then, kinda right in the middle between ATI and NV I suppose.

Have there been any serious discussions one way or the other as to whether Intel will do an "X2" with these Larrabees on the same PCB?

Intel isn't making the PCB in-house, are they? I'd assume they aren't, which means there is a third party somewhere in Taiwan who is very much in the loop on the TDP, pin counts, memory type and config, etc., as they are gearing up for the PCB and cooling solutions. That means rumors on PC Watch and Digitimes will probably have some truth to them, as we've seen time and again in the past.
 

sandorski

No Lifer
Oct 10, 1999
70,784
6,343
126
Has Intel turned into Bitboys Oy yet? Keep hearing about this chip, not seeing much though.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Originally posted by: sandorski
Has Intel turned into Bitboys Oy yet? Keep hearing about this chip, not seeing much though.

I don't think Bitboys ever got to the point where their CEO was holding up a wafer displaying proof of the existence of their otherwise vaporware GPU.

Bitboys UhOh and Duke Nukem 4nevar...:laugh:
 

dali71

Golden Member
Oct 1, 2003
1,117
21
81
Originally posted by: Idontcare
Originally posted by: sandorski
Has Intel turned into Bitboys Oy yet? Keep hearing about this chip, not seeing much though.

I don't think Bitboys ever got to the point where their CEO was holding up a wafer displaying proof of the existence of their otherwise vaporware GPU.

Bitboys UhOh and Duke Nukem 4nevar...:laugh:

I have to post this for old times' sake: Bf!3D2k

 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
Originally posted by: Idontcare

I don't think Bitboys ever got to the point where their CEO was holding up a wafer displaying proof of the existence of their otherwise vaporware GPU.

Bitboys UhOh and Duke Nukem 4nevar...:laugh:

I would think one of the reasons the Bitboys solution failed was that they were trying something much different than the other guys were. Weren't they the ones putting several megabytes of memory integrated onto the die back when the total memory of the cards was in the 16-32MB range? The only way to effectively do that without inflating the die to an unacceptable size is eDRAM, or embedded DRAM. Considering chip manufacturers still assess that as a risky proposition, how would it have been back then in 1999-2000? With a start-up?

The cores at 8-9mm^2 make for about a 300mm^2 die then, kinda right in the middle between ATI and NV I suppose.

They think the 600mm2 die is actually the 32 core version. Although they have been wrong before (mind you, not much), it might just possibly be one with a greater core count. It sorta makes sense though. The recently unveiled detailed die pic seems similar in shape to the chips on the wafers that were shown earlier and known to be 600mm2.
 

Kuzi

Senior member
Sep 16, 2007
572
0
0
Originally posted by: Keysplayr
If AMD changes their architecture, then they have a good shot, but I don't think they're doing much more than doubling the shaders and adding ROPs. Sure, it'll perform great in games, but how will it perform in OpenCL, DirectX Compute in Windows 7, or Snow Leopard? Ah, but I'm getting ahead of things here. I know. Wait and see. ;)

I agree the RV870 will be a tweaked RV770 with increased SPUs/TMUs etc. This should not be a bad thing, as the expectation is that the RV870 will be smaller (cheaper/less complex) than the GT300. For gaming I expect the RV870 to perform similarly to the GT300, with both being really fast.

The GT300 gets the advantage in compute, such as accelerating some Windows programs, physics effects, etc. NV is more concerned with this because they don't have a CPU of their own, so they are trying to run/speed up more CPU tasks on the GPU.

As for Larrabee, again, I can't imagine it reaching GT300/RV870 performance levels for gaming, at least not in its first iteration. If we consider drivers, developers, SLI/CrossFire, etc., NV and ATI have years of experience and won't budge so easily.

Personally I would like to have Larrabee in a system that already has an ATI/NV GPU (will that be possible? maybe hacked drivers? :D), because Larrabee should be able to perform certain tasks better; physics/video encoding come to mind.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Originally posted by: IntelUser2000
The cores at 8-9mm^2 make for about a 300mm^2 die then, kinda right in the middle between ATI and NV I suppose.

They think the 600mm2 die is actually the 32 core version. Although they have been wrong before (mind you, not much), it might just possibly be one with a greater core count. It sorta makes sense though. The recently unveiled detailed die pic seems similar in shape to the chips on the wafers that were shown earlier and known to be 600mm2.

Otellini said the 600+ mm^2 die was an "extreme" version of Larrabee...implying that "less extreme" versions would be coming to market.

Also we have seen more than one performance scaling document now that contained real (not simulated) scaling data up to 64 cores.

If 32 cores require 600mm^2 then there is no way a physical sample with 64 cores exists...which would mean the whitepaper data published so far has been falsified.

32 cores at 600mm^2 would also require the cores to be on par with the size of a Penryn core (~20mm^2)...which seems rather absurd if you think about how much ISA and architecture Penryn has that Larrabee does not.

I believe a 680mm^2 Larrabee chip does exist, but I believe it is an "extreme" (Otellini's words) version of Larrabee which packs 64 cores and will be sold/shipped to the big-name render houses as a GPGPU for render farms, where the niche market will support the gross-margin price tag of such a behemoth (same as Dunnington and Nehalem-EX).

I believe a 340mm^2 Larrabee chip also exists; it has 32 cores (each <10mm^2) and it will be harvested into 24- or 16-core SKUs to further enhance yields and provide inherent product differentiation at the consumer level.
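
A minimal Python sketch of the back-of-envelope arithmetic above; all die sizes and core counts are the rumored figures floated in this thread, not confirmed Intel specs:

```python
# Rough die-size vs. core-count check, using only the rumored figures from this thread
# (not confirmed Intel specs). Ignores uncore, ring bus, memory controllers, etc.

def mm2_per_core(die_mm2: float, cores: int) -> float:
    """Naive mm^2 available per core if the die were split evenly among cores."""
    return die_mm2 / cores

# "Extreme" Larrabee guess: ~680 mm^2 with 64 cores
print(mm2_per_core(680, 64))   # ~10.6 mm^2 -> plausible for a simple in-order x86 core
# The same ~600+ mm^2 die with only 32 cores would need Penryn-sized (~20-22 mm^2) cores
print(mm2_per_core(640, 32))   # ~20 mm^2 -> hard to believe given how much Penryn has that Larrabee lacks
# Consumer-part guess: ~340 mm^2 with 32 cores
print(mm2_per_core(340, 32))   # ~10.6 mm^2 -> consistent with the 64-core "extreme" estimate
```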
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
Originally posted by: Idontcare

I believe a 340mm^2 Larrabee chip also exists; it has 32 cores (each <10mm^2) and it will be harvested into 24- or 16-core SKUs to further enhance yields and provide inherent product differentiation at the consumer level.

I just did some image manipulation to estimate the size of the x86 cores in Larrabee. http://forum.beyond3d.com/showthread.php?t=54144

Using the pic from post #16, the red squares, of which there are 32 and which are highly likely the x86 cores, each take only 1.74% of the die (using simple stretching in Paint :) each square measures 6% of the die horizontally and 29% vertically).

For the sake of simplicity let's assume 1.8% and a 640mm2 die size. Each core, 256KB L2 cache included, would then take only 11.5mm2, assuming the die pic is indeed the 600mm2+ die.

Penryn in comparison takes 22-23mm2.

Possibility #1: Maybe the cores are 7.5mm2 and the final version will feature 48 cores in the 600mm2 die.

Possibility #2: The high end could be an "X2" version, like ATI's.

Die pic for Silverthorne.
http://www.anandtech.com/showdoc.aspx?i=3276&p=13

On Silverthorne, the core + 256KB L2 cache takes a little over 9.4mm2. Larrabee will have simpler integer cores, but the FP units are vastly better. According to the Larrabee presentation, the vector FP portion takes about 1/3 of the core size.
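
A minimal Python sketch of the proportion-based estimate above; the 6%/29% measurements, the ~640mm2 die size, and the 1.8% rounding are the rough figures from the post, not official numbers:

```python
# Estimating Larrabee core area from the die-shot proportions given above.
# All inputs are the thread's own rough measurements, not official Intel figures.

die_mm2 = 640.0          # assumed area of the 600mm2+ die in the published shot
core_w_frac = 0.06       # one red square's width as a fraction of die width
core_h_frac = 0.29       # one red square's height as a fraction of die height
n_cores = 32             # number of red squares identified in the die pic

core_frac = core_w_frac * core_h_frac   # ~0.0174 of the die per core (the post rounds to 1.8%)
core_mm2 = die_mm2 * 0.018              # ~11.5 mm^2 per core, 256KB L2 included

print(f"per-core area: ~{core_mm2:.1f} mm^2 (vs ~22-23 mm^2 for Penryn, ~9.4 mm^2 for Silverthorne)")
print(f"all {n_cores} cores together: ~{n_cores * core_frac:.0%} of the die")   # ~56%
```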