Holy Lord... Intel's Larrabee ---disclaimer: INQ LINQ


Keysplayr

Elite Member
Jan 16, 2003
21,219
56
91
Originally posted by: DAPUNISHER
Could this be better suited for some form of new consumer electronics device, as opposed to the PC?

They did seem to mention that it was "stupidly" scalable, so I would imagine there will be a broad market for this type of technology, from handhelds to mainframes.
 

JungleMan1

Golden Member
Nov 3, 2002
1,321
0
0
Originally posted by: zephyrprime
I don't see how a general purpose processor can possibly beat a specialized processor. After all, there isn't anything in a general purpose processor that can't be in a special purpose processor, but there are things in a special purpose processor that aren't in a general purpose processor.
Heh, that's what people said about math coprocessors back in the late 80s.

Yes, technically, you are right: a specialized chip will always be faster. However, as technology advances, we'll be able to integrate more functionality into a general-purpose chip than anyone is ever going to use. For example, nowadays our CPUs are advanced enough to perform any type of mathematical function, so we have no need for a math chip.

In 10 years, dedicated graphics chips will seem as quaint as math coprocessors are today.

Oh, and also, NVIDIA will not die out because of one product. All NVIDIA has to do is come up with their own version of the product.
 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
Originally posted by: keysplayr2003
Originally posted by: DAPUNISHER
Could this be better suited for some form of new consumer electronics device, as opposed to the PC?

They did seem to mention that it was "stupidly" scalable, so I would imagine there will be a broad market for this type of technology, from handhelds to mainframes.

did someone *miss* this from VR? :p
How about 16x the performance of the fastest graphics card out there now [referring to G80]
:Q

in 2 years

you guys should really have been following this :p


:D
 

Keysplayr

Elite Member
Jan 16, 2003
21,219
56
91
Originally posted by: apoppin
Originally posted by: keysplayr2003
Originally posted by: DAPUNISHER
Could this be better suited for some form of new consumer electronics device, as opposed to the PC?

They did seem to mention that it was "stupidly" scalable, so I would imagine there will be a broad market for this type of technology, from handhelds to mainframes.

did someone *miss* this from VR? :p
How about 16x the performance of the fastest graphics card out there now [referring to G80]
:Q

in 2 years

you guys should really have been following this :p


:D

We are following it. ;)
 

zephyrprime

Diamond Member
Feb 18, 2001
7,512
2
81
Originally posted by: JungleMan1
Originally posted by: zephyrprime
I don't see how a general purpose processor can possibly beat a specialized processor. After all, there isn't anything in a general purpose processor that can't be in a special purpose processor, but there are things in a special purpose processor that aren't in a general purpose processor.
Heh, that's what people said about math coprocessors back in the late 80s.

Yes, technically, you are right: a specialized chip will always be faster. However, as technology advances, we'll be able to integrate more functionality into a general-purpose chip than anyone is ever going to use. For example, nowadays our CPUs are advanced enough to perform any type of mathematical function, so we have no need for a math chip.

In 10 years, dedicated graphics chips will seem as quaint as math coprocessors are today.

Oh, and also, NVIDIA will not die out because of one product. All NVIDIA has to do is come up with their own version of the product.
You're right that the CPU is gaining more specialized functions over time as Moore's law plays out. I could see the GPU simply being integrated into the CPU like the math coprocessor was.

But I don't see this rumored 16-core x86 being a real product, because an approach like that wouldn't yield good graphics performance, in my opinion. 16 cores is a small number for a GPU. Also, performing functions like texturing on a CPU would be exceedingly slow.
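
A back-of-the-envelope sketch of why software texturing looks so daunting (illustrative Python; every figure below is an assumption, not a benchmark): a bilinear fetch costs roughly four texel reads plus a blend, and at high resolution that arithmetic alone would consume most of a mid-2000s CPU.

```python
# Back-of-the-envelope cost of software bilinear texturing.
# Every figure here is an illustrative assumption, not a measurement.

pixels = 1600 * 1200        # assumed target resolution
layers = 4                  # assumed texture layers per pixel (multitexture/overdraw)
ops_per_sample = 12         # ~4 texel fetches + weighted blend, rough op count
fps = 60                    # assumed frame-rate target

ops_per_second = pixels * layers * ops_per_sample * fps
cpu_gops = 10.0             # optimistic throughput for a mid-2000s SIMD-tuned x86

share = ops_per_second / (cpu_gops * 1e9)
print(f"Texturing alone: {ops_per_second / 1e9:.1f} Gops/s, "
      f"about {share:.0%} of an assumed {cpu_gops:.0f} Gops/s CPU")
```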

 

A554SS1N

Senior member
May 17, 2005
804
0
0
I really do hate the idea of a combined GPU/CPU. Things may be rosy at first, but what about when upgrades are required? The CPU may be fast enough, but graphics requirements tend to move a lot faster, so what happens when an upgrade is needed? A CPU/GPU combination of decent quality isn't going to be cheap, and then it's limited by the slower speed of older memory, so what happens there? It seems like a way of killing the easy upgrade market and forcing people to buy virtually all-new innards. This tech is excellent for laptops, but rubbish for desktop PCs.
 

DeathReborn

Platinum Member
Oct 11, 2005
2,786
789
136
Originally posted by: apoppin
Originally posted by: keysplayr2003
Originally posted by: DAPUNISHER
Could this be better suited for some form of new consumer electronics device, as opposed to the PC?

They did seem to mention that it was "stupidly" scalable, so I would imagine there will be a broad market for this type of technology, from handhelds to mainframes.

did someone *miss* this from VR? :p
How about 16x the performance of the fastest graphics card out there now [referring to G80]
:Q

in 2 years

you guys should really have been following this :p


:D

I guess the real test will be how it runs OGL/D3D code or if they'll bring an entirely new API to the table.
 

5150Joker

Diamond Member
Feb 6, 2002
5,549
0
71
www.techinferno.com
Originally posted by: JungleMan1
Originally posted by: zephyrprime
I don't see how a general purpose processor can possibly beat a specialized processor. After all, there isn't anything in a general purpose processor that can't be in a special purpose processor, but there are things in a special purpose processor that aren't in a general purpose processor.
Heh, that's what people said about math coprocessors back in the late 80s.

Yes, technically, you are right: a specialized chip will always be faster. However, as technology advances, we'll be able to integrate more functionality into a general-purpose chip than anyone is ever going to use. For example, nowadays our CPUs are advanced enough to perform any type of mathematical function, so we have no need for a math chip.

In 10 years, dedicated graphics chips will seem as quaint as math coprocessors are today.

Oh, and also, NVIDIA will not die out because of one product. All NVIDIA has to do is come up with their own version of the product.



I dunno about that; a physics chip is essentially a math coprocessor, isn't it? And that's exactly what Ageia is trying to sell us, as well as nVidia/AMD.
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,007
126
Heh, that's what people said about math coprocessors back in the late 80s.
I don't ever remember a math coprocessor having 768 MB of RAM allocated all to itself.
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,007
126
And nobody ever said it did.
That's my point - for a GPU to work at today's performance levels it needs fast dedicated RAM and lots of it.

If you want to move the GPU onto the CPU's die you need to deal with this fact.

If you use system RAM then you end up with something a tad better than current GMA.
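
To put rough numbers on that gap (a sketch using commonly quoted period specs; the arithmetic is mine, not BFG10K's):

```python
# Peak memory bandwidth = (bus width in bytes) x (transfer rate).
# Specs below are the commonly quoted ones for these parts.

def bandwidth_gbs(bus_bits: int, mts: float) -> float:
    """Theoretical peak bandwidth in GB/s for a bus width and transfer rate (MT/s)."""
    return bus_bits / 8 * mts * 1e6 / 1e9

system_ram = bandwidth_gbs(128, 800)    # dual-channel DDR2-800
gpu_ram = bandwidth_gbs(384, 1800)      # 8800 GTX: 384-bit GDDR3 at 900 MHz

print(f"Dual-channel DDR2-800: {system_ram:.1f} GB/s")  # 12.8
print(f"8800 GTX GDDR3:        {gpu_ram:.1f} GB/s")     # 86.4
print(f"Ratio: {gpu_ram / system_ram:.1f}x")            # ~6.8x
```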
 

miniMUNCH

Diamond Member
Nov 16, 2000
4,159
0
0
I hope/think that the move to CPU/GPU integration will also absorb the chipset functionality -- i.e. the whole "system on a chip" nirvana that has been trying to get going since 2000.
 

TanisHalfElven

Diamond Member
Jun 29, 2001
3,512
0
76
Originally posted by: miniMUNCH
I hope/think that the move to CPU/GPU integration will also absorb the chipset functionality -- i.e. the whole "system on a chip" nirvana that has been trying to get going since 2000.

There's the AMD Geode.
 

kedlav

Senior member
Aug 2, 2006
632
0
0
Nothing difficult about putting the memory controller on die for the CGPU. If it's going to have that many cores, a few million more transistors are tiddlywinks, comparatively speaking. As for the RAM issue, there's no real reason they couldn't develop a slot on the motherboard for interchangeable RAM, a la a DIMM. Considering the speed HyperTransport 11, or wherever we'd be by then, would likely offer, it wouldn't be a huge hit. Again, it wouldn't immediately replace the higher-end discrete GPUs, but it doesn't need to; it simply needs to marginalize their product to the point where they can't afford the research to stay cutting edge, which leads to the eventual buyout. nVidia knows it needs to start working on changing the system, whether that means an Intel partnership, their own processing unit, folding up, or something else...
 

kobymu

Senior member
Mar 21, 2005
576
0
0
Originally posted by: BFG10K
That's my point - for a GPU to work at today's performance levels it needs fast dedicated RAM and lots of it.

If you want to move the GPU onto the CPU's die you need to deal with this fact.

The 'dedicated' part is misleading/redundant. I'm pretty sure, although not absolutely certain, that moving the GPU onto the CPU's die would only require additional bandwidth; due to the parallel nature of modern GPUs, latency can even take a slight to moderate hit, because what GPUs need is a steady stream of high-bandwidth memory rather than low latency. It is also worth mentioning that modern CPUs (Core 2 Duo and especially Athlon 64) have bandwidth to spare in most desktop applications (although nowhere near the requirements of modern GPUs). But I generally agree with your point.

What I'm mostly curious about is how the CPU cache fits into this, because letting the L2 cache hold 3D data could have an awful effect on its effectiveness (really bad!). I don't think the 3D data from even a relatively small scene in a modern game can fit into 2-4 MB, so this is a second issue "you need to deal with". What I'm curious about is how, exactly, CPU/GPU designers will overcome it.

And we haven't even touched the integration issue...
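
A rough illustration of that cache concern (assumed texture counts and sizes, uncompressed RGBA; my arithmetic, not kobymu's):

```python
# How much 3D data fits in a 2-4 MB L2 cache? Illustrative assumptions only.

MB = 1024 ** 2

def texture_bytes(w: int, h: int, bpp: int = 4, mips: bool = True) -> float:
    """Uncompressed texture size; a full mip chain adds roughly one third."""
    base = w * h * bpp
    return base * 4 / 3 if mips else base

one_texture = texture_bytes(1024, 1024)     # a single large RGBA texture
small_scene = 50 * texture_bytes(512, 512)  # assumed modest texture set

print(f"One 1024x1024 RGBA texture: {one_texture / MB:.1f} MB")  # ~5.3 MB
print(f"50 512x512 textures:        {small_scene / MB:.1f} MB")  # ~66.7 MB
print("Typical L2 of the era:      2-4 MB")
```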
 

Keysplayr

Elite Member
Jan 16, 2003
21,219
56
91
Originally posted by: BFG10K
And nobody ever said it did.
That's my point - for a GPU to work at today's performance levels it needs fast dedicated RAM and lots of it.

If you want to move the GPU onto the CPU's die you need to deal with this fact.

If you use system RAM then you end up with something a tad better than current GMA.

Maybe it's not how fast the RAM is, but how you use it. Just sidestep for a moment and look at the NetBurst architecture. It needed insane clock speeds to compete, eventually getting up to 3.8 GHz (which is actually what I have now).
Enter Conroe. It runs at about half the clock speed (for lack of the exact ratio) and still blows away NetBurst. Different architecture.

BFG, you shouldn't stonewall what an onboard GPU would need. It depends entirely on its design. Maybe it won't need 3000 MHz GDDR5/6. Diverse thinking should be utilized here, not what "should" be.
 

Schmeh

Member
Jun 25, 2004
29
0
0
Originally posted by: keysplayr2003
Originally posted by: BFG10K
And nobody ever said it did.
That's my point - for a GPU to work at today's performance levels it needs fast dedicated RAM and lots of it.

If you want to move the GPU onto the CPU's die you need to deal with this fact.

If you use system RAM then you end up with something a tad better than current GMA.

Maybe it's not how fast the RAM is, but how you use it. Just sidestep for a moment and look at the NetBurst architecture. It needed insane clock speeds to compete, eventually getting up to 3.8 GHz (which is actually what I have now).
Enter Conroe. It runs at about half the clock speed (for lack of the exact ratio) and still blows away NetBurst. Different architecture.

BFG, you shouldn't stonewall what an onboard GPU would need. It depends entirely on its design. Maybe it won't need 3000 MHz GDDR5/6. Diverse thinking should be utilized here, not what "should" be.

It doesn't matter how you design the GPU or what architecture you use; high-end GPUs process ridiculous amounts of data. To feed the GPU that data, you need either tons of on-die memory, like eDRAM, or tons of bandwidth to off-die memory. The on-die approach isn't feasible right now, because you would need something like 128 MB or more, and that would be way too many transistors. So the only real option is huge amounts of bandwidth to off-die memory. The way to get that is either really fast memory on a good-sized bus, or regular-speed memory on a huge bus. Using DDR2-800 (PC2-6400), you would need a 1024-bit bus to get 100 GB/s, which still won't match the bandwidth of the R600.
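
Schmeh's figure checks out. Here is the same arithmetic spelled out (a sketch of the math only, not a product spec):

```python
# Bus width needed to hit a bandwidth target with a given memory type.

def required_bus_bits(target_gbs: float, mts: float) -> float:
    """Bus width in bits needed for target_gbs at a given transfer rate (MT/s)."""
    return target_gbs * 1e9 / (mts * 1e6) * 8

# DDR2-800 transfers 800 MT/s, so 100 GB/s needs a 1000-bit bus;
# rounded up to a practical power of two, that's the 1024-bit figure,
# which delivers 800e6 * 128 bytes = 102.4 GB/s.
print(f"DDR2-800 to reach 100 GB/s: {required_bus_bits(100, 800):.0f}-bit bus")
```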
 

Keysplayr

Elite Member
Jan 16, 2003
21,219
56
91
Originally posted by: Schmeh


It doesn't matter how you design the GPU or what architecture you use; high-end GPUs process ridiculous amounts of data. To feed the GPU that data, you need either tons of on-die memory, like eDRAM, or tons of bandwidth to off-die memory. The on-die approach isn't feasible right now, because you would need something like 128 MB or more, and that would be way too many transistors. So the only real option is huge amounts of bandwidth to off-die memory. The way to get that is either really fast memory on a good-sized bus, or regular-speed memory on a huge bus. Using DDR2-800 (PC2-6400), you would need a 1024-bit bus to get 100 GB/s, which still won't match the bandwidth of the R600.

Of course it would matter how it's designed. What in the world would make you think otherwise? You're basing your information on what we have in front of us here today. Look a few years down the street, if you can.
 

dguy6789

Diamond Member
Dec 9, 2002
8,558
3
76
Originally posted by: keysplayr2003
Originally posted by: BFG10K
And nobody ever said it did.
That's my point - for a GPU to work at today's performance levels it needs fast dedicated RAM and lots of it.

If you want to move the GPU onto the CPU's die you need to deal with this fact.

If you use system RAM then you end up with something a tad better than current GMA.

Maybe it's not how fast the RAM is, but how you use it. Just sidestep for a moment and look at the NetBurst architecture. It needed insane clock speeds to compete, eventually getting up to 3.8 GHz (which is actually what I have now).
Enter Conroe. It runs at about half the clock speed (for lack of the exact ratio) and still blows away NetBurst. Different architecture.

BFG, you shouldn't stonewall what an onboard GPU would need. It depends entirely on its design. Maybe it won't need 3000 MHz GDDR5/6. Diverse thinking should be utilized here, not what "should" be.

QFT.

Look at the 9800 Pro and the 6600 GT. The 9800 Pro has a good deal more memory bandwidth thanks to its 256-bit memory bus versus the 6600 GT's 128-bit. Despite this, the 6600 GT is faster in pretty much every single case, including AA/AF and high resolutions.
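
For reference, the cards' commonly quoted specs do bear out the bandwidth gap (my calculation from the usual listed clocks and bus widths):

```python
# Peak memory bandwidth from each card's commonly quoted specs.

def bandwidth_gbs(bus_bits: int, mts: float) -> float:
    """Theoretical peak bandwidth in GB/s for a bus width and transfer rate (MT/s)."""
    return bus_bits / 8 * mts * 1e6 / 1e9

r9800_pro = bandwidth_gbs(256, 680)    # 256-bit bus, 340 MHz DDR
gf6600_gt = bandwidth_gbs(128, 1000)   # 128-bit bus, 500 MHz GDDR3 (PCIe model)

print(f"9800 Pro: {r9800_pro:.1f} GB/s")  # ~21.8
print(f"6600 GT:  {gf6600_gt:.1f} GB/s")  # ~16.0
```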
 

kobymu

Senior member
Mar 21, 2005
576
0
0
Originally posted by: dguy6789
Look at the 9800 Pro and the 6600 GT. The 9800 Pro has a good deal more memory bandwidth thanks to its 256-bit memory bus versus the 6600 GT's 128-bit. Despite this, the 6600 GT is faster in pretty much every single case, including AA/AF and high resolutions.

Including DirectX 8 games?

 

Gstanfor

Banned
Oct 19, 1999
3,307
0
0
Look at the 9800 Pro and the 6600 GT. The 9800 Pro has a good deal more memory bandwidth thanks to its 256-bit memory bus versus the 6600 GT's 128-bit. Despite this, the 6600 GT is faster in pretty much every single case, including AA/AF and high resolutions.
Don't let the fanatics catch you stating that -- they got terribly bent out of shape any time I used to bring it up...
 

Keysplayr

Elite Member
Jan 16, 2003
21,219
56
91
Originally posted by: Gstanfor
Look at the 9800 Pro and the 6600 GT. The 9800 Pro has a good deal more memory bandwidth thanks to its 256-bit memory bus versus the 6600 GT's 128-bit. Despite this, the 6600 GT is faster in pretty much every single case, including AA/AF and high resolutions.
Don't let the fanatics catch you stating that -- they got terribly bent out of shape any time I used to bring it up...

Is there a bus or something you can walk in front of? Seriously though, there was NO call for your comment.
 

Keysplayr

Elite Member
Jan 16, 2003
21,219
56
91
Originally posted by: dguy6789
Originally posted by: keysplayr2003
Originally posted by: BFG10K
And nobody ever said it did.
That's my point - for a GPU to work at today's performance levels it needs fast dedicated RAM and lots of it.

If you want to move the GPU onto the CPU's die you need to deal with this fact.

If you use system RAM then you end up with something a tad better than current GMA.

Maybe it's not how fast the RAM is, but how you use it. Just sidestep for a moment and look at the NetBurst architecture. It needed insane clock speeds to compete, eventually getting up to 3.8 GHz (which is actually what I have now).
Enter Conroe. It runs at about half the clock speed (for lack of the exact ratio) and still blows away NetBurst. Different architecture.

BFG, you shouldn't stonewall what an onboard GPU would need. It depends entirely on its design. Maybe it won't need 3000 MHz GDDR5/6. Diverse thinking should be utilized here, not what "should" be.

QFT.

Look at the 9800 Pro and the 6600 GT. The 9800 Pro has a good deal more memory bandwidth thanks to its 256-bit memory bus versus the 6600 GT's 128-bit. Despite this, the 6600 GT is faster in pretty much every single case, including AA/AF and high resolutions.

Kind of not the best example, if only because the 6600 GT uses MUCH higher core and memory clocks than the 9800 Pro/XT. But yeah, specs don't really mean anything unless they're implemented properly. Clock for clock, I'm pretty sure the 9800 would walk all over a 6600 GT. A better comparison would be the vanilla 6600.
 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
Originally posted by: keysplayr2003
Originally posted by: apoppin
Originally posted by: keysplayr2003
Originally posted by: DAPUNISHER
Could this be better suited for some form of new consumer electronics device, as opposed to the PC?

They did seem to mention that it was "stupidly" scalable, so I would imagine there will be a broad market for this type of technology, from handhelds to mainframes.

did someone *miss* this from VR? :p
How about 16x the performance of the fastest graphics card out there now [referring to G80]
:Q

in 2 years

you guys should really have been following this :p


:D

We are following it. ;)
*should really have been*

good
:thumbsup:

of course ... Sony made the same claims for Cell :p

 

SamurAchzar

Platinum Member
Feb 15, 2006
2,422
3
76
Originally posted by: JungleMan1
Originally posted by: zephyrprime
I don't see how a general purpose processor can possibly beat a specialized processor. After all, there isn't anything in a general purpose processor that can't be in a special purpose processor, but there are things in a special purpose processor that aren't in a general purpose processor.
Heh, that's what people said about math coprocessors back in the late 80s.

Yes, technically, you are right: a specialized chip will always be faster. However, as technology advances, we'll be able to integrate more functionality into a general-purpose chip than anyone is ever going to use. For example, nowadays our CPUs are advanced enough to perform any type of mathematical function, so we have no need for a math chip.

In 10 years, dedicated graphics chips will seem as quaint as math coprocessors are today.

Oh, and also, NVIDIA will not die out because of one product. All NVIDIA has to do is come up with their own version of the product.

QFT, but my reasoning is slightly different. With programmable shaders and the kind of stuff the GPU can pull off nowadays, it's getting nearer and nearer to CPU territory. It's no longer a simple number-crunching ASIC.
On the other hand, with monster SIMD units and the like, the CPU is getting better and better at doing the specialized, clock-optimized work that was previously offloaded to ASICs. And it's getting better at parallelism too.

Once complex logic is introduced into the work of the GPU, it will require the exact same facilities as a CPU to perform its work, namely branch prediction, out-of-order execution, and the like. Intel has these pinned.