Nvidia at work on combined CPU with graphics - On 65nm in 2008


cmdrdredd

Lifer
Dec 12, 2001
27,052
357
126
Originally posted by: sm8000
Who's overclocking Dell/HP/Compaq machines?

If you had bothered to read what I was replying to, you would understand. READ BEFORE YOU POST!

I was posting in reply to the statement that the overclocker market is small. So I addressed the largest part of the market... OEMs. Those OEMs won't be sold on NV's CPU if Nvidia decides to try moving into the consumer-level desktop space with it.
 

Steve

Lifer
May 2, 2004
16,572
6
81
www.chicagopipeband.com
LOL, you take this stuff really personally. You need to chill out.

And I agree with what you're saying now, that OEMs won't just jump in with a new CPU brand. Look how long it took Dell to adopt AMD. nVidia CPUs will have to be proven as well.
 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
a little update:

http://www.theinquirer.net/default.aspx?article=37163
Nvidia moves one step closer to its own CPUs

... nearly US$ (not CAN$) three million just poured by Nvidia into a tiny Canadian outfit called Acceleware Corp, based in cold Calgary.

Acceleware is heavily involved in high-performance computing tasks acceleration using Nvidia GPUs, mainly for the electromagnetic (cellphone, microwave, IC), seismic (oil exploration - remember the company's location!), biomedical, industrial and military markets. They argue that, for many of these apps, the acceleration possible on specific tasks can reach between 10x and 40x - and that was on accelerator cards similar to the old 7900GTX. The new G80 generation should provide further boost.

Up to now, Acceleware's approach was quite different from that of companies like PeakStream, who provide affordable (in fact, trial versions are free) numerical libraries to offload any suitable user tasks to the (mostly ATI, soon Nvidia too) GPUs. Acceleware sells comparatively expensive, guaranteed turnkey solutions for ritzy clients: either software bundles with the card, or complete AMD- or Intel-based multiprocessor workstations with large memory and up to four accelerator cards.

The Nvidia investment may change that - the 'greens' need a software partner who can rapidly turn out some kind of more mainstream yet highly optimised NV-based FP processing for anything from ray-traced rendering for movies to genomics or financial modeling. So, it has to go beyond proprietary turnkey into the open market, where its existing FP optimisation expertise could be of substantial help to Nvidians across a greater field. Once the FP portion is fixed and the brand is accepted on more programmers' desks this way, Nvidia can start focusing on the general-purpose integer portion - fixing the X86 execution compatibility, the next step towards having its own ultrafast CPU solution soon.
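The 10x-40x figures quoted above come from offloading massively data-parallel floating-point kernels. As a rough, hedged illustration (not Acceleware's or PeakStream's actual code - the grid size and coefficient are invented for the example), here is the shape of a simplified 2D finite-difference time-domain (FDTD) electromagnetic update in plain C; every cell depends only on its neighbours' previous values, so a GPU can run one thread per cell instead of a serial sweep:

#include <stddef.h>

#define NX 1024
#define NY 1024

static float ez[NX][NY];   /* electric field              */
static float hx[NX][NY];   /* magnetic field, x component */
static float hy[NX][NY];   /* magnetic field, y component */

/* One simplified FDTD-style update: each (i, j) is independent of every
 * other cell in this step, which is what makes the loop GPU-friendly.  */
void fdtd_step(float c)
{
    for (size_t i = 1; i < NX - 1; i++)
        for (size_t j = 1; j < NY - 1; j++)
            ez[i][j] += c * ((hy[i][j] - hy[i - 1][j]) -
                             (hx[i][j] - hx[i][j - 1]));
}

Offloading exactly this kind of loop is what PeakStream-style libraries automate and what Acceleware wraps into turnkey solutions for the electromagnetic and seismic markets mentioned above.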
 

StopSign

Senior member
Dec 15, 2006
986
0
0
Whoa, this thread got dug out of the grave. I didn't realize how old it was until I read a post that mentioned "the upcoming release of 680i".

Know what would be ironic right now? nVidia releases their own CPU, socket and chipset later this year and it blows everything out of the water, like Conroe did to K8.
 

nyker96

Diamond Member
Apr 19, 2005
5,630
2
81
I heard about that Fusion chip. It sounds like a pretty good idea and may actually go over well with OEM builders, since they can forgo a graphics card. Who knows, this arrangement may even yield high performance since the CPU and GPU are so closely knit. One thing will be true: this kind of design will save on energy - no watt-hungry external cards, just an extra core or two on the main chip.
 

SickBeast

Lifer
Jul 21, 2000
14,377
19
81
Why on earth would Intel start making discrete graphics while the rest of the world works toward integrating graphics with the CPU? Something definitely doesn't make sense here.
 

Duvie

Elite Member
Feb 5, 2001
16,215
0
71
Hmmm...I live in Portland, OR and I haven't heard anything about an Nvidia Design Center...

Makes sense...snooping around Intel's backyard. If the claims of Nvidia wanting to be absorbed by Intel are true, this would make even more sense.


For most of us enthusiasts this won't make much of a difference for a while. Integrated graphics suck, oftentimes even for non-gamers...

I don't see Intel being able to compete with AMD/ATI on this front unless they do purchase Nvidia. This is simply using some cores in a multi-core CPU to do the graphics, and CPU cores, as has long been discussed, are not suited for that.
 

Bateluer

Lifer
Jun 23, 2001
27,730
8
0
Originally posted by: SickBeast
Why on earth would Intel start making discrete graphics while the rest of the world works toward integrating graphics with the CPU? Something definitely doesn't make sense here.

Intel will do both; they have the resources. That way, they will have the best of both worlds: a tightly integrated CPU/GPU combo plus high-performing CPUs and GPUs, allowing them to claim both aspects of the market.
 

Arcada

Banned
Jan 14, 2007
45
0
0
Originally posted by: nitromullet
Originally posted by: apoppin
Originally posted by: Yoxxy
Nvidia is doing this because they want to get bought by Intel. Although GPUs are much better than CPUs at doing repetitive tasks, they need a lot more logic to do load balancing and computation of different ordinal and rational behaviors. If they do make a processor it won't be for a high-end system. This is meant to be a low-end computer with integrated graphics, and probably a CPU/video solution that is soldered directly to the motherboard.

of course . . . in the beginning . . . everything starts 'simple' and in this case to reduce costs. nvidia is 'following' . . . but it's a very aggressive move.... and i don't think it is to be 'acquired'. i think they would hate that and would not work well in a 'takeover' situation . . . and it would probably never happen under their current CEO.

however . . . it appears to be the 'future' . . . it is speculated that AMD acquired ATi to do just this: merge the CPU/GPU and specialize the platform toward specifics.

I agree with apoppin here. NVIDIA is a leader not a follower, and quite honestly, they rock at everything they do. A few years ago, people were saying the same about NV's move into the chipset game. While nForce1 wasn't all that great, nForce eventually became the hands down best AMD chipset (since nForce2), and now they make chipsets that rival Intel's own.

Another thing to consider: NVIDIA may have a lot of work to do on the CPU side, but Intel has a whole lot of work to do on the GPU side. AMD/ATI may take the initial lead on this, but they are behind NV and Intel with regards to chipsets.

qft
 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
Aha ... some details on Intel's CPU-GPU - Larrabee

http://www.vr-zone.com/?i=4605
From our non-Intel sources, we came to know about Intel's Visual Computing Group (VCG) discrete graphics plans. There seem to be a few interesting developments down the pipeline that could prove quite a challenge to NVIDIA and AMD in two years' time. As already stated on their website, the group is focused on developing advanced products based on a many-core architecture, targeting high-end client platforms initially. Their first flagship product for games and graphics-intensive applications is likely to arrive in the late 2008-09 timeframe, and the GPU is based on a multi-core architecture. We heard there could be as many as 16 graphics cores packed into a single die.

The process technology we speculate for such a product is probably 32nm, judging from the timeframe. Intel clearly has the advantage of their advanced process technology, since they are always at least one node ahead of their competitors and they are good at tweaking for better yield. Intel is likely to reuse their CPU naming convention for GPUs, so you could probably guess that the highest end could be called Extreme Edition, and there should be mainstream and value editions. The performance? How about 16x the performance of the fastest graphics card out there now [referring to G80], as claimed. Anyway, it is hard to speculate who will lead by then, as it will be the DX10.1/11 era, with NVIDIA G9x and ATi R7xx around.

http://www.theinquirer.net/default.aspx?article=37548
VRZ got it almost dead on, the target is 16 cores in the early 2009 time frame, but that is not a fixed number. Due to the architecture, that can go down in an ATI x900/x600/x300 fashion, maybe 16/8/4 cores respectively, but technically speaking it can also go up by quite a bit.

What are those cores? They are not GPUs; they are x86 'mini-cores' - basically small, dumb, in-order cores with a staggeringly short pipeline. They also have four threads per core, so a total of 64 threads per "CGPU". To make this work as a GPU, you need instructions - vector instructions - so there is a hugely wide vector unit strapped on to it. The instruction set, an x86 extension for those paying attention, will have a lot of the functionality of a GPU.

What you end up with is a ton of threads running a super-wide vector unit with the controls in x86. You use the same tools to program the GPU as you do the CPU, using the same mnemonics, and the same everything. It also makes things a snap to use the GPU as an extension to the main CPU.
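To make the 'same tools, same mnemonics' point concrete, here is a minimal sketch in ordinary C using the standard 4-wide SSE intrinsics every x86 compiler already ships - purely an analogy, since Larrabee's actual vector extension had not been published; the article's claim is that the same style of code would simply target a far wider vector unit on the mini-cores:

#include <xmmintrin.h>   /* standard SSE intrinsics: 4 floats per register */

/* dst = a * scale + b across four values at once; on a hugely wide vector
 * unit the identical pattern would just cover many more lanes per op.    */
void madd4(float *dst, const float *a, const float *b, float scale)
{
    __m128 va = _mm_loadu_ps(a);     /* load 4 floats from a  */
    __m128 vb = _mm_loadu_ps(b);     /* load 4 floats from b  */
    __m128 vs = _mm_set1_ps(scale);  /* broadcast the scalar  */
    _mm_storeu_ps(dst, _mm_add_ps(_mm_mul_ps(va, vs), vb));
}

Because it is just x86, the same compiler, debugger and profiler used for the host CPU would apply, which is the 'snap to use the GPU as an extension to the main CPU' part.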

Rather than making the traditional 3D pipeline of putting points in space, connecting them, painting the resultant triangles, and then twiddling them simply faster, Intel is throwing that out the window. Instead you get the tools to do things any way you want, if you can build a better mousetrap, you are more than welcome to do so. Intel will support you there.
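For anyone who has not written one, those traditional steps - place points, connect them into a triangle, paint the covered pixels - look roughly like the toy sketch below (plain C, already-projected 2D points, nothing Intel-specific and entirely hypothetical). The point of an all-x86 design is that each stage is just a function you are free to swap out, say for a ray tracer:

#include <stdio.h>

typedef struct { float x, y; } Vec2;

/* Edge function: signed area telling which side of edge a->b point p lies on. */
static float edge(Vec2 a, Vec2 b, Vec2 p)
{
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

/* "Paint the resultant triangle": mark every pixel whose centre is inside. */
static void paint(Vec2 v0, Vec2 v1, Vec2 v2, int w, int h, unsigned char *fb)
{
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++) {
            Vec2 p = { x + 0.5f, y + 0.5f };
            if (edge(v0, v1, p) >= 0 &&
                edge(v1, v2, p) >= 0 &&
                edge(v2, v0, p) >= 0)
                fb[y * w + x] = 1;
        }
}

int main(void)
{
    static unsigned char fb[16 * 16];                  /* tiny framebuffer   */
    Vec2 a = { 1, 1 }, b = { 14, 2 }, c = { 7, 14 };   /* "points in space"  */
    paint(a, b, c, 16, 16, fb);                        /* connect and paint  */
    for (int y = 0; y < 16; y++, putchar('\n'))        /* crude ASCII dump   */
        for (int x = 0; x < 16; x++)
            putchar(fb[y * 16 + x] ? '#' : '.');
    return 0;
}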

Those are the cores, but how are they connected? That one is easy, a hugely wide bi-directional ring bus. Think four not three digits of bit width and Tbps not Gbps of bandwidth. It should be 'enough' for the average user, if you need more, well now is the time to contact your friendly Intel exec and ask.
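A quick back-of-the-envelope check on "four not three digits of bit width and Tbps not Gbps", using assumed figures (the real width and clock were not disclosed):

#include <stdio.h>

int main(void)
{
    /* All three numbers are assumptions for illustration, not Intel specs. */
    double width_bits = 1024.0;   /* "four digits of bit width" */
    double clock_ghz  = 2.0;      /* assumed ring clock         */
    double directions = 2.0;      /* bi-directional ring        */

    double tbps = width_bits * clock_ghz * directions / 1000.0;
    printf("aggregate ring bandwidth: %.1f Tbps (about %.0f GB/s)\n",
           tbps, tbps * 1000.0 / 8.0);
    return 0;
}

Even with those modest assumptions the result is roughly 4 Tbps (around 512 GB/s), which is how a ring like this lands in the terabit range the article describes.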

As you can see, the architecture is stupidly scalable: if you want more CPUs, just plop them on; if you want fewer, delete nodes - not a big deal. That is why we said 16, but it could change up or down on a whim. The biggest problem is bandwidth usage as a limiter to scalability. 20 and 24 core variants seem quite doable.

The current chip is 65nm and was set for first silicon in late 07 last we heard, but this was undoubtedly delayed when the project was moved from late 08 to 09. This info is for a test chip, if you see a production part, it will almost assuredly be on 45 nanometres. The one that is being worked on now is a test chip, but if it works out spectacularly, it could be made into a production piece. What would have been a hot and slow single threaded CPU is an average GPU nowadays.

Why bring up CPUs? When we first heard about Larrabee, it was undecided where the thing would slot in, CPU or GPU. It could have gone the way of Keifer/Kevet, or been promoted to full CPU status. There was a lot of risk in putting out an insanely fast CPU that can't do a single thread at speed to save its life.

The solution would be to plop a Merom or two in the middle, but seeing as the chip was already too hot and big, that isn't going to happen, so instead a GPU was born. I would think that the whole GPU notion is going away soon as the whole concept gets pulled on die, or more likely adapted as tiles on a "Fusion like" marchitecture.

In any case, the whole idea of a GPU as a separate chip is a thing of the past. The first step is a GPU on a CPU like AMD's Fusion, but this is transitional. Both sides will pull the functionality into the core itself, and GPUs will cease to be. Now do you see why Nvidia is dead?

So, in two years, the first steps to GPUs going away will hit the market. From there, it is a matter of shrinking and adding features, but there is no turning back. Welcome the CGPU. Now do you understand why AMD had to buy ATI to survive? µ

* Update
We originally stated that Intel had briefed VRZone, but it transpires that wasn't the case. Apologies.
yes i DO understand why AMD acquired ATi

theinq is correct [imo] in stating and ferreting out the obvious