Nvidia GPUs soon a fading memory?


dzoner

Banned
Feb 21, 2010
114
0
0
We were talking about making it a success, requiring a lot of SOFTWARE support.
No doubt they can copy-paste an IGP onto a CPU, Intel's already shown them how it's done.
But that has NOTHING to do with pushing nVidia out of the DISCRETE market...
You need a LOT more than that.

You have word AMD won't have their drivers ready for Fusion's launch?

by "an IGP onto a CPU" you mean "an IGP onto a processor package"? Putting an 'IGP onto a CPU' connotates it being on the same chip. That isn't what Intel did.

But it is what AMD has already accomplished. Except it isn't an IGP, it's derived from an actual discrete DX11 chip. With 400 SPs. ON CHIP.
 

dzoner

Banned
Feb 21, 2010
114
0
0
I don't think that matters much for the argument...
Namely, although the GPU is 45 nm, we are comparing against non-integrated CPU + IGP solutions, where all CPUs are 45 nm as well, and IGPs are sometimes on an even older process than 45 nm.
So if anything, the 32 nm CPU would skew the comparison to Clarkdale's advantage, as we don't have any non-integrated 32 nm solutions to compare to.
Despite that, the difference isn't very large.

The 'argument' was me pointing out to Edgy that his statement "Merge of CPU & GPU on one chip provides relatively MINOR advantage in power and cost savings and therefore ..." had no merit, as no such chip has been released.

The same reality applies to your argument. And your conclusion that 'Despite that, the difference isn't very large'.

Not that your argument above made any sense to me in any case.
 

A_Dying_Wren

Member
Apr 30, 2010
98
0
0
You have word AMD won't have their drivers ready for Fusion's launch?

by "an IGP onto a CPU" you mean "an IGP onto a processor package"? Putting an 'IGP onto a CPU' connotates it being on the same chip. That isn't what Intel did.

But it is what AMD has already accomplished. Except it isn't an IGP, it's derived from an actual discrete DX11 chip. With 400 SPs. ON CHIP.

Sorry, but what exactly is the fine difference between "copy-pasting" the IGP onto the package and putting the whole thing together into one behemoth of a chip? If anything, the latter could be risky for AMD/ATI, as Nvidia has aptly shown with its giant Fermi die and low yields.

The 'argument' was me pointing out to Edgy that his statement "Merge of CPU & GPU on one chip provides relatively MINOR advantage in power and cost savings and therefore ..." had no merit, as no such chip has been released.

The same reality applies to your argument. And your conclusion that 'Despite that, the difference isn't very large'.

Not that your argument above made any sense to me in any case.

As far as most of us can conjecture, it would lead to minor power/cost savings. On what grounds do you conjecture that Fusion will be any more than that? It's all conjecture at this point, so you can't point a finger at us and accuse us of having no merit because no such chip has been released when you yourself are in the same situation.

Back on topic: I think if Nvidia can really get Fermi sorted and efficient, which I think they will in a few refreshes/die shrinks, a combination of that and the more efficient Intel CPUs would still put up a very valid fight against AMD/ATI. Of course it's anyone's guess how well Northern Islands will perform. Does ATI perchance have anything similar to Optimus technology? The lack thereof is a dealbreaker with laptops, IMO.

For desktops, which have a far greater performance and heat tolerance, I don't think AMD will see a real benefit in lumping the CPU with the GPU, for a few reasons:

  1. GPUs in desktops can be fairly powerful or can be just enough. Just enough for most people by that time will be Intel CPUs + IGP. For fairly powerful machines, you probably don't want even a 5770 and a 945T in immediate proximity to each other. Fusion can target the lower-end gaming market fairly efficiently, I think, but anything else would face stiff competition or technical impracticality.
  2. I doubt you'll really be saving on cost. AMD still needs to charge reasonable amounts of money to pay for the development of Fusion, and even if they dropped prices, that's not to say Intel or Intel+Nvidia won't drop prices to match.
  3. Intel and Nvidia still have the better marketing/vendor relations departments.
  4. Probably obvious, but no enthusiast would get within a mile of Fusion.
 

Scali

Banned
Dec 3, 2004
2,495
0
0
You have word AMD won't have their drivers ready for Fusion's launch?

I already explained that it takes a lot more than just drivers if you want to use Fusion to accelerate application code.
You need to extend the x86 instruction set to incorporate the GPU instructions as well.
Then you need to deliver a compiler so that developers can actually use this instruction set...
And then you need to convince developers to rewrite their applications and optimize for Fusion.
If you don't, it's no different from a CPU with an IGP, or with a GPU on package; it's just a cost-cutting feature, and not a very spectacular one at that.
It's not going to help x86 code at all, and it will not deliver performance competitive with discrete cards either.
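To make that last point concrete: today, getting a workload onto the GPU means the developer rewrites the hot loop against a GPU-compute API and manages the data movement explicitly; plain x86 code never touches the GPU. Below is a minimal sketch of what that rewrite looks like, using OpenCL/PyOpenCL purely as a stand-in (this is not AMD's actual Fusion programming model; the array size and names are illustrative).

Code:
import numpy as np
import pyopencl as cl

n = 1000000
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)

# The unmodified, CPU-only path: this is all existing x86 applications do.
c_cpu = a + b

# The "rewritten for the GPU" path: set up a device context, copy the data
# over, express the loop as a kernel, and copy the result back.
ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)
mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
c_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

prg = cl.Program(ctx, """
__kernel void vadd(__global const float *a,
                   __global const float *b,
                   __global float *c)
{
    int gid = get_global_id(0);
    c[gid] = a[gid] + b[gid];
}
""").build()

prg.vadd(queue, (n,), None, a_buf, b_buf, c_buf)
c_gpu = np.empty_like(a)
cl.enqueue_copy(queue, c_gpu, c_buf)
assert np.allclose(c_cpu, c_gpu)

None of that happens automatically with a driver install, which is the point.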
 

dzoner

Banned
Feb 21, 2010
114
0
0
A) But isn't there a very valid reason for splitting up the GPU and the CPU: to distribute the heat around the laptop more evenly? I'm fairly sure you're going to have one massive hot spot on laptops, which is not going to be particularly appreciated if you have power equivalent to what Nvidia churns out right next to AMD's less efficient CPUs.

B) AMD (not ATI) hasn't really had a stellar track record of late. They've been deeply relegated to budget computing on normal desktops ever since C2D. They only really compete in the server arena because server software is considerably more multi-threaded, so AMD can go crazy on cores. I haven't seen any proper reviews of their latest notebook CPUs, but I'm quite willing to bet they're not up to scratch with Intel's stuff.

C) ATI has done well since the 4xxx series, for I think the same reasons Nvidia will do well with Fermi in the future. Both the 2xxx and 4xx series were initially hot and loud (at least the 4xx series gave very decent performance). The 2xxx series became amazing with die shrinks. Fermi might too.

D) And I think you overplay this "cost/performance" stuff. It's fairly clear such a ratio has little in common with the success of a laptop/desktop out of the enthusiast realm. It's all marketing and vendor relations, and Nvidia and Intel have that down pat.

E) The real danger to Nvidia I think is if Intel's graphics unit actually became competitive, which isn't too likely. And no, I don't think Larrabee is anywhere in the mind of JHH. Intel hasn't even so much as churned out a valid (current) roadmap with Larrabee on it, AFAIK.

A) I don't know much about notebook cooling, but wouldn't localizing the heat source to a single spot allow for a smaller, more efficient cooler to vent the heat out of the case? And AMD has said the STARS cores of the Fusion chips have been extensively reworked to include fine-grained digital power gating and significant efficiency improvements. They haven't released any data on how they've reworked the graphics core, but it should be a safe assumption they've done something similar to it. These chips should be significantly more efficient and finely match power use to loads, making their performance/thermal envelope class leading. 'Hotspots' shouldn't be a problem.

B) I said stellar execution lately. For the AMD and ATI divisions. Which is true. And it has been their saving grace, keeping new (even if only reworked) and price-competitive products rolling out the door while Fusion was delayed as Dirk Meyer reworked the roadmap for it. The margins may have been thin, but they do pretty much own the sub-$200 CPU market, and they have been piling up design wins in the laptop market. They are in excellent shape for the launch of Fusion in 2011, which is looking like it could be pretty explosive.

D) Marketing and vendor relationships are not set in stone. Intel had their vendor club taken away, and Nvidia's 'bumpgate' fiasco didn't endear them to their vendors. Both contributed to AMD's recent record number of design wins in the laptop market. Nor is Nvidia anywhere near the 'name' it once was. Listening to PC Per last night on TWiT, Josh Walrath noted that he's talked to a number of vendors about Fusion, who were all very positive, even excited, about their upcoming Fusion products. On an even playing field, which is what is developing, cost/performance has A LOT in common 'with the success of a laptop/desktop out of the enthusiast realm'.
 

Scali

Banned
Dec 3, 2004
2,495
0
0
The 'argument' was me pointing out to Edgy that his statement "Merge of CPU & GPU on one chip provides relatively MINOR advantage in power and cost savings and therefore ..." had no merit, as no such chip has been released.

After the disaster that was AMD's 'native quad core', you'd think people would have learnt that a single monolithic die doesn't have any advantage over two dies in one package...
 

dzoner

Banned
Feb 21, 2010
114
0
0
After the disaster that was AMD's 'native quad core', you'd think people would have learnt that a single monolithic die doesn't have any advantage over two dies in one package...

I was wondering why Intel went with multi-die-on-package instead of monolithic dies for Nehalem and Westmere.

Thanks for the heads up.
 

GaiaHunter

Diamond Member
Jul 13, 2008
3,700
406
126
After the disaster that was AMD's 'native quad core', you'd think people would have learnt that a single monolithic die doesn't have any advantage over two dies in one package...

And since then, haven't we seen successful monolithic dies? Or is that a remark about NVIDIA's GPU strategy?

We need to look at what the products are by themselves - Athlon II, Core iX, and Phenom II are all successful monolithic-die designs.

Just for the sake of it, compare the original Phenoms, which could barely overclock and for which reaching 3 GHz was all but impossible, to the recent Thubans, which can reach 4 GHz with six cores (I know there was a die shrink, but even the first Phenom IIs could only reach 3.6-3.8 GHz and would consume the same or more power than a Thuban).

There is a good probability that the low-end/OEM discrete market will just disappear - with Llano we are talking about 5570-class performance, not exactly your regular IGP performance. And that is the first version. Would anyone buy a 9500GT/GT240 if they had a 5570 in their rig, for example?

Now will NVIDIA GPUs disappear? Not likely, at least not for the next several years.

The high-end market, and even the performance segment, will still be out of reach of on-die GPUs for some time, but the mainstream and low end probably will not.

Of course it also depends on how good Intel can make their graphics solutions.

NVIDIA has other markets though. So it shouldn't be going anywhere.
 

Scali

Banned
Dec 3, 2004
2,495
0
0
And since then, haven't we seen successful monolithic dies? Or is that a remark about NVIDIA's GPU strategy?

I never said that monolithic dies can't be successful... just that the fact that Intel uses two dies for the CPU-and-GPU package is not a disadvantage. Multi-chip modules are a great way to minimize cost, which is what CPUs with integrated GPUs are all about anyway. I was pointing out successful multi-chip modules.
 

A_Dying_Wren

Member
Apr 30, 2010
98
0
0
A) I don't know much about notebook cooling, but wouldn't localizing the heat source to a single spot allow for a smaller, more efficient cooler to vent the heat out of the case? And AMD has said the STARS cores of the Fusion chips have been extensively reworked to include fine-grained digital power gating and significant efficiency improvements. They haven't released any data on how they've reworked the graphics core, but it should be a safe assumption they've done something similar to it. These chips should be significantly more efficient and finely match power use to loads, making their performance/thermal envelope class leading. 'Hotspots' shouldn't be a problem.

B) I said stellar execution lately. For the AMD and ATI divisions. Which is true. And it has been their saving grace, keeping new (even if only reworked) and price-competitive products rolling out the door while Fusion was delayed as Dirk Meyer reworked the roadmap for it. The margins may have been thin, but they do pretty much own the sub-$200 CPU market, and they have been piling up design wins in the laptop market. They are in excellent shape for the launch of Fusion in 2011, which is looking like it could be pretty explosive.

D) Marketing and vendor relationships are not set in stone. Intel had their vendor club taken away, and Nvidia's 'bumpgate' fiasco didn't endear them to their vendors. Both contributed to AMD's recent record number of design wins in the laptop market. Nor is Nvidia anywhere near the 'name' it once was. Listening to PC Per last night on TWiT, Josh Walrath noted that he's talked to a number of vendors about Fusion, who were all very positive, even excited, about their upcoming Fusion products. On an even playing field, which is what is developing, cost/performance has A LOT in common 'with the success of a laptop/desktop out of the enthusiast realm'.

A) That I hadn't heard of, but it's about time they implemented something to that effect, which Intel has had for a while. Could be successful. We'll have to reserve judgment on that. I'm not too familiar with notebook cooling myself, but at least with two heat spots the laptop itself can be used to disperse some of the heat.

B) I don't know... I wouldn't define stellar execution as being able to price your product really low because the product is in many ways inferior to the competition. I would concur that they do reasonably well in the sub-$200 market, although their competition there seems to be i3s, C2Qs, and C2Ds.

D) They aren't set in stone, but Intel and Nvidia do seem to have the upper hand, at least in the personnel who deal with these companies. That they are "excited" about an upcoming product really means very little. It's all marketing speak.

And er... what design wins? ATI's been doing well in the mobile GPU market recently, I think, but AMD hasn't been going anywhere fast, and I highly doubt they will with their latest CPUs.
 

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
After the disaster that was AMD's 'native quad core', you'd think people would have learnt that a single monolithic die doesn't have any advantage over two dies in one package...

Isn't Nehalem a true quad-core design? AMD showed benchmarks in the server market showing that a true quad core can be beneficial in certain transactions compared to a sandwiched quad core; remember that on a Core 2 Quad, cores 1 and 2 can't communicate directly with cores 3 and 4 and have to go over the FSB, which is much slower. AMD's first true quad core was a disaster, but not because of its inter-core communication; it was because of its high-latency cache and the TLB bug, plus it had lower IPC compared to Nehalem.
 

Scali

Banned
Dec 3, 2004
2,495
0
0
Isn't Nehalem a true quad-core design?

That's not the point.

AMD showed benchmarks in the server market showing that a true quad core can be beneficial in certain transactions compared to a sandwiched quad core; remember that on a Core 2 Quad, cores 1 and 2 can't communicate directly with cores 3 and 4 and have to go over the FSB, which is much slower.

It is not, actually.
Anandtech did some testing with cache2cache a few years ago, to figure out just how much latency there was with core-to-core communication:
http://it.anandtech.com/show/2143/2
As you can see, the reality is that the Xeon is actually MUCH FASTER than the Opteron with on-die communication, and only marginally slower with die-to-die communication, compared to Opteron's core-to-core communication. This means that on average, the Xeon is actually FASTER than the Opteron with core-to-core communication.
Namely:
Core 1-2: 59 ns
Core 1-3: 154 ns
Core 1-4: 154 ns
Core 2-3: 154 ns
Core 2-4: 154 ns
Core 3-4: 59 ns

Average: (59*2 + 154*4)/6 = 122.33 ns
Better than the Opteron's 134 ns(!).
By using a smart scheduler that tries to keep communicating threads on the same die as much as possible, this average can be improved even further in the Xeon's favour.

The main win for the Opteron is socket-to-socket communication, but that has nothing to do with it being a native quadcore, that's the HyperTransport bus that the Xeon lacked.
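A quick back-of-the-envelope check of those numbers (the 80% figure in the second calculation is an assumption, purely to show the scheduler effect; everything else is taken from the latencies above):

Code:
xeon_on_die = 59.0      # ns, core pairs sharing a die (1-2, 3-4)
xeon_cross_die = 154.0  # ns, core pairs on different dies (over the FSB)
opteron = 134.0         # ns, any core pair on the native quad core

# Naive average over all six Xeon core pairs:
naive_avg = (2 * xeon_on_die + 4 * xeon_cross_die) / 6
print(naive_avg)        # 122.33 ns, already below the Opteron's 134 ns

# If a topology-aware scheduler kept, say, 80% of communicating thread
# pairs on the same die (assumed figure), the effective average drops:
effective = 0.8 * xeon_on_die + 0.2 * xeon_cross_die
print(effective)        # 78.0 ns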
 

dzoner

Banned
Feb 21, 2010
114
0
0
B) I don't know... I wouldn't define stellar execution as being able to price your product really low because the product is in many ways inferior to the competition. I would concur that they do reasonably well in the sub-$200 market, although their competition there seems to be i3s, C2Qs, and C2Ds.

D) They aren't set in stone, but Intel and Nvidia do seem to have the upper hand, at least in the personnel who deal with these companies. That they are "excited" about an upcoming product really means very little. It's all marketing speak.

And er... what design wins? ATI's been doing well in the mobile GPU market recently, I think, but AMD hasn't been going anywhere fast, and I highly doubt they will with their latest CPUs.

B) Execution of designs and plans. Getting actual, competitive, in-the-black products to market. Thuban was a stellar example: six cores in a four-core power envelope, released ahead of schedule at a standout price, and by all reports selling like hotcakes. Fermi was an example of atrocious execution: power hungry and late to market at an uncompetitive price.

D) The salient point is that the relationships are changing, and in AMD's favor. It's not all marketing speech. A vendor's engineers being enthusiastic about Fusion - which is where such expression would originate, as they delve into samples - spreads outwards, and there's no reason marketing wouldn't get enthused about marketing a new, game-changing, highly marketable and profitable product the engineers are enthused about. They, like the engineers, get tired of the same old, same old.
 
Last edited:

dzoner

Banned
Feb 21, 2010
114
0
0
I never said that monolithic dies can't be successful... just that the fact that Intel uses two dies for the CPU-and-GPU package is not a disadvantage. Multi-chip modules are a great way to minimize cost, which is what CPUs with integrated GPUs are all about anyway. I was pointing out successful multi-chip modules.

Only successful because AMD faceplanted with Phenom I. They aren't price/performance competitive with Phenom II or Athlon II at the same process node.
 
Last edited:

Scali

Banned
Dec 3, 2004
2,495
0
0
Only successful because AMD faceplanted with Phenom I. They aren't price/performance competitive with Phenom II or Athlon II at the same process node.

Uhhh, yes they are.
The Q6600 and above schooled Phenom and Phenom II for a long time; they were still faster clock-for-clock.
Perhaps you need a reminder of Phenom II vs Core2 Quad?
http://www.anandtech.com/show/2702/1
"Compared to the Core 2 Quad Q9400, the Phenom II X4 940 is clearly the better pick. While it's not faster across the board, more often than not the 940 is equal to or faster than the Q9400. If Intel can drop the price of the Core 2 Quad Q9550 to the same price as the Phenom II X4 940 then the recommendation goes back to Intel. The Q9550 is generally faster than the 940, more overclockable at lower voltages, and a high enough default clock speed to keep you happy in the long run."

The fastest Phenom II competed against the low-end Q9400 and Q9550 (and that was an old architecture, as Intel had already launched Nehalem by then).
Obviously the Phenom II had to be cheaper, as it couldn't perform, which is why AMD was bleeding so much cash. You can always compete on price/performance... but at what cost, when your product isn't actually better than the competition?
 

NoQuarter

Golden Member
Jan 1, 2001
1,006
0
76
That's not the point.



It is not, actually.
Anandtech did some testing with cache2cache a few years ago, to figure out just how much latency there was with core-to-core communication:
http://it.anandtech.com/show/2143/2
As you can see, the reality is that the Xeon is actually MUCH FASTER than the Opteron with on-die communication, and only marginally slower with die-to-die communication, compared to Opteron's core-to-core communication. This means that on average, the Xeon is actually FASTER than the Opteron with core-to-core communication.
Namely:
Core 1-2: 59 ns
Core 1-3: 154 ns
Core 1-4: 154 ns
Core 2-3: 154 ns
Core 2-4: 154 ns
Core 3-4: 59 ns

Average: (59*2 + 154*4)/6 = 122.33 ns
Better than the Opteron's 134 ns(!).
By using a smart scheduler that tries to keep communicating threads on the same die as much as possible, this average can be improved even further in the Xeon's favour.

The main win for the Opteron is socket-to-socket communication, but that has nothing to do with it being a native quadcore, that's the HyperTransport bus that the Xeon lacked.

Isn't the Xeon they benched there a 45nm process and the Opteron benched a 90nm? How do the architectures compare in this aspect on equal process footing?

I agree the multi-die package approach seems to have no inherent weakness compared to the monolithic native quad design, though; just curious about that aspect.

The best benefit of AMD's approach seems to have been for consumers getting cheap (unlockable) tri-core CPUs, though, since Intel's package approach gives less wasteful binning results (they only glue two dies together if all the cores function properly; otherwise they just split them and sell a dual-core and a single-core CPU).
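As a toy illustration of that binning argument (the 10% per-core defect rate is an invented number, and cache/uncore defects are ignored):

Code:
from math import comb

p_bad, p_good = 0.10, 0.90

def bin_fractions(cores):
    # Fraction of dies with exactly k working cores, assuming independent
    # per-core defects.
    return {k: comb(cores, k) * p_good**k * p_bad**(cores - k)
            for k in range(cores, -1, -1)}

# Monolithic native quad (AMD): a die with one dead core is salvaged as a
# tri-core part, which is where the cheap unlockable X3s come from.
print(bin_fractions(4))   # ~65.6% quad, ~29.2% tri, ~4.9% dual, ...

# MCM (Intel's Core 2 Quad approach): dual-core dies are tested first and
# only fully working dies get paired into a quad; partially working dies
# are simply sold as lower parts instead of dragging a quad down.
print(bin_fractions(2))   # ~81% fully working, ~18% one core dead, ~1% dead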
 
Last edited:

Edgy

Senior member
Sep 21, 2000
366
20
81
"We don’t get on-die graphics yet because Intel still hasn’t switched over to its make-everything-at-the-best-process-ever strategy. The 32nm fabs are ramping up with CPU production and the 45nm fabs need something to do. Nearly every desktop and laptop sold in 2010 will need one of these 45nm GMA die, so the fabs indeed have something to do."

Edgy - "Merge of CPU & GPU on one chip provides relatively MINOR advantage in power and cost savings and therefore chips like Clarksdales ..."

chip = die = chip

Clarkdale is one 32nm CPU chip + one 45nm graphics chip on one processor package.

Fusion is one 32nm chip with both CPU and graphics cores on die.

ergo - there is not yet a released 'Merge of CPU & GPU on one chip' part.

AMD's own strategy for Fusion takes three steps (as they were revealed a couple of years back) - integration, optimization, then exploitation. Please read this: http://www.anandtech.com/show/2229/3

The initial stage of the Fusion launch is integration - basically CPU & GPU in one processor package.

It really matters little at the integration stage whether they are on the same die or not (much like two CPUs in one package vs. one die - remember that debate).

The 'merge of CPU & GPU on one chip' you're talking about will actually happen at what AMD calls the optimization step. This is where they'll add the x86 instruction extensions to provide direct access to the GPU like they do for the CPU currently (plus any additional architectural improvements, etc.).

My point is that the initial Fusion'd BD launch will be integration only - that's next year sometime - and it's just moving an IGP into one processor package with the CPU cores; it is nothing earth-shattering.

Fully "optimized" Fusion products (BD or its successor) will be much later as it would more than likely require fairly significant instruction set and architectural changes to the CPU design. Seeing as how AMD wants to mimic Intel's tick tock - if that holds true then most likely guess at a time-frame for full optimized Fusion would be 2 years after BD launch.

Hence, luckily for all of us, I think the concern over Nvidia's impending demise is fairly premature.
 

Scali

Banned
Dec 3, 2004
2,495
0
0
Isn't the Xeon they benched there a 45nm process and the Opteron benched a 90nm? How do the architectures compare in this aspect on equal process footing?

I agree the multi-die package approach seems to have no inherent weakness compared to the monolithic native quad design, though; just curious about that aspect.

The Xeon is 65 nm: http://processorfinder.intel.com/details.aspx?sSpec=SL9YL
Manufacturing process has little to do with this anyway; it purely depends on the architecture (and the clock speed, obviously).
Here's another one then, with different Xeons and Opterons used: http://it.anandtech.com/show/2386/4
It shows basically the same story: Xeons are much faster core-to-core, and not all that much slower via the FSB.
The Xeons average at about 117.6 ns, which beats the B1 revision. The B2 revision is slightly faster, but nothing spectacular.
 

GaiaHunter

Diamond Member
Jul 13, 2008
3,700
406
126
My point is that the initial Fusion'd BD launch will be integration only - that's next year sometime - and it's just moving an IGP into one processor package with the CPU cores; it is nothing earth-shattering.

Exactly which IGP compares to the power of a 5570?

Depending on the price, that might kill most of the market for cards like the 5570/GF 9500GT/GT240 and lower.

Why is the i3+GPU not earth-shattering? Because the GPU is still crap.
 

Scali

Banned
Dec 3, 2004
2,495
0
0
Exactly which IGP compares to the power of a 5570?

Depending on the price, that might kill most of the market for cards like the 5570/GF 9500GT/GT240 and lower.

Why is the i3+GPU not earth-shattering? Because the GPU is still crap.

First things first... AMD has to prove that they can pull it off.
I don't see them getting 5570 performance out of a GPU with shared memory.
The 5570 has 28.8 GB/s bandwidth.
A Phenom II X6 has around 13.6 GB/s bandwidth.
So if you were to put a 5570 GPU onto a Phenom II, you'd get less than half the bandwidth, AND you'd have to share it with the CPU.
A Radeon 5450 may be more realistic, as it only has 12.8 GB/s bandwidth itself.
But a Radeon 5450 isn't really a card that is capable of decent gaming.
 

GaiaHunter

Diamond Member
Jul 13, 2008
3,700
406
126
First things first... AMD has to prove that they can pull it off.
I don't see them getting 5570 performance out of a GPU with shared memory.
The 5570 has 28.8 GB/s bandwidth.
A Phenom II X6 has around 13.6 GB/s bandwidth.
So if you were to put a 5570 GPU onto a Phenom II, you'd get less than half the bandwidth, AND you'd have to share it with the CPU.
A Radeon 5450 may be more realistic, as it only has 12.8 GB/s bandwidth itself.
But a Radeon 5450 isn't really a card that is capable of decent gaming.

If you use dual-channel DDR3 at 1333/1600 MHz, the bandwidth is 21.3/25.6 GB/s (don't forget dual channel allows for a 128-bit interface), quite comparable to the 28.8 GB/s. Additionally, it seems that the Llano GPU might have 480 SPs vs. the 5570's 400 SPs. It depends on the clock speed of the GPU, but if it is 700-750 MHz compared to the 5570's 650 MHz core, that doesn't sound that far-fetched.
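The arithmetic behind those figures, for reference (peak bandwidth = transfer rate x bus width; the 1.8 GT/s rate for the 5570's memory is inferred from the quoted 28.8 GB/s, and these are theoretical peaks, not measured numbers):

Code:
def peak_bandwidth_gb_s(transfers_mt_s, bus_bits):
    # MT/s times bytes per transfer, expressed in GB/s
    return transfers_mt_s * (bus_bits / 8) / 1000

print(peak_bandwidth_gb_s(1333, 128))  # ~21.3 GB/s, dual-channel DDR3-1333
print(peak_bandwidth_gb_s(1600, 128))  # 25.6 GB/s, dual-channel DDR3-1600
print(peak_bandwidth_gb_s(1800, 128))  # 28.8 GB/s, Radeon HD 5570 (128-bit)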
 
Last edited:

Scali

Banned
Dec 3, 2004
2,495
0
0
If you use dual-channel DDR3 at 1600 MHz, the bandwidth is 25.6 GB/s (don't forget dual channel allows for a 128-bit interface, not 64-bit), quite comparable to the 28.8 GB/s.

Nope.
Look here:
http://www.legionhardware.com/articles_pages/amd_phenom_ii_x6_1090t_be_1055t,2.html
They used 2GB G.Skill DDR3 PC3-12800 (CAS 8-8-8-20).
That's 1600 MHz, dual channel.
As you see, 13.6 GB/s is what they get in the SiSoft memory bandwidth test.
Intel's dual channel works better; they get 16 GB/s out of it.
But only the triple-channel CPUs get 28 GB/s.
 

Seero

Golden Member
Nov 4, 2009
1,456
0
0
Yes, GPUs will fade away if Fusion does whatever the OP thinks it will do. A 5870 + X6 on one die, interconnected without any bottlenecks or power and heat issues. Soon it will retrofit the I/O, the PSU, and the case all into one die too. Dream on.

If you are by any means serious about it, please supply some proof that Fusion indeed does what you think it will. As of now, the i3-661 is the closest chip on the market to having a GPU integrated with the CPU (on the same package). It works, but no one buys them for obvious reasons.

The Fusion project was initiated when AMD first bought ATI. It sounded like it made a lot of sense back then. Nowadays we talk about multi-GPU cards with heatsinks larger than the card itself that still can't handle some of the newer games. Making each transistor smaller so more transistors can be placed in the same area makes sense, and cutting a part down and merging it with another part also makes sense, but it really isn't what you are thinking.
 