The Rise and Fall of AMD.


Makaveli

Diamond Member
Feb 8, 2002
4,976
1,571
136
I really don't see why not. They took an Atom and it is already competitive with ARM offerings in terms of performance and battery life. That's essentially a 'bastard' CPU leveraged for low-power usage. When Intel really starts to target and engineer these, they absolutely can be competitive.

I agree. I think people forget how much money and talent Intel is sitting on.

Now that ARM has their attention, we will see what happens.

This sleeping giant was woken up by the Athlon 64, and I can see it happening again.
 

pablo87

Senior member
Nov 5, 2012
374
0
0
Don't know if anybody else picked up on it, but that is a picture of a 386DX-40 PQFP mounted on a substrate for insertion into the 386 PGA socket. That is not something AMD ever shipped; it was done by third parties who took advantage of the price differential between the 386DX PQFP and 386DX PGA, and some of them didn't work very well at all, so if you didn't have the equipment to desolder the CPU, you were hooped. Some motherboards actually laid out both PQFP and PGA.

What a waste of brain cells.
 

Maximilian

Lifer
Feb 8, 2004
12,604
15
81
How about an IGP that can give playable framerates at decent resolution?

That's what I thought AMD's APUs would turn out to be. An IGP with ~GTX 460 performance would really tear into the discrete card market; it doesn't exist yet, unfortunately.
 

Makaveli

Diamond Member
Feb 8, 2002
4,976
1,571
136
That's what I thought AMD's APUs would turn out to be. An IGP with ~GTX 460 performance would really tear into the discrete card market; it doesn't exist yet, unfortunately.

I think AMD did this on purpose, because right now it would tear into their own discrete cards.

Plus, I think they need to move below 32nm to really start beefing up the APUs.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
I think AMD did this on purpose, because right now it would tear into their own discrete cards.

Plus, I think they need to move below 32nm to really start beefing up the APUs.

On purpose? They have no choice. You can't get GTX 460 performance out of an APU because the RAM bandwidth is completely lacking.
 

Arkaign

Lifer
Oct 27, 2006
20,736
1,379
126
On purpose? They have no choice. You can't get GTX 460 performance out of an APU because the RAM bandwidth is completely lacking.

This is the truest thing I've ever seen, given the sensitivity to DDR3 frequency that the A8/A10 show. Even the fastest desktop DDR3 is pathetic compared to GDDR5, and without some other source of graphics memory, APU 3D performance doesn't have room to breathe. You could have a theoretical 7970-level APU, but if it's still using shared DDR3, it's not going to be able to take much advantage of it.
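To put rough numbers on that point, here is a minimal sketch of peak theoretical bandwidth. The GTX 460 (1GB) bus figures and the DDR3 speeds are assumed from public specs, and real-world efficiency is ignored.

[CODE]
def peak_bandwidth_gbs(bus_width_bits, transfer_rate_mts):
    """Peak theoretical bandwidth in GB/s: (bus width in bytes) x (transfers per second)."""
    return (bus_width_bits / 8) * (transfer_rate_mts * 1e6) / 1e9

# GTX 460 (1GB): 256-bit GDDR5 bus at ~3600 MT/s effective.
print(f"GTX 460 1GB GDDR5:      {peak_bandwidth_gbs(256, 3600):6.1f} GB/s")
# Desktop APU: two 64-bit DDR3 channels, shared with the CPU.
print(f"Dual-channel DDR3-1866: {peak_bandwidth_gbs(2 * 64, 1866):6.1f} GB/s")
print(f"Dual-channel DDR3-2133: {peak_bandwidth_gbs(2 * 64, 2133):6.1f} GB/s")
[/CODE]

Even fast desktop DDR3 tops out around a quarter to a third of what a mid-range 2010 card has to itself, and the iGPU shares that pool with the CPU.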
 

Makaveli

Diamond Member
Feb 8, 2002
4,976
1,571
136
On purpose? They have no choice. You can't get GTX 460 performance out of an APU because the RAM bandwidth is completely lacking.

You are right, IDC. I guess 'on purpose' was the wrong choice of words.

What would be the reason for the lack of RAM bandwidth?

Is it a lack of die space or something else?
 
Last edited:

Ferzerp

Diamond Member
Oct 12, 1999
6,438
107
106
On purpose? They have no choice. You can't get GTX 460 performance out of an APU because the RAM bandwidth is completely lacking.


Not to mention the die space needed for the required transistor count.

Everyone having happy happy fun fun delusional times over iGPUs conveniently ignores physics.

Since you talked about the 460, that was ~2B transistors. The i7-3770 is only 1.4B transistors.

Somehow, magically, AMD is going to borrow ARM pixie dust, defy physics, and make these things worth buying for games? ;)

Even the 7750 has more transistors than IB.

edit: subject verb agreement is good.
 
Last edited:

exar333

Diamond Member
Feb 7, 2004
8,518
8
91
Not to mention the die space needed for the required transistor count.

Everyone having happy happy fun fun delusional times over iGPUs conveniently ignores physics.

Since you talked about the 460, that was ~2B transistors. The i7-3770 is only 1.4B transistors.

Somehow, magically, AMD is going to borrow ARM pixie dust, defy physics, and make these things worth buying for games? ;)

Even the 7750 has more transistors than IB.

Don't forget they will up CPU IPC and the whole CPU/GPU will run at a 15W TDP. :D
 

tweakboy

Diamond Member
Jan 3, 2010
9,517
2
81
www.hammiestudios.com
Hey Mark, you still on the FW900? I used that thing for 10 years at 2304x1440 @ 80Hz.

Very nice, but it was doing weird stuff as usual and about to die, so I sold it for 30 bucks. The cover was off too; I had taken the tint cover off because this was my third replacement, so it was shiny and you could see yourself in the monitor. I think 10 years on a 22.5-inch ruined my eyes, and it was difficult to read, although there was more real estate and it looked nice. I'm at 1080p now; I can sit 5 feet away and don't have to go close to the monitor... thanks.
 

Ferzerp

Diamond Member
Oct 12, 1999
6,438
107
106
Don't forget they will up CPU IPC and the whole CPU/GPU will run at a 15W TDP. :D


From a performance standpoint, it really makes very little sense to tie the two together.

Combining the two die areas into a single die is just going to raise cost. Sure, you may be able to increase performance (if you could manage to sell the resulting $2k-per-processor beast).
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
I'm honestly surprised we aren't seeing more progress on faster access to RAM for integrated GPUs at this point.
The problem is that it runs completely counter to the financial fundamentals of an iGPU product.

People buy iGPUs because they don't want to spend money to get a discrete card.

Spending gobs of money on a quad-channel mobo and then populating it with DDR3-4000 RAM is not the best way to avoid paying $150 for a mainstream discrete video card.

You are right, IDC. I guess 'on purpose' was the wrong choice of words.

What would be the reason for the lack of RAM bandwidth?

Is it a lack of die space or something else?

Bandwidth basically comes down to the RAM type and the integration.

The RAM would need to be soldered onto the mobo for signal integrity purposes, and the mobo would need to be expensive because of the design aspects (video card PCBs are not free, and neither would be a mobo designed to replace a video card PCB).

And you'd need lots of memory channels on the CPU.

In other words, all those things that go into making a modern discrete video card, which make that card cost $150-$300, would need to go into the mobo and CPU.

You don't save anything by shoving the GPU into the CPU; all the supporting infrastructure that lets the GPU push pixels still has to be put in place if you want it to push pixels. And that costs money.
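As a rough illustration of the "lots of memory channels" point, here is a back-of-envelope sketch, using figures assumed from public specs rather than anything in the post, of how many 64-bit DDR3 channels it would take to match a GTX 460-class card's memory bandwidth.

[CODE]
import math

# Assumed figures: GTX 460 1GB = 256-bit GDDR5 at ~3600 MT/s effective,
# one DDR3-1866 channel = 64-bit at 1866 MT/s.
gtx460_bw_gbs = (256 / 8) * 3600e6 / 1e9       # ~115.2 GB/s
ddr3_channel_bw_gbs = (64 / 8) * 1866e6 / 1e9  # ~14.9 GB/s per channel

channels_needed = math.ceil(gtx460_bw_gbs / ddr3_channel_bw_gbs)
print(f"DDR3-1866 channels needed to match a GTX 460: {channels_needed}")  # 8
[/CODE]

Eight channels means server-class routing and pin counts on both the socket and the board, which is exactly the kind of cost being described.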
 

mrmt

Diamond Member
Aug 18, 2012
3,974
0
76
Bandwidth basically comes down to the RAM type and the integration.

The RAM would need to be soldered onto the mobo for signal integrity purposes, and the mobo would need to be expensive because of the design aspects (video card PCBs are not free, and neither would be a mobo designed to replace a video card PCB).

And you'd need lots of memory channels on the CPU.


What about moving the RAM to the same package or even to the same die?

Sure, it isn't something that AMD could afford (Trinity with RAM would probably rival Interlagos), but Intel can, especially when you think about the real estate 14nm will get them.
 

Ferzerp

Diamond Member
Oct 12, 1999
6,438
107
106
See my previous post about giant die cost.

Who could afford to purchase it?

AMD could "afford" to manufacture it. The problem is few people could afford to buy it.

A bigger die just means worse yield and fewer potential parts per wafer, and so increased cost per functional die (and therefore increased cost to you and me). Making a bigger die isn't really a function of being able to afford to.

Edit: For example, if we assume a 50% (totally made up) defect rate for a 100mm^2 die, then a 200mm^2 die, while only making half the parts per wafer, would also fail twice as often, so you'd end up with only 1/4 of the working parts per wafer and the cost would be 4x as much. Make it 400mm^2 and that becomes 1/16th the working parts, and 16x the cost just to break even. If the failure rate is only 20%, you're still looking at roughly 8x the cost for 400mm^2 vs 100mm^2.
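A minimal sketch of that geometric argument, using a simple Poisson defect model (an assumption; the post doesn't name a model, and under this model the 400mm^2 multiplier comes out nearer 32x than 16x, though the direction of the argument is the same):

[CODE]
import math

def good_dies_per_wafer(die_area_mm2, dies_per_wafer_at_100mm2=600, yield_at_100mm2=0.5):
    """Good dies per wafer under a simple Poisson defect model: yield = exp(-D * A)."""
    defect_density = -math.log(yield_at_100mm2) / 100.0       # defects per mm^2, backed out of the 100mm^2 yield
    dies = dies_per_wafer_at_100mm2 * (100.0 / die_area_mm2)  # die count scales inversely with area
    return dies * math.exp(-defect_density * die_area_mm2)

baseline = good_dies_per_wafer(100)
for area in (100, 200, 400):
    good = good_dies_per_wafer(area)
    # Holding wafer cost constant, cost per good die scales inversely with good dies per wafer.
    print(f"{area:>3} mm^2: {good:6.1f} good dies/wafer, ~{baseline / good:.0f}x cost per good die")
[/CODE]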
 
Last edited:

HypX

Member
Oct 25, 2002
72
0
0
Both Samsung and Apple don't have any relationship with ARM as such. It's only Qualcomm. Samsung and Apple are just interested in selling smartphones.

That's basically the main problem for ARM: most of the so-called ARM supporters don't have any reason to use ARM if a better product is out there.

They probably can't move off ARM right now, as it would mean a large porting effort. "Selling smartphones" is a multibillion-dollar business for them, after all.

There's not much of an alternative to ARM for their particular business models either. MIPS and PowerPC offer nothing in particular over ARM at this point, and Intel is not likely to license out x86 anytime soon.

Since all of the major ARM players already have their own processor development teams that they probably won't just scrap, the easiest direction for them would be to continue to improve their own chip designs until they become viable for desktops and eventually servers.
 
Last edited:

mrmt

Diamond Member
Aug 18, 2012
3,974
0
76
AMD could "afford" to manufacture it. The problem is few people could afford to buy it.

A bigger die just means worse yield, and so increased cost per functional die (and therefore increased cost to you and me).

AMD cannot afford to sell such a chip at their current margins :)

I know about the die issue, but is it that big? IVB 2C shrunk to 14nm would have a size similar to Brazos, around 70mm^2, and IVB 4C something like 110-120mm^2.

They could make a 4C 240mm^2 part with a big iGPU and some RAM included, and a 150mm^2 2C SKU with a smaller iGPU and some RAM too.

If the same die were a problem, they could move everything to the same package and kill the motherboard problem IDC spoke about.
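A quick sanity check of those shrink estimates; the ~22nm die areas and the area-scaling factor below are assumptions (approximate published figures and a conservative shrink), not numbers from the post.

[CODE]
# Assumed ~22nm Ivy Bridge die areas (approximate published figures, mm^2).
ivb_area_22nm = {
    "IVB 2C + GT2": 118.0,
    "IVB 4C + GT2": 160.0,
}

# Ideal 22nm -> 14nm scaling would be (14/22)^2 ~= 0.40; real products rarely
# achieve that, so assume a more conservative ~0.6x area per shrink.
AREA_SCALE_22_TO_14 = 0.6

for name, area in ivb_area_22nm.items():
    print(f"{name}: ~{area * AREA_SCALE_22_TO_14:.0f} mm^2 at 14nm")
[/CODE]

That lands in the same ballpark as the ~70mm^2 figure above, with the 4C estimate coming in a bit under the 110-120mm^2 range.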
 

Ferzerp

Diamond Member
Oct 12, 1999
6,438
107
106
I would imagine the failure rate per area goes up with each die shrink, so it's not exactly that simple, but someone with process knowledge should address that, not me. I can only speak from a purely geometric and statistical standpoint.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
If we just had serial-bus-based memory...

But besides that, 64-bit DIMMs need to retire. I get too many PPro-with-4xEDO flashbacks when looking at LGA2011.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
What about moving the RAM to the same package or even to the same die?

Sure, it isn't something that AMD could afford (Trinity with RAM would probably rival Interlagos), but Intel can, especially when you think about the real estate 14nm will get them.

The easiest way to answer any of these gedanken-type hypothetical questions is to take the question back to discrete GPUs.

If it would be the right answer for an iGPU then it would be the right answer for a discrete GPU as well.

So, reframe the question and ask yourself: "why doesn't Nvidia just move the GDDR3/4/5 RAM from the vcard PCB and into the GPU package itself?"

And then your answer becomes readily self-evident: $$$

The current method of manufacturing discrete GPU video cards came to exist because it is the lowest-cost pathway to creating graphics performance.

Moving RAM on-package will surely improve performance; that is why off-die SRAM caches came to be integrated on-die with our CPUs... but at a cost, and it required the process technology to evolve to enable it.

I have no doubt that in time on-package DRAM will become commonplace (IBM uses it now, as do a few consoles), but when it does it will also become commonplace for discrete GPUs, so it is not a technology that is going to preferentially elevate iGPU performance while the rest of the market stands still.

The iGPU market exists solely because it is the lowest-cost alternative to buying a discrete GPU. You give the iGPU away for free as a benefit of having bought the CPU, and you have it re-purpose the system's RAM as a very poor man's graphics RAM.

Anything you would do to eliminate those performance deficiencies will raise price, eliminating the entire motivation the market has for buying the iGPU product from the supplier.
 

mrmt

Diamond Member
Aug 18, 2012
3,974
0
76
So, reframe the question and ask yourself: "why doesn't Nvidia just move the GDDR3/4/5 RAM from the vcard PCB and into the GPU package itself?"

And then your answer becomes readily self-evident: $$$

The current method of manufacturing discrete GPU video cards came to exist because it is the lowest-cost pathway to creating graphics performance.

Thanks IDC.

From what I see, even if Intel decided to push the solution, the others could just swallow a bit of their margins, do the same thing, and keep the status quo ante.
 

Arkaign

Lifer
Oct 27, 2006
20,736
1,379
126
I'm glad people have come to understand the fundamental performance limitations of the APU/IGP design. I clearly remember being basically called a heretic when I declared Fusion and Larrabee a boondoggle upon announcement. People thought that performance would somehow magically exceed discrete, ignoring physics and logic. Limitations of die size, thermals, power delivery, and especially bandwidth combine with the complexity of the supporting mainboard PCB to make it less efficient for high-performance video than even a modest contemporary CPU with a discrete GPU.
 

Ferzerp

Diamond Member
Oct 12, 1999
6,438
107
106
I'm glad people have come to understand the fundamental performance limitations of the APU/IGP design. I clearly remember being basically called a heretic when I declared Fusion and Larrabee a boondoggle upon announcement. People thought that performance would somehow magically exceed discrete, ignoring physics and logic. Limitations of die size, thermals, power delivery, and especially bandwidth combine with the complexity of the supporting mainboard PCB to make it less efficient for high-performance video than even a modest contemporary CPU with a discrete GPU.


Maybe it's more that those people just aren't posting in this thread. I'm pretty sure they still exist. It's probably the same crowd who thinks a processor that works with the very limited software on a phone magically becomes appropriate when an order of magnitude (or two) more power is needed (like in a laptop or desktop).
 

Lonbjerg

Diamond Member
Dec 6, 2009
4,419
0
0
I'm glad people have come to understand the fundamental performance limitations of the APU/IGP design. I clearly remember being basically called a heretic when I declared Fusion and Larrabee a boondoggle upon announcement. People thought that performance would somehow magically exceed discrete, ignoring physics and logic. Limitations of die size, thermals, power delivery, and especially bandwidth combine with the complexity of the supporting mainboard PCB to make it less efficient for high-performance video than even a modest contemporary CPU with a discrete GPU.

I have gotten flak for talking down IGPs as "midget MMA fighters"... you are not alone.