ITT: We list current generation CPUs that are in most need of improvement


MiddleOfTheRoad

Golden Member
Aug 6, 2014
1,123
5
0
Yes, that is the point I was trying to make. No APU, even with GDDR5, is sufficient for a decent 1080p experience in the most demanding current games. It just seems pointless to go to the extra effort of pairing GDDR5 with an APU when you need a discrete card anyway to play a lot of current games at 1080p.

Well, honestly... I think that may change after they move to FinFET. At 28nm there probably isn't enough GPU real estate for 1080p gaming -- but if they can manage some additional shaders on the next-gen APU, I think the Zen-based APUs may have the horsepower to make 1080p gaming very pleasant.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
No APU, even with GDDR5, is sufficient for a decent 1080p experience in the most demanding current games.

That is true.

With an APU, I think the target should be older games (or less demanding new games).

And in those cases 4GB GDDR5 should definitely provide a boost.

This is in contrast to playing newer, more demanding games, where the user could benefit from 8GB. But having the extra shared RAM might not matter much anyway, because the APU's CPU and iGPU are too weak.
 

escrow4

Diamond Member
Feb 4, 2013
3,339
122
106
Here is Witcher 3 gameplay on a 2.6 GHz Core 2 Duo system with 3GB RAM and an HD 6670 1GB GDDR5:

https://youtu.be/QcXPkPYDl3s?t=314

Looks good to me. (Very smooth gameplay)

That isn't smooth. It isn't 60 FPS. It's dipping below 30 FPS, and there is near-constant micro-hitching. It isn't buttery smooth, and yes, I can tell. The problem with APUs is the same as trying to master everything: you end up average at everything. It will be a while before someone comes up with a perfect all-rounder APU that doesn't bottleneck somewhere.
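To put numbers on "smooth": average FPS alone doesn't capture micro-hitching; frame-time spikes do. Below is a minimal illustrative sketch in Python (the frame times are hypothetical, not measurements from the video) of how a run can post a near-30 FPS average while still hitching visibly:

```python
# Hypothetical frame times in milliseconds -- illustrative only.
frame_times_ms = [25] * 12 + [95, 90]

# The average looks almost acceptable...
avg_fps = 1000 / (sum(frame_times_ms) / len(frame_times_ms))

# ...but any frame taking more than twice the median frame time is a
# visible hitch, even when the average FPS looks fine.
median = sorted(frame_times_ms)[len(frame_times_ms) // 2]
hitches = [t for t in frame_times_ms if t > 2 * median]

print(f"average: {avg_fps:.1f} FPS, visible hitches: {len(hitches)}")
# -> average: 28.9 FPS, visible hitches: 2
```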
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
It provides a unified socket enabling a full range of power levels for desktop chips, along with all the pinouts necessary to support even very large APUs, as well as all the power planes necessary for sophisticated power management.

It's the best of AM3+ and FM2+ rolled into one socket.

AMD is moving towards HBM2, not GDDR5 or any other GDDR variant.

And Bristol Ridge *is* Carrizo, on a more-mature GF28A. Hopefully they will have tweaked the power/clockspeed management so that we get desktop-like behavior rather than mobile APU behavior.

Based on what I am seeing with Witcher 3, maybe AMD shouldn't even be putting these APUs in a desktop socket.

For example, Bristol Ridge might have been better off as a higher-power, DDR4-enabled BGA chip (perhaps with more pins than Carrizo), paired with a better labeling system than Carrizo's for outwardly identifying the cTDP the processor is using.

Intel did this with Haswell, which had two BGA packages (BGA 1168 and BGA 1364): BGA 1168 for the lower-power chips and BGA 1364 for the higher-power chips (including the 65W Haswell i7-4770R with GT3e).

P.S. Not sure if the higher pin count of BGA 1364 was due to its higher power requirement, or simply because the dies using it were larger.
 
Last edited:

cbn

Lifer
Mar 27, 2009
12,968
221
106
The problem with APUs is the same as trying to master everything: you end up average at everything. It will be a while before someone comes up with a perfect all-rounder APU that doesn't bottleneck somewhere.

They could be good as mobile chips though. (or perhaps a small PC gaming console)

And in these cases I think people are willing to accept the various compromises because it is not their primary gaming computer.
 
Last edited:

DrMrLordX

Lifer
Apr 27, 2000
21,637
10,855
136
HBM is DRAM with TSVs in it (among other things) and we don't really know when it will become cheaper than GDDR5 or GDDR5X.

Whenever production of HBM/HBM2 devices scales up to the same level as production of GDDR5 devices in previous generations, that's when you get the cost advantage.

Main thing about HBM is that it can't be used with Kaveri and Godavari APUs.

It doesn't matter what can or can't be used with Kaveri. GDDR5 can't be used with it, either. Kaveri is a dead-end product. It's over, finished, finito, done. AMD isn't investing anything more in it. They pulled the 870k, 880k, and 7890k. FM2+ is EoLed except maybe for some Bristol Ridge Athlon parts.

But AMD has already made an investment in GDDR5. (Otherwise it wouldn't be on the Kaveri memory controller).

No, they specifically have not made any investment in GDDR5 for Kaveri. GDDR5 support for Kaveri was stillborn. They made no attempt at implementation. There's no evidence that existing Kaveri chips could even support it given a board that had soldered-on GDDR5. Just because the capability is baked into the memory controller doesn't mean it's tested or functioning. Nobody's produced production-ready UEFI samples to enable that functionality.

With this mentioned, one thing to consider is how long AMD intends to support GCN 1.x with driver updates. That will partially determine how long we could see GDDR5 boards released.

No, it won't. You will never see GDDR5 support on FM2+. The last thing you'll see on FM2+ is Bristol Ridge as an Athlon (which, incidentally, will not have any support for GDDR5 in the IMC, experimental or otherwise).
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
Assuming normalization of costs (that is, once HBM has reached market maturity so that R&D costs and scarcity of parts are no longer major cost factors), HBM would inevitably be cheaper to implement than GDDR5.

That's never going to happen. The interposer alone makes sure of that.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Assuming normalization of costs (that is, once HBM has reached market maturity so that R&D costs and scarcity of parts are no longer major cost factors), HBM would inevitably be cheaper to implement than GDDR5.

That's never going to happen. The interposer alone makes sure of that.

To be honest, I am not even sure why HBM got involved in this discussion. Do we even know when it is coming to consumer (non-server) APUs?

And whenever it does come to consumer APUs, how do we know it won't suffer from the same problem GDDR5 had: high cost?

It is even conceivable that the same cycle returns in the future:

1.) Consumer Zen APUs launch without HBM (due to the high costs of HBM)

2.) Then, two years later, HBM prices drop enough that it is actually economical to use in small quantities.

3.) But by then APU supporters will wave it off, because they are looking at something even better than HBM on the horizon (perhaps DRAM stacked directly on the processor die).
 
Last edited:
Aug 11, 2008
10,451
642
126
You are right. I don't think even AMD has said when or if HBM will come to consumer APUs. It seems to me that it is the only chance for AMD to get a strong showing in the consumer segment, though. Zen without any iGPU will be a very niche market, and APUs are never going to be a compelling product without some kind of eDRAM or high-bandwidth memory, although fast DDR4 may eliminate some of the memory bandwidth bottlenecks.
 

DrMrLordX

Lifer
Apr 27, 2000
21,637
10,855
136
To be honest, I am not even sure why HBM even got involved in this discussion.

Simple.

You claim AMD should "fix" Kaveri by making it work with GDDR5.

I counter-claimed that AMD is working on HBM/HBM2, and that both technologies will come with a lower cost of implementation than GDDR5. Of course, there are other salient arguments in opposition to your idea, with the HBM2 angle being but one such argument.

Also, the interposer is an added expense, but factored into the total cost of implementation, HBM comes out cheaper than GDDR5.
 
Aug 11, 2008
10,451
642
126
Any documentation about the cost of HBM being less than GDDR5? Since there is no HBM yet in APUs and no GDDR5 either, it seems like pretty much speculation on both counts. The problem with HBM for APUs is that they are a budget solution, and you will need at least 8GB (or more by the time they come out), unless AMD can somehow integrate HBM for the APU and DDR4 for the rest of the system memory. The cost of HBM will come down eventually, but when, if ever, it will be cheap enough for a budget APU is still open to conjecture.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
You are right. I don't think even AMD has said when or if HBM will come to consumer APUs. It seems to me that it is the only chance for AMD to get a strong showing in the consumer segment, though. Zen without any iGPU will be a very niche market, and APUs are never going to be a compelling product without some kind of eDRAM or high-bandwidth memory, although fast DDR4 may eliminate some of the memory bandwidth bottlenecks.

1.) I think if AMD's principal goal with the Zen APU is the 15W laptop, dual-channel DDR4-3200 (which yields 51.2 GB/s of bandwidth) will probably be fine.

And the current GCN 1.2 does reduce bandwidth requirements (see below for examples), so we have to assume whatever iGPU Zen uses will be at least this good:

--The R9 380 (GCN 1.2) has 1792 sp @ 1000 MHz with 179.2 GB/s. That works out to 0.1 GB/s of bandwidth per sp at 1000 MHz.

--The R9 380X (GCN 1.2) has 2048 sp @ 970 MHz with 182 GB/s, or roughly 0.092 GB/s per sp normalized to 1000 MHz -- about 8% less bandwidth per shader-clock than the R9 380. (Both figures, and the DDR4 number from point 1, are sanity-checked in the sketch at the end of this post.)

2.) Another thing to consider is that the current Carrizo/Bristol Ridge die is already pretty large @ 245mm2.

And FinFET uses a 20nm BEOL, so the transistor budget would only be about 2x (assuming perfect scaling and AMD keeping the Zen APU die at the same 245mm2).
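For what it's worth, here is a quick sanity check of the arithmetic in point 1, assuming bandwidth demand scales linearly with shader count and clock (a simplification; real workloads vary):

```python
def gbps_per_sp(sp, clock_mhz, bw_gbps, norm_mhz=1000):
    """Memory bandwidth per stream processor, normalized to norm_mhz."""
    return bw_gbps / (sp * clock_mhz / norm_mhz)

r9_380  = gbps_per_sp(1792, 1000, 179.2)  # -> 0.100 GB/s per sp @ 1 GHz
r9_380x = gbps_per_sp(2048,  970, 182.0)  # -> ~0.092 GB/s per sp @ 1 GHz

# Dual-channel DDR4-3200: 3200 MT/s x 8 bytes per channel x 2 channels.
ddr4_bw = 3200 * 8 * 2 / 1000  # = 51.2 GB/s

# At the R9 380's ratio, 51.2 GB/s would feed roughly this many
# 1 GHz GCN 1.2 shaders before bandwidth becomes the limiting factor:
print(round(ddr4_bw / r9_380))  # -> 512 sp
```

By that rough measure, a ~512sp iGPU is about what dual-channel DDR4-3200 can keep fed, which fits the 15W laptop framing in point 1.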
 
Last edited:

cbn

Lifer
Mar 27, 2009
12,968
221
106
Another thing I wonder about on the Zen APU is the cache size and its impact not only on the CPU but on iGPU performance as well.

Apparently increasing the L2 cache can improve the bandwidth efficiency of GPUs too:

http://international.download.nvidi...tional/pdfs/GeForce-GTX-750-Ti-Whitepaper.pdf

Maxwell also boasts a dramatically larger L2 cache design; 2048KB in GM107 versus 256KB in GK107. With more cache located on chip, fewer requests to the graphics card DRAM are needed, thus reducing overall board power and improving performance.
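A back-of-the-envelope illustration of that effect: every request the L2 absorbs never reaches DRAM, so DRAM bandwidth demand falls with the hit rate. The hit rates below are made-up numbers for illustration, not GM107/GK107 measurements:

```python
def dram_traffic(request_rate_gbps, l2_hit_rate):
    """DRAM traffic left over after the L2 filters out cache hits."""
    return request_rate_gbps * (1.0 - l2_hit_rate)

workload = 100.0  # GB/s of memory requests the GPU generates (hypothetical)

small_l2 = dram_traffic(workload, 0.30)  # e.g. a 256KB L2  -> 70 GB/s to DRAM
large_l2 = dram_traffic(workload, 0.60)  # e.g. a 2048KB L2 -> 40 GB/s to DRAM

print(f"DRAM demand: {small_l2:.0f} GB/s vs {large_l2:.0f} GB/s")
```

The same logic is why a larger cache on a Zen APU could stretch limited DDR4 bandwidth further.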
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
The problem with HBM for APUs is that they are a budget solution, and you will need at least 8GB (or more by the time they come out), unless AMD can somehow integrate HBM for the APU and DDR4 for the rest of the system memory.

I believe HBM would work fine with DDR4 SO-DIMMs or DIMMs.

And it would probably be good enough that a person could get by with one 8GB stick of RAM rather than 2 x 4GB RAM.

However, the question so far is whether it would even be needed in laptops.

If not, that would leave HBM as sort of a desktop-only play for the Zen consumer APU, in the same way I believe GDDR5 was essentially intended as desktop-only for Kaveri.

And unfortunately for a desktop APU, if that HBM adds much cost, I see it having a hard time against a CPU + dGPU.
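As a quick check of the one-stick idea above (DDR4-3200 assumed, 8 bytes per channel per transfer): a single 8GB DIMM halves CPU-side bandwidth versus 2 x 4GB, which matters a lot when the iGPU shares that bandwidth, but much less if HBM is feeding the iGPU.

```python
def ddr4_bw_gbps(mt_per_s, channels):
    # MT/s x 8 bytes per channel x channel count, in GB/s
    return mt_per_s * 8 * channels / 1000

single = ddr4_bw_gbps(3200, 1)  # 25.6 GB/s -- one 8GB stick
dual   = ddr4_bw_gbps(3200, 2)  # 51.2 GB/s -- 2 x 4GB sticks

print(f"single channel: {single} GB/s, dual channel: {dual} GB/s")
```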
 
Last edited:

DrMrLordX

Lifer
Apr 27, 2000
21,637
10,855
136
Any documentation about the cost of HBM being less than GDDR5?

http://www.anandtech.com/show/9390/the-amd-radeon-r9-fury-x-review/6

That's the closest I can dig up on a moment's notice. The bottom line is that HBM's costs are tied to the novelty of the tech involved (R&D). Once HBM2 ships in large quantity for numerous devices, costs will normalize. The interposers are coming off fabs using older processes.

Since there is no HBM yet in APUs and no GDDR5 either, it seems like pretty much speculation on both counts.

We won't see that until the "big" enterprise APU shows up from AMD sometime in 2017.

The problem with HBM for APUs is that they are a budget solution, and you will need at least 8GB (or more by the time they come out),

Two possibilities: one is that AMD will sell chips with 8-16 GB HBM2 as an alternative to system RAM, RAM slots, and all the traces + power circuitry necessary to handle DRAM. The cost savings from removing all those motherboard components should be sufficient for the HBM2 solution to be cheaper for the buyer.

The other possibility is that there will be a smaller amount of HBM2 - 1-2 GB or so - working either as a frame buffer or as an L4 cache.

unless AMD can somehow integrate HBM for the APU and DDR4 for the rest of the system memory.

Shouldn't be too hard. Current APUs already partition system memory into a frame buffer and system RAM.
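As a rough illustration of that bookkeeping (all sizes hypothetical): today's UMA carve-out and a two-pool HBM + DDR4 layout are the same kind of split, just across one physical pool versus two:

```python
# Today: one physical DRAM pool, split by a BIOS carve-out (UMA).
TOTAL_DRAM_GB = 8
FRAME_BUFFER_GB = 2  # hypothetical carve-out handed to the iGPU
print(f"UMA: {FRAME_BUFFER_GB} GB iGPU / {TOTAL_DRAM_GB - FRAME_BUFFER_GB} GB system")

# Hypothetical HBM + DDR4 APU: the same split across two physical pools,
# with the fast pool dedicated to the iGPU (or used as an L4-style cache).
pools = {
    "HBM2 (iGPU frame buffer)": 2,  # GB, on-package
    "DDR4 (system RAM)": 8,         # GB, on DIMMs/SO-DIMMs
}
for name, size_gb in pools.items():
    print(f"{name}: {size_gb} GB")
```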
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Regarding the future of Zen APUs:

1.) I do hope they are able to get enough laptop wins that desktop is not necessary for the silicon.

However, if some Zen APUs do make it to desktop, I would hope AMD is able to price them low enough that we don't see the same unfavorable situation we saw with Kaveri/Godavari vs. CPU + dGPU.

2.) With the Zen APU either out of desktop or priced low on desktop, hopefully AMD can lower prices on the lowest-end dGPUs (currently occupied by Oland).

------This would once again make them competitive with Nvidia in the entry-level discrete card area.

------It would also allow the possibility of dual graphics with a desktop Zen APU actually making sense (instead of running a bigger card solo).

------Having Radeon graphics (either in the form of lower-priced desktop APUs and/or low-priced dGPUs) should make developing for Linux more attractive.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Two possibilities: one is that AMD will sell chips with 8-16 GB HBM2 as an alternative to system RAM, RAM slots, and all the traces + power circuitry necessary to handle DRAM. The cost savings from removing all those motherboard components should be sufficient for the HBM2 solution to be cheaper for the buyer.

The other possibility is that there will be a smaller amount of HBM2 - 1-2 GB or so - working either as a frame buffer or as an L4 cache.

For option #1, I wonder if it is possible to make a laptop chip out of the server APU?

Maybe AMD could take the 8 best cores out of 16 (possibly disabling SMT in some cases). So 8C/8T (or 8C/16T) vs. Skylake/Kaby Lake 4C/8T?

Then have different levels of cTDP?

Certainly, with 8GB or 16GB of HBM (replacing system RAM) and the large iGPU (instead of a dGPU), it would be compact. (Though the cooler will likely add a lot of bulk back.)
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Much smaller as in... $25?! Let's see how that would look in $10 increments, not even counting all available SKUs:

A10-7850K +$25
A10-7700K +$15
A8-7650K +$5
A8-7600 -$5

Is this what you have in mind? A $5 video card?

Assuming the CPU has the same specs (e.g., A10-7850K vs. Athlon X4 860K), I think $25 for a 512sp iGPU is reasonable enough, though $20 would be much better. If the throttling on the CPU were addressed or fixed in some way, then AMD could bump that iGPU price premium higher (since the iGPU would then be purely additive to the CPU, the same way a dGPU is).

The same goes for a 384sp iGPU: if the CPU specs are the same, $15 would be appropriate. If the CPU throttling were fixed, the 384sp iGPU becomes more valuable and justifies a premium higher than $15.

If the CPU is a lower spec than the Athlon X4 860K, then deduct value accordingly. (E.g., the A8-7650K has a lower spec than the Athlon X4 860K, so maybe deduct $5. This would make the A8-7650K $10 more expensive than the Athlon X4 860K.)

For APUs at 65W (like the A8-7600), the baseline comparison should be different from the 95W APUs, due to the greater difficulty of binning to achieve any given clockspeed.

Going by the E3 v5 Xeons, I think we can get an idea of how Intel prices their Skylake GT2 iGPU (in comparison to how AMD does things).

If we compare the E3-1275 v5 to the E3-1270 v5, both have the same CPU specs and TDP, but the one with the GT2 iGPU is $11 more than the one without it (comparing tray price to tray price).

Looking at the AnandTech Skylake review with the GT2 iGPU vs. the R7 240 (320sp @ 720/780 MHz), I figure GT2 is somewhere around the performance of 256 GCN stream processors, give or take. Also, AFAIK, Intel CPUs don't throttle when the GT2 iGPU is in use.

So based on that data, I believe the iGPU pricing scheme I laid out in my previous post (quoted above) is more than fair to AMD.
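As a rough dollars-per-shader check of that, using the post's own numbers (the proposed AMD premiums, Intel's $11 GT2 tray-price delta, and the ~256sp GT2 performance estimate; none of this is official pricing):

```python
premiums = [
    ("AMD 512sp iGPU premium", 25, 512),
    ("AMD 384sp iGPU premium", 15, 384),
    ("Intel GT2 (~256sp equivalent)", 11, 256),
]

for name, dollars, sp in premiums:
    print(f"{name}: ${dollars / sp * 100:.2f} per 100 sp")
# -> $4.88, $3.91, and $4.30 per 100 sp -- the proposed AMD premiums
#    land in the same range as Intel's GT2 delta.
```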
 
Last edited: