@PeterScott I have to agree with you on this, as making a "Super" APU for PC would be a costly item with a limited niche market. I'm also thinking that most folks who need more than 4c/8t will want a video card instead of a limited iGPU anyway.
> Maybe never?

Bingo. That was my point actually. The OP left out the most obvious response of whether it will ever happen at all. We have been hearing speculation for years from the AMD camp about this killer APU. I thought the pointless speculation had finally died, but I guess not.
> Bingo. That was my point actually. The OP left out the most obvious response of whether it will ever happen at all. We have been hearing speculation for years from the AMD camp about this killer APU. I thought the pointless speculation had finally died, but I guess not.

I doubt AMD could even spare the money or the manpower to develop such a product with a very limited niche market anyway. For that matter, why would they want to?
In fact it has been quite the opposite. AMD seems to have pushed the majority of their resources into Ryzen chips without an iGPU at all. They are still stuck on 4 cores for APUs.
> But that's the whole point of products like KadyG. You use it to save the cost of a full discrete GPU setup in a laptop. There is a considerable market for laptops in the less-than-$1000 price range with mid-range GPUs that can push decent frame rates at 1080p and maybe a bit above. Kady G can already do that to an extent. A possible 3400G/H, which is speculated to be the 12 nm refresh for Raven Ridge, would be the natural choice for the CPU die; an x8 PCIe gen 4 interface would work for the dGPU link, and there would be space for an HBM stack or two as well.
> The advantage here is that the APU can be used in its various target markets without the MCM/interposer, or with the interposer/MCM, and the same package can be used with a higher-end CPU that doesn't have an iGPU for higher-end products or for the SFF/AIO/STB/console market. It gives maximum market flexibility with maximum investment reuse. And, since most of the configuration changes are in the processor package, the only changes that the notebook manufacturers will need to make are for cooling or I/O layout needs.
> This is far from niche and has broad applicability in many different segments. My personal opinion on why KadyG isn't making better market penetration is twofold. First is cost. It appears that Intel wants a lot of money for this product. That is the result of their segmentation strategy pushing KadyG's price north of their already expensive Iris Pro products. The other issue is the decision to make the "Vega" GPU for KadyG a Polaris product with an HBM interface, with the thermal issues included. Add in the high cost of HBM and it's going to be hard for vendors to make margin initially.

It's Kaby Lake-G. Not "Kady" anything.
> It's Kaby Lake-G. Not "Kady" anything.

I will have to price the Kaby-G, but isn't a mid-range dGPU and CPU cheaper to buy than this?
Really there is no cost advantage for Kaby-G, and it isn't an APU.
An actual APU that is small in size, with normal memory requirements, is a good and cost-effective product.
Kaby-G is just a discrete CPU and a discrete GPU coexisting on a small PCB, saving nothing in buying them separately.
> Really there is no cost advantage for Kaby-G, and it isn't an APU.
> An actual APU that is small in size, with normal memory requirements, is a good and cost-effective product.
> Kaby-G is just a discrete CPU and a discrete GPU coexisting on a small PCB, saving nothing in buying them separately.

Has there been a decision whether MCM packages can be considered an "APU"? For starters I think "APU" is more a brand thing... but I haven't made up my mind about the MCM vs single die thing because it's not really come up until now.
> Compared to a CPU + dGPU on one package (like Kaby Lake-G)... an APU (either monolithic die or MCM via Infinity Fabric):
> 1.) Can shift the entire TDP to either CPU or iGPU.
> Think about how nice it would be to do Blender purely on the iGPU (with the full TDP available) while having high-bandwidth access to system memory (rather than having to go over the PCIe bus like a dGPU would).
> Likewise, imagine being able to give the full TDP budget to the CPU for OpenShot (or the full TDP budget to the iGPU for the HitFilm video editor*).
> *Unlike OpenShot, which uses only the CPU for rendering, HitFilm does the opposite and uses only the GPU for rendering.
> 2.) Allows the system DDR4 and HBM to be used by either CPU or iGPU in a high-bandwidth fashion**. (So in a system with 8GB DDR4 and 4GB HBM, the CPU and iGPU each have access to 12GB of memory.)
> **Kaby Lake-G's dGPU only has access to the DDR4 system RAM over the PCIe bus (which is much lower bandwidth than an APU using Infinity Fabric to connect GPU to CPU).

Both of these assume IF is much faster than PCIe, but I haven't seen much info comparing them in reality. I know IF is scalable, but so is PCIe up to a certain point.
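To make point 1.) in the quoted list concrete, here's a toy sketch of that kind of budget shifting. The 65 W package budget and the demand figures are invented for illustration; real power-management firmware is of course far more involved:

```python
# Toy model of a shared package power budget between CPU and iGPU.
# PACKAGE_TDP_W and the demand figures are made-up illustrative numbers,
# not the specs of any real part.

PACKAGE_TDP_W = 65

def shared_budget(cpu_demand_w, gpu_demand_w, tdp=PACKAGE_TDP_W):
    """Grant each unit its demanded power, scaling both down
    proportionally if combined demand exceeds the shared budget."""
    total = cpu_demand_w + gpu_demand_w
    if total <= tdp:
        return cpu_demand_w, gpu_demand_w
    scale = tdp / total
    return cpu_demand_w * scale, gpu_demand_w * scale

# GPU-only render (think HitFilm): the iGPU gets nearly the whole budget.
print(shared_budget(cpu_demand_w=5, gpu_demand_w=80))   # ~(3.8, 61.2)
# CPU-only render (think OpenShot): the CPU gets nearly the whole budget.
print(shared_budget(cpu_demand_w=80, gpu_demand_w=5))   # ~(61.2, 3.8)
```

A CPU + dGPU pair with separate fixed budgets can't reallocate like this, which is the whole argument.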
> Has there been a decision whether MCM packages can be considered an "APU"? For starters I think "APU" is more a brand thing... but I haven't made up my mind about the MCM vs single die thing because it's not really come up until now.

If you consider Kaby-G an APU, then it is basically a useless term that doesn't differentiate anything significant.
> And I'd argue there are direct cost savings with an MCM solution from simplicity of motherboard design, if not simply from a logistical perspective. Though this obviously has to be balanced with increased cost from complexity of the MCM.

It's a wash IMO. You don't have to connect the CPU to the GPU using the main motherboard because you are using a tiny motherboard to connect them to each other and to the main motherboard. That really isn't likely to yield any kind of cost savings outside of pennies. A few more traces on the motherboard are of insignificant cost.
Intel is struggling to penetrate the market with Kaby Lake-G.
> So you think Kaby Lake-G will fail as a product?
> I've read great reviews of Ultrabooks based on it, e.g. this PCWorld review.
> "Today though, there are few takers of Kaby Lake G. In fact, only two vendors have shipped it: HP with the Spectre x360 15, and Dell with its XPS 15 2-in-1. Some we've spoken to have painted that as a failure of Kaby Lake G to catch on, while others have speculated politics to be the cause. Whatever the truth, it's a shame, because applied the right way in the right laptop, Kaby Lake G is a road worth taking."

Interesting, it does allow power sharing (like an APU):
> With Kaby Lake G, the power and thermal needs of the CPU, GPU, and the RAM for the GPU are all managed as one. If the module's under a heavy graphics load, the CPU can back off. If the CPU is under a heavy load, the GPU can back off. That initially led many to assume that Kaby Lake G was paired with a lower-power 15-watt "U" series chip. Kaby Lake G is instead based on an "H" part, which is rated at 45 watts and can run at 56 watts until it heats up.

That is a very good thing to see... and an advantage over an Intel CPU and Nvidia dGPU.
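That 45 W rated / 56 W short-term behaviour is essentially a sustained limit plus a boost budget. A minimal sketch of the mechanism, with the headroom value invented since the article doesn't give one:

```python
# Sustained-plus-boost power limiting in the spirit of the quoted
# 45 W rated / 56 W "until it heats up" behaviour. BOOST_BUDGET_J
# (the thermal headroom) is a hypothetical figure for illustration.

SUSTAINED_W = 45
BOOST_W = 56
BOOST_BUDGET_J = 330  # invented headroom, in joules

def power_limits(duration_s, step_s=1.0):
    """Allowed package power per step: boost while the excess over the
    sustained limit can still be paid from the headroom, then hold."""
    budget = BOOST_BUDGET_J
    limits = []
    for _ in range(int(duration_s / step_s)):
        cost = (BOOST_W - SUSTAINED_W) * step_s
        if budget >= cost:
            budget -= cost
            limits.append(BOOST_W)
        else:
            limits.append(SUSTAINED_W)
    return limits

limits = power_limits(60)
print(limits.count(BOOST_W), "s at 56 W, then",
      limits.count(SUSTAINED_W), "s at 45 W")
# -> 30 s at 56 W, then 30 s at 45 W
```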
> Both of these assume IF is much faster than PCIe, but I haven't seen much info comparing them in reality. I know IF is scalable, but so is PCIe up to a certain point.

I think of IF as an AMD in-house and more versatile version of PCIe, which can work over the exact same pins as PCIe. For example, I understand my Ryzen + Vega system is technically capable of using IF to connect the two over the PCIe x16 hardware. IIRC the IF connection would be slightly faster than PCIe v3, but it wouldn't be heaps faster simply because it's IF vs PCIe like you're implying. So is the internal IF in an APU wider or clocked higher? And will this always be the case?
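For the PCIe half of that comparison we can at least put raw numbers down (one direction, from the published signalling rates; AMD hasn't published comparable figures for IF running over those same pins, so the IF side stays speculation):

```python
# Raw one-direction PCIe bandwidth from published signalling rates.
# Real-world throughput is lower, and IF-over-PCIe-pins rates aren't
# public, so this only establishes the PCIe baseline being discussed.

def pcie_gb_s(gt_per_s, lanes):
    """GB/s, one direction, with 128b/130b encoding (PCIe 3.0 and up)."""
    return gt_per_s * (128 / 130) * lanes / 8

print(f"PCIe 3.0 x8 : {pcie_gb_s(8, 8):5.1f} GB/s")    # ~7.9
print(f"PCIe 3.0 x16: {pcie_gb_s(8, 16):5.1f} GB/s")   # ~15.8
print(f"PCIe 4.0 x8 : {pcie_gb_s(16, 8):5.1f} GB/s")   # ~15.8

# For scale: dual-channel DDR4-3200 is 2 * 3200 MT/s * 8 B = 51.2 GB/s,
# and a single HBM2 stack peaks at 256 GB/s.
```

So even PCIe 3.0 x16 sits well below what dual-channel DDR4 can deliver, which is the gap an on-package fabric would need to close.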
> IIRC off-die IF uses re-purposed PCIe lanes, so it is just PCIe running a different protocol.

Yes, that's what I understand also. And this is how Epyc works in a multi-CPU setting, and even how the intercommunication between the chiplets in each "CPU" works. Same for MCM IF in Threadripper, which also uses "re-purposed PCIe lanes", but I assume it's similar (if not identical) to the IF in Raven Ridge (where we can't use the term "re-purposed PCIe lanes").
> Kaby-G is just a discrete CPU and a discrete GPU coexisting on a small PCB, saving nothing in buying them separately.

Kaby Lake-G isn't a "discrete CPU and discrete GPU on a small PCB". No, it's on a single package, which is the first step in integration.
> Interesting, it does allow power sharing (like an APU):

Power sharing has so far been used to improve performance/watt by lowering power, not by improving performance. Kaby Lake-G is the first attempt at doing the latter.
Where is the option for "It will be a long time, if ever, before we see a consumer APU with HBM"?
Maybe never?
> Since Kaby G is out, I wouldn't say that.
> But if you want to put it into categories, let's see:
> 1) AMD APU with HBM
> 2) APU with HBM that's common, like Raven Ridge
> #1 is probably likely in the near future, but similar to Kaby G it should remain high end. #2 is far less likely. Separate memory stacks will always cost more. The problem in the value space is always that extra packaging adds cost, even if the memory stack is very cheap.

I agree #1 is likely. (It would be a high-end part.)
> If it is not HBM2, it is probably not worth it.

In this speculation thread, I have used HBM as a generic term for High Bandwidth Memory, with no regard to the specific version, or generation, of the particular HBM specification.
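For what it's worth, the generation does change the peak numbers substantially; per-stack bandwidth follows directly from the published pin rates:

```python
# Peak per-stack bandwidth for HBM generations. Every HBM stack has a
# 1024-bit interface, so bandwidth = pin rate * 1024 bits / 8.

def stack_gb_s(gb_per_s_per_pin, width_bits=1024):
    return gb_per_s_per_pin * width_bits / 8

print(f"HBM1 @ 1.0 Gb/s/pin: {stack_gb_s(1.0):.0f} GB/s per stack")  # 128
print(f"HBM2 @ 2.0 Gb/s/pin: {stack_gb_s(2.0):.0f} GB/s per stack")  # 256
```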
> If you consider Kaby-G an APU, then it is basically a useless term that doesn't differentiate anything significant.
> The whole point of the APU was fusing those into a single die.

Well, that's merely what Intel did.
> My personal opinion on why [Kaby Lake-G] isn't making better market penetration is twofold. First is cost. It appears that Intel wants a lot of money for this product.

And contributing to that, it is likely that AMD charges a pretty sum for the GPU as well, to ensure the chip doesn't directly compete with its own mainstream APU products. On top of that, the EMIB packaging technology is cutting-edge. And HBM2 is relatively expensive.
> Kaby Lake-G isn't a "discrete CPU and discrete GPU on a small PCB". No, it's on a single package, which is the first step in integration.

You are just mincing words.
> You are just mincing words.

Less arguing and more speculating, please.
> My thinking is that Intel had a brief window of opportunity for this product.

There's a PCWatch article about this.
> Less arguing and more speculating, please.

We still need accuracy even when speculating.