Hypothetically speaking: if AMD wanted to use one Zen2 CCX for running games on, and one Jaguar-derived cluster (modified to add Infinity Fabric) for running OS and background tasks, would it be doable? I was thinking of consoles. The current gen already reserves certain cores for the OS; this would be an extension of that.

Hardware wise its quite easy, but when we get too software it gets complicated you would need too teach software, games and OS too use this kind of CPU so its easier too do as things are done today.
Never thought that, of all people, I would turn into a grammar nazi (since I make lots of spelling mistakes as well), but this is too disturbing.
‘To’ or ‘too’?
These are both commonly confused words but differ greatly in use and meaning. To can be used as a preposition:
We took the train to London.
It can also be used with a verb stem as part of a verb phrase:
I would like to see you soon.
This is not to be confused with too which can be used to describe something being done excessively:
You’re driving too fast.
It can also be used in place of ‘also’ or ‘as well’:
I would like some dinner too.
Of course, neither to nor too should be confused with two, the number between one and three.
https://en.oxforddictionaries.com/usage/to-or-too

It's not just you. I was disturbed by it too. In fact, I was glad that it was commented on already.
He might not be a native English speaker. How many here can write in another language?
With asymmetrical big.LITTLE: "The cores are all the same processor, but the big core runs faster and uses more power; the little core runs slower but has the advantage of using less battery power." Not only can the same piece of code run on either the big or little core, but it can also move between cores without any interruption or performance hit.
"When you run an application, the kernel schedules the threads of the application that you interact with on the big, fast cores," said Pulapaka. "But the threads in the background, like services, that aren't related to user activity are kicked out and migrated to run on the small core. When you're launching Outlook, every thread in Outlook runs on the big core. But your services, your indexer, your background tasks, all these ancillary threads like indexing — they run on the little core. That way they don't consume a lot of power, but the things you care about are snappy and instantaneous. This is how you get good battery life and good user experience."
Excavator also sucks in every way compared to Zen; it will never be reused. Get over the CMT-promoting nonsense you have been spreading here baselessly for 4+ years (that new Bulldozer derivatives will arrive in all possible forms). Where are your promised magic FD-SOI Bulldozer successors with CMT and SMT?
It wouldn't be Zen and Jaguar; it would be Zen and Excavator. Jaguar will never be seen in the consumer market ever again. Zen superseded it anyway; now it's just for Excavator to be superseded in ULP.
Umm. Stoney Ridge Refresh2 consumer products are selling in larger quantities than Raven Ridge2 with Vega 3, so "not being used" is not correct. Bristol Ridge custom is being used in several replacements for Kyoto and Berlin, so again "never reused" is not correct. Excavator has larger resources for branches, retire (in-flight), FPU, etc., so Excavator is feasibly on par with Zen as is. 14nm FinFET Bulldozer, which I have specs for in regard to the new FPU arch and new register-file arch, literally and physically makes Zen look like Jaguar/Bobcat in comparison. Zen2 HPC, however, leaves the Jaguar supersede to get into more servers.

Zen1 => poor offering in comparison to 20nm Excavator.
Zen2 => on par with 14FF Bulldozer. (It uses the new IFU, FPU, and RegFile from 14FF Bulldozer anyway, which Zen skipped over.)
// 14FF BD = ~20 mm² vs 14FF Zen = 5.5 mm². Yes, Bulldozer will somehow be slower.
The Athlon 64 oftentimes had far fewer transistors and a smaller die than the Pentium 4s it was competing against, and still beat them. Die size does not equal performance.
Jaguar has the benefit of being small. Like really small. Like you could fit a dozen 7nm Jaguars inside a single Zen core.

Exactly this. If they can use a tiny cluster of Jaguar-derived cores running at a low frequency to handle background OS stuff instead of having to use a Zen core, it frees up more die area for shaders.
Maybe they could translate some of the power efficiency gains over to a Jaguar successor, but I doubt they would do that.
Or you can just run background tasks on a tiny fraction of one of your Zen cores, and waste no silicon on Jaguars.
Yeah, the real benefit would need to come from power-efficiency gains. Adding the cores for area's sake is, well, counterproductive.
Your computer probably runs several hundred threads in the background while you think it is doing nothing, with the CPU idling at 1 or 2%.
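That claim is easy to check on Windows; a minimal Toolhelp sketch that counts every live thread system-wide (exact counts vary a lot from machine to machine, but an "idle" desktop routinely reports well over a thousand):

```c
#include <windows.h>
#include <tlhelp32.h>
#include <stdio.h>

int main(void)
{
    /* Snapshot every thread in the system and count them. */
    HANDLE snap = CreateToolhelp32Snapshot(TH32CS_SNAPTHREAD, 0);
    if (snap == INVALID_HANDLE_VALUE)
        return 1;

    THREADENTRY32 te = { .dwSize = sizeof(te) };
    int count = 0;
    if (Thread32First(snap, &te)) {
        do {
            count++;
        } while (Thread32Next(snap, &te));
    }
    CloseHandle(snap);

    printf("threads currently alive system-wide: %d\n", count);
    return 0;
}
```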
Make your own thread to keep all your nonsense in. At this point it's outright trolling for you to keep posting it, especially with how you constantly strain to connect it to the thread you post it in. I can just put you on ignore, but that doesn't prevent you from muddling up other threads by derailing them.
A Zen2 mismatch will only come into play two ways:

Symmetrical core => one CCX aimed at high performance and one CCX aimed at low power, much like what Apple and Qualcomm are doing.
Zen2 hi-perf => 7.5T3F/9T4F critical lib and 6T2F non-critical RVT lib.
Zen2 low-pow => 5T1F/6T2F critical lib and 6T2F non-critical LVT lib.

Asymmetrical core => one die on FinFET (with a core that has high IPC), another die on FD-SOI (with a core that has low static/dynamic EPI). CEA-Leti, GlobalFoundries, and ARM.
I argued for a similar point (but with ARM cores, where they'd be able to get people porting mobile games and apps more easily) in a Switch-like hybrid. I don't see it happening, though (in either situation). It'd make backwards compatibility easy: instead of having it juggle VMs and such, you'd just have the Jaguar cores always running the OS and apps. You could put it in a compatibility mode that lets you play Xbox One games without any effort spent changing them (but if devs want to enable more, like higher/smoother framerates, they can, and that would be recompiled to use the Zen cores).
His argument is more that if you put in some Jaguar cores that handle the OS and apps at all times (so less VM juggling, and it keeps that stuff responsive), you could use one or two (or more) fewer Zen cores and put those transistor savings into another 1-2 CUs on the GPU. For all the talk of "rebalancing", the fact is that gaming still needs a far beefier GPU, especially if they want to push for 4K rendering. Or they could add some dedicated hardware to help (like Microsoft adding the DX12 draw-call scheduler/handler in the One X), where it'd be better than either the CPU or GPU at a task that is necessary anyway. Maybe they'll put in some tensor cores or something for AI, for instance.
It might make the most sense to remove the most critical bottlenecks first. For an APU with high graphics performance, that is memory bandwidth. Then you talk about shaders, then you try for increased CPU resources. In all cases, power consumption is to be optimized.
What's the state of big.LITTLE on Windows? That is the bigger question to ask in this context. I think this might be a possibility if Windows 10 on ARM really takes off.

Apparently it is good enough that Intel is building at least one product using it (Lakefield, Ice Lake + Tremont). Not sure if it is something AMD needs at this point.
The consoles already have cores specifically reserved for the OS. At launch, PS4 games were only allowed to run on six of the cores (though support was later added to request some time on the seventh), leaving two cores entirely devoted to OS tasks. The idea is to avoid any inconsistency in performance: no context-switching over to an OS job for a fraction of a second and back to the game, adding a nasty spike in latency. It's partly how consoles get smooth performance out of such terrible CPU cores.
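As a rough illustration of that kind of core reservation (console SDKs expose their own APIs for this; the sketch below just uses Linux's pthread affinity calls, with the six-game-cores/two-OS-cores split assumed from the PS4 example above):

```c
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

/* Hypothetical sketch: confine the calling (game) thread to cores 0-5,
 * leaving cores 6-7 free for the OS, mirroring the PS4-style split
 * described above. */
static int pin_to_game_cores(void)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    for (int core = 0; core < 6; core++)   /* cores 0-5: game */
        CPU_SET(core, &set);
    return pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

int main(void)
{
    int err = pin_to_game_cores();
    if (err != 0) {
        fprintf(stderr, "pthread_setaffinity_np failed: %d\n", err);
        return 1;
    }
    printf("game thread confined to cores 0-5; cores 6-7 left to the OS\n");
    return 0;
}
```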
Well, luckily we now have such powerful CPU cores on PC that we don't have to worry about this.
I don't really expect to see it in PCs (except maybe in mobile-targeted SoCs, like Intel's supposed Lakefield chip), but I was wondering whether we might see it in the next-generation consoles.
The only reason to use less powerful secondary cores is if you are going to do some kind of big.LITTLE arrangement to save battery life.