The Intel Atom Thread


Thala

Golden Member
Nov 12, 2014
1,355
653
136
Seeing how Intel is doing Goldmont+ makes me wonder why they didn't continue with the mobile project... it has ARM A75 performance...

Goldmont+ is more like A73 performance, despite a much wider design (2-wide vs. 4-wide). Intel did add quite a few gates for these gains, making the whole thing much less power efficient. The thing is, Intel did this because they were out of the mobile project.
Aside from this, Intel UHD 600/605 is not anywhere near competitive with even last year's SoC offerings from the likes of Qualcomm, Samsung and Apple. Putting in a larger Intel GPU would further degrade power efficiency.
 

mikk

Diamond Member
May 15, 2012
4,133
2,134
136
Intel did add quite a few gates for these gains, making the whole thing much less power efficient.


Much less power efficient? Where did you get this from? TDP didn't increase, and first tests claim that power efficiency is much improved.

It's amazing that Intel has been able to significantly increase the efficiency of the processor: despite 45% more performance under full load, the system is still about 20% more economical than its Apollo Lake-based predecessor.
https://translate.google.com/translate?sl=de&tl=en&js=y&prev=_t&hl=sv&ie=UTF-8&u=https://www.elefacts.de/test-57-asrock_j4105_itx_im_test__mini_itx_mit_4_kern_gemini_lake_prozessor&edit-text=


Aside from this, Intel UHD 600/605 is not anywhere near competitive with even last year's SoC offerings from the likes of Qualcomm, Samsung and Apple. Putting in a larger Intel GPU would further degrade power efficiency.

A larger GPU doesn't automatically result in a degradation of power efficiency; the opposite is expected. With a bigger GPU, it can be clocked way lower, which is usually the more efficient way. But of course, with its Gen9 graphics Intel isn't able to compete with the others, because this architecture is outdated, almost three years old.
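
To put rough numbers on the "wider but slower" idea, here is a back-of-envelope sketch using the usual dynamic-power approximation P ≈ N · C · V² · f. The EU counts, clocks and voltages are made-up illustrative values, not Gemini Lake figures.

```python
# Back-of-envelope sketch of the "wide and slow vs. narrow and fast" argument.
# Dynamic power scales roughly as P ~ N_units * C * V^2 * f; throughput ~ N_units * f.
# All numbers are illustrative assumptions, not measured Gemini Lake data.

def dynamic_power(n_units, voltage, freq_mhz, c=1.0):
    """Relative dynamic power for n_units execution units at (voltage, freq)."""
    return n_units * c * voltage ** 2 * freq_mhz

def throughput(n_units, freq_mhz):
    """Relative throughput, assuming work scales perfectly across units."""
    return n_units * freq_mhz

# Narrow GPU: 12 EUs at 750 MHz, assumed to need ~1.00 V.
narrow_power = dynamic_power(12, 1.00, 750)
narrow_perf = throughput(12, 750)

# Wide GPU: 24 EUs at 375 MHz, assumed to get by with ~0.80 V.
wide_power = dynamic_power(24, 0.80, 375)
wide_perf = throughput(24, 375)

print("same throughput:", wide_perf == narrow_perf)                   # True
print("wide/narrow power ratio: %.2f" % (wide_power / narrow_power))  # ~0.64
```

This simple model ignores leakage and the die area of the extra EUs, which is exactly what the replies further down push back on.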
 

Thala

Golden Member
Nov 12, 2014
1,355
653
136
Much less power efficient? Where did you get this from? TDP didn't increase, and first tests claim that power efficiency is much improved.
https://translate.google.com/translate?sl=de&tl=en&js=y&prev=_t&hl=sv&ie=UTF-8&u=https://www.elefacts.de/test-57-asrock_j4105_itx_im_test__mini_itx_mit_4_kern_gemini_lake_prozessor&edit-text=

I am not talking about TDP, but about efficiency. The linked article did not provide any efficiency numbers. One indication is that both standby and idle power went up. The max power measurement is useless, as it essentially just shows that the thermal regulation is working.

Besides, with 9 W board power at idle and 4 W at standby, you cannot even think about putting this into a smartphone.

A larger GPU doesn't automatically result in a degradation of power efficiency; the opposite is expected. With a bigger GPU, it can be clocked way lower, which is usually the more efficient way. But of course, with its Gen9 graphics Intel isn't able to compete with the others, because this architecture is outdated, almost three years old.

Except, of course, they would need to do both: increase EU count and frequency.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
So the Goldmont chip has a ~71 mm² die size. Interesting that they reduced the die size from Cherry Trail by 15-17 mm². Goldmont Plus is ~100 mm² based on pictures, so what FanlessTech said earlier was right (9.5 × 9.9 mm).

A larger GPU doesn't automatically result in a degradation of power efficiency; the opposite is expected.

That's not necessarily true, at least not in low-clocked GPUs like the one used in Apollo/Gemini Lake. The statement is a generalization based on knowledge from over a decade ago, and hence misleading.

Modern chips have a very narrow range of operation. The transistor threshold voltage is 0.65-0.7 V, yet high-end GPUs only run at 1.1 V peak, and high-end CPUs run at 1.3 V peak. As you get closer and closer to the threshold voltage, the reduction in clock speed becomes non-linear. You may struggle to get 200 MHz at 10-20 mV over the minimum voltage, yet going merely 100 mV higher in voltage nets you GHz-range operation.

5x frequency for 15% increase in voltage. That's a very acceptable trade-off.
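
As a rough sanity check of that trade-off, here is the same arithmetic under a simple P ∝ V²·f model, using the ballpark figures from the post (≈0.7 V / 200 MHz near threshold vs. ≈0.8 V / 1 GHz). These are illustrative values, not measured silicon data.

```python
# Sanity check of the near-threshold trade-off under P_dyn ~ V^2 * f.
# Voltages and frequencies are the rough figures from the post, not real silicon data.

near_threshold = {"v": 0.70, "f_mhz": 200}   # barely above the threshold voltage
bumped         = {"v": 0.80, "f_mhz": 1000}  # ~15% more voltage, ~5x the clock

def dyn_power(op):
    return op["v"] ** 2 * op["f_mhz"]

def energy_per_unit_work(op):
    # If work done scales with frequency, energy per unit of work ~ V^2.
    return dyn_power(op) / op["f_mhz"]

power_ratio = dyn_power(bumped) / dyn_power(near_threshold)                          # ~6.5x
perf_ratio = bumped["f_mhz"] / near_threshold["f_mhz"]                               # 5x
energy_ratio = energy_per_unit_work(bumped) / energy_per_unit_work(near_threshold)   # ~1.3x

print("power: %.1fx, perf: %.1fx, energy per unit of work: %.2fx"
      % (power_ratio, perf_ratio, energy_ratio))
```

So under this model the 5x clock bump costs only ~30% more energy per unit of work, and running a smaller GPU faster also avoids carrying the leakage and area of extra EUs, which is why "just go wider and slower" stops paying off for an already low-clocked GPU.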
 

Brunnis

Senior member
Nov 15, 2004
506
71
91
I am not talking about TDP, but about efficiency. The linked article did not provide any efficiency numbers.
So, the question still stands: where did you get your info from, i.e. that efficiency is down? I'm not saying it isn't, but if you make the claim you must have some data to support it.

One indication is that both standby and idle power went up. The max power measurement is useless, as it essentially just shows that the thermal regulation is working.
You know full well that those standby/idle numbers are pretty much as useless as the maximum number for this purpose.

Besides, with 9 W board power at idle and 4 W at standby, you cannot even think about putting this into a smartphone.
Again, we can't really draw any good conclusions from testing of a complete system, so it's hardly worth arguing over these figures.
 

mikk

Diamond Member
May 15, 2012
4,133
2,134
136
I am not talking about TDP, but about efficiency. The linked article did not provide any efficiency numbers. One indication is that both standby and idle power went up. The max power measurement is useless, as it essentially just shows that the thermal regulation is working.


40-45% higher IPC at a similar clock frequency, with a 20% decrease in power consumption, is no indicator of improved efficiency? If there were no efficiency increase with Goldmont+, power consumption would increase quite a lot because of the much higher IPC. In that case some would wonder why Intel didn't increase their TDP numbers, because the difference would be substantial. Feel free to prove your point with other reviews.
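
Spelling out the arithmetic behind that (using the quoted figures, which are whole-platform numbers from one review, so treat the result as a rough estimate):

```python
# Rough perf-per-watt arithmetic using the figures quoted in this thread.
# These are whole-platform numbers from a single review, not isolated CPU core power.

ipc_gain   = 1.40   # 40% higher IPC (low end of the quoted 40-45% range)
clock_gain = 1.00   # similar clock frequency
power_gain = 0.80   # ~20% lower measured consumption

perf_gain = ipc_gain * clock_gain
perf_per_watt_gain = perf_gain / power_gain

print("performance: +%.0f%%" % ((perf_gain - 1) * 100))           # +40%
print("perf/W:      +%.0f%%" % ((perf_per_watt_gain - 1) * 100))  # +75%
```

Whether whole-platform power is a fair proxy for core efficiency is, of course, exactly what the rest of this page argues about.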

Besides, with 9 W board power at idle and 4 W at standby, you cannot even think about putting this into a smartphone.

This isn't a smartphone SoC/device; it's a 10 W desktop version of Goldmont+ running on an ITX board with DDR4. You won't see idle numbers on such a system that are comparable to a smartphone's, or even a mobile device's. It's silly to compare idle numbers from such a desktop system with a smartphone.
 

Nothingness

Platinum Member
Jul 3, 2013
2,394
731
136
This isn't a smartphone SoC/device; it's a 10 W desktop version of Goldmont+ running on an ITX board with DDR4. You won't see idle numbers on such a system that are comparable to a smartphone's, or even a mobile device's. It's silly to compare idle numbers from such a desktop system with a smartphone.
And it's also silly to try to estimate CPU efficiency from full-platform power consumption, as @Brunnis wrote, especially when it's an SoC with a GPU and the full-load test runs both a CPU and a GPU stress test at the same time. You simply can't draw any conclusion about CPU core efficiency from those results.
 

Dayman1225

Golden Member
Aug 14, 2017
1,152
974
146
I assume this was at MWC, but it appears there is another Intel/Spreadtrum-powered phone there, from a company named "Senwa", with an octa-core Airmont-based CPU.

Intel Phones - Ian Cutress
Octacore Airmont - Ian Cutress

 

dark zero

Platinum Member
Jun 2, 2015
2,655
138
106
I assume this was at MWC, but it appears there is another Intel/Spreadtrum-powered phone there, from a company named "Senwa", with an octa-core Airmont-based CPU.

Intel Phones - Ian Cutress
Octacore Airmont - Ian Cutress

Let me bet... Leagoo all over again?

Intel should give them better modems and more bands... oh and Asus should join it too.

BTW... Airmont octa-cores are nothing if they have a crappy GPU... Who in the hell is putting a Mali T820 on it? *looks at Leagoo T5C*

Here is a video of how the Airmont cores perform... alongside the crappy Mali T820...
https://www.youtube.com/watch?v=mXL_lH4hrsA

Intel should have continued with Broxton, and continued to use the PowerVR GPU.
 

Dayman1225

Golden Member
Aug 14, 2017
1,152
974
146
Let me bet... Leagoo all over again?

Intel should give them better modems and more bands... oh and Asus should join it too.

BTW... Airmont octa-cores are nothing if they have a crappy GPU... Who in the hell is putting a Mali T820 on it? *looks at Leagoo T5C*

Intel should have continued with Broxton.

These are meant to be insanely cheap, so don't expect high-end anything from them.

Intel plans to give them better modems; they just announced another partnership with Spreadtrum to supply 5G modems for their SoCs. They showed an early 5G modem at MWC in a 2-in-1 that supported 2G-5G and up to 5 Gbps; now they have to shrink it!
 

dark zero

Platinum Member
Jun 2, 2015
2,655
138
106
These are meant to be insanely cheap, so don't expect high-end anything from them.

Intel plans to give them better modems; they just announced another partnership with Spreadtrum to supply 5G modems for their SoCs. They showed an early 5G modem at MWC in a 2-in-1 that supported 2G-5G and up to 5 Gbps; now they have to shrink it!
Even being cheap doesn't mean it was supposed to have a bad-quality GPU...
And it is a mid-tier phone. If it were using SoFIA, it would be understandable.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
Intel should have continued with Broxton, and continued to use the PowerVR GPU.

Yeah, and if they had continued, x86 software development on Android might have been a non-issue by now.

But they were really not ready by their own standards in 2016. Intel's unspoken motto is to dominate the markets they enter, and the lack of a proper cellular modem, plus performance that trailed the market leader, wouldn't have achieved that. If they were a smaller company with nothing but Atom-based cores to rely on, sure, probably. Smaller companies also tend to be much more efficient in achievement/$, because as a company gets bigger there are more bureaucratic layers and unnecessary departments.
 
Mar 11, 2004
23,069
5,545
146
I am really baffled that Intel stopped Atom development, seemingly right about the time they got serious about their modem. Then again, maybe they haven't really stopped it; maybe they just aren't doing their own commercial development of it while they wait for their modem to mature to where it can be paired. That might also be partly behind their GPU push now as well: Intel has been seeing that a GPU is necessary moving forward, being able to integrate their own will benefit them, and they can scale it up. That scalability is actually, I think, a big reason for Nvidia's success: they basically looked at the basic level needed for a good mobile GPU, then scaled it up, so their chips across the board reap the efficiency and density benefits.

Plus, with Apple ditching Imagination, it might've been ripe for a buy. Quad core Atom, with integrated Intel modem, and PowerVR GPU might have been interesting. Still have a hunch it would be a hard sell for companies. But it might've helped hold Microsoft off of pushing for ARM on Windows.

How hard would it be to make a CPU that was a hybrid (meaning it could do both ARM and x86, and by that I mean individual core, not having a block of x86 and block of ARM)? The thought being that there should be a fair amount of shared pieces, so if a core could handle code from both, the parts that push for performance could benefit both ISAs (versus splitting the budget between a block of x86 cores and a block of high-performance ARM cores), and you could put more toward pushing performance. I kinda wonder if that wasn't what AMD was trying to do with K12 and Zen. Even a design where each core had to be fully switched, but you could adjust the mix as desired, might have been significant (say, for a 4-core part, you could have 4 ARM or 4 x86, or a mix like 2 of each at the same time; it wouldn't be like you'd simultaneously run 4 x86 and 4 ARM, but if they got it working almost like multi-threading, that would be even better).
 

Thala

Golden Member
Nov 12, 2014
1,355
653
136
Plus, with Apple ditching Imagination, it might've been ripe for a buy. Quad core Atom, with integrated Intel modem, and PowerVR GPU might have been interesting. Still have a hunch it would be a hard sell for companies. But it might've helped hold Microsoft off of pushing for ARM on Windows.

It could have helped Intel, but for the industry and for consumers it's certainly better that we now also have ARM as an option for PCs. Aside from server chips, there are currently only low-power ARM SoCs available, because there was no market demand for higher performance tiers. This could potentially change. Imagine the likes of a 90 W TDP A11 :)

How hard would it be to make a CPU that was a hybrid (meaning it could do both ARM and x86, and by that I mean individual core, not having a block of x86 and block of ARM)?

I believe the architectures are too different if you think along the lines of sharing the back-end. It's not just the instructions; it's the memory model, the exception model, how barriers work, what kind of atomics (e.g. load-linked/store-conditional) with what semantics (release or acquire), the virtualization architecture, the memory protection and security features, etc. In conclusion, if for some reason you need both ISAs in one device, supporting one ISA in hardware and emulating the other in software is a reasonable approach.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136

NostaSeronx

Diamond Member
Sep 18, 2011
3,686
1,221
136
Intel is probably going to use their own IP for the GPU. At this point with Gen9 they support ASTC and Vulkan/DX12.3/etc. The only thing Intel needs to do is reduce power, which a 10nm node would do anyway. 10nm HPM (Spreadtrum) => risk production Q2 2018.
 

dark zero

Platinum Member
Jun 2, 2015
2,655
138
106
Intel is probably going to use their own IP for the GPU. At this point with Gen9 they support ASTC and Vulkan/DX12.3/etc. The only thing Intel needs to do is reduce power, which a 10nm node would do anyway. 10nm HPM (Spreadtrum) => risk production Q2 2018.
Any info about the architecture used?
ARM or x64?
 

Dayman1225

Golden Member
Aug 14, 2017
1,152
974
146
Intel is probably going to use their own IP for the GPU. At this point with Gen9 they support ASTC and Vulkan/DX12.3/etc. The only thing Intel needs to do is reduce power, which a 10nm node would do anyway. 10nm HPM (Spreadtrum) => risk production Q2 2018.

Do you know if ST is still using Intel on 10nm?

I assume this is the roadmap you are referring to as well?
[attached roadmap image]


Any info about the architecture used?
ARM or x64?
Intel has presented A75 cores on their 10nm process at >3.3 GHz, so it could be that.
[attached image]
 
Mar 11, 2004
23,069
5,545
146
It could have helped Intel, but for the industry and for consumers it's certainly better that we now also have ARM as an option for PCs. Aside from server chips, there are currently only low-power ARM SoCs available, because there was no market demand for higher performance tiers. This could potentially change. Imagine the likes of a 90 W TDP A11 :)



I believe the architectures are too different if you think along the lines of sharing the back-end. It's not just the instructions; it's the memory model, the exception model, how barriers work, what kind of atomics (e.g. load-linked/store-conditional) with what semantics (release or acquire), the virtualization architecture, the memory protection and security features, etc. In conclusion, if for some reason you need both ISAs in one device, supporting one ISA in hardware and emulating the other in software is a reasonable approach.

I agree. I actually wish Microsoft had been getting on that years ago; it might've given Win10 on phones and lower-end tablets a chance (it's a shame we didn't get the Android porting/running-native-Android stuff either), plus it'd open up competition in the PC space. Absolutely, it would be interesting. Quite a few people, including John Carmack, have said that the issue with mobile chips is their thermal and power constraints. It's why I think an ARM CPU-based chip is a very serious possibility for the next generation of consoles. The CPU could easily be an upgrade over even the CPU in the PS4 Pro or One X, but wouldn't require a ton of development, and they could own more of the IP. They could even ape Nintendo and make a portable hybrid (maybe even go with a discrete GPU in the dock, where they basically shut off the GPU next to the SoC to let the CPU have all the thermal headroom).

I figured it would be difficult (and not worth it, plus AMD doesn't have the resources and Intel doesn't want to help ARM). I was going to make a comparison to x64 with the Athlon 64 (enabling 64- and 32-bit), but I know that's not nearly the level of change multiple architectures would require. Yeah, emulation (isn't that what Microsoft's ARM on Windows is doing?). I figured the most likely approach would be pairing large, powerful x86 cores with some medium/small ARM cores, where the ARM cores would be good enough for whatever you need (since the assumption is you're mobile or otherwise operating under some constraint), with the x86 cores being the focus of the performance.


Ah, I guess I was mistaken about them stopping the mobile-phone-focused chips? I thought they had largely stopped developing the architecture itself too, though, and that it's basically them just building stuff to customers' desire (core count, for instance, which is why we got the 8-core ones going into stuff like, was it, routers?).

He maybe refers to the phone chips...
BTW... Intel lost the chance to get PowerVR... MediaTek has the chance now.

Yeah. I thought there was a larger change in development though. Like Intel basically said they weren't really advancing Atom cores themselves any more. They still offer them but they're not pushing them and had basically said they weren't doing much more than producing them based on customer desire.

Yeah, but it makes me wonder if they maybe should have looked at that as a way of staying in with Apple (even before they became aware of Apple ditching them).

Intel is probably going to use their own IP for the GPU. At this point with Gen9 they support ASTC and Vulkan/DX12.3/etc. The only thing Intel needs to do is reduce power, which a 10nm node would do anyway. 10nm HPM (Spreadtrum) => risk production Q2 2018.

Yeah, I think their recent changes make that more than probable. Kinda curious to see what they do short term. Probably not too much: just doing their own iGPU, and then maybe pairing with AMD on that compact, mobile-focused part until Intel gets their own up and working (I still think that was likely targeting Apple; then Apple probably wasn't impressed, so they released it as a chip, maybe hoping it develops a market in non-Apple PCs, and then they develop it further so Apple would reconsider it). My biggest wonder is whether Raja might throw AMD/RTG a bit of a bone and develop things in line with where he was pushing them, so that there might be some shared development (by that I mean the similarities would make developing for either one easier for other developers, although maybe there would also be some possibility of sharing development in a collaboration?). That would benefit Intel's GPU as well, and both would be looking to undo Nvidia's dominance, so they could both benefit, especially if they focus on different areas (Intel on mobile APUs, while letting AMD have the more desktop-focused APUs and maybe consoles; then Intel pushes harder for enterprise stuff, while leaving AMD the consumer dedicated GPUs and even pro parts like the FirePro stuff), where they could kinda chase different things. I'm sure they'd still compete directly.
 

Thala

Golden Member
Nov 12, 2014
1,355
653
136
I figured it would be difficult (and not worth it, plus AMD doesn't have the resources and Intel doesn't want to help ARM). I was going to make a comparison to x64 with the Athlon 64 (enabling 64- and 32-bit), but I know that's not nearly the level of change multiple architectures would require. Yeah, emulation (isn't that what Microsoft's ARM on Windows is doing?). I figured the most likely approach would be pairing large, powerful x86 cores with some medium/small ARM cores, where the ARM cores would be good enough for whatever you need (since the assumption is you're mobile or otherwise operating under some constraint), with the x86 cores being the focus of the performance.

I do believe that supporting heterogeneous architectures is extremely hard to manage at the OS level. One of the features of big.LITTLE is that you can transparently migrate threads/contexts between little and big cores. If the cores are of different architectures (ISAs), this will be close to impossible. Without context migration, a heterogeneous architecture does not make much sense, because you would have to decide at compile time where to run which code.
Emulation is a different matter, because you can actually migrate emulated code between big and little cores.
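
To illustrate that constraint, here is a toy sketch of the scheduling rule; the class and field names are made up for illustration, and this is nothing like a real OS scheduler, just the decision logic in miniature.

```python
# Toy model of the constraint above: a scheduler can transparently migrate a thread
# between big and little cores only if both cores execute the same ISA.
# Names are made up for illustration; no real OS works like this.

from dataclasses import dataclass

@dataclass
class Core:
    name: str
    isa: str          # e.g. "arm64" or "x86_64"
    big: bool

@dataclass
class Thread:
    name: str
    code_isa: str     # ISA of the thread's code pages
    emulated: bool = False   # emulated threads keep their state inside the emulator

def can_migrate(thread: Thread, target: Core) -> bool:
    if thread.emulated:
        # The emulator's own state is just data; any core that can run the emulator works.
        return True
    # A native register/context snapshot only makes sense on a core with the same ISA.
    return thread.code_isa == target.isa

little_arm = Core("little0", "arm64", big=False)
big_arm    = Core("big0", "arm64", big=True)
big_x86    = Core("big1", "x86_64", big=True)

native_arm   = Thread("browser", "arm64")
emulated_x86 = Thread("legacy_app", "x86_64", emulated=True)

print(can_migrate(native_arm, big_arm))      # True  - the classic big.LITTLE case
print(can_migrate(native_arm, big_x86))      # False - would have to be decided at compile/load time
print(can_migrate(emulated_x86, little_arm)) # True  - emulated code can move freely between cores
```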