The Intel Atom Thread

Dayman1225

Golden Member
Aug 14, 2017
1,152
973
146
Fanless Gemini Lake BRIX - FanlessTech

Odd omission of the HDMI 2.0 port...
 

ksec

Senior member
Mar 5, 2010
420
117
116
That'll end up at $110 in US dollars, a $10 increase from the J4205-ITX.

Can't wait to see it tested. But I was rather hoping it would be under $100, given that a Ryzen 3 2200G + motherboard is $140.

(I seriously hope Apple don't use this in the cheap MacBook Air, though...)
 

evident

Lifer
Apr 5, 2005
11,894
496
126
Did you miss the part where I linked FanlessTech saying it was 110 Euro? That article was basing it off the MSRP for the chips, which OEMs don't pay.
Yep, missed your follow-up post. I was falling asleep and drugged up on cold meds, and annoyed these boards aren't available yet after being announced almost six months ago. I ended up with an i3 Kaby Lake for a pfSense setup I would otherwise have tried to use this for, and I'm pretty happy anyway.
 
  • Like
Reactions: Dayman1225

Nothingness

Platinum Member
Jul 3, 2013
2,371
713
136

Brunnis

Senior member
Nov 15, 2004
506
71
91
Strangely, the delivery dates for NUCs with Gemini Lake continue to slip. The dates I'm getting now are April 5th, which means they're about to slip out of the Q1 launch window. I guess you could technically call them launched, though. :p
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136

One thing that isn't mentioned is how the battery life advantage is gained by using expensive components and a big battery.

Typical Apollo Lake laptops use 35 WHr batteries, and they are usually priced at the lowest part of the market, at $300. Apollo Lake does quite well in battery life at the $500-plus price mark; it's just that not many people want to buy it at that price.

If you made an Apollo Lake laptop that cost $500 or more and gave it a 50 WHr battery like the Envy X2, I bet it would get the battery life the ARM version does. That's over 40% more capacity to begin with (50/35 ≈ 1.4).
 
  • Like
Reactions: Dayman1225

Brunnis

Senior member
Nov 15, 2004
506
71
91
Hard to draw any real conclusions about the actual single-threaded performance difference between the N3450 and the SD835 when not using emulation. The SD835 performance cores run at 11% higher frequency than the N3450's burst frequency. The Octane result is a tie, and PDF Viewer Plus is just 6% faster on the SD835; per clock, a 6% lead at 11% higher frequency actually implies slightly lower performance. Basemark Web 3.0 is ~50% faster on the SD835, but that benchmark includes several GPU sub-tests (where the SD835 should have the advantage) and some parts may also take advantage of more than 4 threads (the SD835 has eight cores).

Despite the above, the reviewer still thinks the SD835 system actually feels a lot faster and that the results corroborate that observation...
 

dealcorn

Senior member
May 28, 2011
247
4
76

I understand the position that the comparison is unbiased, but that is silly. Substantially all new Intel deployments run X64 rather than X32 because it is inherently faster and supports more memory, which sometimes makes it even faster. Real-world user experience should compare ARM-emulated X32 to Intel X64, because that is what the consumer has an actual choice between. Intel still wins, but by more.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
Real-world user experience should compare ARM-emulated X32 to Intel X64, because that is what the consumer has an actual choice between. Intel still wins, but by more.

And the article points out compatibility issues with the translation layer active.

When Microsoft releases a new operating system, it has minor compatibility issues with current software. I've seen it happen with Vista, Windows 7, and 10. So even with a chip using the same ISA, from the same company, and an operating system built upon previous code, there are minor issues. Can you really expect code translation to do anywhere near that well? The ~30% performance with code translation isn't new, either. It was true with DEC Alpha's FX!32, it was true with IA32EL on Itanium, and with the x86 transition Apple had to go through.

Here's a good example of history repeating itself. The actors change, and a few details too, but it's a repeat of Windows RT. Back then, it was Intel's development of Atom cores that kept RT from succeeding. Drastically improving battery life in the Medfield generation allowed the improved Bay Trail platform to fend off RT. If they had not, RT might have found a niche.

And if Intel had stagnated and stayed with Airmont cores, things would have looked much better for the SD835 Win10 devices: x86 performance of the SD835 looked to be on par with the Braswell and Bay Trail platforms. But Intel did not, and they are about to release the Goldmont Plus generation, which is yet another substantial improvement over Goldmont.
 
Last edited:

Dayman1225

Golden Member
Aug 14, 2017
1,152
973
146
Strangely, the delivery dates for NUCs with Gemini Lake continue to slip. The dates I'm getting now are April 5th, which means they're about to slip out of the Q1 launch window. I guess you could technically call them launched, though. :p

The rest of CFL and its 14nm chipsets are taking their damn sweet time too (I know they are coming very soon, but still). Given that Intel has moved, and is still moving, most of their portfolio to 14nm, with more to move in the future, do you think it's possible that Intel is having capacity issues?

Intel did say themselves that they would be increasing 14nm capacity over 2018:
Ashraf said:
 

Nothingness

Platinum Member
Jul 3, 2013
2,371
713
136
Substantially all new Intel deployments run X64 rather than X32 because it is inherently faster and supports more memory, which sometimes makes it even faster.
This is simply not true. Most of the time IA32 code is faster than x86-64 due to its smaller memory footprint and hence less cache thrashing. Memory support is, of course, the main reason why apps move to 64-bit.
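
To put a number on the footprint point, here's a toy illustration (hypothetical example, not taken from any of the benchmarks discussed here): compile the same pointer-heavy structure once as 32-bit and once as 64-bit and compare the sizes.

```c
#include <stdio.h>

/* Toy pointer-heavy node, similar in spirit to the data a compiler
 * like 176.gcc keeps in memory: mostly pointers, little payload. */
struct node {
    struct node *left;
    struct node *right;
    struct node *parent;
    int value;
};

int main(void)
{
    /* Build twice, e.g. "gcc -m32 footprint.c" and "gcc -m64 footprint.c"
     * (the -m32 build needs the 32-bit multilib packages installed). */
    printf("pointer size: %zu bytes\n", sizeof(void *));
    printf("node size:    %zu bytes\n", sizeof(struct node));
    return 0;
}
```

On a typical x86 Linux build the node grows from 16 bytes (IA32) to 32 bytes (x86-64, including padding), so only half as many nodes fit in the same cache: that's the cache-thrashing effect described above. Whether that outweighs the extra architectural registers x86-64 provides depends on the workload.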
 

Brunnis

Senior member
Nov 15, 2004
506
71
91
Are you completely sure about that? For example, testing by Phoronix (on Ubuntu) seems to contradict that statement:

https://www.phoronix.com/scan.php?page=article&item=ubuntu-1710-x8664&num=1

Nothingness

Platinum Member
Jul 3, 2013
2,371
713
136
Are you completely sure about that? For example, testing by Phoronix (on Ubuntu) seems to contradict that statement:

https://www.phoronix.com/scan.php?page=article&item=ubuntu-1710-x8664&num=1
Many of their tests are using SIMD where 64-bit can make a vast difference. And honestly Phoronix is not the best place to look at benchmark results...

I just gave a quick try of SPEC 2000 176.gcc with gcc 7.2.0 -O3 -march=native. 32-bit is ~10% faster than 64-bit on both a Xeon X5670 (Westmere) and a Xeon E5-2650 v2 (Sandy Bridge).
 
  • Like
Reactions: Dayman1225

Brunnis

Senior member
Nov 15, 2004
506
71
91
Many of their tests are using SIMD where 64-bit can make a vast difference. And honestly Phoronix is not the best place to look at benchmark results...

I just gave a quick try of SPEC 2000 176.gcc with gcc 7.2.0 -O3 -march=native. 32-bit is ~10% faster than 64-bit on both a Xeon X5670 (Westmere) and a Xeon E5-2650 v2 (Sandy Bridge).
Thanks. Good to know. It would still be interesting to see a modern, more comprehensive test done on the subject.
 

Brunnis

Senior member
Nov 15, 2004
506
71
91
Thanks for the link! Pretty disappointing benchmark selection (although single threaded performance looks nice, as we already knew). Also disappointing that memory support might be spotty, just like on Apollo Lake. The 2400 MHz HyperX memory is exactly what I have lined up for my NUC7PJYH when it arrives. Looks like I may have to return that...

I don’t know what’s up with the memory controller on Apollo Lake and Gemini Lake, but between the spotty memory support, low bandwidth efficiency and slow memory access, something about the design seems to be quite different compared to the Core based chips.
 
  • Like
Reactions: Dayman1225

Brunnis

Senior member
Nov 15, 2004
506
71
91
An observation: the BIOS of NUCs with processors based on the low-cost architecture does not have a visible "Performance" tab. This means no memory adjustment like this: https://images.anandtech.com/galleries/6270/IMG_20180328_204536.jpg , unless the function is hidden somewhere that requires searching.
Yep, that's my understanding as well. It's a shame, since if your memory won't run at rated spec, you can't simply dial things down like you normally can.

Turns out the store I bought my HyperX 2400 MHz memory from refuses to take it back, since I'm out of the return window (due to the NUC being late...). It didn't help that I explained the situation to them, that I haven't opened the packaging, and that I just wanted them to exchange my modules for the lower-specced ones. As an owner of an e-commerce business myself, I think that's pretty bad customer service, especially since I have placed a decent number of orders with them previously.

Anyway, I will probably attempt to use the memory in the NUC. If that fails, I'll swap it for the pair of no-frills 4GB Crucial sticks (DDR4-2133, CL15) that I currently have in a Skylake NUC (I'm guessing the Skylake NUC will accept the HyperX sticks). CL15 timings are not great, but the real-world performance difference will probably be negligible anyway.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
I don’t know what’s up with the memory controller on Apollo Lake and Gemini Lake, but between the spotty memory support, low bandwidth efficiency and slow memory access, something about the design seems to be quite different compared to the Core based chips.

The spotty support is a disappointment, I will acknowledge that.

However, the rest is fairly logical, because it's a smaller-core processor with far fewer out-of-order execution resources and a memory subsystem designed for efficiency over pure performance. You will never see a lower-class processor equal a higher-performing one on memory performance. Architects surely realize this; putting in a memory controller that's too powerful is like putting a McLaren engine in a Miata. Both are considered sports cars, but they're aimed at totally different sectors.

4 Goldmont cores with the System Agent equivalent block can fit in a space that's roughly equal to the area of a single Skylake-class core. The chip is very respectable considering that.
 

Brunnis

Senior member
Nov 15, 2004
506
71
91
The spotty support is a disappointment, I will acknowledge that.

However, the rest is fairly logical, because it's a smaller-core processor with far fewer out-of-order execution resources and a memory subsystem designed for efficiency over pure performance. You will never see a lower-class processor equal a higher-performing one on memory performance. Architects surely realize this; putting in a memory controller that's too powerful is like putting a McLaren engine in a Miata. Both are considered sports cars, but they're aimed at totally different sectors.

4 Goldmont cores with the System Agent equivalent block can fit in a space that's roughly equal to the area of a single Skylake-class core. The chip is very respectable considering that.
Yeah, that's what I'm thinking as well. However, I still find it surprising that a Core 2 Duo without an integrated memory controller can produce some 30-40 % lower memory latency than this thing.

Anyway, the already low memory performance probably lessens the benefit of low-latency RAM.

EDIT: By the way, I created a support discussion on Intel's forums regarding support for the 2400 MHz HyperX memory. I don't expect much to come of it, but you never know.

Link: https://communities.intel.com/message/535252

EDIT 2: Interestingly, the company I bought the HyperX memory from just called me up and told me they'd replace the modules for me. I made a quick decision to keep them anyway, but thanked them for the good customer service. I figured I might as well give them a go in the NUC and, if they don't work, use the Crucial sticks I mentioned earlier. The difference will be negligible anyway.
 
Last edited: