Question Alder Lake - Official Thread


IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
This makes me wonder: Is there a process technology simulator based on accurate "physics at the atomic scale", where you get to play at the nanometer scale? Perhaps this is something the big players use internally, to simulate how a certain process might help them in achieving their performance/power goals.

There's a limit to how much that benefits you, obviously. The more parameters you add, the more computational power you need.

Like they said, you can have a transistor-level emulator that runs Pong, but it needs a 3 GHz Core 2-class chip to do so and still only manages 5-10 fps. It's part of the reason some game emulators are much slower than others: they simulate the hardware more accurately than the faster ones do.

So you might get an accurate single-transistor simulator, but if you want it working on a circuit big enough to act like even a crude CPU, maybe it's not possible.
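For a sense of why that scales so badly, here's a minimal, purely illustrative sketch (hypothetical Python, not how any real EDA tool is written) of a naive gate-level simulator: every simulated cycle it re-evaluates every gate, so the host burns many instructions per gate per cycle, and a real CPU has millions of gates.

```python
import random

# A toy "netlist": each gate is (output_net, op, input_nets). Purely illustrative.
def make_ripple_adder(bits):
    gates = []
    carry = "c0"
    for i in range(bits):
        a, b = f"a{i}", f"b{i}"
        gates += [
            (f"x{i}",   "xor", (a, b)),
            (f"s{i}",   "xor", (f"x{i}", carry)),   # sum bit
            (f"g{i}",   "and", (a, b)),
            (f"p{i}",   "and", (f"x{i}", carry)),
            (f"c{i+1}", "or",  (f"g{i}", f"p{i}")), # carry out
        ]
        carry = f"c{i+1}"
    return gates

OPS = {"and": lambda x, y: x & y,
       "or":  lambda x, y: x | y,
       "xor": lambda x, y: x ^ y}

def simulate(gates, nets, cycles):
    # Naive simulation: every gate is re-evaluated every cycle.
    for _ in range(cycles):
        for out, op, (i0, i1) in gates:
            nets[out] = OPS[op](nets.get(i0, 0), nets.get(i1, 0))
    return nets

if __name__ == "__main__":
    gates = make_ripple_adder(32)  # ~160 "gates" for a single 32-bit adder
    nets = {f"a{i}": random.randint(0, 1) for i in range(32)}
    nets.update({f"b{i}": random.randint(0, 1) for i in range(32)})
    nets["c0"] = 0
    simulate(gates, nets, 10_000)
    # One adder is trivial; a CPU has millions of gates, so the per-cycle cost
    # explodes long before the simulated circuit behaves like a real chip.
```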

How do you think that would do as a home theater PC?

3) The top Alder Lake N chip.

Alder Lake-N either seems like overkill for the segment or isn't a direct replacement for Jasper Lake. 8 Skylake-class cores would encroach on a lot of Core-branded territory, but as a low-power part it really doesn't need 8 of them.

-N might also be for non-consumer applications such as embedded.
 
Last edited:
Jul 27, 2020
16,340
10,352
106
There's a limit to how much that benefits you, obviously. The more parameters you add, the more computational power you need.

Like they said, you can have a transistor-level emulator that runs Pong, but it needs a 3 GHz Core 2-class chip to do so and still only manages 5-10 fps. It's part of the reason some game emulators are much slower than others: they simulate the hardware more accurately than the faster ones do.

So you might get an accurate single-transistor simulator, but if you want it working on a circuit big enough to act like even a crude CPU, maybe it's not possible.
Intel/AMD build supercomputers for simulations. Why not simulate these things on a purpose-built supercomputer? Or maybe they do get computing time on existing supercomputers for this purpose?
 

Saylick

Diamond Member
Sep 10, 2012
3,172
6,410
136
Intel/AMD build supercomputers for simulations. Why not simulate these things on a purpose-built supercomputer? Or maybe they do get computing time on existing supercomputers for this purpose?
If I am not mistaken, they do simulate the circuit using FPGAs. They start with pure software simulations, then simulate using hardware via FPGAs, then tape out and get a working sample in the lab. They get that running, work out any kinks, and a few iterations later they end up with the production stepping.

As for using a supercomputer, that would be software emulation at best. FPGAs are going to be an order of magnitude faster, if not more. The logic blocks in the FPGA won't ever run at full speed (only a few hundred MHz at best), but that's still much faster than simulating on CPUs. I wouldn't be surprised if AMD/Nvidia buy Xilinx FPGAs and link them up to simulate their largest chips, while Intel already has Altera in-house. Now that Xilinx is part of AMD, I guess they'll have better access to FPGAs than the rest.
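A back-of-envelope sketch of the gap (the throughput figures below are rough assumptions on my part, not vendor numbers):

```python
# Back-of-envelope comparison of simulated-clock throughput. The rates are
# illustrative ballpark assumptions, not measured figures.
SOFTWARE_SIM_HZ = 5_000         # effective simulated cycles/sec for RTL software sim
FPGA_EMULATION_HZ = 50_000_000  # an FPGA prototype clocked at a modest 50 MHz

def wall_clock_seconds(sim_rate_hz, target_cycles):
    """Wall-clock seconds to run `target_cycles` of the design's clock."""
    return target_cycles / sim_rate_hz

# Booting even a minimal OS takes billions of chip cycles; use 10 billion here.
cycles_to_boot = 10_000_000_000
print(f"software sim:   {wall_clock_seconds(SOFTWARE_SIM_HZ, cycles_to_boot) / 86_400:.0f} days")
print(f"FPGA emulation: {wall_clock_seconds(FPGA_EMULATION_HZ, cycles_to_boot) / 60:.1f} minutes")
# ~23 days vs ~3.3 minutes -- which is why pre-silicon validation leans on FPGAs.
```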

Edit: here's a post I made a few years ago on how chips get designed:
 
Last edited:
  • Like
Reactions: igor_kavinski

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
Intel/AMD build supercomputers for simulations. Why not simulate these things on a purpose-built supercomputer? Or maybe they do get computing time on existing supercomputers for this purpose?

The biggest difference between a fast modern PC and a supercomputer is that the latter has many more cores.

And lots of things in the real world aren't parallelizable, or aren't very amenable to it. It also greatly increases complexity if you want it accurate. If it were easily doable, it would have been done.

But yeah, it can be done to a certain level.
 

Hitman928

Diamond Member
Apr 15, 2012
5,324
8,015
136
Intel/AMD build supercomputers for simulations. Why not simulate these things on a purpose-built supercomputer? Or maybe they do get computing time on existing supercomputers for this purpose?

Because you'd probably be dead before your CPU finished simulating, assuming you didn't have to worry about memory limitations to begin with. The amount of compute resources needed for something like TCAD is in a whole other galaxy compared to what it takes to simulate the same thing with a circuit simulator.
 
  • Wow
Reactions: igor_kavinski

dullard

Elite Member
May 21, 2001
25,069
3,420
126
Alder Lake-N either seems like overkill for the segment or isn't a direct replacement for Jasper Lake. 8 Skylake-class cores would encroach on a lot of Core-branded territory, but as a low-power part it really doesn't need 8 of them.

-N might also be for non-consumer applications such as embedded.
You basically said that the whole Alder Lake-U line is overkill and/or pointless. Alder Lake-N should be similar to Alder Lake-U but without the P cores and with lower GPU specs. They might be Skylake-class, but each Alder Lake-N core will run at about 1/10th (or maybe 1/20th) of the power available to each Skylake core (assuming you are comparing them to desktop Skylake). So, don't expect it to be that great.

I'm currently using an i3-2365M for my HTPC. It has 2 cores / 4 threads, and I far too often peg it at 100% CPU usage. A common use I have is Pandora running in the background with a slideshow of every photo I've ever taken in the foreground. Pandora spawns 4 threads, and one thread, which I think runs the ads, pegs one core at full use. Then, for some reason, randomly choosing a photo out of 100k photos fully uses up the other core. With both cores at 100%, if anything else comes up, it's a lag fest. If I programmed it myself, a random number generator and a photo display wouldn't take much CPU power to run a slideshow, but I've been too lazy to do that.
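For what it's worth, a bare-bones random slideshow really is only a couple dozen lines. This is a rough sketch assuming Python with Tkinter and Pillow installed and a placeholder photo folder, not the software actually running on the HTPC:

```python
import random
import tkinter as tk
from pathlib import Path
from PIL import Image, ImageTk  # requires Pillow

PHOTO_DIR = Path("~/Pictures").expanduser()  # placeholder path, assumed to contain photos
INTERVAL_MS = 10_000                         # 10 seconds per photo

root = tk.Tk()
root.attributes("-fullscreen", True)
label = tk.Label(root, bg="black")
label.pack(expand=True, fill="both")

photos = [p for p in PHOTO_DIR.rglob("*")
          if p.suffix.lower() in {".jpg", ".jpeg", ".png"}]

def show_next():
    # Picking one of 100k paths is effectively free; decoding and scaling one
    # JPEG every few seconds is the only real work.
    path = random.choice(photos)
    img = Image.open(path)
    img.thumbnail((root.winfo_screenwidth(), root.winfo_screenheight()))
    tk_img = ImageTk.PhotoImage(img)
    label.configure(image=tk_img)
    label.image = tk_img  # keep a reference so it isn't garbage collected
    root.after(INTERVAL_MS, show_next)

show_next()
root.mainloop()
```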

My second use case is Kodi for all my entertainment; the i3-2365M can barely handle it if I don't do anything fancy. My third use case is MakeMKV, which is heavily multi-threaded. 8 cores is probably a bit overkill for everything but MakeMKV, but it seems pretty future-proof for now. The i3-2365M has lasted me 9 years; hopefully I can get 10 years out of this next HTPC.
 
Last edited:

dullard

Elite Member
May 21, 2001
25,069
3,420
126
Wouldn't an undervolted i3-12100 with a cheap mobo serve your use case?
It probably would do just fine. You are basically describing the i3-12100T, which is for sale in countries outside the US right now (same price but no undervolting hassle). But for $80 more, the 12500T has 2 more cores and much stronger graphics. It just seems more future-proof, especially since I don't know what type of TV I'll have over the next 10 years (4K, 8K?).

If I were to splurge a bit more, a 2.5 GbE port would be nice on the motherboard. That way I could quickly transfer movies and photos around, at least with my desktop for starters until I upgrade my wife's computer. That wouldn't be available on a cheap motherboard. I'm eyeing this one: https://www.asrock.com/mb/Intel/H670M-ITXax/index.asp but I'm open to other motherboards right now (especially since the 3 CPUs I listed are all completely different: desktop, mobile, and embedded).
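Rough math on what 2.5 GbE buys for that kind of transfer (the file size and link efficiency below are assumptions for illustration, not measurements):

```python
# Rough transfer-time math for moving a Blu-ray-sized rip around.
file_gb = 25        # a typical MakeMKV rip (assumed size)
efficiency = 0.94   # assumed payload fraction after protocol overhead

for name, gbps in [("1 GbE", 1.0), ("2.5 GbE", 2.5)]:
    mb_per_s = gbps * 1000 / 8 * efficiency
    minutes = file_gb * 1000 / mb_per_s / 60
    print(f"{name}: ~{mb_per_s:.0f} MB/s, ~{minutes:.1f} min for a {file_gb} GB file")
# ~118 MB/s (~3.5 min) vs ~294 MB/s (~1.4 min), assuming the disks keep up.
```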
 
Last edited:

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
You basically said that the whole Alder Lake-U line is overkill and/or pointless.

No, I did not. The inclusion of the big cores is a big distinction that allows the -U to stand out.

The current N series doesn't benefit from having 8 cores. And it would have to clock much lower to double the core count at the same power, unless they're expanding the segment, say to 25 W, but why? That seems heavily redundant, and I know Intel doesn't do that.

The much increased perf/clock and likely better sustained clocks are more than enough for this segment.
 

dark zero

Platinum Member
Jun 2, 2015
2,655
138
106
OK, it's been a while since I left the forum, but I'm back and want to share my impressions of each Alder Lake tier...

Desktop tier: For me this is the best jump Intel has made since Sandy Bridge, but this time AMD has something to counter them, so the consumer wins. I see this chip as a Nehalem-style step, and the bigger jump might be Raptor Lake, where the core count seems likely to increase. Still, 24 threads from 8 big cores with SMT plus 8 "small" cores is pretty interesting.

Still, the Celeron tier gets no love... same with Pentium, which finally deserves a quad-core, no-HT design.

H tier: They manage to beat AMD in thread count (and performance too), but I only see six big cores paired with 8 "small" cores. I expect Raptor Lake to go to a full 24 threads (meaning eight big and eight small). Still, a welcome change.

P tier: This is interesting. The Core i7-1280P deserves to be called a Core i9, since its core and thread counts are noticeably higher than the other parts'. A Core i3 with 12 threads is interesting too. The most balanced chips I've seen in a while.

U tier: Mixed feelings... the old U tier had a higher big-core count at the top end. There's also the issue of the Celeron going single-core without any Turbo Boost; even the Athlon Silver gets some boost in its iterations. Maybe they'll fix that with Raptor Lake and more cores.

Alder Lake-N: For me this is the most interesting showcase for the small cores.
With supposedly Skylake-tier performance, it will be interesting to see how they perform on their own. A quad-core minimum is welcome too, and going octa-core is interesting since it could eat into the desktop Celeron market, and in some cases even Pentium Gold.
One disadvantage I expect may be a poor PCIe lane count, which matters because a decent octa-core with Skylake-tier performance could pair with a GT 1030 or even a GTX 1050 for a decent experience.
Let's wait and see how Intel manages it.

Sorry if this is a long post, but those were my impressions.
Maybe I'm wrong on some points, but I'm getting back up to speed on current processors.
Thanks for reading.
 
Last edited:
  • Like
Reactions: hemedans

diediealldie

Member
May 9, 2020
77
68
61
Intel/AMD build supercomputers for simulations. Why not simulate these things on a purpose-built supercomputer? Or maybe they do get computing time on existing supercomputers for this purpose?

That's not really possible. You could use a supercomputer if the system could be analyzed in a reasonable time with FEA-like approaches (Finite element method - Wikipedia).
But gate-level simulation (which can work like FEA) is almost 10 million times slower than the real-world IC (and it keeps getting slower), while a modern supercomputer is maybe 10,000 times faster than a normal computer. Even then, supercomputers can't use their full throughput for this due to their design. That's where FPGAs come in: you can simulate part of a chip without writing if/else statements for every gate; once the logic is routed, it just 'works'.
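To put those round numbers in perspective (they're the post's rough ratios, not measurements):

```python
# Rough arithmetic using the ballpark ratios above -- not measured figures.
slowdown_vs_silicon = 10e6   # gate-level sim ~10 million times slower than the chip
supercomputer_speedup = 1e4  # supercomputer ~10,000x a normal computer (ideal case)

seconds_of_chip_time = 1.0   # simulate just one second of real chip operation

pc_wall_clock = seconds_of_chip_time * slowdown_vs_silicon
sc_wall_clock = pc_wall_clock / supercomputer_speedup

print(f"on one PC:           {pc_wall_clock / 86_400:.0f} days")  # ~116 days
print(f"ideal supercomputer: {sc_wall_clock / 60:.0f} minutes")   # ~17 minutes
# And that ideal 10,000x assumes perfect scaling, which gate-level simulation
# of a single chip doesn't get -- hence FPGA prototyping/emulation instead.
```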
 
  • Like
Reactions: lightmanek

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
25,573
14,526
136
Jul 27, 2020
16,340
10,352
106
The really bad power scaling of the Celeron G6900 is evidence that something other than the GC cores is gulping up gobs of power. Is it possible that they fused off the other two GC cores on the die in such a way that they cannot function but still draw power while the functional cores are active?
 

jpiniero

Lifer
Oct 1, 2010
14,629
5,247
136
The really bad power scaling of the Celeron G6900 is evidence that something other than the GC cores is gulping up gobs of power. Is it possible that they fused off the other two GC cores on the die in such a way that they cannot function but still draw power while the functional cores are active?

I imagine the G6900 is extremely overvolted to catch the last 0.01% of chips.
 

dark zero

Platinum Member
Jun 2, 2015
2,655
138
106
2 cores without HT. Yikes. Why? I wouldn't recommend that even for web surfing.
And it's even worse when Alder Lake-N is going with a minimum of 4 cores... with Skylake-level performance.

Heck, now I'm thinking... even the Pentium Gold might suffer against a potential octa-core Pentium Silver chip.