Discussion Quo vadis Apple Macs - Intel, AMD and/or ARM CPUs? ARM it is!


moinmoin

Diamond Member
Jun 1, 2017
4,954
7,672
136
Due to popular demand I thought somebody should start a proper thread on this pervasive topic. So why not do it myself? ;)

For nearly a decade now Apple has treated their line of Mac laptops, AIOs and Pro workstations more like a stepchild. Their iOS line of products has surpassed it in market size and profit. Their dedicated Mac hardware group was dissolved. Hardware and software updates have been lackluster.

But for Intel, Apple clearly is still a major customer, one that still gets custom chips not to be had outside of Apple products. Clearly Intel is eager to keep Apple as a major showcase customer at all costs.

On the high end of performance, Apple's few efforts to create technologically impressive products using Intel parts increasingly fall flat. The 3rd gen of Mac Pros going up to 28 cores could have wowed the audience in earlier years, but when launched in 2019 it already faced 32-core Threadripper/Epyc parts, with 64-core updates of them already on the horizon. A similar fate appears to be coming for the laptops as well, with Ryzen Mobile 4000 besting comparable Intel solutions across the board and run-of-the-mill OEMs bound to surpass Apple products in battery life. A switch to AMD shouldn't even be a big step considering Apple already has a close working relationship with them, sourcing custom GPUs from them like they do CPUs from Intel.

On the low end Apple is pushing iPadOS into becoming a workable multitasking system, with decent keyboard and, most recently, mouse support. Considering the much bigger audience familiar with the iOS mobile interface and App Store, it may make sense to eventually offer a laptop form factor using the already tweaked iPadOS.

By the looks of it, Apple Mac products are due to continue stagnating. But just as for Intel, the status quo for Mac products feels increasingly untenable.
 
  • Like
Reactions: Vattila

DrMrLordX

Lifer
Apr 27, 2000
21,640
10,858
136
Yes, Rate-1. Anand is measuring single-thread performance with SPEC, there are no multithreaded tests at all.

How rude of them. Good catch though.

You missed that these were rate-1 benchmarks...essentially single thread. Of course now I can see where your confusion and distrust of SPEC is coming from - you were assuming these were multi-threaded benchmarks where an R3900X should be much faster.

Dagnabbit, why did they go and do that?

Anyway, the general point was that nobody runs just SPEC in a real benchmark suite. Or SPEC + Geekbench.

If or when Apple finally does roll out A-series SoCs in systems running MacOS, rest assured that Mac users are going to want to see it benched in the software they use most.
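
For reference, a rough sketch of how the rate metric works (the exact normalization details vary by suite version, so treat this as approximate): a rate run executes N identical copies of each benchmark concurrently and scores throughput, so rate-1 is effectively a single-thread measurement.

```latex
\mathrm{ratio}_i \;\approx\; \frac{N_{\mathrm{copies}} \cdot t_{\mathrm{ref},i}}{t_{\mathrm{run},i}},
\qquad
\mathrm{SPECrate} \;=\; \left( \prod_{i=1}^{n} \mathrm{ratio}_i \right)^{1/n}
```

With N_copies = 1 the throughput and speed views collapse into the same single-threaded number, which is why a 12-core 3900X shows no advantage in those charts.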
 
  • Like
Reactions: Tlh97

DrMrLordX

Lifer
Apr 27, 2000
21,640
10,858
136
That person you quote is downvoting every single post in this thread that is not confirming his beliefs.

That's why the admins got rid of the dislike post feature. People would just bomb posts to harass individual users. Now if someone downvotes your post, it doesn't stick with you in the form of a big red bar, so it doesn't really matter.
 

JasonLD

Senior member
Aug 22, 2017
485
445
136
Until Apple starts the ARM transition on their Mac products (it seems like next year according to all the rumors), it is all wild guesses as to how the A14 will perform against x86 counterparts with equal power/cores.
 

Thunder 57

Platinum Member
Aug 19, 2007
2,675
3,801
136
That's why the admins got rid of the dislike post feature. People would just bomb posts to harass individual users. Now if someone downvotes your post, it doesn't stick with you in the form of a big red bar, so it doesn't really matter.

Yea, glad they made those changes. It could get pretty bad, especially in P&N.
 

beginner99

Diamond Member
Jun 2, 2009
5,210
1,580
136
Right now Apple is readying entry-level products that are BARELY enough for desktop ecosystem needs, and that still lose to x86 parts.

I'm far from an Apple or ARM fanboy, but I'm still having trouble accepting this. We can see that the A12/A13 are very fast single-threaded / IPC wise, at least in the benchmarks available.

So why should they suddenly be much slower than x86 when put into a laptop/desktop? Even if the SoC has power and clock-speed limits, it would still run at least as fast as in an iPad Pro, meaning higher ST performance than anything from x86. On top of that, the entry-level x86 (MacBook Air) is still a dual-core with a 1.1 GHz base and 3.2 GHz turbo. So even the frequency isn't that much higher than the A13's.

Hence I'm absolutely failing to see how even the A12 is slower than that pathetic dual-core i3, let alone a new 8/4 SoC made specifically for a laptop. What is so different about the desktop space? iPhones have a very fluid UI, after all.
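
A rough back-of-envelope way to frame it (the sustained-clock and per-clock figures here are assumptions for illustration, not measurements):

```latex
\mathrm{perf}_{ST} \;\approx\; \mathrm{IPC} \times f_{\mathrm{sustained}},
\qquad
\frac{\mathrm{perf}_{A13}}{\mathrm{perf}_{i3}} \;\approx\;
\frac{\mathrm{IPC}_{A13} \times 2.6\,\mathrm{GHz}}{\mathrm{IPC}_{i3} \times f_{i3,\mathrm{sustained}}}
```

If the A13's per-clock throughput is in the same ballpark as current Intel cores (as the SPEC2006 numbers discussed in this thread suggest) and the i3 spends much of a sustained load well below its 3.2 GHz turbo in that thermally constrained chassis, the ratio lands at or above 1, which is the intuition behind the question.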
 

dacostafilipe

Senior member
Oct 10, 2013
772
244
116
It's funny, because you ask what news is in the Bloomberg article, while also providing... Bloomberg articles, with Mark Gurman being the author of them. So Gurman has been the author of ALL of the reports of Apple working on ARM chips for Mac.
Look at all the Bloomberg articles you're quoting, all written by the same Mark Gurman and Ian King.

Exactly my point!

The same guy, telling us those stories for years, with nothing released as of now, but somehow the information he provides is relevant enough that we need to believe him?

I can't, sorry.
 

coercitiv

Diamond Member
Jan 24, 2014
6,211
11,947
136
The same guy, telling us those stories for years, with nothing released as of now, but somehow the information he provides is relevant enough that we need to believe him?
You might want to go back to what I originally wrote as it seems we misunderstood each other:
either we take the Bloomberg report entirely into consideration, or we discard it for being sketchy. Picking and choosing sounds less like validating leaks and more like having the cake and eating it too.

My position on the matter hasn't changed in the last 3 years: I think Apple is building their next gen computer in plain sight with the iPad Pro. I also think the platform of choice is iOS, and it would make sense for Apple to start making affordable productivity oriented devices based on this hardware and software ecosystem. I expect them to build up from the basic user towards the professional, not replace everything in a (relatively) short time span.

But the reasoning above is just my opinion, and the Bloomberg article claims to be a leak. All I said was that I can see why some people would choose to believe this leak with both the good and the bad bits, but I cannot understand why someone would pick and choose parts of the leak; that would be little better than wishful thinking.
 
Last edited:
  • Like
Reactions: Tlh97 and Glo.

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Hence I'm absoutley failing to see how even the A12 is slower than that pathetic dual-core i3? let alone a new 8/4 SOC extra made for a laptop? What is so different about the desktop space? Iphones have very fluid UI?

Well, remember that this theoretical Apple SOC would be up against more advanced x86 CPUs than what are currently available. Supposedly the transition would start by next year, so that means Zen 3 for AMD and possibly Alder Lake for Intel.

Also, the type of applications you run on desktop are typically much more multithreaded. It's unlikely that Apple will be able to keep their high single threaded IPC without sacrificing anything to increase multithreaded IPC, assuming they want to scale up their design to accommodate more CPU cores that are going to be utilized more rigorously than in a mobile platform.

Plus, so many desktop applications use SIMD heavily, at least in the PC world. SVE2 supposedly won't be implemented until several years from now, so an Apple desktop SoC launching next year would still be using NEON, which I would imagine would be disadvantageous.
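
To make the vector-width point concrete, here is a minimal NEON sketch (illustrative only; real SIMD-heavy code is of course far more involved). A 128-bit NEON instruction operates on 4 packed floats, whereas AVX2 on current x86 handles 8 and AVX-512 handles 16 per instruction:

```c
#include <arm_neon.h>
#include <stdio.h>

int main(void) {
    float a[4] = {1.0f, 2.0f, 3.0f, 4.0f};
    float b[4] = {10.0f, 20.0f, 30.0f, 40.0f};
    float c[4];

    float32x4_t va = vld1q_f32(a);      /* load 4 floats into a 128-bit register */
    float32x4_t vb = vld1q_f32(b);
    float32x4_t vc = vaddq_f32(va, vb); /* 4 additions in one instruction */
    vst1q_f32(c, vc);                   /* store the 4 results */

    printf("%.1f %.1f %.1f %.1f\n", c[0], c[1], c[2], c[3]);
    return 0;
}
```

Wider vectors mean more work per instruction in exactly the SIMD-heavy desktop workloads mentioned above, which is where a NEON-only design would feel the gap until SVE2 shows up.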
 

Etain05

Junior Member
Oct 6, 2018
11
22
81
I downvoted you because you seem to misinterpret the quote from the Bloomberg article.

"The transition to in-house Apple processor designs would likely begin with a new laptop because the company’s first custom Mac chips won’t be able to rival the performance Intel provides for high-end MacBook Pros, iMacs and the Mac Pro desktop computer. "

The quote doesn’t say that Apple’s chips won’t be able to rival the performance of all Intel chips in MacBook Pros, iMacs and the Mac Pro...but that they won’t be able to rival the performance of “high-end MacBook Pros, iMacs and the Mac Pro”, or, said otherwise, the very top of the line MacBook Pros, iMacs and the Mac Pro. It is obvious that the quote doesn’t refer to all MacBook Pros, iMacs and the Mac Pro models with all chip configurations, because we are talking about chips with completely different performances.

The lowest end MacBook Pros and iMacs have respectively:

Core i5 quad-core (eighth generation)
Core i3 quad‑core (eighth generation) if we consider the Retina one, or Core i5 dual‑core (seventh generation) if we consider the old non-Retina one too

and they go all the way up to:

Core i9 8‑core (ninth generation) [MacBook Pro and iMac]
Xeon W 18-core [iMac Pro]
Xeon W 28-core [Mac Pro]

Obviously an 8-big/4-small Apple chip would be far above any quad-core Intel chip, considering that a big Apple core almost matches the performance of one big Intel core (at its highest clock speed). Mark's statement would be absurd and nonsensical if it included all MacBook Pro etc. chips. So he is referring to the top of the line. And that is obviously true. An 8-big/4-small Apple chip cannot surpass the performance of an i9 8-core Intel chip, because the Intel chip has SMT, making it 16 threads, and the per-core performance of an Intel core is slightly above the performance of an Apple core (at least the one in the A13), and the small cores in Apple's supposed 12-core chip would barely equal the performance of one single big Apple core, so it'd be like 9 big Apple cores vs 8 Intel cores which have SMT. The advantage for Intel's chip is clear, without even mentioning the Xeons (of course, we'd have to take into account that Intel's chip would in no way hold the same per-core performance measured by Andrei, since that's the single-core turbo clock speed, not the all-core one, but still, Apple's chip will have a lower all-core clock speed too).
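
Putting that comparison into a rough formula (the ~25% SMT uplift and the small-core weighting are assumptions for illustration, not measured values):

```latex
T_{\mathrm{Apple}} \;\approx\; 8\,P_b + 4 \times 0.25\,P_b \;=\; 9\,P_b,
\qquad
T_{\mathrm{Intel}} \;\approx\; 8 \times 1.25\,P_x \;=\; 10\,P_x
```

With all-core per-core performance P_b roughly equal to P_x, the 8-core i9 keeps a modest multithreaded edge over the rumoured 8+4 chip, which is all the quoted sentence needs to be true for the very top configurations.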

To that you add the fact that Apple clearly does not intend to make such a long and hard transition just to get back to the starting point and offer the same performance Intel already provides, but higher performance and efficiency, to make the transition worthwhile, and it’s clear that Apple will develop chips with a higher number of (big) cores than the Intel chip it wants to replace, which is after all something that the article itself suggests.

I also downvoted you because I really cannot believe that any intelligent human being would suggest that Apple will transition just some of its Macs. That is the most nonsensical thing I have read in this entire thread. Is there really someone here crazy enough to think that Apple will spend immense amounts of energy, time, money and what not only to transition just some of its Macs to ARM, and then stop? That it will go to extreme lengths to make sure that macOS and its own Mac apps run on ARM, and then permanently support 2 architectures and ask its developers to support 2 architectures too? That is pure lunacy. It is so beyond absurd that I just have to assume these suggestions are actually trolling, and not real comments.

Apple will transition the entire Mac line (from the lowliest MacBook to the highest Mac Pro) in 3 years (but actually I’d say 2) at most, or the transition won’t happen at all. Any other suggestion seems pure fantasy to me.

There are many people here that doubted the transition would happen at all, and now because of the overwhelming evidence that it is happening, they cannot deny it any longer, and they now simply suggest that it will be a partial transition, where Apple will stop without going ARM for its top of the line too, because clearly Apple cannot compete with mighty Intel and AMD. This is just the next stage. Sooner or later they will have to accept that it is happening, and that the entire Mac line will be ARM, not that far into the future.

I also find it ridiculous that some are now denigrating SPEC2006 just because it is an old benchmark. Those people denigrating it could at the very least make the effort to describe their problems with the benchmark, or go look at results on SPEC2006, then compare them to results on SPEC2017 (which I hope no one denigrates for being an “old” benchmark) and bring some proof that a chip comparison using SPEC2006 would be incorrect because a similar chip comparison using SPEC2017 gives vastly different results (and I am specifically talking about single-core performance, since we have 0 data regarding multi-core comparisons between Apple and Intel on SPEC). Alas, the only thing they do is complain here on the forum.

Lastly, the only other possibility is that these rumoured ARM Macs are actually running iPadOS, instead of macOS, like some suggested here. I disagree with them too, but I never downvoted them because that seems at least plausible. But suggesting that Apple will make a partial transition, or that Apple’s chips (even this rumoured 8-big/4-little) are way inferior in performance compared to all Intel chips in MacBook Pros, iMacs etc. is a pure lie, when every proof we have suggests the very opposite. And I usually downvote lies.
 
  • Like
Reactions: Viknet

Glo.

Diamond Member
Apr 25, 2015
5,711
4,559
136
That's why the admins got rid of the dislike post feature. People would just bomb posts to harass individual users. Now if someone downvotes your post, it doesn't stick with you in the form of a big red bar, so it doesn't really matter.
Any voting option on any forum shows only one thing.

Personal Biases.

I never use it, because if I downvoted posts that I do not like or do not agree with, I would be acting like a butthurt five-year-old.

I do like the "Like" option on posts. It can sometimes give funny interactions, especially for insanely outlandish opinions :).
 

Glo.

Diamond Member
Apr 25, 2015
5,711
4,559
136
I'm far from an Apple or ARM fanboy but still I'm having trouble to accept this. We can see that A12/A13 are very fast single-threaded / IPC wise at least in the benchmarks available.

So why should they suddenly still be much slower than x86 when but into a laptop/desktop? Even if the SOC has amperage limits and clock speed limits it would still run at least as fast as in an ipad pro meaning higher ST performance than anything from x86. On top of that entry-level x86 (macbook air) is still a dual-core with 1.1ghz base and 3.2 turbo. So even the frequency isn't that much higher than A13.

Hence I'm absoutley failing to see how even the A12 is slower than that pathetic dual-core i3? let alone a new 8/4 SOC extra made for a laptop? What is so different about the desktop space? Iphones have very fluid UI?
We have seen ONLY a SPEC2006 comparison between those cores. Not the actual performance of the cores in long-running multithreaded workloads.

I've seen multiple videos of video editing on iPads vs MBPs showing that the iPad is actually faster at transcoding. But it was not due to the cores themselves. It was due to the iPads having hardware encoders that the software on the iPads is capable of using.

But what about compiling code on an iPad? Are the cores fast enough to compete with Intel? What about stuff like this, which requires sheer compute horsepower that is sustainable in long workloads, and not just the short bursts of instructions where Apple's ARM design excels?

Apple designed their architecture (and the iPad's software) for those short bursts of single-threaded, lightly threaded code execution. What will happen when they are exposed to much more complex, long-running compute work?

You see why Apple may want to use ARM for basic, entry-level computers, but not for MacBook Pros, higher-end iMacs and the Mac Pro? Because the workloads are completely different.

Everything that "normal" people do today is lightly threaded: web browsing, watching videos, listening to music; even gaming is lightly threaded.

But stuff like editing a video for Netflix, 3DCinema, creating complex 3D animations, Music Creation, is a completely different task.
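
The split being described here is essentially Amdahl's law: with a parallel fraction p and N cores,

```latex
S(N) \;=\; \frac{1}{(1 - p) + \frac{p}{N}}
```

For typical light desktop use (say p ≈ 0.5, an illustrative value) eight cores buy barely a 1.8x speedup, while a render or transcode with p ≈ 0.95 gets closer to 5.9x, so sustained many-core throughput only pays off in the second class of workloads.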

Apple is smart enough to see the difference in workloads, and is smart enough and capable enough to offer entry-level products for people who do not need x86 to handle those long-running, highly threaded workloads, because they do not do anything like that on their computing device. And a move from x86 to ARM on those entry-level products will allow Apple to increase margins, which is the best thing in the entire world! For Apple...

Those are the reasons why I believe the picture is very simple:
Apple is releasing two, maybe three ARM-based computers, for people who do not need that sheer horsepower. They are releasing entry-level products in the main product categories: iMac (desktop), MacBook (laptop). Both will be passively cooled (how cool is a passively cooled iMac?! [sorry, I had to...]).

But x86 will not be fully replaced by ARM in the Apple ecosystem. x86 will have its place in the Apple ecosystem, and in computing in general.

I'm not sure it's going to be Intel, though... ;)
 
  • Haha
Reactions: mikegg

Glo.

Diamond Member
Apr 25, 2015
5,711
4,559
136
I downvoted you because you seem to misinterpret the quote from the Bloomberg article.

"The transition to in-house Apple processor designs would likely begin with a new laptop because the company’s first custom Mac chips won’t be able to rival the performance Intel provides for high-end MacBook Pros, iMacs and the Mac Pro desktop computer. "

The quote doesn’t say that Apple’s chips won’t be able to rival the performance of all Intel chips in MacBook Pros, iMacs and the Mac Pro...but that they won’t be able to rival the performance of “high-end MacBook Pros, iMacs and the Mac Pro”, or, said otherwise, the very top of the line MacBook Pros, iMacs and the Mac Pro. It is obvious that the quote doesn’t refer to all MacBook Pros, iMacs and the Mac Pro models with all chip configurations, because we are talking about chips with completely different performances.

The lowest end MacBook Pros and iMacs have respectively:

Core i5 quad-core (eighth generation)
Core i3 quad‑core (eighth generation) if we consider the Retina one, or Core i5 dual‑core (seventh generation) if we consider the old non-Retina one too

and they go all the way up to:

Core i9 8‑core (ninth generation) [MacBook Pro and iMac]
Xeon W 18-core [iMac Pro]
Xeon W 28-core [Mac Pro]
And you realize that above that MacBook in Apple's lineup will be the MacBook Air with a dual-core Core i3 and quad-core Core i5, and a 14-inch MacBook Pro with whatever the hell Apple will want to put in it that fits the thermal envelope? You do realize that above that 8/4 design in the iMac lineup will be whatever 10th or 11th generation Intel CPUs, including Core i5s which have fewer than 8 cores?

You do realize that the entry-level iMac currently has a dual-core, mobile, 17 W ULV CPU from two(?) generations ago?

What if the only thing you are experiencing is your own cognitive dissonance, because Apple's ARM designs simply cannot hold a candle to the workloads that x86 has to handle?
 

dacostafilipe

Senior member
Oct 10, 2013
772
244
116
But the reasoning above is just my opinion, and the Bloomberg article claims to be a leak. All I said was that I can see why some people would choose to believe this leak with both the good and the bad bits, but I cannot understand why someone would pick and choose parts of the leak; that would be little better than wishful thinking.

I certainly respect your opinion, but I have to disagree on how one should evaluate the article.

Parts of the story are from other people/sources on the web (codenames) and others are just too easy to guess (core count increase).

So, IMO, I'm okay with believing parts of the article but not others (dates, products, ...)
 

moinmoin

Diamond Member
Jun 1, 2017
4,954
7,672
136
There is a good reason why nobody in the industry makes non-mobile chips on mobile process.
That's actually incorrect: For all Zen family dies up to now AMD actually uses process nodes originally intended for mobile. So AMD already does the balancing act, using high density nodes for high energy efficiency as well as pushing that to scale up to frequencies not seen in mobile. We had a thread on this topic wrt TSMC's N7.

That Apple hasn't designed their SoCs in a similar way so far just tells us that they saw no worth in doing so, nothing else.
 
  • Like
Reactions: lightmanek

Glo.

Diamond Member
Apr 25, 2015
5,711
4,559
136
That's actually incorrect: For all Zen family dies up to now AMD actually uses process nodes originally intended for mobile. So AMD already does the balancing act, using high density nodes for high energy efficiency as well as pushing that to scale up to frequencies not seen in mobile. We had a thread on this topic wrt TSMC's N7.

That Apple hasn't designed their SoCs in a similar way so far just tells us that they saw no worth in doing so, nothing else.
Uh, AMD uses 7.5T libraries, which is the high-performance variant. Apple uses 6T libraries, which are the high-density/low-power variant of the N7 process...
 

moinmoin

Diamond Member
Jun 1, 2017
4,954
7,672
136
Uh, AMD uses 7.5T libraries, which is the high-performance variant. Apple uses 6T libraries, which are the high-density/low-power variant of the N7 process...
That's exactly where you are wrong, AMD used the 6T libraries for Zen 2. Read the thread I linked.

Specifically @Vattila's post where he also included following two official slides:

[Attached slides: photo007_o.jpg, photo013_o7mjm8.jpg]
 
Last edited:
  • Like
Reactions: lightmanek

beginner99

Diamond Member
Jun 2, 2009
5,210
1,580
136
Well, remember that this theoretical Apple SOC would be up against more advanced x86 CPUs than what are currently available. Supposedly the transition would start by next year, so that means Zen 3 for AMD and possibly Alder Lake for Intel.

Also, the type of applications you run on desktop are typically much more multithreaded. It's unlikely that Apple will be able to keep their high single threaded IPC without sacrificing anything to increase multithreaded IPC, assuming they want to scale up their design to accommodate more CPU cores that are going to be utilized more rigorously than in a mobile platform.

Plus, so many desktop applications use SIMD heavily, at least in PC world. SVE2 supposedly won't be implemented until several years from now, so that means an Apple desktop SOC launching next year would still be using Neon which would be disadvantageous I would imagine.

Currently the low-end MacBook Air is dual-core. So maybe the Apple chip will go against a quad-core, but it would still have double the cores. Even then I can't see an 8/4 losing to a low-end MacBook Air, especially not in MT.

I think Apple can sacrifice some ST speed and still stay clearly ahead. We are talking mobile here. The base frequency of the Intel i3 in the low-end MacBook Air is 1.1 GHz!!! Apple could even reach clock parity, at least in MT.

Do browsers and office apps really use SIMD? I doubt it...

You haven't convinced me, and I would still be shocked if such an 8/4 Apple SoC didn't obliterate Intel's 4-core mobile CPUs. That would IMHO be a huge failure from Apple.


We have seen ONLY a SPEC2006 comparison between those cores. Not the actual performance of the cores in long-running multithreaded workloads.

But stuff like editing a video for Netflix, 3DCinema, creating complex 3D animations, Music Creation, is a completely different task.

True, but who runs that stuff on a MacBook Air regularly? My point was about how a dual- or quad-core Intel mobile CPU could beat an 8/4-core Apple SoC. I simply don't see it. Hence this new MacBook won't be the lowest end.
 

Ajay

Lifer
Jan 8, 2001
15,468
7,872
136
That's why the admins got rid of the dislike post feature. People would just bomb posts to harass individual users. Now if someone downvotes your post, it doesn't stick with you in the form of a big red bar, so it doesn't really matter.
Yeah, I was one of those who complained vociferously in the feedback forum after some pencil neck went and disliked a score of my posts because I disagreed with him.
Now, I've also been downvoted like Mark. Social media 'features' FTW :rolleyes:
 

Ajay

Lifer
Jan 8, 2001
15,468
7,872
136
That's exactly where you are wrong, AMD used the 6T libraries for Zen 2. Read the thread I linked.

Specifically @Vattila's post where he also included following two official slides:

[Attached slides: photo007_o.jpg, photo013_o7mjm8.jpg]
So that's a bit counter-intuitive. Typically higher track height allows for higher drive currents and faster clocks. I suppose adding more fins per cell in planar fashion would achieve the same - perhaps that was the best choice for using the feature size reduction of 7nm. Also, what is C sub ac? Capacitive induced impedance?
 

moinmoin

Diamond Member
Jun 1, 2017
4,954
7,672
136
So that's a bit counter-intuitive.
I'll just quote @coercitiv's answer to that linked post:
In hindsight it makes sense: Rome came with a big jump in core count, hence power and area were definitely the priority given the rather low clocks they needed to saturate TDP. Renoir is also about power & density. That leaves desktop in an awkward place, but no more than it's been in years.
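
To put a rough number on that (the TDP and base clock are the published 64-core EPYC 7742 figures; the IO-die share is an assumption):

```latex
\frac{225\,\mathrm{W} - P_{\mathrm{IO}}}{64\ \mathrm{cores}} \;\approx\; 2\ \mathrm{to}\ 3\ \mathrm{W\ per\ core}
```

A budget of a few watts per core only supports clocks near the 2.25 GHz base, so a dense, efficiency-oriented library is the sensible choice for that product.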
 

randomhero

Member
Apr 28, 2020
181
247
86
So that's a bit counter-intuitive. Typically higher track height allows for higher drive currents and faster clocks. I suppose adding more fins per cell in planar fashion would achieve the same - perhaps that was the best choice for using the feature size reduction of 7nm. Also, what is C sub ac? Capacitive induced impedance?
I remember from an interview with one of AMD's higher-ups (Norrod, I think) that they expected lower clocks than they got with 7nm. It was assumed that AMD used HP libraries at the time.
So I understand your confusion. Doubly so.
 
  • Like
Reactions: Ajay

NostaSeronx

Diamond Member
Sep 18, 2011
3,686
1,221
136
Also, what is C sub ac? Capacitive induced impedance?
"A node capacitance, Cac, comprises both the switched, or ac, capacitance, and the effective capacitance resulting from crossover current."
"Therefore, what is needed in order to obtain a real-time estimation of the chip's power consumption is the operational supply voltage, which is set digitally, the operational frequency, which is known, the fused-in leakage value found during testing, and a measurement of the number of nodes in the design switched in a particular clock cycle along with the node capacitance, Cac. The latter term, Cac, is not a straight-forward value to measure on a semiconductor chip."

Basically, leakage at the pipeline stage.
 

Ajay

Lifer
Jan 8, 2001
15,468
7,872
136
"A node capacitance, Cac, comprises both the switched, or ac, capacitance, and the effective capacitance resulting from crossover current."
"Therefore, what is needed in order to obtain a real-time estimation of the chip's power consumption is the operational supply voltage, which is set digitally, the operational frequency, which is known, the fused-in leakage value found during testing, and a measurement of the number of nodes in the design switched in a particular clock cycle along with the node capacitance, Cac. The latter term, Cac, is not a straight-forward value to measure on a semiconductor chip."

Basically, leakage at the pipeline stage.
That's some damn fine google-fu. I need to retake a digital electronics class because I can't envision how this leakage is occurring as a physical system (crossover current should only affect the longer metal lines, not the flip-flops themselves, IIRC).
 

Hitman928

Diamond Member
Apr 15, 2012
5,323
8,009
136
That's some damn fine google-fu. I need to retake a digital electronics class because I can't envision how this leakage is occuring as a physical system (crossover current should only affect the longer metal lines, not the flip flops themselves, IIRC).

Cac is capacitance in the system seen during dynamic transitions. This includes the parasitic capacitance of the transistors (mainly gate capacitance) as well as the routing. So when you have a logic gate, the capacitance of the following gate (plus routing) is what you have to drive in order to actually pass your signal. Again, this capacitance is parasitic and induces power loss (leakage). Additionally, the faster you want to go, the more dynamic leakage you will have due to the frequency response of capacitive impedances. This is different from static leakage which is the leakage that occurs while the logic gate is in steady state or "holding" the signal.
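
What the quoted passage is describing is the textbook CMOS power model:

```latex
P_{\mathrm{dyn}} \;\approx\; \alpha\, C_{ac}\, V_{dd}^{2}\, f,
\qquad
P_{\mathrm{static}} \;\approx\; V_{dd}\, I_{\mathrm{leak}}
```

with alpha the activity factor (the fraction of nodes switching per cycle), C_ac the switched node capacitance discussed above, V_dd the supply voltage and f the clock. That is exactly why the on-chip estimator needs the voltage, the frequency, the fused-in leakage value, and a count of switched nodes times C_ac.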
 
  • Like
Reactions: lightmanek and Ajay