Discussion Intel current and future Lakes & Rapids thread


Exist50

Platinum Member
Aug 18, 2016
2,445
3,043
136
The short answer is it is a current amplifier used for impedance matching between circuits or for long wire runs. It is more complicated than that, as there are multiple buffer types and some may amplify voltage rather than current, but that's the simplest way to explain it that covers most cases.
Yeah, current amplifier doesn't really work in this particular case. I'd just leave it at saying that it boosts the signal without changing its value. Commonly implemented as just a pair of inverters.
 
  • Like
Reactions: Tlh97 and Hitman928

Hitman928

Diamond Member
Apr 15, 2012
5,262
7,890
136
What's a buffer and what is it used for?

As @Exist50 said, in digital logic designs it is typically implemented as a pair of inverters. An inverter is a logic gate (circuit building block) traditionally made up of 2 transistors that switches the signal from 1 (high) to 0 (low) or vice versa. So, if you put two inverters in series and size them correctly, you get the same data at the output of the 2 inverters as the input, with a small time delay but a much cleaner signal. The reason buffers are needed is that each logic gate (there are many different kinds) needs to be able to drive the next stage at whatever frequency you are targeting. In digital circuits, the next stage is going to be another logic gate (ignoring some exceptions) made up of transistors.

Transistors, however, aren't perfect switches and they have gate capacitance. So in basic terms, every transistor has a capacitor at its input that has to be driven, or filled up, to pass the signal. The bigger the transistor, the bigger the capacitor. The faster the frequency, the less time you have to fill the capacitors of the next stage to pass the signal in time. The more transistors in the logic gate, the more total capacitance. There are also resistances in play, but we'll ignore those for now; when talking about digital circuits, the capacitors dominate the equation for the most part anyway. So, if your logic gate can't drive the next stage fast enough for your target frequency, you can add a buffer (or multiple) to drive (or fill up the capacitor of) the next stage faster. The buffer itself will have gate capacitance too, but it will be much smaller than the stage you are using it to drive, so your original logic circuit can fill it up quickly, and then the buffer propagates the signal cleanly to the next stage.

I am pasting signals below (please forgive the hand-drawn quality) as a conceptual example. If black is the ideal signal you want to pass (going from low to high) but red is the best your logic gate could do, then blue would be the buffered signal. Notice that the blue signal actually starts to transition after the red signal (due to the delay of the signal through the buffer), but the signal is much cleaner at the output, and ultimately the signal is usable by the next stage sooner than without the buffer because of how much quicker the buffer is able to drive (or fill) the next stage. Hope this helps!

1679407602163.png
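If you want to put rough numbers on that picture, a toy RC model captures it. All the resistances and capacitances below are made-up illustrative values (not from any real process), but they show why inserting a buffer can get the big load charged sooner even though the buffer adds its own delay:

```python
import math

# Hypothetical illustrative values, not from any real process:
R_WEAK = 10e3     # output resistance of the small original gate (ohms)
R_STRONG = 1e3    # output resistance of the buffer's large output stage (ohms)
C_LOAD = 50e-15   # gate capacitance of the big next stage (farads)
C_BUF = 5e-15     # small input capacitance of the buffer itself (farads)

def t_50(r, c):
    """Time for an RC node to charge to 50% of Vdd: t = R*C*ln(2)."""
    return r * c * math.log(2)

# Weak gate drives the big load directly:
direct = t_50(R_WEAK, C_LOAD)
# Weak gate charges only the buffer's small input, then the buffer's
# strong output stage drives the big load:
buffered = t_50(R_WEAK, C_BUF) + t_50(R_STRONG, C_LOAD)

print(f"direct:   {direct * 1e12:.0f} ps")   # ~347 ps
print(f"buffered: {buffered * 1e12:.0f} ps") # ~69 ps
```

Same idea as the drawing: the buffered edge starts later, but it crosses the switching threshold of the next stage first.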
 

Luddite

Senior member
Nov 24, 2003
232
3
81
Will Intel try to calm the temps on the Raptor Lake refresh? Or will it be more house-lights-flickering power draw and space-heater thermals? Also, do you think the Raptor Lake refresh will get cranked to 6.0 GHz on the i9?
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
That would be great, but isn't ARL designed for Intel 20A? But you also have to keep in mind that Intel has never released a new node within 12 months of the last one. The closest they ever came in recent history was 130nm to 90nm, and that was 13 months.

How did you get 13 months for 90nm? Even if you take 0.13u for Northwood, it was Jan 2002. That's 25 months. Never mind 13....

0.13u - Pentium III-M: Summer 2001
90nm - Prescott, Feb 2004 - 30-31 months (90nm had issues)
65nm - Pentium 955EE, late December 2005 - 23 months
45nm - Core 2 QX9650, late October 2007 - 22 months
32nm - Core i7-980X, March 2010 - 29 months
22nm - Core i7-3770K, April 2012 - 24 months (22nm also had delays)
14nm - Core M, September 2014 - 29 months (14nm was 6+ months delayed)
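As a sanity check, the gaps recompute straightforwardly from the launch dates (month-level rounding, so a figure can come out a month off from the ones quoted; note Core M actually launched in September 2014, and the exact days below are approximations):

```python
from datetime import date

# First product per node, with approximate launch dates
launches = [
    ("0.13u", date(2001, 7, 1)),    # Pentium III-M, "summer 2001"
    ("90nm",  date(2004, 2, 1)),    # Prescott
    ("65nm",  date(2005, 12, 27)),  # Pentium 955EE
    ("45nm",  date(2007, 10, 29)),  # Core 2 QX9650
    ("32nm",  date(2010, 3, 16)),   # Core i7-980X
    ("22nm",  date(2012, 4, 29)),   # Core i7-3770K
    ("14nm",  date(2014, 9, 5)),    # Core M
]

def months_between(a, b):
    # Calendar-month difference, ignoring the day of the month
    return (b.year - a.year) * 12 + (b.month - a.month)

for (prev, d0), (node, d1) in zip(launches, launches[1:]):
    print(f"{prev} -> {node}: {months_between(d0, d1)} months")
```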

It was also rumored way back that 22nm was having issues and they were considering another "bridge" after Sandy Bridge before the 22nm transition. Someone also said that 22nm had serious, serious issues but things changed rapidly in the last 6 months of development.
 
Last edited:
  • Like
Reactions: lightmanek

Hulk

Diamond Member
Oct 9, 1999
4,224
2,013
136
How did you get 13 months for 90nm? Even if you take 0.13u for Northwood, it was Jan 2002. That's 25 months. Never mind 13....

0.13u - Pentium III-M: Summer 2001
90nm - Prescott, Feb 2004 - 30-31 months (90nm had issues)
65nm - Pentium 955EE, late December 2005 - 23 months
45nm - Core 2 QX9650, late October 2007 - 22 months
32nm - Core i7-980X, March 2010 - 29 months
22nm - Core i7-3770K, April 2012 - 24 months (22nm also had delays)
14nm - Core M, September 2014 - 29 months (14nm was 6+ months delayed)

It was also rumored way back that 22nm was having issues and they were considering another "bridge" after Sandy Bridge before the 22nm transition. Someone also said that 22nm had serious, serious issues but things changed rapidly in the last 6 months of development.

That was a mistake.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
However, if they are comparing the performance of the final iteration of each process, and there are interim "+" steps, then those numbers seem quite reasonable, as I posted a comparison of Intel nodes using the max clocks of each Intel node.

Ok, but those assumptions are hypothetical, and you are assuming the worst part of their history will continue (14nm to 10nm being 4 years). I guess if that's your stance I can't argue with you.

Also, the IEDM/ISSCC process numbers are comparing with the latest iterations of the process.

Also @Exist50

While transistor drive numbers correlated closely with clocks back in the days when clocks were far lower, that's not true anymore. So I can believe their numbers are true, but you won't see a 20% gain turning a 5GHz chip into a 6GHz one. Instead you'll see blocks within the core being 20% faster, or the gain could be used to save power instead.

There's a reason the world record for clocks is only 9GHz: at 5GHz you are already past the point where the performance of the individual transistor alone determines the final frequency. Design and heat density all matter.


9GHz, but at -250C (Celsius), Hyper-Threading disabled, a lucky-draw chip, all 16 E-cores disabled. It doesn't count!

If you think in the simple manner that drive current = clocks, then 9GHz would be doable with 18A: 5GHz = Intel 7, 6GHz = Intel 4, 7.08GHz = Intel 3, 8.142GHz = 20A, 9GHz = 18A.

Instead, an 18A chip will keep similar clocks but have larger caches, bigger OoOE buffers, and better instruction latency. That's not as exciting but still very important. That's why a simple ARM chip can do 20% higher clocks, but you can see even from the Intel 4 presentation that at clocks just under 4GHz, the gain is reduced to 10% or so.
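(That ladder is just compounding the rough per-node gains Intel has quoted publicly, about 20/18/15/10%, which are nominally perf-per-watt figures rather than clock promises:)

```python
# Quoted per-node gains (approximate marketing figures, not clock promises)
gains = [("Intel 4", 0.20), ("Intel 3", 0.18), ("20A", 0.15), ("18A", 0.10)]

clock = 5.0  # GHz, the Intel 7 baseline from the post
for node, g in gains:
    clock *= 1 + g
    print(f"{node}: {clock:.3f} GHz")
# Ends near 9 GHz (~8.956), matching the naive "drive current = clocks" ladder.
```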

But I bet that for things like future E-cores, iGPUs, and VPUs, you'll see exciting gains. It's like what they say about computers: "democratization of compute." The guys who are ahead run into the fateful end of scaling soon, so the much cheaper chips, and those that use far less power, aren't as far behind as they used to be, making powerful computers common. Like a Lamborghini that arrives at the traffic light 2 seconds earlier, but over the course of an hour-long trip is maybe 1% ahead in a car that's 10x as expensive and uses 3x the fuel.

Back in the day, yes, when they said 20% better transistors, it translated roughly into clock speed gains. But even then, if you spend the transistor gains on clocks, you get a product with roughly the same perf/clock.
 
Last edited:
  • Like
Reactions: lightmanek

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
Based on the fact that the Atom cores in Meteor Lake seemed to scale pretty much the same as (slightly better than) the Redwood Cove cores, I don't think the Atom cores scaled any better on their nodes.

Atoms also get greater gains in performance/clock. Greater gains with better density scaling are what high-end CPUs would "kill" for, but that's just a fact of life. Also, 10% better density every generation compounds quickly over time. Alder/Raptor is just a sign of things to come with Hybrid. Each successive generation will widen the advantage over the non-hybrid ones. I bet it is one of the reasons why Apple's chips became so good today.

Let's say over 3 generations, with each generation getting a 5% advantage in favor of the E-cores, they end up 15% faster relative to the P-cores, and also 15% smaller in relation to the P's. Raptor's E-cores are already a bit better compared to Alder's. That's huge.
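(Strictly, a hypothetical 5% advantage per generation compounds, so after three generations it's closer to 16% than the additive 15%:)

```python
# Compounding a hypothetical 5%-per-generation E-core advantage
advantage = 1.0
for generation in range(3):
    advantage *= 1.05
print(f"{(advantage - 1) * 100:.1f}%")  # 15.8%
```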
 

Exist50

Platinum Member
Aug 18, 2016
2,445
3,043
136
While transistor drive numbers correlated closely with clocks back in the days when clocks were far lower, that's not true anymore. So I can believe their numbers are true, but you won't see a 20% gain turning a 5GHz chip into a 6GHz one. Instead you'll see blocks within the core being 20% faster, or the gain could be used to save power instead.

There's a reason the world record for clocks is only 9GHz: at 5GHz you are already past the point where the performance of the individual transistor alone determines the final frequency. Design and heat density all matter.
When fabs quote performance improvements, those do genuinely represent clock speed, but the comparisons are usually very stretched for the best number. For example, the gains are greater at low voltage, so they'll quote results from e.g. 0.65V, often compare libraries that aren't quite 1:1, etc. Lots of BS there.
 

Hulk

Diamond Member
Oct 9, 1999
4,224
2,013
136
Ok, but those assumptions are hypothetical, and you are assuming the worst part of their history will continue (14nm to 10nm being 4 years). I guess if that's your stance I can't argue with you.

I don't consider this back-and-forth arguing, just discussing a topic that interests both of us.

Looking at your timeline, we see ~2 years between nodes until 10nm, at which point it's either 42 months to Palm Cove or 57 months to Alder Lake. If we consider Palm Cove the "release" for 10nm, then that's 3-1/2 years for 10nm and nearly 5 years for Intel 4 (and counting).

If we consider Ice Lake the release of 10nm then that's nearly 5 years for 10nm and what would be 4 years for Intel 4 if we're looking at an August release.

So we have two progressions to consider.

About 2 years per node until 2015 then 3-1/2 years then 4 years (and counting).

Or

About 2 years per node until 2015 then 5 years and then 4 years (if we have an August release).

If you extrapolate from the 1st timeline, then 3 to 4 years seems like a reasonable estimate for what comes next after Intel 4, which would put it in '26 or '27 if we see Intel 4 this summer.

If you extrapolate from the second, then 10nm could be seen as an outlier and we could maybe see a return to 3 years for the next node.

Either way, the historical data seems to suggest Intel needs more time to work out the kinks in newer, smaller nodes than in the past, and 2 years between nodes isn't feasible anymore. The reality seems more like 3 or 4 years between nodes.

Based on this data, and assuming Intel 4 is in volume this summer or fall (which also might be a stretch), the next node most likely won't appear until fall of 2026.

I'm simply looking at historical data to make predictions based on something more than a feeling. It's simply a "game" of prediction that's fun to look back on.

Let me ask you this. When do you think Intel 4 will be released? And by released I mean a normal release where you can readily buy parts, be they mobile and/or desktop, not a Palm Cove-style paper launch.

When do you think the next new node release will be, and will that be Intel 20A?
 
  • Like
Reactions: Tlh97 and Kepler_L2

Hulk

Diamond Member
Oct 9, 1999
4,224
2,013
136
Raja's gone. Not exactly a surprise after he was kicked out of a leadership role, but it's interesting they still haven't found a replacement graphics lead.

While I understand a manager does not have to have a low-level understanding of what the people being managed actually do, for a large and complex operation like oversight of Intel's fabs, wouldn't a solid understanding of fabrication help when it comes to making decisions as to where and when to allocate various resources, which really is the prime role of a leader?

How would you describe the difference in technical knowledge between Stuart Pann and Raja Koduri when it comes to microprocessor design and fabrication?

The way I see it, this guy is going to have leaders from various parts of the fab operation coming to him and trying to convince him of what his priorities need to be. Knowing he doesn't really know what they are talking about, they can pretty much say anything technical to convince him that their department needs more resources or the whole project is going to bottleneck. Seems like having someone with a good understanding of fabs might have been a good idea.

From what I've seen throughout my life, most people with a technical background (meaning at least 4 semesters of calculus and physics, plus whatever science degree or MS they may have) pick up economics very quickly. Coming from an economics/accounting background and picking up the tech side is not so easy.

I'll give you a personal example. In college as MEs we had to take engineering economics. It was a fast-paced course and honestly it was pretty easy. We learned about using Lagrange multipliers to optimize various economic problems. Now, Lagrange multipliers are generally taught in Calc III, which we had all taken. Business students would never see Calc II, much less Calc III, so how would they ever learn about Lagrange multipliers, which are an important tool in business? The point is that a technical background can let you jump to other fields very quickly.
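For anyone who hasn't seen one, here's the kind of toy problem such a course covers: maximize Cobb-Douglas output U(x, y) = x^a * y^(1-a) on a fixed budget. The Lagrange conditions grad U = lambda * grad(constraint) collapse to a simple closed form. (The numbers below are a made-up example, not from any real course.)

```python
# Maximize U(x, y) = x**a * y**(1 - a) subject to px*x + py*y = budget.
# Solving the Lagrange conditions gives x* = a*budget/px, y* = (1-a)*budget/py:
# the optimum spends fraction a of the budget on x and 1-a on y.

def optimal_allocation(a, px, py, budget):
    return a * budget / px, (1 - a) * budget / py

def u(x, y, a=0.6):
    return x ** a * y ** (1 - a)

x, y = optimal_allocation(a=0.6, px=2.0, py=5.0, budget=100.0)
print(x, y)  # 30.0 8.0 -- 60 spent on x, 40 on y, matching the exponents

# Sanity check: nudging spending along the budget line only lowers output
assert u(x, y) > u(29.5, 8.2) and u(x, y) > u(30.5, 7.8)
```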

Now I also understand that being a "people person" is also a big part of management and "techies" are not generally people people so that is a good counter argument.

I like to see people with tech backgrounds running tech companies. Lisa Su is a great example. Does she surround herself with other tech people or business types? It's an honest question, not trying to be snarky.
 

Kocicak

Senior member
Jan 17, 2019
982
973
136
I do not think that Mr. Pann is inept.

...

Pann previously served as chief business transformation officer and general manager of Intel’s Corporate Planning Group. As part of this role, he established the company’s IDM 2.0 Acceleration Office (IAO) to guide the implementation of an internal foundry model. IAO closely collaborates with all Intel business units and functional teams to support the company’s internal foundry model.

In June 2021, Pann returned to Intel, where he had started his career in 1981. Prior to his return, he was chief supply chain officer and chief information officer at HP for six years. At HP, Pann was responsible for the company’s supply chain, which delivers nearly 100 million products to customers each year.

Before joining HP in July 2014, Pann served as corporate vice president and general manager of Intel’s Business Management Group, where he was responsible for pricing, revenue and forecasting functions for the company’s microprocessor and chipset operations. He also co-managed the geographic operations teams for the Intel sales force and was responsible for order management and external-facing supply chain programs. Pann held several management positions within the company’s sales organization before moving into an operations role in 1999 as the director of Microprocessor Marketing and Business Planning.

Pann earned a bachelor’s degree in electrical engineering from Michigan Technological University and an MBA from the University of Michigan.


BTW when I saw (or read) an interview with Dr. Kelleher I got very worried for some reason... She did not appear to be a good person to lead technology development to me. But perhaps her qualities are somewhat hidden and not easily apparent.
 
Last edited:

Hulk

Diamond Member
Oct 9, 1999
4,224
2,013
136
I do not think that Mr. Pann is inept.




BTW when I saw (or read) an interview with Dr. Kelleher I got very worried for some reason... She did not appear to be a good person to lead technology development to me. But perhaps her qualities are somewhat hidden and not easily apparent.

It's a long way from "inept" to not being that special person with the combination of talents to put Intel's foundry division back on track. My comment was meant to suggest the latter.
 

DrMrLordX

Lifer
Apr 27, 2000
21,629
10,841
136
Raja's gone. Not exactly a surprise after he was kicked out of a leadership role, but it's interesting they still haven't found a replacement graphics lead.


I said this in another thread but:

noooooooooooo Raja!

How could they?

Seriously though, it would be funny if his replacement was the woman (whose name I forget) responsible for running Intel's graphics driver team.
 
Jul 27, 2020
16,279
10,316
106
Seriously though, it would be funny if his replacement was the woman (whose name I forget) responsible for running Intel's graphics driver team.
Lisa Pearce. She's probably more of a software person, so I don't think she can replace Raja. Unless Pat is like: Hey Lisa, just be the interim chief of the consumer GPU division until we find someone willing to jump ship from Nvidia or Apple. Don't worry, Intel has you till you retire of old age!
 

Hulk

Diamond Member
Oct 9, 1999
4,224
2,013
136
What if they already are on track in technology itself, and now they need somebody to organize and plan the manufacturing the best?

Then they have the right guy. We'll see how the new node rollouts go.
 

DrMrLordX

Lifer
Apr 27, 2000
21,629
10,841
136
Lisa Pearce. She's probably more of a software person so don't think she can replace Raja. Unless Pat is like, Hey Lisa. Just be the interim chief of the consumer GPU division until we find someone willing to jump ship from Nvidia or Apple. Don't worry, Intel has you till you retire of old age!

Yeah, her. I don't see why not, unless it would distract her from her current duties (and she has a lot on her plate).
 

BorisTheBlade82

Senior member
May 1, 2020
663
1,014
106
B) the interconnect cost for Turin is relatively high (afterall they use way more chiplets, and I'm pretty sure iFOP is less efficient than EMIB)
Of course you are absolutely right: EMIB is at least 5x more efficient than IFoP.
But you need to be aware that interconnect efficiency is measured in pJ/bit (transferred). So while IFoP is around 2 pJ/bit and EMIB only around 0.3 pJ/bit, the difference in bandwidth is even bigger: IFoP has only 64 GByte/s per link, while SPR has what, around 1 TByte/s per tile connection, in order to get at least a bit of performance from accessing remote L3?

So it is not unlikely that in absolute numbers SPR has a higher interconnect power consumption than even the top-end EPYC Genoas.

/edit: Oh, and also the jury is still out on whether AMD will use something other than IFoP for Zen 5.
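Rough math behind that claim, treating all the figures as the approximations they are (and assuming SPR's per-tile bandwidth really is on the order of 1 TByte/s):

```python
def link_power_watts(pj_per_bit, gbytes_per_s):
    """Interconnect power = energy per transferred bit * bits per second."""
    bits_per_s = gbytes_per_s * 1e9 * 8
    return pj_per_bit * 1e-12 * bits_per_s

ifop = link_power_watts(2.0, 64)     # one Zen CCD IFoP link at ~2 pJ/bit
emib = link_power_watts(0.3, 1000)   # one SPR tile connection at ~0.3 pJ/bit

print(f"IFoP link: {ifop:.2f} W")  # ~1.02 W
print(f"EMIB link: {emib:.2f} W")  # ~2.40 W
```

So even at roughly 6x better pJ/bit, the far higher bandwidth can make SPR's absolute interconnect power the larger number.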
 
Last edited:

Exist50

Platinum Member
Aug 18, 2016
2,445
3,043
136
Who would want to lead a ship with a hole?
There's always some upper-mid level manager who thinks if they were put in charge, they could do just as good a job as Jensen etc. Or someone who wants more autonomy, or just flat out better pay. The trouble is finding someone both willing to move and qualified for the role.
 

Exist50

Platinum Member
Aug 18, 2016
2,445
3,043
136
BTW when I saw (or read) an interview with Dr. Kelleher I got very worried for some reason... She did not appear to be a good person to lead technology development to me. But perhaps her qualities are somewhat hidden and not easily apparent.
I don't know anyone directly tied to Intel's fab side, but I haven't heard anything bad about Kelleher. She's usually described as very direct, almost blunt. Which is probably a good thing, given the history of Intel's fabs.
What if they already are on track in technology itself, and now they need somebody to organize and plan the manufacturing the best?
I think that's probably closer to the job description. Nominally, he shouldn't be responsible for the technical execution of the fabs, but rather getting and coordinating with IFS customers. On paper, doesn't sound like a bad choice, but time will tell.
 

BorisTheBlade82

Senior member
May 1, 2020
663
1,014
106
Favourite quote:
There’s really something to be said about working smarter, not harder. AMD avoids the problem of building an interconnect that can deliver high bandwidth and low latency to a ton of connected clients. This makes CCD design simpler, and lets EPYC go from 32 cores to 64 to 96 without worrying about shared cache bandwidth.