News Intel updated Loihi, and nobody's talking about it (except Dr. Cutress)

DrMrLordX

Lifer
Apr 27, 2000
21,620
10,830
136

Okay, nobody around here anyway.

Pretty solid journalism from Dr. Cutress, though I am disappointed that he failed to mention that Intel 4 is actually Intel's old 7nm process renamed, and also failed to mention why anyone should be surprised that Loihi 2 would be fabbed on a pre-production version of Intel 4. Hint: IT'S DELAYED. Intel 4 apparently isn't even suitable for Ponte Vecchio, since the relevant chiplets have been migrated to TSMC N5 (or N3; I'm having trouble keeping track). Context is important, folks. But back to Loihi 2.

I'm really confused as to what Intel hopes to accomplish with Loihi 2 (or what they hoped to accomplish with Loihi in the first place). Intel owns Habana Labs. Habana Labs is currently selling training solutions based on their Gaudi products:


It seems like Loihi and Loihi 2 exist in parallel with Gaudi, and that Intel is . . . competing with itself?

On top of all that, Intel's efforts with Ponte Vecchio point to the very distinct possibility that, should Ponte Vecchio finally bear fruit (on TSMC nodes no less, lulz), Raja's team will render both Gaudi AND Loihi 2 obsolete by producing a dGPU-based training product that can outperform all of Intel's other AI training hardware.

If Intel really needed a different pipecleaner for Intel 4/Intel 7nm, wouldn't it make as much sense to just, you know, push Loihi to the side for a while (read: forever) and let Altera make some FPGA prototypes on it instead?
 
  • Like
Reactions: Tlh97 and moinmoin

dullard

Elite Member
May 21, 2001
25,055
3,408
126

Gaudi (Training for AI) and Loihi (AI that doesn't need training) are in two completely different areas with completely different applications. AI that uses training is the brute-force method: massive power required, massive work up-front required, using massively parallel calculations. Neuromorphic computing is an attempt to do the same amount of work in a few watts as compared to megawatts for parallel computing. Any company that wants a future should invest in R&D in multiple areas. That way, they can grab a piece of the market no matter which way the market ends up going. That is what Intel is trying with neuromorphic computing: hoping to be a part of computing in the future.
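To make that concrete, here is a bare-bones sketch of the event-driven spiking-neuron model that neuromorphic chips implement in silicon (plain Python, nothing Loihi-specific; the parameters are made up for illustration). Work only happens when a spike arrives, which is where the few-watts-instead-of-megawatts argument comes from:

```python
# Minimal leaky integrate-and-fire neuron -- the basic building block of
# spiking/neuromorphic hardware. Illustrative only; not Loihi's actual model.
def lif_neuron(input_spikes, weight=0.6, threshold=1.0, leak=0.9):
    """Simulate one neuron over discrete timesteps; return its output spike train."""
    v = 0.0                          # membrane potential
    out = []
    for s in input_spikes:           # s is 0 or 1 at each timestep
        v = v * leak + weight * s    # leak constantly, integrate incoming spikes
        if v >= threshold:           # fire once the potential crosses threshold
            out.append(1)
            v = 0.0                  # reset after firing
        else:
            out.append(0)
    return out

print(lif_neuron([1, 0, 1, 1, 0, 0, 1, 1]))  # -> [0, 0, 1, 0, 0, 0, 1, 0]
```

No giant matrix multiplies, no gradient math: just sparse, local updates whenever events arrive, which is why the power budget is so different from GPU-style training.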

Dr. Cutress wrote an entire article on Intel 4. https://www.anandtech.com/show/1682...nm-3nm-20a-18a-packaging-foundry-emib-foveros Why would he need to repeat that article in this article, unless you just want to bash Intel over a marketing term? Ian also wrote exactly why Loihi 2 is on Intel 4:
  • "a small silicon die size"
  • "help iterate through the potential roadblocks in bringing a process up"
  • "neuromorphic hardware requires the high density and low static power"
  • "The 128-core design also means that it has a consistent repeating unit, allowing the process team to look at regularity and consistency in production"
  • "no serious expectation to drive that product to market in a given window"
  • "without having to worry as much about defects"
And again, if Ponte Vecchio works for AI training, how would that make neuromorphic computing (which doesn't use AI training) obsolete?
 
Last edited:

DrMrLordX

Lifer
Apr 27, 2000
21,620
10,830
136
Gaudi (Training for AI) and Loihi (AI that doesn't need training) are in two completely different areas with completely different applications. AI that uses training is the brute-force method: massive power required, massive work up-front required, using massively parallel calculations. Neuromorphic computing is an attempt to do the same amount of work in a few watts as compared to megawatts for parallel computing. Any company that wants a future should invest in R&D in multiple areas. That way, they can grab a piece of the market no matter which way the market ends up going. That is what Intel is trying with neuromorphic computing: hoping to be a part of computing in the future.

Fair enough, but my point still stands: both projects have the same end goal. By acquiring assets like Habana Labs and then (on top of that!) working on HPC dGPUs, it looks like Intel is eager to render its own investments obsolete. There is such a thing as too much hedging.

Dr. Cutress wrote an entire article on Intel 4. Why would he need to repeat that article in this article

Not everyone has read that article. Intel deliberately renamed their processes to dodge bad press over delays. They don't want you to remember past articles about Intel 7nm, which is why they renamed 10ESF to Intel 7 and Intel 7nm to Intel 4.

unless you just want to bash Intel over a marketing term.

Lengthy anti-Intel diatribes are hardly necessary. Even a footnote indicating that Intel 4 is also Intel 7nm would suffice. A link back to his Intel 4 article in said footnote would also have been helpful.

Ian also wrote exactly why Loihi 2 is on Intel 4:

. . . without providing any context as to what Intel 4 actually is, or what SHOULD have been the pipecleaner on Intel 4 in late 2021 (Ponte Vecchio). Yes, it's obvious that tiny little ICs with repeating features that will never see market make sense as a pipecleaner, since, for whatever reason, Intel can't or won't bring Intel 4 together sufficiently to produce some of the dice for Ponte Vecchio.

And again, they could have also done some FPGA prototypes just as easily as a pipecleaner. I, and many others, figured Intel had already killed Loihi some time ago when they bought out Habana Labs.

And again, if Ponte Vecchio works for AI training, how would that make neuromorphic computing (which doesn't use AI training) obsolete?

See above.
 
  • Like
Reactions: Tlh97 and moinmoin

2blzd

Senior member
May 16, 2016
318
41
91
I read the article, and I have nowhere near the grasp/understanding or knowledge that you have, and it was pretty clear to me what it was for. It literally spelled it out for you. And yeah, I don't see any reason to bring up Ponte Vecchio . . . it seems like you just wanted Ian to talk badly about Intel?
 

DrMrLordX

Lifer
Apr 27, 2000
21,620
10,830
136
I don't see how that has any relevance to a Loihi article.

You serious? Quote from article:

We were surprised to hear that Loihi 2 is built on a pre-production version of Intel 4 process.

Why is he surprised? He never bothers to say.

it seems like you just wanted Ian to talk badly about Intel?

I want Ian to stop doing Intel's work for them. In the original version of the article, Dr. Cutress even went so far as to say that Loihi was to be fabbed on "Intel 4nm" (lol).
 

Rannar

Member
Aug 12, 2015
52
14
81
I want Ian to stop doing Intel's work for them. In the original version of the article, Dr. Cutress even went so far as to say that Loihi was to be fabbed on "Intel 4nm" (lol).
He already answered that in the comments. Too many renamings, re-renamings, and process names that aren't connected to any real nm. It's easy to misspell Intel 4 as Intel 4nm. Not on purpose.
 
  • Like
Reactions: Tlh97 and 2blzd

2blzd

Senior member
May 16, 2016
318
41
91
I think this is going to be the norm for a while... Every article that brings up Intel's new naming conventions will be met with the same responses and questions about it not being the true process size.

I think it's just best to accept it and move on. That way one can focus on what's really important, the performance, not some arbitrary number that's inconsistently measured throughout the industry by its players.
 
  • Like
Reactions: scannall

DrMrLordX

Lifer
Apr 27, 2000
21,620
10,830
136
Wouldn't you be surprised that an Intel 4 chip is available for public use (albeit in low volume with less reliability) before Intel 7?

Not really. Intel was scheduled to be producing compute chiplets on 7nm/Intel 4 for Ponte Vecchio/Aurora right about . . . now? Also, "Intel 7" is the fourth variation of their 10nm node. It's really just an iterative improvement to an existing node (10SF).

Too many renamings, re-renamings, and process names that aren't connected to any real nm. It's easy to misspell Intel 4 as Intel 4nm. Not on purpose.

Ehhhh, that's not all he said. Personally, I don't think the situation was handled all that well. Every time someone "misspells" Intel 4 or similar, it's just more proof that the Intel marketing department still earns its keep. Oh look, Tom's did it too:


If TSMC renamed N5 to N4 and N4 to N3 and N3 to N2 . . . there would be no end of (justifiable) outrage. Just like their 12nm process name being BS, since it was not that far off from the older 16nm node in feature size:


At the very least, this is why it's helpful to have a competent editor proofread your articles before publishing them.

Every article that brings up Intel's new naming conventions will be met with the same responses and questions about it not being the true process size.

Thing is, it's not about the process/feature size at all. Intel 7nm was no more descriptive than Intel 4 (depending on whom you ask). The issues with Intel specifically are:

1). They didn't even change the nodes. They just straight-up renamed nodes already in development to something else to make it harder for people to have coherent conversations about said nodes' development histories.
2). The name changes mean that the relationship in feature size/transistor density between previous nodes (32nm -> 22nm -> 14nm/14nm+/14nm++/14nm++++++ -> 10nm/10nm+/10SF) is now broken. You can't look at Intel 7 or Intel 4 and make any meaningful inferences about their overall feature size or transistor density by comparing them to older node names. It's even worse than the TSMC 12nm situation.
3). After years of slinging mud at TSMC for mislabeling their own nodes (credit to TSMC for removing "nm" from their node names), Intel decided to just . . . join them?

On another note, I've found very little benchmark data on Loihi (and Loihi 2 is too new for any). There's one piece, helpfully provided by Intel:


Anyone got a better (read: 3rd party) source for how Loihi stacks up against traditional AI training hardware?
 
Last edited:
  • Like
Reactions: moinmoin

dullard

Elite Member
May 21, 2001
25,055
3,408
126
Anyone got a better (read: 3rd party) source for how Loihi stacks up against traditional AI training hardware?
I don't think I got through to you earlier. Loihi is NOT AI training hardware. So, there would be absolutely no point in comparing Loihi to AI training hardware. You are basically asking how a specific car (performing AI tasks) compares to a school for engineers (training AI chips that could later be used to perform tasks). The comparison of a car to a school does not even make any sense.

It is clear to me from the rest of your post that you just want to bash Intel's process naming. Would you mind renaming this thread to Intel 4 name complaints? Then maybe starting a Loihi thread?

Until then, here are some benchmarks: https://arxiv.org/pdf/1812.01739v2.pdf Figure 1 shows the ultimate benefit of Loihi: it is 23 times more energy efficient than a CPU and 109 times more energy efficient than a GPU. So, it has promise. But the drawback is in Table 1: Loihi was the slowest of all the tested methods in terms of results per second. Loihi 2 is an attempt to fix that latter part. If Intel's claims of 2 to 10 times faster are true, then Loihi 2 would be competitive in performance and still by far use the least amount of power. In cases where the 10x figure holds, Loihi 2 would be both the highest performing and the most energy efficient.
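Some back-of-the-envelope arithmetic on that last point (my own assumption, not the paper's: Loihi 2 keeps roughly gen-1 power draw while delivering the claimed 2x to 10x speedup, so energy per result improves by the same factor; the 23x/109x figures are the gen-1 numbers cited above):

```python
# Illustrative only: how a 2x-10x speedup would compound with the gen-1
# efficiency figures if power draw stays roughly constant.
LOIHI1_EFF_VS_CPU = 23.0    # gen-1 Loihi: results per joule relative to a CPU
LOIHI1_EFF_VS_GPU = 109.0   # gen-1 Loihi: results per joule relative to a GPU

for speedup in (2.0, 10.0):
    print(f"Claimed Loihi 2 speedup: {speedup:g}x over gen-1")
    print(f"  throughput : {speedup:g}x gen-1 (gen 1 was the slowest tested)")
    print(f"  vs CPU     : ~{LOIHI1_EFF_VS_CPU * speedup:.0f}x results per joule")
    print(f"  vs GPU     : ~{LOIHI1_EFF_VS_GPU * speedup:.0f}x results per joule")
```

Obviously workload-dependent, but it shows why the claimed speedup and the efficiency lead compound rather than trade off against each other.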
 
Last edited:

Doug S

Platinum Member
Feb 8, 2020
2,254
3,485
136
If TSMC renamed N5 to N4 and N4 to N3 and N3 to N2 . . . there would be no end of (justifiable) outrage. Just like their 12nm process name being BS, since it was not that far off from the older 16nm node in feature size:



This is all kind of silly, because if TSMC renamed N5 to N1 and started referring to it as "1 nanometer", that wouldn't increase their revenue at all. Uneducated consumers aren't the ones making the decisions to book capacity at TSMC or Samsung. The people making those decisions aren't going to be fooled by such things any more than you can fool your little sister with a bet for "dollars" and paying in "doll hairs" once she's older than five.
 

DrMrLordX

Lifer
Apr 27, 2000
21,620
10,830
136
I don't think I got through to you earlier. Loihi is NOT AI training hardware. So, there would be absolutely no point in comparing Loihi to AI training hardware.

You're correct, since it looks like Applied Brain Research, Inc. was benching inference rather than training. But at least in the case of some of the hardware, it can be used for both (albeit at different scales). And in the case of Habana Labs, they have inference hardware as well, which, again . . . competes with Loihi.
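For anyone following along, the training/inference split we keep going back and forth about boils down to this (generic PyTorch toy example of mine, nothing Habana- or Loihi-specific):

```python
# Training = forward + backward + weight updates (what Gaudi-class parts and
# big GPUs target). Inference = forward pass only (what Goya-class parts
# target, and what the Loihi benchmarks above were measuring).
import torch
import torch.nn as nn

model = nn.Linear(8, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

# Training step: gradients, optimizer update, lots of memory traffic.
x, y = torch.randn(32, 8), torch.randint(0, 2, (32,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
opt.step()

# Inference: forward pass only, no gradients kept.
with torch.no_grad():
    preds = model(torch.randn(4, 8)).argmax(dim=1)
print(preds)
```

Same model, very different compute and memory profiles, which is why vendors ship separate silicon for each.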

edit: also IBM is looking at a "hybrid" approach to inference/training hardware which is interesting? To say the least?


As to whether or not I wished to "bash" Intel's naming scheme, why bother? I already told you: all Dr. Cutress needed to do was footnote some stuff and be done with it, since rationally we should all have been skeptical of Intel's name change anyway (what shocks me is that you aren't, apparently?). It's just the part of my initial post that seems to have distracted you the most.

This is all kind of silly, because if TSMC renamed N5 to N1 and started referring to it as "1 nanometer", that wouldn't increase their revenue at all. Uneducated consumers aren't the ones making the decisions to book capacity at TSMC or Samsung. The people making those decisions aren't going to be fooled by such things any more than you can fool your little sister with a bet for "dollars" and paying in "doll hairs" once she's older than five.

Maybe you should explain that to Intel's marketing department.
 
Last edited:

Doug S

Platinum Member
Feb 8, 2020
2,254
3,485
136
Maybe you should explain that to Intel's marketing department.

Intel is different from TSMC because they DO sell directly to consumers. Unlike with TSMC, Intel's marketing about process does reach consumer minds - at least indirectly, via review sites and more tech-savvy friends.