
News Intel GPUs - we've given up on B770, where's Celestial already

At this rate it will launch together with Battlemage. A bit sad that a company like Intel needs to use Chinese/Korean buyers as alpha testers for its drivers. It doesn't look like hardware is the issue, so I'd imagine this might not push back the Battlemage release date.

Battlemage is "2023-2024", so Alchemist won't overlap with BM at all.
 

You see, this is why we voted against paying the board. The Intel board is nothing but absolute TRASH that has no idea what timing and foresight are.

Delays again...

They completely missed the HOT segment, where they could have sold dookie just because it's a GPU, even if it performed badly, as long as it was priced so people could afford it and was at least as good as a 3050.

But no...

The entire Intel board needs to be laid off... even if we have to pay them to get the hell out...
We need a Lisa Su counterpart at Intel, because current Intel is NOT working.
They are taking the entire company under; soon it's gonna be a Cyrix moment all over again, watch.
 
Delays again...

The bad guys are already out. It wasn't just because of the "bad guys", but because they FIRED 20K employees or something for stupid reasons like "oh, you're too old" or "I don't like you". The latter was BK's legacy. Gelsinger hired 13K employees, not just high-profile ones like the Nehalem architect.

They are still in the process of changing the company. You don't change a 100K+ place overnight, especially when the problems were brewing for more than a decade.

Also, the article for the delay is dated May 10th. It's June 1 now.
 
They completely missed the HOT segment, where they could have sold dookie just because it's a GPU, even if it performed badly, as long as it was priced so people could afford it and was at least as good as a 3050.

But no...

The entire Intel board needs to be laid off... even if we have to pay them to get the hell out...
We need a Lisa Su counterpart at Intel, because current Intel is NOT working.
They are taking the entire company under; soon it's gonna be a Cyrix moment all over again, watch.

You can't just jump into a new technology space by willing it.

Pat Gelsinger appears to be steering Intel in the right direction, bringing the focus back to engineering, but he's only been there since 2021.

If you want to compare him to Lisa Su, you need to give him a comparable amount of time.

Lisa Su took over AMD in 2014. It really wasn't until 2020 that their GPU division became competitive again (RDNA). So she gets 6 years, but you won't give Gelsinger 2 years?

Plus, AMD already had discrete GPUs as a core business, while it's a new business for Intel, so if anything Intel should get more time.
 
The main thing is that Intel isn't likely to remain interested for long... especially if they feel like they would need to fab the gaming GPUs externally to have something sellable.

It's an easy thing to kill to cut costs.
 
The main thing is that Intel isn't likely to remain interested for long... especially if they feel like they would need to fab the gaming GPUs externally to have something sellable.

It's an easy thing to kill to cut costs.

Unlikely since big APUs are most likely to happen in the future.
 
The main thing is that Intel isn't likely to remain interested for long... especially if they feel like they would need to fab the gaming GPUs externally to have something sellable.

It's an easy thing to kill to cut costs.

Intel should have been more forward looking back in 2006, and snapped up ATI.

It's kind of stunning that it took them this long to realize the importance of GPUs.

I don't think anyone was realistically expecting great things from their first generation.

Hopefully they are in it for the long haul, and can make a credible product in a few years.
 
It's kind of stunning that it took them this long to realize the importance of GPUs.
Intel knew all along the importance of GPUs. Intel is the company that put iGPUs in all its consumer chips; AMD is just catching up to that. Intel saw the danger of GPUs in datacenters early on; that's why they pushed development of Larrabee/Xeon Phi starting in 2006.

The big mistake was first starving and then shuttering Xeon Phi without a direct replacement available in the product portfolio.
 
Intel knew all along the importance of GPUs. Intel is the company that put iGPUs in all its consumer chips; AMD is just catching up to that. Intel saw the danger of GPUs in datacenters early on; that's why they pushed development of Larrabee/Xeon Phi starting in 2006.

The big mistake was first starving and then shuttering Xeon Phi without a direct replacement available in the product portfolio.
Intel saw GPUs as trivial. Basic gaming, video output, etc. They thought that they could keep x86 as the center for parallel computing, which they did see as important, no doubt adding patented instructions to ensure dominance and competitor lock-out. The rapid scaling of GPUs must have sent shockwaves through management.
 
Intel saw GPUs as trivial. Basic gaming, video output, etc. They thought that they could keep x86 as the center for parallel computing, which they did see as important, no doubt adding patented instructions to ensure dominance and competitor lock-out. The rapid scaling of GPUs must have sent shockwaves through management.
Larrabee/Xeon Phi was intended to be Intel's x86-based answer to the rapid scaling of GPUs. The Aurora exascale supercomputer was originally to be built around Xeon Phi and to be finished back in 2018(!). Intel's major mismanagement in that area was shuttering the Xeon Phi line before having an equivalent replacement. And the equivalent replacement still hasn't launched, and we are in 2022!

(@jpiniero, I still don't know what you were referring to with "Skylake Servers". Standard server chips never were and never could have been a replacement capable of running Aurora alone. Care to explain your response?)
 
Larrabee/Xeon Phi was intended to be Intel's x86-based answer to the rapid scaling of GPUs. The Aurora exascale supercomputer was originally to be built around Xeon Phi and to be finished back in 2018(!). Intel's major mismanagement in that area was shuttering the Xeon Phi line before having an equivalent replacement. And the equivalent replacement still hasn't launched, and we are in 2022!

(@jpiniero, I still don't know what you were referring to with "Skylake Servers". Standard server chips never were and never could have been a replacement capable of running Aurora alone. Care to explain your response?)
When did Larrabee start being conceptualized? The 2004-2005 timeframe? GPUs had something like 100-200 shader cores. Intel thought they could stay in the game with x86. You can see the oracles at Intel saying, "control the language and you control the world".

I guess it's the same human failing that doomed so many companies: the inability to obsolete your own products. In this case it's not even that, since CPU and GPU profitably coexist. Just myopic, stupid greed, "WE WANT ALL", and you end up way behind.

edit: Corrected shader core count
 
(@jpiniero, I still don't know what you were referring to with "Skylake Servers". Standard server chips never were and never could have been a replacement capable of running Aurora alone. Care to explain your response?)

Intel did get several HPC deals that use just Skylake Server and no accelerators. I think they realized they would never be able to catch Nvidia in performance because of how big the CPU cores are, hence the GPU project.
 

"I always think we're 30 days from going out of business," Huang says. "That's never changed. It's not a fear of failure. It's really a fear of feeling complacent, and I don't ever want that to settle in."

Intel is the opposite of that. Until a few years ago, they thought themselves invincible and so big that no one could challenge them. That's the beginning of any company's downfall.
 
When did Larrabee start being conceptualized? The 2004-2005 timeframe? GPUs had something like 100-200 shader cores. Intel thought they could stay in the game with x86. You can see the oracles at Intel saying, "control the language and you control the world".

I guess it's the same human failing that doomed so many companies: the inability to obsolete your own products. In this case it's not even that, since CPU and GPU profitably coexist. Just myopic, stupid greed, "WE WANT ALL", and you end up way behind.

edit: Corrected shader core count
And no plan B. Which I find most mind-boggling of all. Integral part of myopic overconfidence I guess.
 
Remember, it was the 10nm delay that led to the demise of Xeon Phi. The same management that lost the leadership of their strongest area won't have the foresight to do things like create a replacement.

The process delay actually started with 14nm, which was six months late. That's why the first product was Core M, and it was pretty disappointing.
 
And no plan B. Which I find most mind-boggling of all. Integral part of myopic overconfidence I guess.
Speaking of no plan B, the same thing happened with 10nm. There was really no fallback to hedge if 10nm didn't meet schedule.

They knew 10nm was super aggressive with its scaling targets, and they thought they could do it without EUV (to give them credit, EUV wasn't mature enough anyway when the 10nm targets were established). However, they really shot themselves in the foot by being headstrong and thinking that if they just threw more resources at the problem, they could get 10nm resolved. In hindsight, they should have pivoted away sooner from the aggressive scaling targets and/or decided to use EUV. Meanwhile, the stars aligned for TSMC, who had previously been a year or two behind Intel, because EUV started to mature right when they were setting scaling targets that could benefit from it. They started ordering EUV machines while Intel still thought it didn't need them yet.

What Intel should have done is have two separate teams: one designs 10nm without EUV with relaxed scaling, and another designs 10nm with EUV but the original scaling targets. Whichever one looked more likely to meet schedule, that's the one that gets used first. Ideally it would be the traditional DUV option. Then, when the EUV option is viable, it gets used. This is no different than what TSMC did by introducing small changes across iterative nodes. The point is, TSMC develops multiple nodes simultaneously. Intel did not.
 
The dream of an Intel GPU seems to be fading away into irrelevancy every day now.

A marketing department shout fading to an echo and then into vaporware.
 

[Image: Intel Arc desktop graphics card]


Look, a desktop card! Shame about the drivers
 