News Intel GPUs - hot damn, a price cut!


Headfoot

Diamond Member
Feb 28, 2008
4,444
641
126
This is really poor logic. AMD could sell this card for $300-ish and likely make more margin than the RX 480 did.

That's the problem here: this is a mid-range card with high-end pricing. As an actual SoC it looks fine.
No, it isn't. My logic is based on the actual price, yours is based on some fantasy price that doesn't exist, yet you say my logic is flawed? Get real
 

DrMrLordX

Lifer
Apr 27, 2000
20,511
9,598
136
Poor Mike Clark and his team. Forever to be overlooked?
Not by anyone that really matters in the industry. The whole Zen team will be like that, I think. Those for whom you should weep are the old PD, SR, and XV teams . . . at least those who weren't poached by the Zen squad.

As an aside, Raja seems very extravagant. Anyone else wonder about the politics at play now inside Intel's GPU division?
As long as things are going his way, he should do just fine. Wait for the first person to get their back up or tell him "no", and things might not be so great. Also, he doesn't even have to deliver any product until 2020 or 2021 or so. AND his product gets to be the 7nm pipecleaner for Intel in 2021. He should be thrilled.
 

ksec

Senior member
Mar 5, 2010
420
117
116
No, it isn't. My logic is based on the actual price, yours is based on some fantasy price that doesn't exist, yet you say my logic is flawed? Get real
You are assuming that the lack of price/performance improvement was the fault of those companies, when in reality the three-year gap you mention mostly comes down to wafer pricing, yields, node costs, and the DRAM price hike.

And that is before even considering market competition and conditions.
 

ksec

Senior member
Mar 5, 2010
420
117
116
Poor Mike Clark and his team. Forever to be overlooked?

As an aside, Raja seems very extravagant. Anyone else wonder about the politics at play now inside Intel's GPU division?
Thank you. Maybe that is one reason why he didn't fit in at AMD? Dr. Su runs a very tight ship; they don't have billions to throw away on everything. And as far as I can tell, going semi-custom was the right choice at the time.

Which is why I don't understand why he gets all the fanfare; he seems to be working toward his personal goals much more than AMD's. And what happens if he fails to make GPUs a thing at Intel? Does he blame Intel again?
 

Glo.

Diamond Member
Apr 25, 2015
5,245
3,813
136
Thank you. Maybe that is one reason why he didn't fit in at AMD? Dr. Su runs a very tight ship; they don't have billions to throw away on everything. And as far as I can tell, going semi-custom was the right choice at the time.

Which is why I don't understand why he gets all the fanfare; he seems to be working toward his personal goals much more than AMD's. And what happens if he fails to make GPUs a thing at Intel? Does he blame Intel again?
If Intel is able to manufacture those GPUs, be it in their own factories or at TSMC/Samsung, they will release good GPUs. They won't be better than AMD's/Nvidia's, but not worse than their offerings either.
 

ksec

Senior member
Mar 5, 2010
420
117
116
If Intel is able to manufacture those GPUs, be it in their own factories or at TSMC/Samsung, they will release good GPUs. They won't be better than AMD's/Nvidia's, but not worse than their offerings either.
I seriously doubt they are doing it at TSMC or Samsung. The advantage of fabbing the GPU in-house is that Intel's cost per transistor is a lot lower than its competitors', so they can afford a larger die. The additional die space will likely compensate for their drivers not working as well as AMD's and Nvidia's in all areas.

And it is one reason why Intel is rushing to increase capacity across all its fabs.
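A back-of-the-envelope sketch of that cost argument (all the numbers here are made up for illustration — wafer prices, die size, and yield are hypothetical, not known Intel figures):

```python
import math

def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    """Classic approximation for gross dies on a round wafer,
    subtracting partial dies lost at the edge."""
    r = wafer_diameter_mm / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def cost_per_good_die(wafer_cost_usd, wafer_diameter_mm, die_area_mm2, yield_rate):
    gross = dies_per_wafer(wafer_diameter_mm, die_area_mm2)
    return wafer_cost_usd / (gross * yield_rate)

# Hypothetical: an in-house 300 mm wafer at $4,000 vs. a foundry wafer
# at $8,000, same 400 mm^2 die and 70% yield.
in_house = cost_per_good_die(4000, 300, 400, 0.7)
foundry = cost_per_good_die(8000, 300, 400, 0.7)
print(f"in-house: ${in_house:.0f}/die, foundry: ${foundry:.0f}/die")
```

Under those assumed prices, the in-house die costs half as much, which is the headroom that lets a vendor spend extra area to make up for weaker drivers.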
 

Headfoot

Diamond Member
Feb 28, 2008
4,444
641
126
You are assuming that the lack of price/performance improvement was the fault of those companies, when in reality the three-year gap you mention mostly comes down to wafer pricing, yields, node costs, and the DRAM price hike.

And that is before even considering market competition and conditions.
Why would I care about why the price is higher? It's totally irrelevant as a consumer.
 

zinfamous

No Lifer
Jul 12, 2006
109,260
26,842
146
Didn't hear it being called humble pie when AMD hired Keller.
Keller is an engineer-for-hire. He doesn't really "belong" to anyone. The guy already had a history with AMD, Apple, AMD again, Tesla... I forget who else.

He works on contracts for a specific project and when they expire, he leaves. No one is really hiring him away from anyone else.
 

DrMrLordX

Lifer
Apr 27, 2000
20,511
9,598
136
HBM at $200 eh? Interesting! Looks like they're going to be aggressive on price. Overall that may be good news for the market.

Whoops, never mind. Stupid autotranslate. I'm still expecting them to launch at $500, but whatever.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,583
3,638
136
Are they doing that for power-efficiency reasons, or performance (perhaps) reasons? I thought that Nvidia went to a software-based scheduler around the Fermi/Kepler era to save power.
Power efficiency is pretty much performance nowadays. Probably area reasons as well.

Any word if the Intel dGPUs are going to use Tile-based Rendering?
Intel Gen11 uses what's called Position Only Shading Tile-Based Rendering, or "PTBR". It has a separate pipeline that runs in parallel with the regular pipeline to work out what needs to be culled.
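The idea behind a position-only pass can be sketched in a few lines (a toy illustration of the concept, not Intel's actual pipeline): triangle positions alone are enough to reject back-facing or off-screen geometry before any of the expensive shading work runs.

```python
def signed_area(p0, p1, p2):
    # 2D cross product; <= 0 means back-facing under a CCW winding convention.
    return ((p1[0] - p0[0]) * (p2[1] - p0[1])
            - (p2[0] - p0[0]) * (p1[1] - p0[1]))

def survives_culling(tri):
    """Position-only test: keep a triangle only if it is front-facing
    and at least partly inside the [-1, 1] clip square."""
    if signed_area(*tri) <= 0:  # back-face cull
        return False
    xs = [p[0] for p in tri]
    ys = [p[1] for p in tri]
    # Trivial-reject against the clip-square bounds.
    return not (max(xs) < -1 or min(xs) > 1 or max(ys) < -1 or min(ys) > 1)

tris = [
    ((0, 0), (1, 0), (0, 1)),   # front-facing, on screen -> kept
    ((0, 0), (0, 1), (1, 0)),   # back-facing -> culled
    ((5, 5), (6, 5), (5, 6)),   # entirely off screen -> culled
]
kept = [t for t in tris if survives_culling(t)]
print(len(kept))  # 1
```

In hardware the point is that this pass only needs vertex positions, so it can run ahead of the full pipeline and hand it a pre-filtered triangle list.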
 

Glo.

Diamond Member
Apr 25, 2015
5,245
3,813
136
So it means that TGL_LP has 768 ALUs, and Gen12HP has 1536 ALUs?
 

Dayman1225

Golden Member
Aug 14, 2017
1,081
791
146
So it means that TGL_LP has 768 ALUs, and Gen12HP has 1536 ALUs?
In Gen11 each EU only has 2 ALUs, but they are "quad-pumped" (I think that's the terminology), and each ALU can perform 4 32-bit floating-point or integer operations per clock.


So assuming Gen12/Xe is the same, which isn't a safe assumption, it would be:

96 EUs × 2 = 192 ALUs × 4 32-bit ops/clk (768)

192 EUs × 2 = 384 ALUs × 4 32-bit ops/clk (1536)

I think. If anything is wrong here and someone spots it, please correct me.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,583
3,638
136
So assuming Gen12/Xe is the same, which isn't a safe assumption, it would be:

96 EUs × 2 = 192 ALUs × 4 32-bit ops/clk (768)

192 EUs × 2 = 384 ALUs × 4 32-bit ops/clk (1536)
No, you got it right.

An easier way to remember it, for those who have been around since the early days, is to think of it like the original Pentium.

The Pentium was Intel's first superscalar x86 processor, with two pipes that were not identical, called the "U" and "V" pipes. In a way, Intel's Gen graphics are similar.

Think of each EU as a core with 2-way issue. Each issue pipe allows 4-way FP32 SIMD execution. Real World Tech has a great article that explains this. The functionality of the two pipes is not identical, although for 3D gaming it is.

So it means that TGL_LP has 768 ALUs, and Gen12HP has 1536 ALUs?
I think the Gen12HP config shown in that tweet is only a possible configuration, not the base one. If you look at earlier driver leaks, they show different configurations.
 

Dayman1225

Golden Member
Aug 14, 2017
1,081
791
146
Bob Swan just mentioned on Intel's call that their dGPU codenamed DG1 is back and powered on in the labs.

First confirmation from Intel on the DGx codenames.
 

jpiniero

Lifer
Oct 1, 2010
12,846
4,135
136
Bob Swan just mentioned on Intel's call that their dGPU codenamed DG1 is back and powered on in the labs.

First confirmation from Intel on the DGx codenames.
If the release is going to be in June, that doesn't leave much time from when they first have an actual working sample.
 
