Intel Chips With “Vega Inside” Coming Soon?


Dayman1225

Golden Member
Aug 14, 2017
1,160
996
146
AMD has a massively larger and better IP library regarding GPU tech than Intel. Most AI hardware at this point is custom repurposed GPU hardware... which is why Nvidia and AMD are so well positioned in the sector. Intel is a complete joke in comparison. Zero quality GPU hardware and zero quality IP.

You know Intel (Nervana) is building ASICs for this purpose, right? And they have their FPGAs (Altera). GPUs aren't the only thing that can do AI, DL, ML, etc.
 
  • Like
Reactions: beginner99

NTMBK

Lifer
Nov 14, 2011
10,448
5,829
136
AMD has a massively larger and better IP library regarding GPU tech than Intel. Most AI hardware at this point is custom repurposed GPU hardware... which is why Nvidia and AMD are so well positioned in the sector. Intel is a complete joke in comparison. Zero quality GPU hardware and zero quality IP.

In order to keep their GPU competitive with dedicated deep learning hardware, NVidia has had to add incredibly specialised Tensor Cores which are completely useless for almost all graphics workloads. AMD aren't even in the same ballpark.
 
  • Like
Reactions: Arachnotronic

theeedude

Lifer
Feb 5, 2006
35,787
6,197
126
AMD missed the boat on GPU AI. The space is about to get very competitive, and they are behind and have no moat. They are also competitively behind on GPU workloads, so they can't afford to dedicate too much silicon to AI. Their best bet is to focus on integrating their CPU IP with third-party AI solutions which do have a moat, and on improving graphics competitiveness on the GPU side.
 

FIVR

Diamond Member
Jun 1, 2016
3,753
911
106
You know Intel (Nervana) is building ASICs for this purpose, right? And they have their FPGAs (Altera). GPUs aren't the only thing that can do AI, DL, ML, etc.

FPGAs and ASICs are not going to help Intel beat the likes of AMD and Nvidia in this field. Intel needs a good GPU core and they just hired Raja so I would expect them to produce something marginally worse than their competitors in about 5 years.
 

FIVR

Diamond Member
Jun 1, 2016
3,753
911
106
In order to keep their GPU competitive with dedicated deep learning hardware, NVidia has had to add incredibly specialised Tensor Cores which are completely useless for almost all graphics workloads. AMD aren't even in the same ballpark.

Nvidia certainly has a head start, but I would not count AMD out with their MI25 accelerators and whatever they produce next.

That said, the idea that Intel will somehow jump into this field and compete head-to-head with the likes of AMD or Nvidia is a joke. Maybe in a decade.
 

NTMBK

Lifer
Nov 14, 2011
10,448
5,829
136
FPGAs and ASICs are not going to help Intel beat the likes of AMD and Nvidia in this field. Intel needs a good GPU core and they just hired Raja so I would expect them to produce something marginally worse than their competitors in about 5 years.

Look at what the likes of Google are building for their deep learning hardware. It looks almost nothing like a GPU. They haven't gone out and, say, bought Imagination to put its GPU expertise to work; they're making hardware that fits the problem directly.

Deep learning doesn't need a whole bunch of things that graphics benefits from (texture units, ROPs, cache hierarchy, rasterizer hardware), and graphics doesn't benefit from hardware that massively accelerates deep learning (things like tensor cores, which accelerate multiplication of enormous matrices). GPUs were a decent enough approximation for the first generation, but dedicated deep learning hardware is way more efficient.
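For anyone unsure what those tensor cores actually do: each one performs a small fused matrix multiply-accumulate, with FP16 inputs and FP32 accumulation, and large matrix multiplies are built out of many such tiles. A minimal numpy sketch of one tile operation (the 4x4 tile size is Volta's published MMA shape; the exact internal rounding is simplified here):

```python
import numpy as np

# One tensor-core-style operation on a 4x4 tile:
# FP16 inputs A and B, FP32 accumulator C, result D = A @ B + C in FP32.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)).astype(np.float16)
B = rng.standard_normal((4, 4)).astype(np.float16)
C = rng.standard_normal((4, 4)).astype(np.float32)

# Products are formed at full precision and accumulated in FP32.
D = A.astype(np.float32) @ B.astype(np.float32) + C

# A large GEMM is just this tile operation repeated across the matrices,
# which is why the hardware helps deep learning but not rasterization.
print(D)
```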
 

mikk

Diamond Member
May 15, 2012
4,299
2,383
136
AMD has a massively larger and better IP library regarding GPU tech than Intel. Most AI hardware at this point is custom repurposed GPU hardware... which is why Nvidia and AMD are so well positioned in the sector. Intel is a complete joke in comparison. Zero quality GPU hardware and zero quality IP.


Zero quality hardware? Then why couldn't AMD beat Intel in performance at 15W prior to Raven Ridge? Intel's hardware has supported Feature Level 12_1 since 2015 with Gen9; when it came out, Gen9 was the most advanced DX12 GPU, supporting Feature Level 12_1 Tier 3.

With their newest GCN AMD caught up, but it took them some time. Or what about AMD's notoriously subpar video decoding/encoding engine compared to Intel's? That's what I'd call zero quality; Intel's video unit is top notch! Of course Intel's Gen9 is old by now (initially released in 2015), but you can be sure Intel is moving forward with Gen11 in Icelake. Intel made huge mistakes in their 14nm product planning cycle. Their GPU department isn't to blame when some top-tier managers thought it was good enough to recycle the older CPU+GPU over and over again. With 10nm Intel learned from that: Tigerlake, as a refresh over Icelake, gets a newer GPU.
 

mikk

Diamond Member
May 15, 2012
4,299
2,383
136
Bulldozer obviously.


Doesn't matter much in GPU tests for iGPUs in a 15W envelope, or do you think these slow GPUs were limited by their CPU? Bulldozer's weak IMC didn't help, but then again, against "zero quality" GPU hardware it should have been no contest.
 

jpiniero

Lifer
Oct 1, 2010
16,818
7,258
136
Doesn't matter much in GPU tests for iGPUs in a 15W envelope, or do you think these slow GPUs were limited by their CPU? Bulldozer's weak IMC didn't help, but then again, against "zero quality" GPU hardware it should have been no contest.

I was specifically talking about game performance. Intel's IGP has always done better in synthetics than in actual games though.
 
  • Like
Reactions: KompuKare

FIVR

Diamond Member
Jun 1, 2016
3,753
911
106
Zero quality hardware? Then why couldn't AMD beat Intel in performance at 15W prior to Raven Ridge? Intel's hardware has supported Feature Level 12_1 since 2015 with Gen9; when it came out, Gen9 was the most advanced DX12 GPU, supporting Feature Level 12_1 Tier 3.

With their newest GCN AMD caught up, but it took them some time. Or what about AMD's notoriously subpar video decoding/encoding engine compared to Intel's? That's what I'd call zero quality; Intel's video unit is top notch! Of course Intel's Gen9 is old by now (initially released in 2015), but you can be sure Intel is moving forward with Gen11 in Icelake. Intel made huge mistakes in their 14nm product planning cycle. Their GPU department isn't to blame when some top-tier managers thought it was good enough to recycle the older CPU+GPU over and over again. With 10nm Intel learned from that: Tigerlake, as a refresh over Icelake, gets a newer GPU.

I said zero quality GPU hardware. What notable GPU patents and products has Intel made? How does that compare to AMD? How do their market caps compare? How do their R&D budgets compare? What does that tell you about how much talent each company has for GPU development?
 

french toast

Senior member
Feb 22, 2017
988
825
136
Doesn't matter much in GPU tests for iGPUs in a 15W envelope, or do you think these slow GPUs were limited by their CPU? Bulldozer's weak IMC didn't help, but then again, against "zero quality" GPU hardware it should have been no contest.
You have this all wrong.
Firstly, the CPU certainly DID hold back the iGPU, especially in CPU-intensive games; Bulldozer was horrid for gaming, and that can't be overstated. Bulldozer was also not power efficient, meaning the iGPU had even less power to work with, on top of missing the benefit of a fast CPU.
The memory controller on Bulldozer APUs was abysmal; in already memory-starved APUs, a crud memory controller that extracts even less bandwidth is going to hurt performance significantly.
Process: it is well known that, at least until AMD has 12nm (possibly even out to 7nm), Intel has had a far superior process in both density and performance, which lets them pack in a comparatively massive GPU transistor budget whilst also ramping clocks.
Those reasons were a giant lead weight holding back the GPU in AMD's APUs in the past; had they not been, Intel's iGPU would have looked much worse in comparison.
Add to this the massive budget Intel has to design and execute with, and it's not hard to see why AMD struggled in APUs more than they should have. On graphics alone, IP vs IP, all else being equal, AMD is a quantum leap ahead of Intel, hence why we have Kaby Lake-G.
 
  • Like
Reactions: stockolicious

piesquared

Golden Member
Oct 16, 2006
1,651
473
136
You have this all wrong.
Firstly, the CPU certainly DID hold back the iGPU, especially in CPU-intensive games; Bulldozer was horrid for gaming, and that can't be overstated. Bulldozer was also not power efficient, meaning the iGPU had even less power to work with, on top of missing the benefit of a fast CPU.
The memory controller on Bulldozer APUs was abysmal; in already memory-starved APUs, a crud memory controller that extracts even less bandwidth is going to hurt performance significantly.
Process: it is well known that, at least until AMD has 12nm (possibly even out to 7nm), Intel has had a far superior process in both density and performance, which lets them pack in a comparatively massive GPU transistor budget whilst also ramping clocks.
Those reasons were a giant lead weight holding back the GPU in AMD's APUs in the past; had they not been, Intel's iGPU would have looked much worse in comparison.
Add to this the massive budget Intel has to design and execute with, and it's not hard to see why AMD struggled in APUs more than they should have. On graphics alone, IP vs IP, all else being equal, AMD is a quantum leap ahead of Intel, hence why we have Kaby Lake-G.


Do they have a superior process? Which process? If Ryzen Mobile is more efficient, clocks higher, has similar single-threaded performance and higher multithreaded compute performance, and obliterates Intel's chip in graphics performance, what about that equals a better process for Intel?
 
Last edited:

Dayman1225

Golden Member
Aug 14, 2017
1,160
996
146
FPGAs and ASICs are not going to help Intel beat the likes of AMD and Nvidia in this field. Intel needs a good GPU core and they just hired Raja so I would expect them to produce something marginally worse than their competitors in about 5 years.

So you're telling me a chip specifically built for this type of operation will not be able to beat a GPGPU? I guess if you consider that first-gen Nervana is on TSMC's 28nm, that may be the case (no numbers yet). FPGAs are also being used for these types of situations, most recently with Microsoft using Intel's Stratix 10 FPGAs for "real time AI" under the name Project Brainwave; Bing recently started making use of it, and Audi are also using Intel's FPGAs in their cars. GPGPUs are not the only hardware that can do this type of work, nor are they necessarily the best, but they are the most flexible.
 

DrMrLordX

Lifer
Apr 27, 2000
22,928
12,999
136
but they are the most flexible.

I would actually regard stuff like Xeon Phi (the old Phi, not the newer Phi that will be based on Core) as the most flexible; the raw computational power is just not there to compete with GPUs.

FPGAs are probably the right solution for the problem in the long run.
 

itsmydamnation

Diamond Member
Feb 6, 2011
3,076
3,907
136
So you're telling me a chip specifically built for this type of operation will not be able to beat a GPGPU? I guess if you consider that first-gen Nervana is on TSMC's 28nm, that may be the case (no numbers yet). FPGAs are also being used for these types of situations, most recently with Microsoft using Intel's Stratix 10 FPGAs for "real time AI" under the name Project Brainwave; Bing recently started making use of it, and Audi are also using Intel's FPGAs in their cars. GPGPUs are not the only hardware that can do this type of work, nor are they necessarily the best, but they are the most flexible.
GPUs are ideally placed because what we are really talking about is memory access patterns. High-throughput, many-core memory subsystems are really hard to build. All the sea-of-cores chips of the past have failed because they suck at the memory subsystem.

GPUs have very good memory subsystems that have been paid for by 20+ years of consumers buying graphics cards. GV100 has 8 tensor cores per SM because that is 100% of the bandwidth that can be supplied to an SM in that design. That's not to say other "better" designs can't come along, but that's a lot of R&D money for a much smaller TAM. With the way GV100 works, all that NV really paid for with the tensor units is their R&D and the die space; everything else is existing infrastructure.
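A back-of-the-envelope way to see the memory-access-pattern point: the FLOP-per-byte ratio of a matrix-multiply tile grows with tile size, so a design that can keep large tiles resident in registers and caches gets far more math out of the same DRAM bandwidth. The sketch below is illustrative arithmetic only, not vendor specs:

```python
# Rough arithmetic-intensity check: FLOPs per byte of operands for a
# matrix-multiply tile, assuming FP16 operands (2 bytes per element).
def tile_intensity(m, n, k, bytes_per_elem=2):
    """FLOPs per byte for one m*n*k tile (A: m*k, B: k*n, C: m*n)."""
    flops = 2 * m * n * k                             # one multiply + one add per term
    bytes_moved = (m * k + k * n + m * n) * bytes_per_elem
    return flops / bytes_moved

# Small tiles are bandwidth-bound; bigger tiles reuse operands on-chip,
# so the same memory subsystem can feed far more math units.
for size in (4, 16, 64, 256):
    print(f"{size:>3}x{size} tile: {tile_intensity(size, size, size):6.1f} FLOP/byte")
```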
 
  • Like
Reactions: Dayman1225

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
FPGAs are probably the right solution for the problem in the long run.

I think the answer has always lain with their core competency. Xeon SPs are going to come with on-package FPGAs, which will dramatically increase performance due to low-latency communication. You still have the general-purpose cores, which you can use for everything else and to ease compatibility.

The Nervana acquisition will ultimately end up as a Knights variant that is a bootable Intel Xeon CPU. If I had to guess, this is probably why they were ambiguous in their wording about whether Xeon Phi was totally gone or not. Knights Crest may be a bunch of *mont cores with Nervana accelerators instead of the heavy DP units of the current Xeon Phi.
 

french toast

Senior member
Feb 22, 2017
988
825
136
Do they have a superior process? Which process? If Ryzen Mobile is more efficient, clocks higher, has similar single-threaded performance and higher multithreaded compute performance, and obliterates Intel's chip in graphics performance, what about that equals a better process for Intel?
I'm talking about Intel 14nm vs the 28nm process the Bulldozer-based APUs (Carrizo/Bristol Ridge) are built on... you're telling me there is no difference there?
As for Raven Ridge, it's MUCH better all round and the process is much closer to Intel in low-power scenarios, but can you say with a straight face that GloFo 14nm LPP is better than Intel 14nm+? I very much doubt it, even if it's a hard thing to prove.
Performance might be closer than in the past at low power (at high power Intel is a lot faster), but on density Intel is still king.
This obviously skews things in Intel's favour when comparing iGPU performance, alongside CPU performance, the memory controller and other factors; Intel still has advantages in two of those, just to a much lesser degree.
Hence why the Raven Ridge iGPU destroys Intel HD 630 at 15W.
Swap the iGPUs over from Intel to AMD APUs and the Raven Ridge iGPU would likely look even better against the HD 630.
 

piesquared

Golden Member
Oct 16, 2006
1,651
473
136
I'm talking about Intel 14nm vs the 28nm process the Bulldozer-based APUs (Carrizo/Bristol Ridge) are built on... you're telling me there is no difference there?
As for Raven Ridge, it's MUCH better all round and the process is much closer to Intel in low-power scenarios, but can you say with a straight face that GloFo 14nm LPP is better than Intel 14nm+? I very much doubt it, even if it's a hard thing to prove.
Performance might be closer than in the past at low power (at high power Intel is a lot faster), but on density Intel is still king.
This obviously skews things in Intel's favour when comparing iGPU performance, alongside CPU performance, the memory controller and other factors; Intel still has advantages in two of those, just to a much lesser degree.
Hence why the Raven Ridge iGPU destroys Intel HD 630 at 15W.
Swap the iGPUs over from Intel to AMD APUs and the Raven Ridge iGPU would likely look even better against the HD 630.

Well, you said "at least until AMD have 12nm", which implies you are comparing 14nm with 14nm as well. Yes, I'm saying that with a straight face; that is what the few tests we have so far indicate: Raven Ridge clocks higher at the same wattage.
 

french toast

Senior member
Feb 22, 2017
988
825
136
Well, you said "at least until AMD have 12nm", which implies you are comparing 14nm with 14nm as well. Yes, I'm saying that with a straight face; that is what the few tests we have so far indicate: Raven Ridge clocks higher at the same wattage.
They are not the same architecture; it is more than possible that Ryzen and Vega are inherently more efficient architectures than Skylake and Gen9, making clock comparisons unhelpful for determining the better process at low power.
Certainly from what I've read, Intel still holds the density lead over Samsung 14LPP and definitely holds the performance advantage in high-power silicon (my belief is ~25%); as for low-power silicon, I believe 14nm LPP and 14nm+ are more closely aligned, though I would bet Intel still has the lead.

My original point was the difference between Intel's iGPU and AMD's, where it was pointed out that Intel actually had parity or even a lead until recently with Skylake + Gen9. I'm saying there are many factors outside of GPU uarch which skew the results in Intel's favour; if the iGPUs were swapped around, even in Raven Ridge vs Kaby Lake-R, the results would swing even more favourably to the AMD iGPU, as Intel likely still has the better process and the better gaming CPU (Raven Ridge has a chopped-down 4MB of L3 per CCX, further magnifying the Infinity Fabric latency handicap).
Raven Ridge could probably also benefit from newer drivers, and the 256MB dedicated VRAM carve-out could do with an option to increase it, judging from user reviews suggesting this holds it back.
 

NostaSeronx

Diamond Member
Sep 18, 2011
3,811
1,290
136
Coffee Lake-U comes with QC-GT3e at its peak. AMD at this point needs something a lot bigger.

48 EUs is better than AMD's 44 EU-equivalents: a Gen9 EU is 16 × 32-bit and a GCN CU is 64 × 32-bit, so 11 CUs × 4 = 44 EUs. It also helps that Gen9 makes more efficient use of its EUs than AMD does of its CUs. Add the eDRAM, gg.
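A quick sanity check of the arithmetic above, taking the per-unit 32-bit lane counts exactly as stated in the post (they are the post's premise, not independently verified figures):

```python
# Express AMD's CUs as Gen9 EU-equivalents using the post's per-unit figures.
GEN9_LANES_PER_EU = 16   # as stated in the post
GCN_LANES_PER_CU = 64    # as stated in the post
AMD_CUS = 11

eu_equivalents = AMD_CUS * GCN_LANES_PER_CU // GEN9_LANES_PER_EU
print(eu_equivalents)  # 44 EU-equivalents, versus 48 EUs in the GT3e part
```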
 

beginner99

Diamond Member
Jun 2, 2009
5,318
1,763
136
Nvidia certainly has a head start, but I would not count AMD out with their MI25 accelerators and whatever they produce next.

That said, the idea that Intel will somehow jump into this field and compete head-to-head with the likes of AMD or Nvidia is a joke. Maybe in a decade.

Stop all the anti-Intel FUD. It's all you do in these forums, and it's always completely wrong. Intel has an AI chip from a company they bought. It will release soon, and estimates are that it will be in the range of Volta V100. We will see.
 
  • Like
Reactions: Arachnotronic

LightningZ71

Platinum Member
Mar 10, 2017
2,525
3,220
136
Raven Ridge's reduced L3 size has more to do with package size limitations and the lack of a need to talk to a second CCX. Without having to talk to a second CCX, or remote CCXs, IF latency effects are reduced for iGPU communications (still faster than previous implementations) and for the memory and I/O controllers (also faster than previous generations). The two things Raven Ridge needs most are drivers with further refinements for iGPU scenarios, and official support for, plus market availability of, higher-specced SODIMMs. While it won't make a night-and-day difference, having low-latency DDR4-3200 would make a non-trivial improvement in the performance of the whole system, especially GPU performance.
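To put rough numbers on the SODIMM point, here is how theoretical dual-channel DDR4 bandwidth, which the iGPU shares with the CPU, scales with transfer rate. These are peak figures only; sustained bandwidth is lower:

```python
# Theoretical peak bandwidth for dual-channel DDR4 (two 64-bit channels).
def ddr4_peak_gbs(mt_per_s, channels=2, bytes_per_channel=8):
    return channels * bytes_per_channel * mt_per_s / 1000  # GB/s

for speed in (2133, 2400, 2666, 3200):
    print(f"DDR4-{speed}: {ddr4_peak_gbs(speed):5.1f} GB/s peak")
```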
 

stockolicious

Member
Jun 5, 2017
80
59
61
You know Intel (Nervana) is building ASICs for this purpose, right? And they have their FPGAs (Altera). GPUs aren't the only thing that can do AI, DL, ML, etc.

From what I read you are correct: INTC's Nervana is an entry into DL, AI, etc., but there is no software or community there, which will be their challenge; they have money to throw at that, though. AMD is really in the same situation: AMD's GPU hardware is not the problem for them in AI/DL versus Nvidia, it's that NVDA has a large ecosystem with CUDA and has been at this for a while. All three will compete, but INTC and AMD are playing catch-up. AMD is getting high-ASP GPU compute wins with the likes of Google, Alibaba, Amazon, etc., so they are getting there.
 
  • Like
Reactions: moinmoin