Linus Torvalds: Discrete GPUs are going away

NTMBK

Lifer
Nov 14, 2011
10,419
5,712
136
DDR3 on discrete is kind of irrelevant. The HD7750 GDDR5 smokes Kaveri: double the PassMark score, double the framerate in games like Bioshock Infinite. It will be 2-3 more years before any IGP under $200 outperforms a 7750. There might be some $500 Iris super mega pro that does it next year for $666, but I'm not counting that.

Kaveri was originally meant to support GDDR5M, which would have broken the memory bottleneck and made it faster than a 7750. But Elpida went bankrupt, which scuppered those plans.

Seriously, fast RAM on an APU is far from inconceivable. The PS4 already has GDDR5, and HBM is just around the corner.
 

PrincessFrosty

Platinum Member
Feb 13, 2008
2,300
68
91
www.frostyhacks.blogspot.com
Linux is massive in servers. Most web servers run Linux, as do most HPC servers.

It's pretty big, mostly due to a historical perception of Windows Server as a bad platform, which is no longer true these days. ASP.NET and IIS hosting on Windows actually have pretty good market share: last I read, Linux/Apache was around 37% and Microsoft was trailing about 4% behind, although that gap is shrinking.
 
Aug 11, 2008
10,451
642
126
Kaveri was originally meant to support GDDR5M, which would have broken the memory bottleneck and made it faster than a 7750. But Elpida went bankrupt, which scuppered those plans.

Seriously, fast RAM on an APU is far from inconceivable. The PS4 already has GDDR5, and HBM is just around the corner.

You can always say "what if", but the bottom line is that it didn't happen. Even then, the 7750 is a 2-plus-year-old low-end GPU. Especially on the desktop, I don't see discrete cards going away for a very long time. The performance advantages and flexibility of a dGPU are just too large over an iGPU. We have heard these optimistic predictions over and over from both Intel and AMD about the great performance of the next gen of IGP, but each new generation has been either a marginal improvement or very expensive with limited availability.
 

Insert_Nickname

Diamond Member
May 6, 2012
4,971
1,695
136
There will be a GT4, which will have 2.4X as many cores as Haswell GT3.

Nvidia improved their already cutting-edge architecture by 1.35X per core and 2X per watt, according to their own slide, so Intel could certainly do something similar or better over two iterations.

Ah, the mythical GT4e dGPU killer. I'll believe that when I see it. Even with 96 EUs (roughly 384 "AMD/Nvidia" shaders) it's probably only going to match or slightly exceed Kaveri's current performance. Even a HD7750/GDDR5 completely smokes Kaveri. It's also unlikely to come cheap; I'm not paying $300+ (for the CPU alone) for Kaveri-level graphics performance.

Also what's stopping Nvidia from doing the same 2x per watt with next generation Pascal? "Big" Maxwell hasn't even launched yet. We can extrapolate from the GM107, but we don't know its performance yet. We also haven't even seen what 20nm is good for yet.

dGPU performance isn't stationary, IGPs are chasing a moving target...
 

Qwertilot

Golden Member
Nov 28, 2013
1,604
257
126
No, first up they're chasing the fixed target of being usable for a moderate-spec gaming machine at 1920x1080. Not there yet, of course.

Then you keep iterating, trying to eat away at the low/mid end etc.
(Trying to have an impact before 4k resets things ;))

Kaveri's iGPU is bandwidth-strangled, but Intel's stuff will have the eDRAM to help out. Like the Xbox One APU :)
 

Techhog

Platinum Member
Sep 11, 2013
2,834
2
26
Ah, the mythical GT4e dGPU killer. I'll believe that when I see it. Even with 96 EUs (roughly 384 "AMD/Nvidia" shaders) it's probably only going to match or slightly exceed Kaveri's current performance. Even a HD7750/GDDR5 completely smokes Kaveri. It's also unlikely to come cheap; I'm not paying $300+ (for the CPU alone) for Kaveri-level graphics performance.

Also what's stopping Nvidia from doing the same 2x per watt with next generation Pascal? "Big" Maxwell hasn't even launched yet. We can extrapolate from the GM107, but we don't know its performance yet. We also haven't even seen what 20nm is good for yet.

dGPU performance isn't stationary, IGPs are chasing a moving target...

Don't worry, a certain user has the "counter-argument" that AMD and Nvidia "can't afford" 20nm and will stay on 28nm until 2016 or later. :D
 

Insert_Nickname

Diamond Member
May 6, 2012
4,971
1,695
136
No, first up they're chasing the fixed target of being usable for a moderate-spec gaming machine at 1920x1080. Not there yet, of course.

Then you keep iterating, trying to eat away at the low/mid end etc.
(Trying to have an impact before 4k resets things ;))

Isn't that more or less what's been happening for the last 10 years? IGPs are decent at X resolution and settings, then a higher resolution comes along, with higher quality settings. After a while the IGP simply can't keep up.

I remember having almost this same discussion 10 years ago when Intel launched their GMA900. People were saying the exact same things back then. dGPUs haven't died yet... :)

Kaveri's iGPU is bandwidth-strangled, but Intel's stuff will have the eDRAM to help out. Like the Xbox One APU :)

The benefit of eDRAM is already applied with Iris Pro, and yet AMD still matches it with their highest-end IGP. Even an AMD 512-shader IGP with eDRAM probably won't be faster than a HD7750/GDDR5. We're talking about the lowest-end discrete card you can buy.

The eDRAM used in Iris Pro "only" provides 50GB/s, plus 25.6GB/s from main memory. Even my 2.5-year-old HD7870 has 153.6GB/s available, not to mention over 2x more shader cores than even a highly hypothetical 144EU version of Intel's Gen8... ;) (that'd be 576 "AMD/NV shaders", just to give some perspective on what potential performance we're talking about)
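
For anyone who wants to sanity-check the arithmetic, here's a rough sketch; the ~50GB/s eDRAM figure is just the commonly quoted Crystalwell number, and the 4-shaders-per-EU conversion is only my own rough approximation, not an official spec:

```python
# Rough peak-bandwidth arithmetic: GB/s = (bus width in bits / 8) * data rate in GT/s
def peak_bw_gbs(bus_width_bits, data_rate_gtps):
    return bus_width_bits / 8 * data_rate_gtps

ddr3_dual_channel = peak_bw_gbs(128, 1.6)  # dual-channel DDR3-1600   ->  25.6 GB/s
hd7870_gddr5      = peak_bw_gbs(256, 4.8)  # 256-bit GDDR5 @ 4.8 GT/s -> 153.6 GB/s
iris_pro_edram    = 50.0                   # Crystalwell eDRAM, the ~50 GB/s figure quoted above

print(iris_pro_edram + ddr3_dual_channel)  # 75.6  GB/s total for Iris Pro (eDRAM + main memory)
print(hd7870_gddr5)                        # 153.6 GB/s for a 2.5-year-old HD7870

# The EU -> "AMD/NV shader" conversion above assumes roughly 4 shader ALUs per EU:
print(96 * 4, 144 * 4)                     # 384 and 576 "shaders"
```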
 

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
We have heard these optimistic predictions over and over from both Intel and AMD about the great performance of the next gen of IGP, but each new generation has been either a marginal improvement or very expensive with limited availability.
I think you are mixing stuff up, so let's get this straight:

Intel released a new architecture and more EUs with Ivy Bridge. It was a good improvement over HD3000 and HD2000. No optimistic predictions.

Intel increased the number of EUs with Haswell to 20 and announced GT3 Iris Pro with 2X as many EUs, so that was obviously a big improvement. However, they decided to limit the number of SKUs and ignore LGA. I personally wasn't expecting anything from Haswell anyway: it would continue to use Gen7, so anything new was a bonus.

This year with Broadwell, you can indeed expect a great improvement in architecture + 4X as many EUs for Atom and 20% more for Core. Again: no optimistic predictions or limited availability.

Next year, Intel will release another architectural revision with Gen9, and the number of EUs may increase by 50%. So the desktop will suddenly go from GT2/20EUs to GT4e/96-144EUs in Q2'15, on top of Gen7.5 -> Gen9.

Ah, the mythical GT4e dGPU killer. I'll believe that when I see it. Even with 96 EUs (roughly 384 "AMD/Nvidia" shaders) it's probably only going to match or slightly exceed Kaveri's current performance. Even a HD7750/GDDR5 completely smokes Kaveri. It's also unlikely to come cheap; I'm not paying $300+ (for the CPU alone) for Kaveri-level graphics performance.
How do you come up with those numbers? In any case, they're wrong. Iris Pro is already better than any AMD APU.

And GT4 isn't mythical at all. It's a real SKU for LGA. It will blow any current IGP from Intel or AMD away with a much improved architecture and >2X as many EUs. Think of it as an improvement on the order of something like Fermi/Kepler -> Maxwell from 2,880 to 6,912 or 10,368 cores.
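
Those numbers are just GK110's 2,880 cores scaled by the same factor as the EU increase from GT3's 40 EUs to GT4e's 96 or 144 EUs; a quick sketch of that hypothetical scaling, not any real Maxwell SKU:

```python
# Hypothetical scaling only: GK110's 2,880 CUDA cores scaled by the same factor
# as Intel's EU increase from GT3 (40 EUs) to GT4e (96 or 144 EUs).
gk110_cores = 2880
gt3_eus = 40

for gt4_eus in (96, 144):
    factor = gt4_eus / gt3_eus                       # 2.4x and 3.6x
    print(gt4_eus, "EUs ->", int(gk110_cores * factor), "scaled cores")
# 96 EUs -> 6912 scaled cores
# 144 EUs -> 10368 scaled cores
```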

Also what's stopping Nvidia from doing the same 2x per watt with next generation Pascal? "Big" Maxwell hasn't even launched yet. We can extrapolate from the GM107, but we don't know its performance yet. We also haven't even seen what 20nm is good for yet.

dGPU performance isn't stationary, IGPs are chasing a moving target...
A moving target isn't a problem. IGPs are moving faster than the moving target. They have steadily been catching up in the last years and that isn't going to change. Instead, it seems that it will accelerate with Intel releasing GT3, GT4, Gen8 and Gen9 in 2015.

Pascal is a 2016 product according to Nvidia's roadmap. The best node that will be available is 20+FF, compared to Intel's 10nm with III-V. Don't forget that Intel is the richest semiconductor company and has the most advanced architecture, so why couldn't they do the same with their graphics architecture?
 

Techhog

Platinum Member
Sep 11, 2013
2,834
2
26
No, first up they're chasing the fixed target of being usable for a moderate-spec gaming machine at 1920x1080. Not there yet, of course.

Then you keep iterating, trying to eat away at the low/mid end etc.
(Trying to have an impact before 4k resets things ;))

Kaveri's iGPU is bandwidth-strangled, but Intel's stuff will have the eDRAM to help out. Like the Xbox One APU :)

That won't matter until GT4 hits the i3 in 3-5 years. Nobody is going to buy an i7 for 1080p on low settings. The only way this can happen soon is if AMD makes massive market-share gains.
 
Aug 11, 2008
10,451
642
126
I think you are mixing stuff up, so let's get this straight:

Intel released a new architecture and more EUs with Ivy Bridge. It was a good improvement over HD3000 and HD2000. No optimistic predictions.

Intel increased the number of EUs with Haswell to 20 and announced GT3 Iris Pro with 2X as many EUs, so that was obviously a big improvement. However, they decided to limit the number of SKUs and ignore LGA. I personally wasn't expecting anything from Haswell anyway: it would continue to use Gen7, so anything new was a bonus.

This year with Broadwell, you can indeed expect a great improvement in architecture + 4X as many EUs for Atom and 20% more for Core. Again: no optimistic predictions or limited availability.

Next year, Intel will release another architectural revision with Gen9, and the number of EUs may increase by 50%. So the desktop will suddenly go from GT2/20EUs to GT4e/96-144EUs in Q2'15, on top of Gen7.5 -> Gen9.


How do you come up with those numbers? In any case, they're wrong. Iris Pro is already better than any AMD APU.

And GT4 isn't mythical at all. It's a real SKU for LGA. It will blow any current IGP from Intel or AMD away with a much improved architecture and >2X as many EUs. Think of it as an improvement on the order of something like Fermi/Kepler -> Maxwell from 2,880 to 6,912 or 10,368 cores.


A moving target isn't a problem. IGPs are moving faster than the moving target. They have steadily been catching up in the last years and that isn't going to change. Instead, it seems that it will accelerate with Intel releasing GT3, GT4, Gen8 and Gen9 in 2015.

Pascal is a 2016 product according to Nvidia's roadmap. The best node that will be available is 20+FF, compared to Intel's 10nm with III-V. Don't forget that Intel is the richest semiconductor company and has the most advanced architecture, so why couldn't they do the same with their graphics architecture?

Actually, you are just reinforcing my point. None of these great "advances" from Broadwell or Skylake can be purchased yet; they are all projections and speculation. Show me an IGP on a sub-$200 CPU that is the equivalent of a GTX 750 Ti, and I will be convinced.

And I am talking about the desktop here, as per my other post. I see a much more competitive position for IGPs in mobile, where you cannot easily add a discrete card and you are much more limited by power and heat constraints. And I suppose you could say SB and IB IGPs were a "great advance" relative to the total crap IGPs before them, but compared to a discrete card they are very anemic.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
How do you come up with those numbers? In any case, they're wrong. Iris Pro is already better than any AMD APU.

Don't think so.

http://pclab.pl/art56975-4.html

And that is only using system memory against eDRAM; just imagine what will happen when AMD starts using HBM or even TSVs.

It must be extremely embarrassing to have a more advanced process, use eDRAM, have a bigger die (+eDRAM), and still lose in performance.
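
For a rough sense of scale, assuming first-gen HBM's quoted ~1Gb/s per pin on a 1024-bit stack:

```python
# Same bus-width * data-rate arithmetic as earlier in the thread:
kaveri_ddr3 = 128 / 8 * 1.6     #  25.6 GB/s - dual-channel DDR3-1600, what Kaveri has today
hbm_stack   = 1024 / 8 * 1.0    # 128.0 GB/s - a single first-generation HBM stack
print(hbm_stack / kaveri_ddr3)  # ~5x the bandwidth from just one stack
```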
 

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
Actually, you are just reinforcing my point. None of these great "advances" from Broadwell or Skylake can be purchased yet; they are all projections and speculation. Show me an IGP on a sub-$200 CPU that is the equivalent of a GTX 750 Ti, and I will be convinced.

And I am talking about the desktop here, as per my other post. I see a much more competitive position for IGPs in mobile, where you cannot easily add a discrete card and you are much more limited by power and heat constraints. And I suppose you could say SB and IB IGPs were a "great advance" relative to the total crap IGPs before them, but compared to a discrete card they are very anemic.

You are reinforcing my point that IGPs are catching up. I can't show you such an IGP because IGPs are still catching up. Just wait a few years.
 

Qwertilot

Golden Member
Nov 28, 2013
1,604
257
126
Actually, don't the chips in the XB1/PS4 pretty much qualify? OK, not available for sale to put in our own builds, but still a CPU/iGPU :)

A shame for current top-end iGPU performance that Broadwell-K has been delayed so long and that Kaveri lost the GDDR5M option.

As for people buying, I imagine they'll be semi-niche for a bit: people specifically after small machines and the like. I'd definitely like one vs my current setup. 4K is tempting in another direction of course, so goodness knows.
 

Techhog

Platinum Member
Sep 11, 2013
2,834
2
26
Actually, don't the chips in the XB1/PS4 pretty much qualify? OK, not available for sale to put in our own builds, but still a CPU/iGPU :)

A shame for current top-end iGPU performance that Broadwell-K has been delayed so long and that Kaveri lost the GDDR5M option.

As for people buying, I imagine they'll be semi-niche for a bit: people specifically after small machines and the like. I'd definitely like one vs my current setup. 4K is tempting in another direction of course, so goodness knows.

Even if they were available, the CPU would be so much of a bottleneck that it wouldn't matter.
 

Insert_Nickname

Diamond Member
May 6, 2012
4,971
1,695
136
How do you come up with those numbers? In any case, they're wrong. Iris Pro is already better than any AMD APU.

And GT4 isn't mythical at all. It's a real SKU for LGA. It will blow any current IGP from Intel or AMD away with a much improved architecture and >2X as many EUs. Think of it as an improvement on the order of something like Fermi/Kepler -> Maxwell from 2,880 to 6,912 or 10,368 cores.

Those are not made-up numbers; you should look at AnandTech's own Kaveri review:

http://www.anandtech.com/show/7677/amd-kaveri-review-a8-7600-a10-7850k/12
http://www.anandtech.com/show/7677/amd-kaveri-review-a8-7600-a10-7850k/13

For example:

[benchmark chart from the Kaveri review]


But notice both are running at BELOW 10FPS. Are you seriously suggesting gaming at sub-10FPS...? ;)

A moving target isn't a problem. IGPs are moving faster than the moving target. They have steadily been catching up in the last years and that isn't going to change. Instead, it seems that it will accelerate with Intel releasing GT3, GT4, Gen8 and Gen9 in 2015.

Pascal is a 2016 product according to Nvidia's roadmap. The best node that will be available is 20+FF, compared to Intel's 10nm with III-V. Don't forget that Intel is the richest semiconductor company and has the most advanced architecture, so why couldn't they do the same with their graphics architecture?

Going by that logic, IGPs should already be faster than dGPUs, since dGPUs are fabbed at 28nm while Intel's CPUs are fabbed at 22nm. Error 404: logic not found...

Actually, you are just reinforcing my point. None of these great "advances" from Broadwell or Skylake can be purchased yet; they are all projections and speculation. Show me an IGP on a sub-$200 CPU that is the equivalent of a GTX 750 Ti, and I will be convinced.

And I am talking about the desktop here, as per my other post. I see a much more competitive position for IGPs in mobile, where you cannot easily add a discrete card and you are much more limited by power and heat constraints. And I suppose you could say SB and IB IGPs were a "great advance" relative to the total crap IGPs before them, but compared to a discrete card they are very anemic.

Agreed.

Don't think so.

http://pclab.pl/art56975-4.html

And that is only using system memory against eDRAM; just imagine what will happen when AMD starts using HBM or even TSVs.

It must be extremely embarrassing to have a more advanced process, use eDRAM, have a bigger die (+eDRAM), and still lose in performance.

That's what I've been saying. Nothing is stopping AMD/NV from using the same advancements as Intel. Nvidia has already boosted performance significantly on the GTX 750 (Ti) by the "simple" expedient of adding 8x the on-chip cache compared to Kepler.
 

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
What does that mean? Why would TSMC actually want to accelerate a process node (with products in 2018/2019) to compete with a process that is ~3 years earlier?
 

PPB

Golden Member
Jul 5, 2013
1,118
168
106
It is hard to believe iGPs will ever catch up to dGPUs when the former have sub-100W thermal envelopes shared with CPUs, while the latter have up to 500W of juice to stretch their legs.
 

Erenhardt

Diamond Member
Dec 1, 2012
3,251
105
101
Don't worry, a certain user has the "counter-argument" that AMD and Nvidia "can't afford" 20nm and will stay on 28nm until 2016 or later. :D

The rumor is AMD and NV are going to roll back to 40nm to reduce costs... ;)
 

Techhog

Platinum Member
Sep 11, 2013
2,834
2
26
It is hard to believe iGPs will ever catch up to dGPUs when the former have sub-100W thermal envelopes shared with CPUs, while the latter have up to 500W of juice to stretch their legs.
The idea that people here have is that IGPs will become "good enough" for 99% of the market, like integrated sound cards. That has a decent chance of happening, but the people saying it'll happen in 5 years are crazy.
 

PPB

Golden Member
Jul 5, 2013
1,118
168
106
The problem is that "good enough" in the graphics department isn't the same as "good enough" in the CPU department. "Good enough" for a CPU is pretty much a fixed target that was more or less quick to reach. In the GPU department, "good enough" will always be a moving target, because even mainstream games tend to get more demanding over the years (WoW, for example, even though it has received optimizations, is still a lot more demanding graphics-wise today than it was in vanilla). So iGPs reaching "good enough" is a much tougher task than CPUs reaching "good enough".


I'm not even talking about your 7950; think about a 7870's worth of graphics power. Fitting that power into an iGP that is totally power-constrained and bandwidth-underfed isn't happening anytime soon.

That is why I'm a supporter of top-end desktop APUs getting up to 150W so they can fit juicier GPUs. That is, once HBM becomes a mainstream solution and is paired with those APUs.
 

ultimatebob

Lifer
Jul 1, 2001
25,134
2,450
126
The idea that people here have is that IGPs will become "good enough" for 99% of the market, like integrated sound cards. That has a decent chance of happening, but the people saying it'll happen in 5 years are crazy.

Five years sounds completely plausible for killing discrete GPUs in mobile devices. Hell... the Iris Pro 5200 on my MacBook Pro is already good enough to handle Civilization V and a few other games without any issue.

On the Windows desktop, I'd give it a decade. For the Linux desktop, we're basically already there... most Linux users aren't gamers, and most of the 3D gaming drivers for Linux suck.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
Five years sounds completely plausible for killing discrete GPUs in mobile devices. Hell... the Iris Pro 5200 on my MacBook Pro is already good enough to handle Civilization V and a few other games without any issue.

On the Windows desktop, I'd give it a decade. For the Linux desktop, we're basically already there... most Linux users aren't gamers, and most of the 3D gaming drivers for Linux suck.

I can play Civ with my HD4000 without any issues too; that doesn't mean iGPUs have killed dGPUs, even in laptops.

I am a big supporter of iGPUs both in mobile/laptops and on the desktop, but saying they will kill dGPUs takes a lot of inside knowledge of AMD's, Intel's and NVIDIA's plans for the next 5+ years, something that nobody here has.

There will be a time when iGPUs gain a big performance boost, around 2-3 years from now, but that doesn't mean they will start to kill dGPUs, especially on the desktop with 4K monitors.