Linus Torvalds: Discrete GPUs are going away


BFG10K

Lifer
Aug 14, 2000
22,709
3,002
126
Better process = higher density -> more transistors available + lower price; better performance; lower power consumption.
You keep repeating these statements without posting any hard data. Please show us the performance of these mythical parts.

[attached image: graphbacteria2.gif]
What's the source of these graphs you keep posting? Can you link to the articles?

Also, can you please clarify if you have any affiliation with Intel? Your avatar seems to imply that you might.

The unlocked GT3 Broadwell SKU with a much improved Gen8 architecture and 20% more cores/GT will be the first sign of that. Not much later, there will be a Skylake part with GT4 Gen9 graphics.
You repeatedly fail to demonstrate real-world performance of these non-existent parts.

How you and I interpret this information, however, is opinion and apparently we disagree. So I can't show you benchmarks.
And that's really where your entire argument falls down.

This is what we do differently. You extrapolate the past; I use the information I have about the Gen8/9 SKUs and manufacturing advances.
All you do is guess. Even your statements about how current iGPUs stack up to dGPUs in performance are woefully wrong, yet you expect us to believe your “predictions”.

Did you miss the recent news?
This was already discussed several pages back:
Also FLOPS is only one part of the performance equation. What's its texturing fillrate on FP render targets? How many MSAA samples per cycle can the ROPs perform? What (if any) hardware HSR/culling does it employ for rasterization?
The answer I got to this was basically “it’s not a GPU”.


If you think it is a GPU then please answer my statement, otherwise you should refrain from waving around meaningless FLOPS numbers in the context of graphics performance.

Again, Larrabee was posting great FLOPS numbers, yet it couldn’t even compete with obsolete dGPUs.
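To make the FLOPS-versus-fixed-function point concrete, here is a minimal back-of-envelope sketch. The specs in it are hypothetical placeholders, not numbers for any actual part discussed in this thread; the point is only that peak FLOPS, pixel fillrate, and texture fillrate come from different units, so a chip can lead on one metric while trailing badly on the others.

```python
# Back-of-envelope GPU throughput sketch. All spec numbers below are
# hypothetical placeholders, chosen only to show that peak FLOPS and
# fixed-function throughput (ROPs, TMUs) are independent quantities.

def peak_gflops(shader_cores, clock_ghz, flops_per_core_per_clock=2):
    """Theoretical peak compute: cores * clock * 2 FLOPs/clock (FMA)."""
    return shader_cores * clock_ghz * flops_per_core_per_clock

def pixel_fillrate_gpix_s(rops, clock_ghz):
    """Theoretical pixel fillrate: ROPs * clock (Gpixels/s)."""
    return rops * clock_ghz

def texel_fillrate_gtex_s(tmus, clock_ghz):
    """Theoretical texture fillrate: TMUs * clock (Gtexels/s)."""
    return tmus * clock_ghz

# Hypothetical compute-heavy part: many ALUs, few ROPs/TMUs.
print(peak_gflops(1024, 1.0), pixel_fillrate_gpix_s(8, 1.0), texel_fillrate_gtex_s(32, 1.0))
# Hypothetical graphics-balanced part: fewer ALUs, more fixed-function units.
print(peak_gflops(640, 1.0), pixel_fillrate_gpix_s(32, 1.0), texel_fillrate_gtex_s(80, 1.0))
```

The first part wins easily on raw GFLOPS, but the second pushes four times the pixels per clock, which is why FLOPS alone says little about graphics performance.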
 

f1sherman

Platinum Member
Apr 5, 2011
2,243
1
0

Both Fudzilla and PC Partner Group are funny.

PC Partner blames the Bitcoin bubble burst for its profit decline.

Fudzilla is no better. Today they published an article titled "PC Partner Group issues profit warning", although the correct title should be "PROFIT WARNING FOR THE SIX MONTHS ENDED 30 JUNE 2014", as per the company's 29.07 announcement.

Meaning they had already taken the hit and blamed Bitcoin for having to cut their prices.
Where are these price cuts anyway (Manli, Zotac, Inno3D; must be Asia, because Inno3D prices are top-dollar as always), and why is this not reflected in NVDA's Q1 results?
 

f1sherman

Platinum Member
Apr 5, 2011
2,243
1
0
The Bitcoin craze screwed up the market this year. Too many cheap used cards are out there, and the companies involved aren't power players in the AIB market, so it's really hitting them hard.

What companies? This is PC Partner Group (Manli, Zotac, Inno3D) with their profit warning. They are no tiny player by any measure.

What about EVGA and PNY? Why are they insulated, and where are these alleged price cuts?
 

exar333

Diamond Member
Feb 7, 2004
8,518
8
91
The Bitcoin thing (the sales-decline excuse) is getting old. No one was upset when they were selling cards $200-300 above MSRP last year; they should have prepared better for when the BC market would dry up. Dedicated miners were up and coming anyway, so the end was coming.
 

DooKey

Golden Member
Nov 9, 2005
1,811
458
136
What companies? This is PC Partner Group (Manli, Zotac, Inno3D) with their profit warning. They are no tiny player by any measure.

What about EVGA and PNY? Why are they insulated, and where are these alleged price cuts?

I suspect that at least EVGA gets better discounts on product from NV than any other AIB, since they are a top mover of NV cards. That, along with their reputation for great CS, probably helps them.

I may have been mistaken about PC Partner Group, but those brands certainly aren't big in the US and I'm not sure how big they are overseas.

Anyway, I'm just guessing like everyone else, but LTC/scrypt wasn't good for the video card market as a whole because it brought in lots of players that never would have been in the video card market otherwise. It blew up into a huge bubble, and now lots of high-end used stuff is out there, so it's probably hurting every AIB in one way or another.
 

f1sherman

Platinum Member
Apr 5, 2011
2,243
1
0
I think it's the other way around.

An Nvidia-exclusive partner having tremendous issues over the scrypt bubble and the supposed price cuts does not sound right.

Clearly there is something specific about PC Partner Group (besides not being able to issue a correct assessment) if everyone else, including NV, is doing fine.
Wrong business decisions on their part, or perhaps something about Asia?
Manli/Zotac are big there; Inno3D is Euro-based and they are definitely not cutting their prices.

One other thing: I'm 100% sure that Manli/Zotac have a lower share of high-end GPUs in their mix than, say, Inno3D or EVGA, or almost any other (NV) AIB,
and higher-end cards should be affected the most by the scrypt bubble. So WTH...

Obviously there is much more to PC Partner's profit slump than the scrypt bubble.
 

Ajay

Lifer
Jan 8, 2001
16,094
8,112
136
I think it's the other way around.

An Nvidia-exclusive partner having tremendous issues over the scrypt bubble and the supposed price cuts does not sound right.

Clearly there is something specific about PC Partner Group (besides not being able to issue a correct assessment) if everyone else, including NV, is doing fine.
Wrong business decisions on their part, or perhaps something about Asia?
Manli/Zotac are big there; Inno3D is Euro-based and they are definitely not cutting their prices.

I agree, especially since the crypto-coin craze affected AMD AIBs much more than NV's. Although we are getting pretty OT here.
 

Techhog

Platinum Member
Sep 11, 2013
2,834
2
26
The scrypt bubble burst brought AMD cards back down to MSRP, forcing Nvidia card price cuts to compete.
 

Techhog

Platinum Member
Sep 11, 2013
2,834
2
26
What Nvidia price cuts?
They have been at MSRP since forever. Always have been, always will be.
Retailers have made them a little cheaper, though it's mostly the 780 Ti that was affected. I guess that wouldn't really affect profits in a big way.
 

f1sherman

Platinum Member
Apr 5, 2011
2,243
1
0
Retailers have made them a little cheaper, though it's mostly the 780 Ti that was affected. I guess that wouldn't really affect profits in a big way.

Indeed it wouldn't.
Except maybe for EVGA or Inno3D, but not for the Asia/OEM-focused Manli/Zotac.

Anyway, let's get back on topic, as suggested by Ajay ;)
 

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
http://jonpeddie.com/back-pages/comments/discretes-are-deadlong-live-discretes/

https://jonpeddie.com/download/media/slides/Dynamics_in_GPU_market.pdf

Many would be surprised to see that discretes have never been in decline if you look at the whole picture. Discretes won't go away for a long time to come, because there will always be a use for them unless something cheaper and faster can replace them. IGPs will NEVER offer the performance benefits a discrete will give you unless there's some big breakthrough, which IMO is close to impossible within the next ~5 years.
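For a rough sense of why that gap persists, here is a minimal peak-bandwidth sketch; the memory configurations are generic examples rather than specs quoted in this thread. An iGPU sharing dual-channel DDR3 with the CPU has only a fraction of the bandwidth a mid/high-end discrete card gets from dedicated GDDR5.

```python
# Peak memory bandwidth sketch: why a shared-DDR3 iGPU trails a GDDR5 dGPU.
# The configurations below are generic illustrations, not specific products.

def bandwidth_gb_s(bus_width_bits, transfer_rate_mt_s):
    """Peak bandwidth = bus width (bits) * transfer rate (MT/s) / 8 bits-per-byte."""
    return bus_width_bits * transfer_rate_mt_s / 8 / 1000

# iGPU sharing dual-channel DDR3-2133 (2 x 64-bit channels) with the CPU:
igpu_bw = bandwidth_gb_s(128, 2133)   # ~34 GB/s, and the CPU competes for it
# Discrete card with a 256-bit GDDR5 interface at 7 GT/s (7000 MT/s):
dgpu_bw = bandwidth_gb_s(256, 7000)   # ~224 GB/s, dedicated to the GPU

print(f"iGPU shared DDR3:   {igpu_bw:.0f} GB/s")
print(f"dGPU 256-bit GDDR5: {dgpu_bw:.0f} GB/s")
```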
 

Blitzvogel

Platinum Member
Oct 17, 2010
2,012
23
81
http://jonpeddie.com/back-pages/comments/discretes-are-deadlong-live-discretes/

https://jonpeddie.com/download/media/slides/Dynamics_in_GPU_market.pdf

Many would be surprised to see that discretes have never been in decline if you look at the whole picture. Discretes won't go away for a long time to come, because there will always be a use for them unless something cheaper and faster can replace them. IGPs will NEVER offer the performance benefits a discrete will give you unless there's some big breakthrough, which IMO is close to impossible within the next ~5 years.

That could be HBM and die stacking, but depending on the products offered, one could still buy a much faster graphics card with the actual amount of VRAM needed to be useful. In the desktop market, Intel would be stupid to build big iGPs into all their CPUs, because it would be expensive and nobody wants to pay extra for all that IGP when they'll buy a graphics card anyway. On the consumer flip side, no general consumer would want to pay for an expensive GT4e-equipped CPU in a back-to-school notebook for little Timmy.

While there is value in having the backup IGP, Intel has to consider what the IGP costs in terms of die area and dollars, because ultimately that cost gets passed to the OEMs and the consumer, who likely doesn't know or care. I think AMD has failed to realize this to a certain degree, though much of their idea was to use GPGPU to outdo their competition. It's arguable that a smaller graphics array (128 GCN SPs) on a dual-module part would be much more successful, because OEMs could deliver somewhat competitive CPU performance without having to charge consumers for the bigger graphics area. Consumers who want graphics could still go with dGPU solutions. If AMD could actually develop a competitive core with their upcoming architecture, they could come up from behind and smack Intel by offering IGP-less dies that could be smaller than and similarly capable to Intel's products. Products like the Athlon 750K are a good example (even if it isn't technically GPU-less).
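As a rough illustration of the die-area/cost trade-off described above, here is the standard dies-per-wafer estimate with entirely hypothetical inputs; the wafer cost, die sizes, and yield are placeholders, not actual Intel or AMD figures.

```python
import math

# Rough cost-per-die sketch using the classic dies-per-wafer approximation.
# All inputs are hypothetical placeholders for illustration only.

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Usable dies on a circular wafer (area term minus an edge-loss term)."""
    r = wafer_diameter_mm / 2
    return math.floor(math.pi * r**2 / die_area_mm2
                      - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def cost_per_good_die(die_area_mm2, wafer_cost_usd, yield_fraction):
    """Wafer cost spread over the good dies it yields."""
    return wafer_cost_usd / (dies_per_wafer(die_area_mm2) * yield_fraction)

# Hypothetical: the same CPU with a small IGP versus a large IGP block.
small_die = 160   # mm^2, CPU cores + small IGP (placeholder)
big_die   = 240   # mm^2, CPU cores + big IGP (placeholder)
wafer_cost, yld = 5000, 0.80   # placeholder wafer cost (USD) and yield

print(f"small die: ${cost_per_good_die(small_die, wafer_cost, yld):.2f} per good die")
print(f"big die:   ${cost_per_good_die(big_die, wafer_cost, yld):.2f} per good die")
```

Even with these made-up numbers, the extra graphics area raises the per-die cost noticeably, and that delta is what ends up being passed to OEMs and consumers.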
 

Insert_Nickname

Diamond Member
May 6, 2012
4,971
1,695
136
While there is value in having the backup IGP, Intel has to consider what the IGP costs in terms of die area and dollars, because ultimately that cost gets passed to the OEMs and the consumer, who likely doesn't know or care. I think AMD has failed to realize this to a certain degree, though much of their idea was to use GPGPU to outdo their competition. It's arguable that a smaller graphics array (128 GCN SPs) on a dual-module part would be much more successful, because OEMs could deliver somewhat competitive CPU performance without having to charge consumers for the bigger graphics area. Consumers who want graphics could still go with dGPU solutions. If AMD could actually develop a competitive core with their upcoming architecture, they could come up from behind and smack Intel by offering IGP-less dies that could be smaller than and similarly capable to Intel's products. Products like the Athlon 750K are a good example (even if it isn't technically GPU-less).

Given how well the "tiny" 128 GCN cores on Kabini perform, I'm inclined to agree with you. A smaller die with two Steamroller modules (4 "cores") and a 128 GCN-core IGP could have real potential in the low end of the market. I think one of AMD's main problems right now is that they're more or less forced to sell a relatively big die even for the single-module/128 GCN-core Kaveri variants. That can't be good for profit margins...

On the flip side, AMD should offer a 3 Steamroller module (6 "cores") APU with a 128 GCN-core IGP. That way you'd have a reasonably performing high-end "enthusiast"/entry-level workstation APU. Since enthusiast systems are more likely to be paired with a discrete card, the lower-performing IGP won't be a handicap. As for workstations, they're more likely to be focused on CPU performance, with the IGP just being required to drive the video outputs and assist with GPU-compute functions, a task the "tiny" 128 GCN-core IGP handles remarkably well.
 

Blitzvogel

Platinum Member
Oct 17, 2010
2,012
23
81
On the flip side, AMD should offer a 3 Steamroller module (6 "cores") APU with a 128 GCN-core IGP. That way you'd have a reasonably performing high-end "enthusiast"/entry-level workstation APU. Since enthusiast systems are more likely to be paired with a discrete card, the lower-performing IGP won't be a handicap. As for workstations, they're more likely to be focused on CPU performance, with the IGP just being required to drive the video outputs and assist with GPU-compute functions, a task the "tiny" 128 GCN-core IGP handles remarkably well.

It would be a decent desktop product. Though it would be nice if they had a newer and more efficient quad module.
 

Insert_Nickname

Diamond Member
May 6, 2012
4,971
1,695
136
It would be a decent desktop product. Though it would be nice if they had a newer and more efficient quad module.

I would think you could cram 3 Steamroller modules + a smallish (128, perhaps 256 shader) IGP into a Kaveri-sized die. Doing a 4-module chip + IGP would likely require something like a Piledriver-sized die, but it could be possible if you lose the 8MB L3 cache.

It's a nice idea. One can dream... :)
 

sao123

Lifer
May 27, 2002
12,653
205
106
Where are you getting that the IHVs are even considering this? They are without a doubt incorporating the GPU/CPU on a single chip. Both AMD and Intel are working very hard to make strides in this direction. Sockets for GPUs on the mobo? I've seen no mention of this at all.


The NV Pascal architecture brief mentions exactly what I am talking about, in the form of NVLink: a bus that replaces PCIe 3.0 and a new socket on motherboards for a GPU.

http://www.anandtech.com/show/7900/nvidia-updates-gpu-roadmap-unveils-pascal-architecture-for-2016


But the rabbit hole goes deeper. To pull off the kind of transfer rates NVIDIA wants to accomplish, the traditional PCI/PCIe style edge connector is no good; if nothing else the lengths that can be supported by such a fast bus are too short. So NVLink will be ditching the slot in favor of what NVIDIA is labeling a mezzanine connector, the type of connector typically used to sandwich multiple PCBs together (think GTX 295). We haven’t seen the connector yet, but it goes without saying that this requires a major change in motherboard designs for the boards that will support NVLink. The upside of this however is that with this change and the use of a true point-to-point bus, what NVIDIA is proposing is for all practical purposes a socketed GPU, just with the memory and power delivery circuitry on the GPU instead of on the motherboard.[...]

NVIDIA’s Pascal test vehicle is one such example of what a card would look like. We cannot see the connector itself, but the basic idea is that it will lay down on a motherboard parallel to the board (instead of perpendicular like PCIe slots), with each Pascal card connected to the board through the NVLink mezzanine connector. Besides reducing trace lengths, this has the added benefit of allowing such GPUs to be cooled with CPU-style cooling methods (we’re talking about servers here, not desktops) in a space efficient manner. How many NVLink mezzanine connectors available would of course depend on how many the motherboard design calls for, which in turn will depend on how much space is available.
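For context on why NVIDIA wants a new connector at all, here is a minimal bandwidth comparison. The PCIe 3.0 number follows from the spec (8 GT/s per lane with 128b/130b encoding); the NVLink figure is just the rough 80+ GB/s aggregate NVIDIA quoted at the announcement, taken here as an assumption rather than a measured result.

```python
# Rough interconnect bandwidth comparison. PCIe 3.0 figures follow from the
# spec (8 GT/s per lane, 128b/130b encoding); the NVLink number is only the
# ballpark aggregate NVIDIA quoted at the 2014 announcement, used as an
# assumption for illustration.

def pcie3_bw_gb_s(lanes):
    """PCIe 3.0: 8 GT/s per lane * 128/130 encoding / 8 bits-per-byte, per direction."""
    return lanes * 8 * (128 / 130) / 8

pcie3_x16 = pcie3_bw_gb_s(16)   # ~15.75 GB/s per direction
nvlink_est = 80                 # GB/s, announced lower-bound target (assumption)

print(f"PCIe 3.0 x16: ~{pcie3_x16:.2f} GB/s per direction")
print(f"NVLink (announced target): ~{nvlink_est} GB/s or more")
```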
 
Sep 29, 2004
18,656
67
91
Anyone remember math coprocessors?

I do. All that tech eventually went onto the CPU die. As soon as companies can move GPUs onto the CPU die, it will happen. And it won't be a 10-year transition. It will happen in about a year. Die-hards will still go discrete a few years longer, but the mass market won't get dGPUs.
 

NTMBK

Lifer
Nov 14, 2011
10,412
5,680
136
Who exactly is going to support NVLink? IBM and NVidia. We won't see it on x86.