PSA: GPU prices are only going up

jpiniero

Lifer
Oct 1, 2010
16,526
7,031
136
So, let's see...

- The cost of the chips is only going up. $/transistor at 14FF and TSMC 16FF+ isn't going to be much cheaper than at TSMC 28 nm, and could be higher, especially initially. Design costs are skyrocketing as well. Since adding transistors/cores has been the primary way the two have increased performance, that's obviously a big problem. This won't be reversed until (if?) EUV arrives, so it could be a long time. I actually think people have been lucky that TSMC 28 nm has been such a legendary node; chips the size of Titan X/980 Ti would not normally be doable.
- Volumes are going down. dGPUs are going to get wiped out in the low end... and that is where most of the money has been made. They need to make money elsewhere, and making the high end really high end is probably the only option.
- nVidia has shown that they can get people to pay higher prices, and AMD is hurting so badly they can't afford to be aggressive on price anymore.

I do think nVidia should be OK given their mindshare, as long as PC gaming doesn't completely implode thanks to mobile, or Intel doesn't block dGPUs by eliminating PCIe lanes from the mainstream line. But it's going to get a lot more expensive, especially once AMD is gone. My thinking is that you will see small dies with slightly higher transistor counts and slightly higher prices for the first Pascal/AI iteration, and then things will only get bigger and more expensive after that.
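The $/transistor point can be put in rough numbers. This is a toy sketch with purely illustrative wafer prices, yields, and densities (none of these are published figures): a new node roughly doubles density, but if wafer cost and yield move against you, cost per transistor barely improves.

```python
# Illustrative cost-per-transistor comparison between a mature and a new node.
# All numbers are rough assumptions for the sake of the argument, not quotes.

def cost_per_mtransistor(wafer_cost, good_dies_per_wafer, mtransistors_per_die):
    """Cost in dollars per million transistors for one chip design."""
    die_cost = wafer_cost / good_dies_per_wafer
    return die_cost / mtransistors_per_die

# Hypothetical 28 nm GPU: cheaper wafers, lower density (~5B transistors/die).
old = cost_per_mtransistor(wafer_cost=5000, good_dies_per_wafer=100,
                           mtransistors_per_die=5000)

# Hypothetical 16FF GPU: ~2x density, but pricier wafers and worse yield.
new = cost_per_mtransistor(wafer_cost=9000, good_dies_per_wafer=90,
                           mtransistors_per_die=10000)

print(f"28nm: ${old:.4f}/Mtransistor, 16FF: ${new:.4f}/Mtransistor")
```

With these made-up inputs the two nodes land at exactly the same $/Mtransistor: twice the transistors per die, but the wafer-cost and yield penalties eat the entire density gain, which is the scenario the post is describing.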
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
The question is how much life is left in GPUs. It's already going to be very painful economically on 14/16nm, and I can easily see four or more refreshes on 14/16nm.

AMD's future is semi-custom. nVidia's looks to be a combination of automotive, cloud, and neural networks.

But until that end, costs will only rise.
 

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
Volumes are going down. dGPUs are going to get wiped out in the low end... and that is where most of the money has been made.

Do you have a citation for that claim? Integrated GPUs aren't serious competition for anything over $99... even the new Broadwell Iris Pro struggles to match a GTX 750. Were AMD and Nvidia really making "most of the money" on ultra-low-end trash? I was under the impression that the sweet spot of discrete GPUs was roughly $150-$350. Nvidia got massive sales on the GTX 970, much more than they expected.

It won't be until the across-the-board introduction of 2nd-generation HBM that an integrated GPU will be able to match a ~$200 discrete GPU from today (GTX 960 or R9 285). And that's being generous; the GTX 960 is nearly 3 billion transistors, and fitting something like that on an iGPU is going to be a heavy lift, even with a smaller process. And by the time that happens, the $200 midrange discrete GPUs will be around the performance of a modern high-end card... and newer games will be designed to take advantage of that GPU horsepower. It's an endless treadmill, and iGPUs can never really catch up.
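The treadmill claim can be made concrete with a toy model. The transistor budgets and growth rates below are pure assumptions for illustration: even if the iGPU's graphics budget grows faster per generation than the midrange dGPU target, closing a 3x starting gap takes many generations, and with equal growth rates it never closes at all.

```python
# Back-of-the-envelope "treadmill": how many generations until an iGPU's
# transistor budget reaches a moving midrange-dGPU target?
# All starting values and growth rates are illustrative assumptions.

igpu_budget = 1.0e9      # assumed transistors available to graphics on an iGPU today
dgpu_midrange = 3.0e9    # roughly a GTX 960-class chip
igpu_growth = 1.8        # assumed per-generation iGPU growth (new node + bigger die share)
dgpu_growth = 1.5        # assumed per-generation midrange-target growth

generations = 0
while igpu_budget < dgpu_midrange and generations < 20:
    igpu_budget *= igpu_growth
    dgpu_midrange *= dgpu_growth
    generations += 1

print(generations)  # generations needed to catch the moving target
```

With a 1.8x vs 1.5x advantage it still takes seven generations to catch up; set the two growth rates equal and the loop hits its cap without ever converging, which is the "endless treadmill" case.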

They need to make money elsewhere; and making the high end really high end is probably the only option.

One error here is that you're only focused on the consumer gaming cards. Nvidia makes a huge chunk of their revenue on the professional Tesla and Quadro lines, and AMD's slides at the Financial Analyst Day indicate that they expect to see a large part of their GPU revenue growth in the FirePro arena. Professionals don't skimp on their tools; they will pay five times as much to get the best, rather than something that is simply "good enough, sort of".

We already see a similar pattern in the CPU market. Intel's HEDT CPU line wouldn't be viable on its own; it is a byproduct of server development. Since Intel already spent the money to develop big, many-core CPUs with no iGPUs for the server market, offering the same product to high-end desktop users lets them make additional profit with few additional R&D costs. This is already happening to some extent in the high-end GPU market. For instance, both AMD's Tahiti and Nvidia's GK110 were compute-first designs, but they were both able to win the performance crown when initially released to the gaming market. It's likely that we will see more such designs in the future.

But it's going to get a lot more expensive esp once AMD is gone.

Don't count your chickens before they're hatched. AMD isn't dead yet, and if they do wind up going under, someone will buy their assets and continue in their stead.

Frankly, this post strikes me as just more of the usual "everyone except Intel sucks" fanboyism that we see so often here. It would be interesting to find out how many of the people pushing this line have a financial position in Intel, and/or are shorting AMD stock.
 

exar333

Diamond Member
Feb 7, 2004
8,518
8
91
Not to mention -3.5% is hardly "tanking". lol.

Also, it didn't help that there are no new Tesla cards until Pascal, and Maxwell didn't bring much at all for Quadro. We'll likely see some big sales when these really get refreshed.
 

Shehriazad

Senior member
Nov 3, 2014
555
2
46
In 10 years we will all look back and awkwardly smile at the youngins while trying to explain that people actually bought a PC in many pieces that needed to be supplied with tons of energy and cost a ton of cash.

But eh...short term GPU prices might go up a little...but first energy efficiency will go up with 14/16nm (reducing need for higher end power supplies and also cutting power bills at least a little bit, both are cost factors)...and then dGPUs will slowly start phasing out entirely for anything that isn't super enthusiast and supercomputer.

I don't even think AMD or Nvidia will ever produce mid- or low-end dGPUs after 10nm (except for the obvious super-enthusiast/supercomputer solutions).

Next stop..SoCville.

At which point GPUs will be unimportant...because in 10 years SoC machines will be so small and efficient (yet powerful) that current GPUs will probably look laughable. 10 years tops...calling it.

I'd say imagine stronger-than-Skylake performance with Titan X GPU power in a USB-stick format for the average user... and for the higher end there are still SFF PCs (lol)... either way, in the long run rising GPU costs will really be a non-topic, as the need for power will be going down for the majority of people.


Edit: Whoa I derped and wrote PSU instead of GPU...what a nice slip XD
 
Last edited:

jpiniero

Lifer
Oct 1, 2010
16,526
7,031
136
Do you have a citation for that claim? Integrated GPUs aren't serious competition for anything over $99... even the new Broadwell Iris Pro struggles to match a GTX 750. Were AMD and Nvidia really making "most of the money" on ultra-low-end trash?

Historically it's always been that way. Now I imagine the most money is being made on the OEM mobile parts, like the 940M/950M/960M. nVidia's response to this is to basically push the high end with higher prices, and they've been relatively successful in that.
 

blastingcap

Diamond Member
Sep 16, 2010
6,654
5
76
From that article: "Despite the disappointing numbers, the Q1 results are not likely indicative of any long-term trend. Rather, the poorer than expected shipment figures were in part a consequence of an atypically strong fourth quarter."

No way dude, you are telling me you actually read the report and analyzed the numbers? Why are you making so much sense??

Also, OP, chill out. Bang for buck will continue to go up even if average selling prices for dGPUs goes up.
 

rgallant

Golden Member
Apr 14, 2007
1,361
11
81
So, let's see...

- The cost of the chips is only going up. $/transistor at 14FF and TSMC 16FF+ isn't going to be much cheaper than at TSMC 28 nm, and could be higher, especially initially. Design costs are skyrocketing as well. Since adding transistors/cores has been the primary way the two have increased performance, that's obviously a big problem. This won't be reversed until (if?) EUV arrives, so it could be a long time. I actually think people have been lucky that TSMC 28 nm has been such a legendary node; chips the size of Titan X/980 Ti would not normally be doable.
- Volumes are going down. dGPUs are going to get wiped out in the low end... and that is where most of the money has been made. They need to make money elsewhere, and making the high end really high end is probably the only option.
- nVidia has shown that they can get people to pay higher prices, and AMD is hurting so badly they can't afford to be aggressive on price anymore.

I do think nVidia should be OK given their mindshare, as long as PC gaming doesn't completely implode thanks to mobile, or Intel doesn't block dGPUs by eliminating PCIe lanes from the mainstream line. But it's going to get a lot more expensive, especially once AMD is gone. My thinking is that you will see small dies with slightly higher transistor counts and slightly higher prices for the first Pascal/AI iteration, and then things will only get bigger and more expensive after that.
just saying

Not sure why volumes would go down; $850+ CAD for a GTX 980 Ti seems like a steal [for someone], and it's not 2x the GTX 980.

I'll run my 780 SLI into the ground, then look around before paying for two of those to get the 2x upgrade. [for more VRAM; nV nerfed their high-end Kepler cards]

Sucks not to live in the USA, but most of us gamers don't.
So no sale for me.

http://www.ncix.com/detail/evga-geforce-gtx-980-ti-b2-109571.htm
Also that comes with +13% tax, plus blocks in my case.
 
Last edited:

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
just saying

Not sure why volumes would go down; $850+ CAD for a GTX 980 Ti seems like a steal [for someone], and it's not 2x the GTX 980.

I'll run my 780 SLI into the ground, then look around before paying for two of those to get the 2x upgrade. [for more VRAM; nV nerfed their high-end Kepler cards]

Sucks not to live in the USA, but most of us gamers don't.
So no sale for me.

http://www.ncix.com/detail/evga-geforce-gtx-980-ti-b2-109571.htm
Also that comes with +13% tax, plus blocks in my case.

Volumes go down because IGPs eat from below. And that's important revenue that feeds the R&D pipeline for dGPU makers. That's also why dGPUs won't disappear due to performance, but due to lack of cash flow.
 

alcoholbob

Diamond Member
May 24, 2005
6,380
448
126
We've been on 28nm since what, 2011? And 16FF isn't expected to roll out until late 2016/early 2017? So we can probably expect at least a 5-6 year period between 16FF+ and 10nm GPUs.

That should be anywhere from 3-4 generations of Nvidia refreshes at 16FF+. AMD might only have 2 generations but their architectures tend to age better so they will still be within 10% of Nvidia's newest tech regardless.
 
Last edited:

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
Historically it's always been that way. Now I imagine the most money is being made on the OEM mobile parts, like the 940M/950M/960M. nVidia's response to this is to basically push the high end with higher prices, and they've been relatively successful in that.

Not to be confrontational, but do you have any source except your imagination for these claims? I don't doubt that mobile revenue is an important source for Nvidia, but the idea that bottom-of-the-barrel sub-$100 cards have "always" been the primary revenue generator just doesn't ring true. Are there any hard numbers available that could indicate who's right?
 

AnandThenMan

Diamond Member
Nov 11, 2004
3,991
626
126
I think the only way to know for sure is to dig into the financial statements and find the margin breakdowns and volumes. I do know that Nvidia makes a disproportionate share of its profit from high-end compute products, but at the same time I've read that Nvidia could not make only the high-end stuff, because the mainstream parts allow them to effectively monetize the R&D.

Put another way, the cost of developing a GPU would not be viable if all that was sold were high-end and/or professional products.
 

el etro

Golden Member
Jul 21, 2013
1,584
14
81
Surely the GPU market is shrinking, but design costs at newer processes will get cheaper as the process matures. Even if GPUs are not manufactured on the best node available at the time, there will still be a good performance uplift every generation. The integration of technologies can be bad for GPUs (iGPUs, for example), but can be good too (HBM/interposers, and the card's total manufacturing price going down in the future).
 

Borealis7

Platinum Member
Oct 19, 2006
2,901
205
106
We've been on 28nm since what, 2011? And 16FF isn't expected to roll out until late 2016/early 2017? So we can probably expect at least a 5-6 year period between 16FF+ and 10nm GPUs.
you can buy 14nm FinFET GPUs today...they're called "Iris" and they come with an integrated Intel CPU! :)
 
Last edited:

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
Surely the GPU market is shrinking, but design costs at newer processes will get cheaper as the process matures. Even if GPUs are not manufactured on the best node available at the time, there will still be a good performance uplift every generation. The integration of technologies can be bad for GPUs (iGPUs, for example), but can be good too (HBM/interposers, and the card's total manufacturing price going down in the future).

No matter how you turn it, a smaller node will always cost more than the previous one in design cost (14FF is ~4x over 28nm), and the R&D for a new (better) uarch will always be higher. And until EUV, a smaller node will always cost more per transistor.

You can bet that AMD and nVidia don't go to 14/16nm willingly; they are forced to because 28nm is exhausted with new uarchs and 600mm² chips. They are both in an economic death spiral in this respect.

The shrinking volume means less money to pay for R&D and design costs. R&D-wise, AMD already looks to be out of the game completely. And who knows how nVidia feels, but we do know they are going half-precision with future uarchs.

We might as well start to accept the IGP future.
 
Last edited:

Azix

Golden Member
Apr 18, 2014
1,438
67
91
In 10 years we will all look back and awkwardly smile at the youngins while trying to explain that people actually bought a PC in many pieces that needed to be supplied with tons of energy and cost a ton of cash.

But eh...short term GPU prices might go up a little...but first energy efficiency will go up with 14/16nm (reducing need for higher end power supplies and also cutting power bills at least a little bit, both are cost factors)...and then dGPUs will slowly start phasing out entirely for anything that isn't super enthusiast and supercomputer.

I don't even think AMD or Nvidia will ever produce mid- or low-end dGPUs after 10nm (except for the obvious super-enthusiast/supercomputer solutions).

Next stop..SoCville.

At which point GPUs will be unimportant...because in 10 years SoC machines will be so small and efficient (yet powerful) that current GPUs will probably look laughable. 10 years tops...calling it.

I'd say imagine stronger-than-Skylake performance with Titan X GPU power in a USB-stick format for the average user... and for the higher end there are still SFF PCs (lol)... either way, in the long run rising GPU costs will really be a non-topic, as the need for power will be going down for the majority of people.


Edit: Whoa I derped and wrote PSU instead of GPU...what a nice slip XD

iGPUs are unlikely to be enough unless we end up with some massive improvements in chip manufacturing. Even at 7nm they are unlikely to match mid-range GPUs.

No way dude, you are telling me you actually read the report and analyzed the numbers? Why are you making so much sense??

Also, OP, chill out. Bang for buck will continue to go up even if average selling prices for dGPUs goes up.

If we end up with an nVidia monopoly, you can kiss bang for buck goodbye. The only reason we'd keep it is if they felt sorry for us.
 

Mondozei

Golden Member
Jul 7, 2013
1,043
41
86
Linus Torvalds has been predicting the death of the dGPU for some time. I mean the trend is pretty unambiguous; it's all about integration. That's what AMD's Zen is all about.

While I share some of the skepticism of JDG1980, I would caution against assuming that past is prologue. While you're correct that the OP makes a series of unsubstantiated claims, this claim of yours is equally speculative:

It's an endless treadmill, and iGPUs can never really catch up.

There have been massive improvements in recent years, and we both know that the big driver in iGPUs will be AMD, not Intel.

Having an iGPU at 960-esque levels will be enough for many gamers even two years from now. And if you can save hundreds of dollars in the process, what is there to lose? Sure, the high end will likely not change as fast, but even where you can get higher maximum performance, people also look at efficiency. Not every supercomputer needs to be the fastest in the world; there are also cases where "fast enough" is acceptable provided you get a large efficiency boost.

And this is also what Nvidia is aiming for with the NVLink technology; even if it is aimed at dGPUs in the short term, the longer-term horizon here, too, is crystal clear.

So while I share some skepticism of the simplistic claims made earlier, I'm not sure the budget-minded dGPU will be around for that much longer, and after that, the middle tier. For the high end it will be slower, both for consumers and the enterprise market, but there too the trendline is clear.
 

Shehriazad

Senior member
Nov 3, 2014
555
2
46
iGPUs are unlikely to be enough unless we end up with some massive improvements in chip manufacturing. Even at 7nm they are unlikely to match mid-range GPUs.


Not sure how you back up such a claim. While "onboard" graphics have existed for some time, iGPUs haven't actually been developed seriously for very long... (I'd like to think that this is mainly because the nodes were too big in the past, not leaving any room for those chips).

iGPUs/APUs also have some distinct potential advantages that can, in future generations, lead to advantages dGPUs just won't "quite" achieve.

The iGPU sits directly on die with the CPU... it can share resources more directly.
Once DDR4 and on-die HBM/HMC for iGPUs become mainstream, bottlenecking will no longer be an issue.

Every coming generation of iGPUs will make massive performance jumps. Just look at what AMD managed ONE AND A HALF years ago.

They essentially managed to put an R7 250 in their quad-core CPU @ 28nm.

CPUs will be fine @ 4 cores/8 threads for years to come... so that portion will only get smaller as time goes on. The GPU budget on the chip will literally skyrocket.

And once the memory bandwidth problems are taken care of... there is NOTHING holding anyone back from putting a high-end GPU right next to that CPU.


If AMD still had the R&D funds, I'm pretty sure we would already see them phasing out anything but high-end GPUs and making anything at 285 level or below into an APU... which is quite easily possible at 16/14nm.
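The bandwidth part of this argument is the easiest to quantify. These are peak theoretical figures (the helper function is just for illustration): dual-channel DDR4 feeds an iGPU roughly 38 GB/s, while a GTX 960's GDDR5 gets ~112 GB/s and Fiji's HBM1 ~512 GB/s, which is why on-package memory is the pivotal piece here.

```python
# Peak theoretical memory bandwidth: system DDR4 vs. dGPU GDDR5 vs. HBM1.
# Simple bus-width x transfer-rate arithmetic, ignoring caches and overheads.

def bandwidth_gbs(bus_bits, transfers_per_sec):
    """Peak bandwidth in GB/s for a bus of the given width and transfer rate."""
    return bus_bits / 8 * transfers_per_sec / 1e9

ddr4_dual = bandwidth_gbs(128, 2.4e9)    # dual-channel DDR4-2400: ~38 GB/s
gddr5_960 = bandwidth_gbs(128, 7.0e9)    # GTX 960's 128-bit 7 GT/s GDDR5: ~112 GB/s
hbm1_fiji = bandwidth_gbs(4096, 1.0e9)   # Fiji's 4-stack HBM1 (4096-bit): ~512 GB/s

print(f"{ddr4_dual:.1f} {gddr5_960:.1f} {hbm1_fiji:.1f}")
```

The ~3x gap between dual-channel DDR4 and even a midrange card's GDDR5 is the bottleneck the post refers to; an HBM stack on the package closes it with room to spare.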
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
Remember, the discussion is not about whether IGPs will catch up to GPUs. Whether they do or don't has no bearing on the fate of GPUs.

GPUs will vanish because they will no longer be economically viable to produce.
 

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
Remember, the discussion is not about whether IGPs will catch up to GPUs. Whether they do or don't has no bearing on the fate of GPUs.

GPUs will vanish because they will no longer be economically viable to produce.

I don't see this happening. What you keep overlooking is that the biggest profit margins, by far, are found on high-end professional products that aren't going to be supplanted by less powerful integrated solutions. GK110 could easily have justified its production cost based on Tesla cards alone; the consumer releases were just icing on the cake for Nvidia. This is demonstrated by the fact that they made an improved version, GK210, just to do a Tesla refresh. And even though Titan X (GM200) sacrifices Double Precision performance, Nvidia still touted its GPGPU capabilities for applications that don't need DP.

The Quadro K5000 debuted at $2,499. That's a GK104-based card. Imagine the profit margin on this. It wouldn't surprise me if a majority of GK104's profits (not number of cards sold, but actual profits measured in dollars) came from the Quadros, rather than the GTX series (680, 670, 660 Ti, 770, 760). Those GTX sales were just a nice extra bonus that required little additional R&D costs to generate.

You seem to think that losing sales on low-end trash is going to deal a death blow to the discrete GPU market. I don't see that happening. The sub-$100 cards are already pretty much dead, and no one cares. They haven't been refreshed and probably won't get refreshed. Better iGPUs may push the level of minimum discrete GPU viability up to $200, but that won't make a substantial difference in the ability to pay for R&D, because those sales were always low margin to begin with. The real money was always in the high end.
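The Quadro-margin point can be sketched with a toy calculation. The bill-of-materials figure below is a made-up assumption; the list prices are the real launch prices. Under that assumption, one Quadro K5000 contributes roughly as much gross profit as half a dozen GTX 680s built from the same die.

```python
# Toy gross-margin comparison for one die (GK104) sold into two markets.
# The bill-of-materials figure is an assumption; list prices are launch prices
# (GTX 680: $499, Quadro K5000: $2,499).

bom = 150.0                  # assumed cost to build either card
gtx680_price = 499.0
k5000_price = 2499.0

gtx_margin = gtx680_price - bom      # gross profit per GTX 680
quadro_margin = k5000_price - bom    # gross profit per Quadro K5000

# How many GTX 680 sales equal one Quadro K5000 in gross profit:
print(round(quadro_margin / gtx_margin, 1))
```

Even if the professional card needed a somewhat higher BOM (validation, drivers, support), the ratio stays lopsided enough to support the "Quadros carry the die's R&D" argument.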
 

VirtualLarry

No Lifer
Aug 25, 2001
56,572
10,208
126
R&D wise AMD already looks to be out of the game completely.
You and your constant trolling negative opinions, slipped in there as if they were facts. Don't the mods ever notice?

Fact: AMD is releasing new products very soon, on an updated arch., with new HBM technology. Sure doesn't sound like a company that is "out of the game completely" to me...

We can just as well start to accept the IGP future.

Ironically, compared to your statement, AMD still designs and produces leading-edge IGPs, too. They aren't only a discrete company, like NVidia mostly is.

Infraction issued for inflammatory language and moderator callout.
-- stahlhart
 
Last edited by a moderator: