What are today's "mid-range" cards?


DrMrLordX

Lifer
Apr 27, 2000
23,191
13,275
136
Or, and I know this sounds crazy, perhaps we can evaluate what "mid range" performance is, rather than worry about die sizes and product stacks?

Or maybe MSRPs could just stop creeping into the stratosphere for their entire lineup. Right now NV is trying to sell us $1300 (and more!) consumer dGPUs and then pretending that the rest of their lineup should be priced in relation to that. AMD is going along with it. Then discussions such as these would be moot.
 

maddie

Diamond Member
Jul 18, 2010
5,203
5,612
136
DrMrLordX said:
Or maybe MSRPs could just stop creeping into the stratosphere for their entire lineup. Right now NV is trying to sell us $1300 (and more!) consumer dGPUs and then pretending that the rest of their lineup should be priced in relation to that. AMD is going along with it. Then discussions such as these would be moot.
Don't worry, with a few more generations, we'll be saying mid-range is the $999 card. Enjoy your present bliss. :)
 

Shivansps

Diamond Member
Sep 11, 2013
3,918
1,570
136
Your point is moot. You can find on Google the same evidence I provided, showing the REALITY that it's manufacturing costs that drive prices up.

I will ask you once again: how will TSMC get back the roughly $3 billion they have put into developing the N7 node, if not by hiking the price of silicon wafers?

How will TSMC get back the roughly $5 billion they will put into developing the N5 process node, if not by hiking wafer prices EVEN MORE?

Why do you expect AMD and Nvidia to sell a GPU that costs $200 to manufacture for $300 on the market, knowing that the dGPU/DIY market is shrinking?

Companies want to break even on their investments as fast as possible, which is why they maintain the highest possible margins for as long as they can. Apply this rule to the N7 process and think about what we are facing from this point of view.

I'm going to ask you a simple question: if Navi 10 (a small die, and Lisa Su was the one who said on stage that it was small, so don't point at me) is so expensive to produce that it explains the prices, in your opinion, then how is AMD able to provide an SoC for the next-gen consoles? Prices have DOUBLED from just over three years ago, for both AMD and Nvidia; you can't blame that on increasing costs. And again, COST is not what defines a price. What was the production cost of Raven Ridge? By your logic, the 2400G's price was due to costs and they were losing money on the 200GE.
 

Glo.

Diamond Member
Apr 25, 2015
5,930
4,991
136
Shivansps said:
I'm going to ask you a simple question: if Navi 10 (a small die, and Lisa Su was the one who said on stage that it was small, so don't point at me) is so expensive to produce that it explains the prices, in your opinion, then how is AMD able to provide an SoC for the next-gen consoles? Prices have DOUBLED from just over three years ago, for both AMD and Nvidia; you can't blame that on increasing costs. And again, COST is not what defines a price. What was the production cost of Raven Ridge? By your logic, the 2400G's price was due to costs and they were losing money on the 200GE.
It's laughable.

The PS5 is an SoC, which means it's a complete package: you get the CPU and GPU on the same die, which saves money. Let's say the PS5 APU is around 400 mm². In the best-case scenario AMD gets 140 dies from a single 300 mm wafer, which works out to $89 per chip from a $12,500 wafer. Best case, again. Then you only need to add GDDR6 memory, which brings you to a $210 package. Adding everything else (PCB, cooling, hard drive/SSD, etc.) you get to $400 for the whole console. Add the controller, packaging, shipping, etc. and you are at around $475-500 in manufacturing costs for the whole console.
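
For anyone who wants to sanity-check that arithmetic, here is a minimal sketch in Python. The $12,500 wafer price and the best-case 140 good dies are this post's assumptions, not confirmed figures, and the gross-die formula is only a rough approximation:

```python
import math

WAFER_COST = 12_500   # assumed N7 wafer price, USD (from the post)
DIE_AREA = 400        # assumed PS5 APU area, mm^2 (from the post)
GOOD_DIES = 140       # assumed best-case good dies per 300 mm wafer

def gross_dies(area_mm2, wafer_diameter_mm=300):
    """Rough gross-die count: wafer area / die area, minus an edge-loss
    term. Ignores scribe lanes, edge exclusion, and die aspect ratio."""
    d = wafer_diameter_mm
    return int(math.pi * (d / 2) ** 2 / area_mm2
               - math.pi * d / math.sqrt(2 * area_mm2))

print(gross_dies(DIE_AREA))           # -> 143 candidate dies, close to the 140 best case
print(round(WAFER_COST / GOOD_DIES))  # -> 89, the ~$89 per chip quoted above
```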

And this assumes the APU is 400 mm². It might not be; it might be smaller, which will affect yields and the number of APUs per wafer, and might lower the costs. So my next paragraph might even be pointless. But that remains to be seen.

PlayStation and Xbox were famous for years for selling consoles at cost at launch, then recouping the manufacturing and design costs down the line: when wafer prices go down, by shrinking the design onto another process, through services, etc.

But consoles are still a growing market. They CAN afford to sell at cost because console sales are still growing. In dGPUs there is no market growth. It's a saturated market: those who would buy dGPUs already have them, and those who don't buy dGPUs don't need them. The only way AMD and Nvidia can earn money is by increasing margins, or prices, step by step, to counteract the increased manufacturing costs of each new process node.
 

jpiniero

Lifer
Oct 1, 2010
17,145
7,530
136
The profits that Sony/MS/Nintendo make come from a licensing fee on every game sold on their console, and of course from any games they publish themselves.
 

maddie

Diamond Member
Jul 18, 2010
5,203
5,612
136
Glo. said:
It's laughable.

The PS5 is an SoC, which means it's a complete package: you get the CPU and GPU on the same die, which saves money. Let's say the PS5 APU is around 400 mm². In the best-case scenario AMD gets 140 dies from a single 300 mm wafer, which works out to $89 per chip from a $12,500 wafer. Best case, again. Then you only need to add GDDR6 memory, which brings you to a $210 package. Adding everything else (PCB, cooling, hard drive/SSD, etc.) you get to $400 for the whole console. Add the controller, packaging, shipping, etc. and you are at around $475-500 in manufacturing costs for the whole console.

And this assumes the APU is 400 mm². It might not be; it might be smaller, which will affect yields and the number of APUs per wafer, and might lower the costs. So my next paragraph might even be pointless. But that remains to be seen.

PlayStation and Xbox were famous for years for selling consoles at cost at launch, then recouping the manufacturing and design costs down the line: when wafer prices go down, by shrinking the design onto another process, through services, etc.

But consoles are still a growing market. They CAN afford to sell at cost because console sales are still growing. In dGPUs there is no market growth. It's a saturated market: those who would buy dGPUs already have them, and those who don't buy dGPUs don't need them. The only way AMD and Nvidia can earn money is by increasing margins, or prices, step by step, to counteract the increased manufacturing costs of each new process node.
"You get CPU and GPU on the same die, which saves money."

Are you practicing your sophistry? If true, then pray tell, why is AMD disaggregating the CPU into the chiplet model?
 

happy medium

Lifer
Jun 8, 2003
14,387
480
126
Way too many tiers. Ultra enthusiast even higher?

I guess when prices are so massive you have to add 10 tiers.

Easier in 2011, when it was $350-$500 (570-580) for High End, $200-$250 (560 - 560 Ti) for Midrange, and $150 for Mainstream (550 Ti).

That's now $1200-$2500 (2080Ti-Titan) for High End and $500-$700 (2070S-2080S) for Midrange. Harder to define Mainstream with both the TU116 and TU106, but I guess we can include both and say Mainstream is $220-$400, which is a big ass range but it is what it is.

Not even gonna cry and say it should be 2011 prices, or be a corporate apologist and claim they need to jack prices to not run at a loss; it's obviously a combination of chasing far larger profits while battling more expensive R&D and manufacturing. But the truth is prices have increased 3.42-5x on the largest chip and 2.5-2.8x on the second-largest chip in 8 years. And that's sad for consumers.
In 2011 they had a $700 GTX 590; that was your 1080 Ti/Titan card.
The R9 295X2 was $1,499; that was your enthusiast card back then.
How about the $1,000 HD 7990?
Just saying... don't forget about the dual-GPU enthusiast cards back then too.
 

Glo.

Diamond Member
Apr 25, 2015
5,930
4,991
136
"You get CPU and GPU on the same die, which saves money."

Are you practicing your sophistry? If true, then pray tell why is AMD dis-aggregating the CPU to the chiplet model?
And what do chiplets have to do with the Xbox and PS5 APUs, which I was talking about?
 

amrnuke

Golden Member
Apr 24, 2019
1,181
1,772
136
DrMrLordX said:
Or maybe MSRPs could just stop creeping into the stratosphere for their entire lineup. Right now NV is trying to sell us $1300 (and more!) consumer dGPUs and then pretending that the rest of their lineup should be priced in relation to that. AMD is going along with it. Then discussions such as these would be moot.

In 2011, the GTX 560 Ti debuted for $249. It was mid-range.

The 1660 Ti's MSRP at release was $279, and the RX 590's was $279.

Inflation suggests these mid-range cards should be priced at about $284 ($249 plus 14% inflation), so we're doing fine there.

Sadly, AMD left a huge gap in their mid-range. The RX 590 is the low end of mid-range and the Vega 56 the high end (in fact, in the Vega 56 we now have a $299 card that can play many AAA titles at 4K at 40+ FPS).

I will agree that for the top end, MSRPs are creeping way up. Probably because they're just focusing on putting a ton of cards in the top 25% of performance. This is a page out of a sales textbook. If you sell one card at $279, and offer something with small performance step-ups for small price increases all the way up to the top, you end up selling more high-end cards.

But the reality is that... the top end is expensive everywhere. You can configure an Audi A4 to be a $63,000 car, but to then lament that Audi have no mid-level luxury options is silly, because they also sell an A4 for $39,000... similarly, the fact that Nvidia have optioned out a 2080 Ti at $1100 does not take away from the fact that they also have an extremely capable mid-range 1660 Ti for $279 (well, cheaper now).
 

maddie

Diamond Member
Jul 18, 2010
5,203
5,612
136
Glo. said:
And what do chiplets have to do with the Xbox and PS5 APUs, which I was talking about?
You argued "You get CPU and GPU on the same die, which saves money."

Assuming we accept this as true, then why is AMD moving to chiplets, with cost reduction as one of the claimed benefits, when you claim that a fully integrated SOC is cheaper? Are they wrong?

Also, does anyone know, with certainty, that the new consoles have a fully integrated SOC?
 

Glo.

Diamond Member
Apr 25, 2015
5,930
4,991
136
You argued "You get CPU and GPU on the same die, which saves money."

Assuming we accept this as true, then why is AMD implementing the use of chiplets in order to reduce costs, as one of the benefits, when you claim that a fully integrated SOC is cheaper? Are they wrong?

Also, does anyone know, with certainty, that the new consoles have a fully integrated SOC?
What is cheaper to design and manufacture: separate dies for the CPU and GPU, or ONE die that combines CPU and GPU?
 

maddie

Diamond Member
Jul 18, 2010
5,203
5,612
136
Glo. said:
What is cheaper to design and manufacture: separate dies for the CPU and GPU, or ONE die that combines CPU and GPU?
To be honest, design costs, when amortized over console volumes, are trivial compared to production costs. Using the design argument is simply misdirection.

The full cost of a console APU is almost all production cost to the extent that it makes economic sense to spend more on design to lower production costs.

I have disagreed with your claim that the full $100-150M 7nm SOC design cost is attributed to each variation of a design, but even if we assume it to be true, what is the design component of the cost? $1-2 per unit, assuming 100M units sold. Do you seriously think this comes close to the production cost of the unit?


Edit:
If we assume an 8C Zen for the consoles, then AMD will already have an 8C chiplet designed for other markets, so this removes the double design cost in your argument. All they have to do is the custom GPU portion for each individual console line.
 
Last edited:

Glo.

Diamond Member
Apr 25, 2015
5,930
4,991
136
maddie said:
If we assume an 8C Zen for the consoles, then AMD will already have an 8C chiplet designed for other markets, so this removes the double design cost in your argument. All they have to do is the custom GPU portion for each individual console line.
Can you go back to the post of mine that spawned this discussion, leave design out of the equation, and see that I was talking only about MANUFACTURING costs?

I made a mistake in my previous post by bringing up design costs, which have zero relation to what I was talking about in the post that spawned this discussion.

What will be cheaper to yield and manufacture: separate CPU and GPU dies, or a single APU?

Depending on the die size, it will obviously be cheaper to yield and manufacture a monolithic APU than three chiplets.
 

maddie

Diamond Member
Jul 18, 2010
5,203
5,612
136
Glo. said:
Can you go back to the post of mine that spawned this discussion, leave design out of the equation, and see that I was talking only about MANUFACTURING costs?

I made a mistake in my previous post by bringing up design costs, which have zero relation to what I was talking about in the post that spawned this discussion.

What will be cheaper to yield and manufacture: separate CPU and GPU dies, or a single APU?

Depending on the die size, it will obviously be cheaper to yield and manufacture a monolithic APU than three chiplets.
Seriously?

The math of the yield curve tells us that it's always higher yield, and thus lower cost, to fab chiplets whose combined area is equal or close to that of a monolithic die.
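
A minimal illustration of that claim, as a sketch, using the simple Poisson yield model Y = exp(-A*D0); the total area and defect densities here are arbitrary, and any positive defect density gives the same ordering:

```python
import math

def poisson_yield(area_mm2, d0_per_cm2):
    """Simple Poisson yield model: Y = exp(-area * defect density)."""
    return math.exp(-(area_mm2 / 100.0) * d0_per_cm2)

TOTAL_AREA = 400  # mm^2 of logic, fabbed as 1, 2, or 4 dies (arbitrary)
for d0 in (0.1, 0.2, 0.5):           # defects per cm^2 (arbitrary values)
    for n in (1, 2, 4):
        y = poisson_yield(TOTAL_AREA / n, d0)
        # Known-good chiplets can be mixed and matched across wafers, so
        # the usable-silicon fraction equals the per-die yield, which
        # always rises as the die gets smaller.
        print(f"D0={d0}: {n} x {TOTAL_AREA // n} mm^2 -> {y:.1%} yield")
```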
 

Glo.

Diamond Member
Apr 25, 2015
5,930
4,991
136
maddie said:
Seriously?

The math of the yield curve tells us that it's always higher yield, and thus lower cost, to fab chiplets whose combined area is equal or close to that of a monolithic die.
And what are the costs of combining a GPU die, a CPU chiplet, and an IO die, compared to a single monolithic APU die?

Are you sure that the cost of yielding a 7 nm CPU chiplet, a 14 nm IO die, and a 7 nm GPU die will be lower than, let's say, a 350 mm² APU die?
 

maddie

Diamond Member
Jul 18, 2010
5,203
5,612
136
Glo. said:
And what are the costs of combining a GPU die, a CPU chiplet, and an IO die, compared to a single monolithic APU die?

Are you sure that the cost of yielding a 7 nm CPU chiplet, a 14 nm IO die, and a 7 nm GPU die will be lower than, let's say, a 350 mm² APU die?
Yes.
 

Glo.

Diamond Member
Apr 25, 2015
5,930
4,991
136
No it won't. ;)

Even if a silicon wafer costs $10k for the N7 process and 16/14 nm wafers cost $6,500, that still leaves you at $11 per IO die made on the 14 nm process, in the best-case scenario.

What's funnier, only a 400 mm² monolithic die made on the N7 process will equal the manufacturing cost of the separate IO die, CPU, and GPU chiplets.

And here come the design costs: you pay for one design, not three separate ones.

AMD decided to go with a chiplet-based design because it's cheaper to design and manufacture ONLY FOR CPUs. Even their own 7 nm desktop APUs, based on the Zen 2 architecture, will be completely monolithic, not chiplet-based.

If N7 wafers cost $10k, a 400 mm² die costs $71 to yield and a 255 mm² die costs $44. In the best-case scenario, obviously. A 350 mm² die, which is more in line with previous consoles' die sizes, will cost $60-65 on N7, depending on yield.
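
Back-calculating what those per-die dollar figures imply, as a sketch; the wafer prices are the assumptions above, and $62.50 stands in for the quoted $60-65 range:

```python
# (die, assumed wafer price in USD, quoted cost per die in USD)
cases = [
    ("400 mm2 N7 die", 10_000, 71),
    ("255 mm2 N7 die", 10_000, 44),
    ("350 mm2 N7 die", 10_000, 62.5),   # midpoint of the $60-65 range
    ("14 nm IO die",    6_500, 11),
]
for name, wafer_cost, per_die in cases:
    # Good dies per wafer implied by wafer price / per-die cost.
    print(f"{name}: implies ~{wafer_cost / per_die:.0f} good dies per wafer")
# -> ~141, ~227, ~160, and ~591 good dies per wafer respectively
```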
 

crisium

Platinum Member
Aug 19, 2001
2,643
615
136
amrnuke said:
In 2011, the GTX 560 Ti debuted for $249. It was mid-range.

The 1660 Ti's MSRP at release was $279, and the RX 590's was $279.

Inflation suggests these mid-range cards should be priced at about $284 ($249 plus 14% inflation), so we're doing fine there.

Sadly, AMD left a huge gap in their mid-range. The RX 590 is the low end of mid-range and the Vega 56 the high end (in fact, in the Vega 56 we now have a $299 card that can play many AAA titles at 4K at 40+ FPS).

I will agree that for the top end, MSRPs are creeping way up. Probably because they're just focusing on putting a ton of cards in the top 25% of performance. This is a page out of a sales textbook. If you sell one card at $279, and offer something with small performance step-ups for small price increases all the way up to the top, you end up selling more high-end cards.

But the reality is that... the top end is expensive everywhere. You can configure an Audi A4 to be a $63,000 car, but to then lament that Audi have no mid-level luxury options is silly, because they also sell an A4 for $39,000... similarly, the fact that Nvidia have optioned out a 2080 Ti at $1100 does not take away from the fact that they also have an extremely capable mid-range 1660 Ti for $279 (well, cheaper now).

I thought you said you used performance as your metric? It looks like you're simply using the marketing name (x60 Ti) and price.

The 1660 Ti is the successor to the 550 Ti, not to the 560 Ti. Nvidia is using marketing names and prices to disguise the actual chips, and we enthusiasts should do our due diligence and see through the marketing.

The 1660 Ti and the 550 Ti are both x116 chips. I've already explained to you how their ratios are similar to other chips in the stack. Let me demonstrate with performance numbers.


https://tpucdn.com/review/asus-geforce-gtx-550-ti-direct-cu/images/perfrel_1680.gif

https://tpucdn.com/review/msi-gefor...-xs/images/relative-performance_2560-1440.png



1660 Ti provides 49% of the highest end Nvidia gaming chip (2080 Ti)

550 Ti provides 46% of the highest end Nvidia gaming chip (580)



If you define 560 Ti as midrange, then let’s take a look:

The 560 Ti provides 73% of the highest end Nvidia gaming chip (580)

The 2070 Super provides 76% of the highest end Nvidia gaming chip (2080 Ti):

https://tpucdn.com/review/nvida-geforce-rtx-2070-super/images/relative-performance_2560-1440.png



You already said you use performance as the metric, so it looks like you should consider the 2070 Super the new midrange. Of all Turing cards, the 2070 Super is likely the closest to matching the 560 Ti's performance ratio.



Problem is this:

$150 ($171 inflation) 550 Ti -> $280 1660 Ti

$250 ($285) 560 Ti -> $500 2070 Super

$500 ($569) 580 -> $1200 (mythical $1000 MSRP) 2080 Ti


So you need to choose how your midrange metric is defined. If it's price only, then yes, the 1660 Ti is midrange. If it's performance, then the 2070 Super is midrange. Regardless, it's quite clear the price inflation is real. And it's also clear that calling the 1660 Ti the 560 Ti's successor is misleading, considering how much closer the 560 Ti got you to the highest single-GPU performance available.
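
Putting those numbers in one place, as a sketch: the relative-performance figures are the TPU numbers quoted above, and the 1.14 inflation factor is the same assumption used earlier in the thread (the post's $569 figure rounds slightly differently):

```python
INFLATION_2011_TO_2019 = 1.14   # assumed CPI factor, per the thread

#        2011 card, MSRP, % of flagship, 2019 card, price, % of flagship
cards = [("550 Ti", 150, 0.46, "1660 Ti",    280, 0.49),
         ("560 Ti", 250, 0.73, "2070 Super", 500, 0.76),
         ("580",    500, 1.00, "2080 Ti",   1200, 1.00)]

for old, old_price, old_rel, new, new_price, new_rel in cards:
    adj = old_price * INFLATION_2011_TO_2019   # inflation-adjusted 2011 MSRP
    print(f"{old} ({old_rel:.0%} of flagship, ${adj:.0f} adjusted) -> "
          f"{new} ({new_rel:.0%}, ${new_price}): "
          f"{new_price / adj:.2f}x real price increase")
# -> roughly 1.64x, 1.75x, and 2.11x real price increases per tier
```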
 

maddie

Diamond Member
Jul 18, 2010
5,203
5,612
136
Glo. said:
No it won't. ;)

Even if a silicon wafer costs $10k for the N7 process and 16/14 nm wafers cost $6,500, that still leaves you at $11 per IO die made on the 14 nm process, in the best-case scenario.

What's funnier, only a 400 mm² monolithic die made on the N7 process will equal the manufacturing cost of the separate IO die, CPU, and GPU chiplets.

And here come the design costs: you pay for one design, not three separate ones.

AMD decided to go with a chiplet-based design because it's cheaper to design and manufacture ONLY FOR CPUs. Even their own 7 nm desktop APUs, based on the Zen 2 architecture, will be completely monolithic, not chiplet-based.

If N7 wafers cost $10k, a 400 mm² die costs $71 to yield and a 255 mm² die costs $44. In the best-case scenario, obviously. A 350 mm² die, which is more in line with previous consoles' die sizes, will cost $60-65 on N7, depending on yield.
1] This technique is applicable not only to CPUs, but to many SOCs.

2] All the die-yield equations have an exponential component, and real-world data confirms it.
As an example:

http://www.isine.com/resources/die-yield-calculator

A 400 mm² design built from two 200 mm² chiplets versus a 375 mm² SOC; I assume some extra area for inter-die communication.

Defect density = 0.2/cm² (any defect density you pick shows the same gap)
Chiplet = 14.1 mm x 14.1 mm gives 192 good dies/wafer at 68.1% yield
SOC = 19.4 mm x 19.4 mm gives 69 good dies/wafer at 49.4% yield

You end up with 96 (192/2) chiplet-based units versus 69 SOC units: a 39% increase in good dies per wafer, or 72% of the SOC's fab cost. (The sketch at the end of this post reproduces these yields.)

3] Speculating on the wafer price is irrelevant, as it applies equally to both cases.

4] Perhaps (most likely) the coming 7nm APU will only have 4C/8T, and a Zen chiplet would be wasted by discarding half the cores. As I expected, we already see no 4C Matisse, but the consoles will have 8C, so the chiplet can be used there. Zen 3 will also be on 7nm+.


I think the IO will sit on the GPU portion of the die, leading to two chiplets. The GPU has the greater data traffic, and you would want to reduce distances to save power. This leads to only one new design per console: Sony and Microsoft each with their unique custom components in the GPU chiplet.


Edit: using more realistic die sizes.

CPU chiplet = 75 mm² (7.5 x 10.0)
GPU/IO chiplet = 275 mm² (14.36 x 19.15)

Unified SOC = 350 mm² (16.2 x 21.6)

CPU chiplet = 676 dies @ 85.9% yield
GPU/IO chiplet = 117 dies @ 59.2% yield

SOC = 79 dies @ 51.7% yield

It takes 5.78 wafers of GPU/IO dies to mate with 1 wafer of CPU chiplets (676 / 117 ≈ 5.78).

So 6.78 wafers give you 676 chiplet-based console units, but the same 6.78 wafers give you only 536 SOC units (79 x 6.78 ≈ 536).

You will spend 26% more to fab the SOC. Taking your $60 cost per SOC, that is roughly $1.5B over the console's lifetime at 100M units sold. Not a trivial amount.
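
A sketch that reproduces these yield figures: the linked calculator's outputs match Murphy's model, Y = ((1 - exp(-A*D0)) / (A*D0))², so that is what is assumed here.

```python
import math

def murphy_yield(area_mm2, d0_per_cm2=0.2):
    """Murphy's yield model: Y = ((1 - exp(-A*D0)) / (A*D0))^2."""
    ad = (area_mm2 / 100.0) * d0_per_cm2   # area in cm^2 times D0
    return ((1 - math.exp(-ad)) / ad) ** 2

dies = [("chiplet 14.1 x 14.1",     14.1 * 14.1),    # -> 68.1%
        ("SOC 19.4 x 19.4",         19.4 * 19.4),    # -> 49.4%
        ("CPU chiplet 7.5 x 10.0",   7.5 * 10.0),    # -> 86.2% (post: 85.9%)
        ("GPU/IO 14.36 x 19.15",    14.36 * 19.15),  # -> 59.2%
        ("SOC 16.2 x 21.6",         16.2 * 21.6)]    # -> 51.7%
for name, area in dies:
    print(f"{name}: {murphy_yield(area):.1%}")

# Wafer mix from the edit above: 676 CPU chiplets and 117 GPU/IO dies
# per wafer means 676 / 117 = 5.78 GPU/IO wafers per CPU wafer, and the
# same 6.78 wafers give 79 * 6.78 = ~535 monolithic SOCs (536 with the
# post's rounding).
print(676 / 117, 79 * (1 + 676 / 117))
```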
 
Last edited:

amrnuke

Golden Member
Apr 24, 2019
1,181
1,772
136
crisium said:
I thought you said you used performance as your metric? It looks like you're simply using the marketing name (x60 Ti) and price.

The 1660 Ti is the successor to the 550 Ti, not to the 560 Ti. Nvidia is using marketing names and prices to disguise the actual chips, and we enthusiasts should do our due diligence and see through the marketing.

The 1660 Ti and the 550 Ti are both x116 chips. I've already explained to you how their ratios are similar to other chips in the stack. Let me demonstrate with performance numbers.

https://tpucdn.com/review/asus-geforce-gtx-550-ti-direct-cu/images/perfrel_1680.gif

https://tpucdn.com/review/msi-gefor...-xs/images/relative-performance_2560-1440.png

1660 Ti provides 49% of the highest end Nvidia gaming chip (2080 Ti)

550 Ti provides 46% of the highest end Nvidia gaming chip (580)

If you define 560 Ti as midrange, then let's take a look:

The 560 Ti provides 73% of the highest end Nvidia gaming chip (580)

The 2070 Super provides 76% of the highest end Nvidia gaming chip (2080 Ti):

https://tpucdn.com/review/nvida-geforce-rtx-2070-super/images/relative-performance_2560-1440.png

You already said you use performance as the metric, so it looks like you should consider the 2070 Super the new midrange. Of all Turing cards, the 2070 Super is likely the closest to matching the 560 Ti's performance ratio.

Problem is this:

$150 ($171 inflation) 550 Ti -> $280 1660 Ti

$250 ($285) 560 Ti -> $500 2070 Super

$500 ($569) 580 -> $1200 (mythical $1000 MSRP) 2080 Ti

So you need to choose how your midrange metric is defined. If it's price only, then yes, the 1660 Ti is midrange. If it's performance, then the 2070 Super is midrange. Regardless, it's quite clear the price inflation is real. And it's also clear that calling the 1660 Ti the 560 Ti's successor is misleading, considering how much closer the 560 Ti got you to the highest single-GPU performance available.

Nope. I'm not using marketing names. I don't care whether they're x116 chips, or what the stack looks like price-wise. I don't care which card is supposed to be whose successor. I don't care about ratios. What I care about is the benchmarks, and whether the card sits in the 33-67% performance range in the reviews (though that's probably the wrong way to do it, admittedly, for reasons described below*). Both the 560 Ti and the 1660 Ti were mid-range on release. So was the 550 Ti. At least based on the performance summaries on TPU.

I never said the 2070 SUPER is NOT a mid-range card, though if you include Nvidia's entire available lineup, from the 1030 to the Titan RTX/2080 Ti, it probably isn't mid-range in performance, but rather in the top 33%.

What I am emphasizing is that one can purchase a (lower) mid-range card today for roughly the same price as a (higher) mid-range card in 2011.

* I have to say, though, that after looking at the reviews and cards tested on TPU, the coverage is inconsistent. The 550 Ti review uses the GT 430, but the 560 Ti review doesn't. The problem of a shifting baseline is huge: if Nvidia's cheapest option is a 1030 and the best option is a 2080 Ti, then the relative performance numbers shift, because the baseline isn't 50%, it's 15% or 20% or whatever. And the problem persists from the 5xx-series to the 2xxx-series reviews on TPU, and Anandtech isn't clear of this issue either. The 2070 SUPER review didn't include the 1030, or even the 1650.

Compiling a comparison that includes ALL available cards from a manufacturer could be fun, and may produce some surprises. I don't know. Maybe I'll carve out some July 4th weekend time to do it, though it will be made more difficult by the lack of consistency in driver versions, test setups, etc., so we'll likely have to rely on extrapolating relative performance. It could be really interesting to explore the benchmarks and prices more deeply, not just for a generic mid-range card but for each percentile of performance. After looking more, it's clear, for instance, that the 1660 Ti is more in the 33-50% performance range, whereas the 560 Ti appears to be in the 50-67% range. So that's something I'd really like to compare.

What I see now is that we're both right, in different ways. You are correct that there is price creep in the mid-range (if the 2070S is indeed mid-range, which it may not be if we include the 1030 as the performance baseline; likewise the 560 Ti may not be if we include the GT 430 as the baseline), but I am correct that the 1660 Ti technically offered mid-range performance at release.

Definitely an interesting subject.