[Updated] Your Thoughts on Die Sizes and Pricing in the Last 6 Years

zliqdedo

Member
Dec 10, 2010
59
10
81
UPDATE: Based on the discussion below, I’m adding possible fair market values to the list for two of the new Turing offerings. It goes without saying that some, or many, people won’t agree with those; they’re representative of what I feel constitutes a fair market value for these cards, after a thought process sparked by this thread and its contributors.​

To be honest, these possible FMVs are closer to the MSRPs than I might have originally expected, largely on account of the massive Turing die sizes. How valuable that extra silicon is to you, however, depends on the importance you place on ray tracing and related features, since a significant area of those big GPUs is dedicated to functional units that are of little help in traditional rendering.

Another question is whether RT will be practical on anything but the top model, since all demos so far have used the 2080 Ti at 1080p, and have somewhat struggled to maintain 60fps. If optimizations reach a point where the 2080 is capable of RT, in practical terms, with some less important settings dialed back, that would be nice, especially given that the 2080 is within 5% of its possible FMV. Then again, I don't think it will sell for $700 in a relevant time frame, which brings me to the next point.

Comparing these possible FMVs to the base MSRPs might be a bit misleading since, so far at least, OEMs seem to have embraced Nvidia's Founders Edition pricing, which is significantly higher. The market might settle, of course. I hope the base prices don't turn out to be a mere formality; in my mind, a card with a decent, reasonably quiet open-air cooler and no added extras such as OC capability should retail for the suggested base MSRP.

I've been enjoying this discussion, and I hope we can keep it going. Initial benchmarks, and further RT testing as it becomes available, will shed more light on the actual value proposition of the new cards. I think developers should add features alongside the costly RT to make life easier for consumers, especially those with higher-resolution monitors. Features like rendering the UI at native resolution, which should be a part of every modern game anyway, but that's another story, and also support for atypical resolutions: if you can only render ~2M pixels and you have a ~4M-pixel 1440p monitor, it might be better to play at 1280x1440 rather than 1920x1080.

I intend to add a possible FMV for the 2070 as well once we get its die size.

Lastly, I realized something. Going back to that Supreme Court ruling, the last part in particular – "… both [buyer and seller] having reasonable knowledge of relevant facts." – it struck me that we, as buyers, might never have reasonable knowledge of the relevant facts. Of course, "reasonable" is a subjective thing, as is the whole concept of fair market value, but it's still a bit sad. To me, the cost of a card could be outrageously high, and to you, it might be the best deal of the century, yet without a reasonable understanding of margins in the consumer sector, neither purchase could constitute fair market value. If only we at least had the margins for consumer video cards as they were throughout the years. It's actually a bit surprising such info hasn't leaked and been made available; maybe it has and we just don't know it.

I've compiled a quick and dirty list of Nvidia's pricing in relation to die sizes since Fermi. The reason: while it's no secret that the prices of video cards have skyrocketed in the last 6 years, with RTX just the latest example, I've long wondered whether that was justified; to be honest, I believe it hasn't been. Now, I'm not arguing a company doesn't have the right to price its products as it wishes, and consumers are of course free to buy or ignore said products. I'm not even playing the morality card. I'm simply asking whether Nvidia actually had to, or whether it did so just because it rightly judged it could get away with it. Again, full disclosure: in my mind, what we saw with Kepler was a downright doubling of prices driven by greed and made possible by a temporary lack of potent competition. I could be wrong, of course. I'd really like to hear your opinions on the matter.

Back when Kepler launched, I started a thread about pricing on HardForum, or maybe Guru3D, that quickly went to shit – some fervently defended the new cards (and their pricing) and claimed any such discussion was a hoax, others said die sizes didn't matter at all, yet another group blamed others' ignorance on the matter for the high prices, and some posters were even accused of being Nvidia investors.

One last thing before the list: I’m aware that die sizes and inflation rates alone cannot paint a complete picture – the relative maturity of the process at the time of manufacturing and R&D are obviously important as well – but I still think they can be indicative. The inflation rates were generated with an online calculator fed by up-to-date US government data.

UPDATE: Added additional Turing die size.
UPDATE: Added possible FMVs to Turing.

Fermi; launch
529 mm² - $500; cut-down
529 mm² - $350; cut-down twice

Fermi; refresh
520 mm² - $500; fully-enabled
520 mm² - $350; cut-down
332 mm² - $250; fully-enabled
332 mm² - $200; cut-down

Inflation rate (2010 to 2012): 5.3%

Kepler; launch
294 mm² - $500; fully-enabled

Kepler; later
294 mm² - $400; cut-down (7 weeks after launch)
561 mm² - $1000; cut-down (11 months [!!!] after launch)

Kepler; refresh
561 mm² - $650; cut-down twice (14 months after launch)
294 mm² - $400; fully-enabled (14 months after launch)
561 mm² - $700; fully-enabled (20 months after launch; as a result of competition)

Inflation rate (2012 to 2014): 3.1%
Inflation rate (2010 to 2014): 8.6%

Maxwell; launch
398 mm² - $550; fully-enabled
398 mm² - $330; cut-down x1.5

Maxwell; later
601 mm² - $650; cut-down (9 months after launch; expecting competition)

Inflation rate (2014 to 2016): 1.4%
Inflation rate (2010 to 2016): 10.1%

Pascal; launch
314 mm² - $600-700; fully-enabled
314 mm² - $380-450; cut-down twice

Pascal; later
471 mm² - $700; cut-down (10 months after launch, expecting competition)

Inflation rate (2016 to 2018): 5%
Inflation rate (2010 to 2018): 15.6%

Turing; launch
754 mm² - $1000-1200; cut-down; possible FMV: $838
538 mm² - $700-800; cut-down; possible FMV: $665
Less mm² - $500-600; cut-down
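As a quick sanity check, the cumulative inflation figures in the list are just the per-period rates chained together; a minimal sketch using the rates quoted above:

```python
# Per-period inflation rates, as quoted in the list above.
periods = {
    "2010-2012": 0.053,
    "2012-2014": 0.031,
    "2014-2016": 0.014,
    "2016-2018": 0.050,
}

def chained(*keys):
    """Compound the given per-period rates into one cumulative rate (percent)."""
    factor = 1.0
    for k in keys:
        factor *= 1.0 + periods[k]
    return round((factor - 1.0) * 100, 1)

print(chained("2010-2012", "2012-2014"))                            # 8.6
print(chained("2010-2012", "2012-2014", "2014-2016"))               # 10.1
print(chained("2010-2012", "2012-2014", "2014-2016", "2016-2018"))  # 15.6
```

The chained values reproduce the 8.6%, 10.1%, and 15.6% cumulative figures in the list exactly.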
 
Last edited:
  • Like
Reactions: krumme

n0x1ous

Platinum Member
Sep 9, 2010
2,574
252
126
Maxwell "later" and Pascal "later" were the cut-down, big-die Tis, for $650 and $700 respectively.
 
  • Like
Reactions: zliqdedo

DooKey

Golden Member
Nov 9, 2005
1,811
458
136
Supply and demand is dictating the market cost of these luxury goods. Die sizes, nomenclature, etc. don't mean squat. NVIDIA is going to maintain their margins and sell video cards for what the market will bear. If the market revolts then they'll adjust. A bunch of noisy forum dwellers aren't the market and never have been. Simply vote with your wallet.

With that said, I don't like the pricing. However, I want the best and I have to pay for it if I want it.

Current Ti pricing is pretty much my limit unless something comes out that is just an unbelievable performance powerhouse.
 

Head1985

Golden Member
Jul 8, 2014
1,867
699
136
Almost no reviews today criticize Nvidia's pricing. Look at the old reviews here from Anand and look at how it is now. Everyone is scared to say anything bad about Nvidia because they'd lose the free samples.
I bet nobody will say anything bad even about Turing.
The old Anand would have criticized this Turing crap upside down, you know.
Today's reviewers will be like: well, it's $1200, but it's so great; it's less than a $3000 Titan, buy it.
Look, the RTX 2070 costs maybe $650, but that's still less than the launch price of the GTX 1080 and it's 10% faster, so buy it...
 
Last edited:

Kenmitch

Diamond Member
Oct 10, 1999
8,505
2,250
136
Almost no reviews today criticize Nvidia's pricing. Look at the old reviews here from Anand and look at how it is now. Everyone is scared to say anything bad about Nvidia because they'd lose the free samples.

In the end, it's up to the end user to decide on the pricing. If they're willing to throw money at Nvidia, that's their choice. The more objective reviews will come from those who don't take the freebies with the strings attached.
 

zliqdedo

Member
Dec 10, 2010
59
10
81
Supply and demand is dictating the market cost of these luxury goods. Die sizes, nomenclature, etc. don't mean squat. NVIDIA is going to maintain their margins and sell video cards for what the market will bear. If the market revolts then they'll adjust. A bunch of noisy forum dwellers aren't the market and never have been. Simply vote with your wallet.

With that said, I don't like the pricing. However, I want the best and I have to pay for it if I want it.

Yeah, that's another thing I should've disclosed: I do vote with my wallet, or at least I hope I do. The only post-Fermi Nvidia card I've bought is the GTX 970, a product I feel was priced fairly enough; just don't ask me how I reached that conclusion, it has to do with the list above.

Otherwise, I agree, but my goal isn't to start a discussion on marketing science and forum dwellers changing the world.

What I can't agree with – in fact, something I fervently oppose – is the notion that video cards are luxury goods. Yes, they aren't vital to living, but they aren't Swiss watches either. Video cards are hobby products, and I believe hobbies are important for our well-being. Of course, that doesn't make Nvidia a charity, but it doesn't make it a luxury brand either.

So, as far as your thoughts on pricing are concerned – and please correct me if I'm wrong – it all boils down to whether you yourself can actually pay out a certain sum at any given time. If the answer is yes, it's all good, even though you'll have less cash to spend on other stuff after the fact; but what would you think if the answer were no? Shouldn't we have some practical limits on what's acceptable, independent of our current financial means? I think it wouldn't hurt. Otherwise we grow ever more tolerant, and companies can and will take advantage of that, essentially exploiting us, whether we like it or not. I guess what I'm getting at is that we have a role to play in all of this, and it's not as insignificant as we might think, just as suggested by Kenmitch:
Isn't that what created the current pricing in a roundabout way?

Another notion I find interesting is this:
Current Ti pricing is pretty much my limit unless something comes out that is just an unbelievable performance powerhouse.

I find it interesting because it's a tool Nvidia, or any manufacturer, can use to manipulate public opinion. When they're running ahead of the competition, they can introduce a cheap-to-make, mid-range product as a flagship, at a very high price, because it performs better than the old generation, while purposefully holding back the actual high-end product until a later date – and then call it unbelievably fast, when it actually isn't; it just might seem that way because of the trick they pulled. Things never change: we get new manufacturing processes and new microarchitectures, and with them, big performance gains. There's nothing unbelievable about a Titan video card, just very good marketing. The Riva TNT2 Ultra retailed for $300; that doesn't mean the RTX 2080 Ti should retail for $135,000 just because it has that much higher a pixel fillrate.
 
Last edited:

PeterScott

Platinum Member
Jul 7, 2017
2,605
1,540
136
One last thing before the list: I’m aware that die sizes and inflation rates alone cannot paint a complete picture – the relative maturity of the process at the time of manufacturing and R&D are obviously important as well – but I still think they can be indicative. The inflation rates were generated with an online calculator fed by up-to-date US government data.

Big dies in the current time period face two non-linear price increases. First, cost per area ($/mm²) has been increasing significantly at each process shrink. The first image below shows that:

[image: intel_cannonlake.png]



Die area cost is increasing at a non-linear rate every generation. Cost per transistor is still going down, but the cost of die area has been increasing at an even faster rate. So big dies have always been expensive, but they are getting even more expensive over time, at a rate MUCH higher than the rate of inflation. From 32nm to 14nm, cost per area has roughly doubled. That means a 400mm² die today costs about twice what a 400mm² die cost a few generations ago. So prices have to increase, or margins shrink, even with a constant die size as generations move forward.

Costs double in a couple of generations, and that is just for a constant die size.

Now factor in an increasing die size on top. Here is a nice visual Yield Calculator to play with:
http://caly-technologies.com/en/die-yield-calculator/
[image: zZBEoz3.png]



I will do three die sizes: Intel's 8700K (149mm² per WikiChip), the 1080 Ti (471mm²), and the 2080 Ti (754mm²), sticking with the defaults other than changing only the wafer size to 300mm.

8700K - 86% yield, 332 good dies
1080Ti - 64% yield, 71 good dies
2080Ti - 49% yield, 34 good dies


Implications? If it costs ~$50 to produce an 8700K-sized die, then it would cost:

~$233 to produce a 1080 Ti-sized die
~$488 to produce a 2080 Ti-sized die

While people can argue about the minutiae of the tool's defaults, this is the reality of big-die scaling.

Die area cost is increasing at a non-linear rate at each process generation, and within a generation, larger dies increase in cost at a non-linear rate per unit of area. Modern large-die costs are the unfortunate multiplication of two non-linear price increases.
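The relative costs above can be reproduced approximately with a short sketch. The Poisson yield formula, the 0.1 defects/cm² density, and the gross-die approximation are my assumptions (the calculator's own model and defaults may differ), so the figures won't match exactly, but the trend does:

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300.0):
    """Rough gross-die count: wafer area over die area, minus an edge-loss term."""
    r = wafer_diameter_mm / 2.0
    return (math.pi * r**2 / die_area_mm2
            - math.pi * wafer_diameter_mm / math.sqrt(2.0 * die_area_mm2))

def poisson_yield(die_area_mm2, defects_per_cm2=0.1):
    """Poisson yield model: Y = exp(-A * D), with A in cm²."""
    return math.exp(-(die_area_mm2 / 100.0) * defects_per_cm2)

def relative_cost(die_area_mm2, base_area_mm2=149.0):
    """Cost per good die, relative to the base die, assuming equal wafer cost."""
    good = dies_per_wafer(die_area_mm2) * poisson_yield(die_area_mm2)
    base = dies_per_wafer(base_area_mm2) * poisson_yield(base_area_mm2)
    return base / good

for name, area in [("8700K", 149), ("1080 Ti", 471), ("2080 Ti", 754)]:
    print(f"{name}: {poisson_yield(area):.0%} yield, "
          f"{relative_cost(area):.1f}x the per-die cost of a 149mm² die")
```

With these assumptions, the 471mm² and 754mm² dies come out at roughly 5x and 11x the per-die cost of the 149mm² die, in the same ballpark as the ~$233 and ~$488 figures above.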

This is why people who pay attention to the realities of silicon production thought such a die was cost-prohibitive. Hence comments like this one:
https://www.anandtech.com/show/13249/nvidia-announces-geforce-rtx-20-series-rtx-2080-ti-2080-2070/2 (Ryan Smith)
And like the name suggests, Big Turing is big: 18.6B transistors, measuring 754mm2 in die size. This is closer in size to GV100 (Volta/Titan V) than it is any past x80 Ti card, so I am surprised that, even as a cut-down chip, NVIDIA can economically offer it for sale. None the less here we are, with Big Turing coming to consumer cards.

I was very surprised to see Big Turing in a consumer product, and pricing it the same as the Titan X is NOT gouging; it's losing margin compared to the Titan X.

Don't expect any competition to ever show up and compete on price against such a monstrous and expensive die.
 
Last edited:

Elfear

Diamond Member
May 30, 2004
7,163
819
126
I will do three die sizes: Intel's 8700K (149mm² per WikiChip), the 1080 Ti (471mm²), and the 2080 Ti (754mm²), sticking with the defaults other than changing only the wafer size to 300mm.

8700K - 86% yield, 332 good dies
1080Ti - 64% yield, 71 good dies
2080Ti - 49% yield, 34 good dies


Implications? If it costs ~$50 to produce an 8700K-sized die, then it would cost:

~$233 to produce a 1080 Ti-sized die
~$488 to produce a 2080 Ti-sized die

While people can argue about the minutiae of the tool's defaults, this is the reality of big-die scaling.

Thanks for the very detailed analysis! Out of curiosity, where did you get your yield rates for Turing? They seem quite low for just a slight change from 14nm.
 

PeterScott

Platinum Member
Jul 7, 2017
2,605
1,540
136
Thanks for the very detailed analysis! Out of curiosity, where did you get your yield rates for Turing? They seem quite low for just a slight change from 14nm.

All the yield rates are from the Caly-Tech online Yield Calculator that I linked in my post, just using the default values in the calculator.

I added the tool link/info to help illustrate how bad the big-die problem is, in an interactive way.
 

maddie

Diamond Member
Jul 18, 2010
5,152
5,540
136
Thanks for the very detailed analysis! Out of curiosity, where did you get your yield rates for Turing? They seem quite low for just a slight change from 14nm.
THE most important question.

In fact, some yield calculators, such as this one [http://www.isine.com/resources/die-yield-calculator], have a suggested range for defect density; in this case, they state "(typically 0.01-0.2)".

The yield range in this case, for a die sized as suggested above (27.45 x 27.45 mm), follows.
Defect Density (0.2) = 17 out of 64 = 26.5%

Defect Density (0.01) = 59 out of 64 = 92.2%

Calling this a quite large range of values might be considered an understatement. In those immortal words: BS in, BS out.
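For what it's worth, maddie's two endpoints are close to what Murphy's yield model produces; treating that model choice as an assumption on my part (A is die area in cm², D in defects/cm²):

```python
import math

def murphy_yield(die_area_cm2, defects_per_cm2):
    """Murphy's yield model: Y = ((1 - exp(-A*D)) / (A*D))**2."""
    ad = die_area_cm2 * defects_per_cm2
    return ((1.0 - math.exp(-ad)) / ad) ** 2

area = 2.745 * 2.745  # a 27.45 mm x 27.45 mm die, in cm²
for d0 in (0.2, 0.01):
    print(f"D0 = {d0}: {murphy_yield(area, d0):.1%} yield")
```

This gives about 26.7% at D0 = 0.2 and about 92.8% at D0 = 0.01, close to the 26.5% and 92.2% quoted; any residual gap would come down to the calculator's exact model and edge-die handling.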
 
  • Like
Reactions: Headfoot and IEC

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
Without knowing exactly how much NVIDIA pays TSMC for each 12nm wafer, and without knowing the yields of the TU102, we can only speculate.

So I will only speculate here and say that 12nm today is cheaper than 16nm was in 2016, when NV released Pascal, and put it at $4,000 per 12nm wafer.
I will also speculate that TU102 yields are not below 80%, as needed to use it in a high-volume consumer product.
The TU102 die size is 754mm², which puts roughly 65 dies per 12-inch wafer. With 80% yields, we get 52 working dies per wafer.

So the end result is 4000/52 = ~77 USD per die.
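AtenRa's arithmetic as a sketch; the wafer price, gross die count, and yield are his stated speculation, not known figures, and the cross-check is just a common dies-per-wafer approximation:

```python
import math

# AtenRa's speculated inputs
wafer_cost = 4000.0   # USD per 12nm wafer (assumption)
gross = 65            # gross 754mm² dies per 300mm wafer (assumption)
yield_rate = 0.80     # speculated TU102 yield

good = int(gross * yield_rate)  # 52 working dies
print(f"~${wafer_cost / good:.0f} per good TU102 die")

# Cross-check the gross count with the common approximation
# pi*r^2/A - pi*d/sqrt(2*A); it lands in the same ballpark (~69).
r, d, a = 150.0, 300.0, 754.0
approx = int(math.pi * r**2 / a - math.pi * d / math.sqrt(2 * a))
print(f"approximation: {approx} gross dies per wafer")
```

Note the approximation lands slightly above 65 gross dies; the exact count depends on die aspect ratio and edge exclusion, which is why the $77 figure should be read as an order-of-magnitude estimate.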
 

crisium

Platinum Member
Aug 19, 2001
2,643
615
136
AMD sells a cut-down 486mm² die (Vega 56) for $400 MSRP. Nvidia sells a cut-down 471mm² die for $700. So die size is not the reason for the high prices Nvidia charges; rather, the market simply allows them. In a truly competitive environment, we could probably get reasonably close to Fermi prices per unit of die size, though a little higher after adjusting for inflation and increased area costs. Vega 56's 486mm² proves that. The $350 GTX 570 is $400 today, and the 570 is 520mm². So: the same inflation-adjusted price, and a slightly smaller area thanks to rising die-area costs.

Without competition, we won't get there. But I think we can do moderately better than today if more people voted "no" with their wallets.
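crisium's inflation adjustment checks out against the cumulative 2010-to-2018 rate of 15.6% from the list; a trivial sketch:

```python
gtx570_2010 = 350.0             # GTX 570 launch price, USD
inflation_2010_to_2018 = 0.156  # cumulative rate from the list in the OP
adjusted = gtx570_2010 * (1 + inflation_2010_to_2018)
print(f"${adjusted:.0f} in 2018 dollars")  # $405
```

The exact figure comes to about $405, which rounds to the $400 crisium cites.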
 
  • Like
Reactions: f2bnp and zliqdedo

zliqdedo

Member
Dec 10, 2010
59
10
81
Big dies in the current time period face two non-linear price increases: cost per area has been increasing significantly at each process shrink, and within a generation, larger dies increase in cost at a non-linear rate per unit of area. [snip]

I was very surprised to see Big Turing in a consumer product, and pricing it the same as the Titan X is NOT gouging; it's losing margin compared to the Titan X.

Don't expect any competition to ever show up and compete on price against such a monstrous and expensive die.

Thank you for elaborating on the manufacturing aspect, and in such an informative way.

I agree about the RTX 2080 Ti – it's based on a monstrous chip, unlike any of the others – it was probably best to leave it out of the whole thing. I still don't think the Titan X is a benchmark for acceptable margins in the consumer space, but then again, the RTX 2080 Ti is quite different, as we've established already. Looking forward to the die sizes of the other two RTX cards; it wouldn't be surprising if the 2080 is about 500mm², and the 2070 about 350mm².

That being said, ignoring the 754mm² elephant in the room, would you say Nvidia's margins for cards similar to those of 2010 – about 500mm² and less – have remained at least somewhat consistent throughout the years? I believe they haven't. Again, I could be wrong.

I did play around with the calculator; it's very cool, thanks. I'm thinking of adding info from it to the list. Can you think of a way to calculate, relatively speaking, the cost increase of every TSMC process since 2010? Say, if 40nm is 100%, and I want to calculate how much more expensive a given die size would be at 28nm, how do I do that?
 

PeterScott

Platinum Member
Jul 7, 2017
2,605
1,540
136
I did play around with the calculator; it's very cool, thanks. I'm thinking of adding info from it to the list. Can you think of a way to calculate, relatively speaking, the cost increase of every TSMC process since 2010? Say, if 40nm is 100%, and I want to calculate how much more expensive a given die size would be at 28nm, how do I do that?

I don't think there is any way to reasonably calculate real cost increases, as they will mostly be closely guarded secrets.

I just wanted to highlight that two important trends driving rising chip costs are at play: the cost of a unit of chip area, which is going up faster than inflation, and the impact of big die sizes on yield.

The point is that there is a reasonable expectation that:

A) the same size die will cost more over time, and that increase will outpace inflation.
B) a much bigger die will cost MUCH more than a smaller die.

I really like the yield tool. The default defect rate is not going to be exact (that's a closely guarded secret), but the defaults are probably better than the rates some people would tweak to "prove" a personal agenda. It's mostly instructive in showing how yield decreases with die size, which is why big dies are so rare and expensive.
 

zliqdedo

Member
Dec 10, 2010
59
10
81
A) the same size die will cost more over time, and that increase will outpace inflation.
B) a much bigger die will cost MUCH more than a smaller die.

No arguments there; I doubt anyone here would try to disprove this. The big question, though, is whether these additional cost increases warrant doubling prices to maintain Fermi margins, or whether Nvidia is forcing consumers' hands simply because it can.

As suggested by crisium, the RX Vega 56 can be considered proof that Nvidia's pricing cannot be justified by die cost and inflation alone, given that AMD's card launched at $400 and is based on a GPU that, to my knowledge, is 510mm² – even bigger than crisium quoted. Sure, it's manufactured by GloFo rather than TSMC, but it's hard to believe that alone can account for the frankly huge gap.

AMD sells a cut-down 486mm² die (Vega 56) for $400 MSRP. Nvidia sells a cut-down 471mm² die for $700. So die size is not the reason for the high prices Nvidia charges; rather, the market simply allows them. In a truly competitive environment, we could probably get reasonably close to Fermi prices per unit of die size, though a little higher after adjusting for inflation and increased area costs. Vega 56's 486mm² proves that. The $350 GTX 570 is $400 today, and the 570 is 520mm². So: the same inflation-adjusted price, and a slightly smaller area thanks to rising die-area costs.

Without competition, we won't get there. But I think we can do moderately better than today if more people voted "no" with their wallets.
 

PeterScott

Platinum Member
Jul 7, 2017
2,605
1,540
136
No arguments there; I doubt anyone here would try to disprove this. The big question, though, is whether these additional cost increases warrant doubling prices to maintain Fermi margins, or whether Nvidia is forcing consumers' hands simply because it can.

As suggested by crisium, the RX Vega 56 can be considered proof that Nvidia's pricing cannot be justified by die cost and inflation alone, given that AMD's card launched at $400 and is based on a GPU that, to my knowledge, is 510mm² – even bigger than crisium quoted. Sure, it's manufactured by GloFo rather than TSMC, but it's hard to believe that alone can account for the frankly huge gap.

There were negligible sales of the Vega 56 at $400. AMD may have been desperate enough to sacrifice most of their margins to advertise that price, but even today, with mining dead, you can't buy a Vega 56 for $400.

You can see right now, that GTX 1080 is selling for significantly less than Vega 64, despite GTX 1080 being the faster card. Competitive forces should have driven Vega 64 cheaper than GTX 1080.

Vega chips are simply too expensive to match the price. Because they are too big.

His example doesn't counter my point. It reinforces my point.
 

maddie

Diamond Member
Jul 18, 2010
5,152
5,540
136
No arguments there; I doubt anyone here would try to disprove this. The big question, though, is whether these additional cost increases warrant doubling prices to maintain Fermi margins, or whether Nvidia is forcing consumers' hands simply because it can.

As suggested by crisium, the RX Vega 56 can be considered proof that Nvidia's pricing cannot be justified by die cost and inflation alone, given that AMD's card launched at $400 and is based on a GPU that, to my knowledge, is 510mm² – even bigger than crisium quoted. Sure, it's manufactured by GloFo rather than TSMC, but it's hard to believe that alone can account for the frankly huge gap.
And this is with HBM2, which we have previously been told is hugely expensive, making the die cost stand out even more. Truly amazing, the ridiculous defenses being mounted to justify these prices.
 
  • Like
Reactions: crisium

DooKey

Golden Member
Nov 9, 2005
1,811
458
136
It seems the market is currently justifying the prices, if you ask me. The RTX 2080 Ti is sold out at least through November. The market has spoken.

This goes for Vega 64 as well. They sold everything they could make. (No matter the margins)

All of the faux outrage over these high-end video cards is funny to watch. Most here aren't the market for these cards, so why worry about it?

Bread and butter is the sub $300 market for both companies. Halo products are halo products and aren't sold in quantity.

Seriously. Look at all the people around here running 1060's and 580's.
 
Last edited:

PrincessFrosty

Platinum Member
Feb 13, 2008
2,300
68
91
www.frostyhacks.blogspot.com
I've compiled a quick and dirty list of Nvidia's pricing in relation to die sizes since Fermi. The reason: while it's no secret that the prices of video cards have skyrocketed in the last 6 years, with RTX just the latest example, I've long wondered whether that was justified; to be honest, I believe it hasn't been.

I appreciate the breakdown of data, but I take issue with the idea that a market price can be "justified" or not. The price of things is what the market is willing to pay in aggregate, and that's how all big businesses price their products. This isn't communism, where some central authority decides the real price. Most of it is contextual: Nvidia can afford high prices right now because AMD has nothing like this as a product; when they do, competition will drive down prices.
 

coercitiv

Diamond Member
Jan 24, 2014
7,308
17,256
136
This isn't communism, where some central authority decides the real price.
It certainly is not; it's more like a monopoly, where some other de facto authority decides the real price. Different ideological flaw, same outcome.

It may surprise some people to find out that criticizing pricing – and, more importantly, openly discussing fair prices for goods and services – is a feature of the free-market economy, not of fully regulated communism. As a side note, the communist states showed that central authorities failed at deciding prices anyway, since the regulated goods and services simply vanished before the average consumer. (We had the "paper" money; we did not have the food, the energy, the medicine.)