Question Speculation: Ryzen 3000 series pricing


Speculation: Ryzen 3000 series retail pricing

  • 3700X -> 8 cores at $330, 3800X -> 12 cores at $500, 16 core possibly at higher price.

    Votes: 26 40.6%
  • 3700X -> 12 cores at $330, 3800X -> 16 cores at $500

    Votes: 31 48.4%
  • They are both too expensive. AMD's high end can't cost that much because no one will buy it

    Votes: 3 4.7%
  • They are both too cheap. AMD will go after market share at all cost.

    Votes: 4 6.3%

  • Total voters
    64

scannall

Golden Member
Jan 1, 2012
1,329
145
136
#76
I'm expecting the price per core to be roughly what it is now. No crazy cuts or anything like that. Don't leave money on the table.
 

Atari2600

Senior member
Nov 22, 2016
724
169
106
#77
Anyway, a few other observations.

1. If AMD do have a 16C chip, they should release it ASAP. The sooner it is out there, the sooner they get revenue from it, and the more they can charge for it, as it looks better value relative to any competition.

2. AMD are already making a strong statement vis-à-vis Intel on platform support. AM4 looks like it is going to see a doubling of CPU performance over the platform's life, possibly a doubling of PCIe performance too, and the platform may get a Zen3 round of upgrades in a couple of years as well! When was the last time Intel did anything approaching that?

3. Good point raised earlier about premium products normally costing more per core.
Then I had the thought: *if* yields end up being really strong, and AMD find themselves fusing off perfectly good chips to satisfy the 12C market (which, let's say, starts out at a much higher-volume price point), would AMD make more money by dropping the 16C price a bit nearer to the 12C, improving its value proposition and pushing up sales volume of that part? Even if that means the 16C is cheaper *per core* than the 12C, AMD make more off the sale of a 16C than a 12C, since production costs for the 12C are slightly higher than for the 16C (the 16C needs no fusing off, and otherwise the same silicon is used).
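A minimal sketch of the trade-off described above, using entirely made-up numbers (the $120 silicon cost and both retail prices are assumptions for illustration only):

```python
# Sketch of the binning argument above, using made-up numbers.
# Assumption: the 12C part is a 16C die with four cores fused off,
# so the silicon cost is identical for both parts.

silicon_cost = 120.0          # hypothetical cost per two-chiplet package, USD

def margin(price: float) -> float:
    """Profit per unit when both parts use the same silicon."""
    return price - silicon_cost

price_12c = 500.0             # hypothetical 12C retail price
price_16c = 650.0             # 16C priced "a bit nearer" to the 12C

print(f"12C margin: ${margin(price_12c):.0f}, "
      f"per-core price: ${price_12c / 12:.2f}")
print(f"16C margin: ${margin(price_16c):.0f}, "
      f"per-core price: ${price_16c / 16:.2f}")
# The 16C can be cheaper *per core* (650/16 ≈ $40.63 vs 500/12 ≈ $41.67)
# while still earning a larger margin per unit sold.
```

With these placeholder prices, the 16C undercuts the 12C per core yet clears $150 more margin per unit, which is the whole point of the argument.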
 

Mopetar

Diamond Member
Jan 31, 2011
4,445
372
126
#78
If the rumors about the next Xbox and PlayStation using these same chiplets are true, there is no way the desktop and server market is coming close to making the chiplets supply constrained, like ever. The current PS4 and Xbox One total over 130 million units so far, and neither Sony nor Microsoft will plan for less for their next systems. The rumor could never be true if TSMC weren't able to deliver that amount, and the amount required for the desktop and server market is an absolute joke in comparison.
We don't know exactly when those consoles are expected to be released. Since they're rumored to use Navi graphics, presumably they're not coming until the end of this year at the earliest.

Also, it's 130 million over the lifespan of the console. Not all of those chips need to be available out of the gate. The PS4 sold ~20 million in its first two years, which includes two holiday buying seasons. They've since sold a little over 20 million per year.

In 2018, an estimated quarter of a billion PCs were sold, over ten times as many as PlayStations sell in a year. AMD alone isn't supplying all of that, but they'd like a bigger piece of it, especially the server market, where margins are significantly higher.
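The rough arithmetic in the paragraphs above can be laid out explicitly (all figures are the thread's own ballpark estimates, not official numbers):

```python
# Ballpark supply arithmetic from the posts above.

console_lifetime_units = 130e6   # PS4 + Xbox One combined units to date
ps4_per_year = 20e6              # PS4's rough annual run rate
pcs_sold_2018 = 260e6            # "over a quarter of a billion" PCs in 2018

# PCs sold per year vs. PlayStations sold per year
print(f"PC-to-PS4 annual ratio: {pcs_sold_2018 / ps4_per_year:.0f}x")

# Years needed to reach the lifetime console volume at that run rate,
# i.e. why the 130M figure doesn't imply 130M chips at launch.
print(f"Years to reach 130M at 20M/yr: "
      f"{console_lifetime_units / ps4_per_year:.1f}")
```

The point of the second number: the console demand is spread over many years, so the day-one chiplet order is a small fraction of the headline lifetime figure.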

The point is that they've got a single component that could go into all of those different products and markets, but they can't supply all of them at once without having to cut back in other markets. If you want to think about what products AMD is most likely to offer, it is necessary to consider what constraints they're under.

This is why I think it's reasonable to consider that AMD may launch R3/R5 first, instead of their best chips first as they've done historically. There are other reasons beyond simply manufacturing constraints, such as knowing that AMD could make an R5 that beats Intel's best (the 9900K) right now, which means it could stand on its own.
 

maddie

Platinum Member
Jul 18, 2010
2,436
358
136
#79
AMD will only have roughly one year to amortize the Zen2 design, assuming everything goes as planned. Zen3 is due in 2020.

Do you really want to limit sales by holding back variants for significant periods of time? Come on now.

Have we become so accustomed to the recent Intel past that we can't even imagine an alternative, which I might add, was the norm?
 

Atari2600

Senior member
Nov 22, 2016
724
169
106
#80
The point is that they've got a single component that could go into all of those different products and markets, but they can't supply all of them at once without having to cut back in other markets. If you want to think about what products AMD is most likely to offer, it is necessary to consider what constraints they're under.

This is why I think it's reasonable to consider that AMD may launch R3/R5 first, instead of their best chips first as they've done historically. There are other reasons beyond simply manufacturing constraints, such as knowing that AMD could make an R5 that beats Intel's best (the 9900K) right now, which means it could stand on its own.
So AMD have a supply constraint on an excellent product...

Then they decide the answer to that lack of supply is to provide that product only in the lowest-profit-margin forms?

Like seriously? WTF!


Meanwhile in the real world.
If AMD have a supply constraint, the hierarchy for priority is: (1) EPYC, (2) Ryzen9/Ryzen7, (3) ThreadRipper3 (too niche to surpass R9/R7), (4) Ryzen5, (5) Ryzen3.
If AMD have a *massive* supply constraint, then the hierarchy for priority is: (1) EPYC, (2) ThreadRipper3, (3) Ryzen9/Ryzen7, (4) Ryzen5, (5) Ryzen3.
 

moinmoin

Senior member
Jun 1, 2017
687
193
96
#81
The capacity of TSMC's new process nodes needs to fulfill at the very least the demand of their first early adopter, which for 7nm is Apple; iPhone sales since 2015 have been over 200 million per year. Their second early adopter is, by way of its HiSilicon subsidiary, Huawei, which also exceeded 200 million smartphones in 2018. There has even been talk that current 7nm demand is not nearly high enough to run TSMC's 7nm fabs at full capacity. There is no way 7nm chiplets will be supply constrained unless AMD orders too few of them.
 
Apr 27, 2000
11,175
690
126
#82
Look at the 2990WX. 32 cores on 4 channels of DDR4 = 8 cores/DDR channel. Same as a prospective 16C AM4. It performs perfectly well on many workloads.
Dunno if that's the best possible example. Half of the 2990WX's cores aren't even serviced by a memory controller. Change the 2990WX around to have single-channel memory controllers on each die to balance out the latency problems and you'll push the performance bottleneck away from latency and towards bandwidth.

However, I will agree that, in most cases, end-users have no idea how much memory bandwidth is required for a given CPU handling a given workload before other factors (such as latency) become performance bottlenecks. The only way we'll ever really know if a 16c Zen2 will become bandwidth-starved is to let AMD release the things and see how well they perform at various different DDR4 speeds in comparison to 8c Zen2 parts. If per-core performance starts to degrade at low(er) DDR4 speeds more-rapidly on the 16c part than on the 8c part, then we'll know.
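The cores-per-channel arithmetic above extends naturally to bandwidth per core. This sketch assumes dual-channel DDR4 with a 64-bit (8-byte) bus per channel and ignores real-world efficiency losses:

```python
# Peak DDR4 bandwidth per core for an AM4 (dual-channel) part.
# Per-channel peak = transfer rate (MT/s) * 8 bytes per transfer.

def gbs_per_core(mt_s: int, channels: int, cores: int) -> float:
    """Theoretical peak memory bandwidth per core, in GB/s."""
    return mt_s * 8 * channels / 1000 / cores

for mt_s in (2400, 2933, 3200):
    print(f"DDR4-{mt_s}: 8C -> {gbs_per_core(mt_s, 2, 8):.2f} GB/s/core, "
          f"16C -> {gbs_per_core(mt_s, 2, 16):.2f} GB/s/core")
# A 16C AM4 part gets half the per-core bandwidth of an 8C part at any
# given DDR4 speed. If its per-core throughput degrades faster as DDR4
# speed drops, that's the bandwidth-starvation signature described above.
```

This is the test proposed in the post: sweep DDR4 speeds on 8C and 16C parts and watch whether the 16C curve falls off faster.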
 

Mopetar

Diamond Member
Jan 31, 2011
4,445
372
126
#83
So AMD have a supply constraint on an excellent product...

Then they decide the answer to that lack of supply is to provide that product only in the lowest-profit-margin forms?
I don't think it's that easy. AMD wants to sell Navi GPUs to the console manufacturers as well, so it's not just a matter of sending chips to where the margin is highest. They consider having their GPUs in the gaming consoles a pretty important strategic priority.

As I pointed out in a previous thread, depending on what prices AMD wants to charge, they could earn more profit from selling two R5's instead of one R9 since the latter needs two chiplets.

I wouldn't be surprised if AMD sells a single-chiplet Ryzen that's meant for the gaming crowd. 16 cores aren't particularly useful for most games, and who knows if multiple chiplets introduce some performance penalty in games. There are going to be a limited number of chiplets that can hit the highest clock speeds, so AMD could conceivably make a greater profit selling them as an R5 that commands a much higher price.
 

PeterScott

Platinum Member
Jul 7, 2017
2,605
226
96
#84
I'm expecting the price per core to be roughly what it is now. No crazy cuts or anything like that. Don't leave money on the table.
Seems like a common-sense argument to me. But the polling suggests it isn't that common.
 

amd6502

Senior member
Apr 21, 2017
450
53
61
#85
I'm expecting the price per core to be roughly what it is now. No crazy cuts or anything like that. Don't leave money on the table.
Given that transistors per core have risen significantly, and the cost per transistor as well, I suspect the cost per core is going to go up somewhat. Per gigahertz*core, however, it should still not be far higher than Pinnacle Ridge.
 
Last edited:
Feb 6, 2011
1,771
88
136
#86
Given that transistors per core have risen significantly, and the cost per transistor as well, I suspect the cost per core is going to go up somewhat. Per gigahertz*core, however, it should still not be far higher than Pinnacle Ridge.
Cost per transistor has not increased; cost per mm² has.
 

NostaSeronx

Platinum Member
Sep 18, 2011
2,302
98
126
#87
Cost per transistor has not increased; cost per mm² has.
If they had stayed planar, that might be the case.

However, FinFETs require SADP, SAQP, SAOP, etc., so the cost per transistor has increased with each FinFET generation, due to the growing number of process steps and masks that FinFETs require.
 

moinmoin

Senior member
Jun 1, 2017
687
193
96
#88
I don't think it's that easy. AMD wants to sell Navi GPUs to the console manufacturers as well, so it's not just a matter of sending chips to where the margin is highest. They consider having their GPUs in the gaming consoles a pretty important strategic priority.
AMD's involvement in consoles is through their semi-custom business, which primarily allows AMD to share a huge part of R&D costs they couldn't carry otherwise. That seems pretty obvious from AMD's consumer GPU development, which for quite some time has no longer been driven by the consumer market itself but by the demands of their two console customers.
 

Atari2600

Senior member
Nov 22, 2016
724
169
106
#89
Dunno if that's the best possible example. Half of the 2990WX's cores aren't even serviced by a memory controller. Change the 2990WX around to have single-channel memory controllers on each die to balance out the latency problems and you'll push the performance bottleneck away from latency and towards bandwidth.
So the 2990WX potentially has both bandwidth and latency problems, yet in the likes of rendering it still goes like stink.

A 16C AM4 will only have a potential bandwidth issue, yet might do worse?

Not following the thought train.
 

Atari2600

Senior member
Nov 22, 2016
724
169
106
#90
I don't think it's that easy. AMD wants to sell Navi GPUs to the console manufacturers as well, so it's not just a matter of send chips to where the margin is highest. They consider having their GPUs in the gaming consoles to be a pretty important strategic priority.
Supplying strategic partners with technology critical to their own products is not equivalent to supplying Zen2-based Ryzen 3s instead of Zen1 Ryzen 3s to OEMs or Newegg.


As I pointed out in a previous thread, depending on what prices AMD wants to charge, they could earn more profit from selling two R5's instead of one R9 since the latter needs two chiplets.
I made the argument above for dropping 16C prices closer to 12C if they were fusing off perfectly good parts, so that the 16C per-core price could be less than the 12C per-core price.

That does not equate to pricing 12C (or 16C) parts at less per core than 8C parts.


You'll really struggle to find an example of where Intel or AMD have provided a premium product on more expensive silicon at a price that isn't superlinear to performance (relative to mainstream/low end products).
 

Topweasel

Diamond Member
Oct 19, 2000
4,742
345
136
#91
So the 2990WX potentially has both bandwidth and latency problems, yet in the likes of rendering it still goes like stink.

A 16C AM4 will only have a potential bandwidth issue, yet might do worse?

Not following the thought train.
Honestly, I haven't tracked the 4-die chips for a while. Is this from before or after the fix that Level One talked about a couple weeks ago?
 

Mopetar

Diamond Member
Jan 31, 2011
4,445
372
126
#92
I find that pretty obvious with AMD's consumer GPU development which for quite some time is no longer driven by the consumer market itself but by the demands of their two console customers.
I don't think that was AMD's intention, but it may be what happened as a result of AMD's lack of resources and, as you point out, the necessity of finding a way to pay for their graphics R&D while they focused on developing Zen.

I think that they wanted to be bigger in the consumer market, but they made a few missteps in how to approach that and the crypto currency boom didn't do them a lot of favors either. I think that with NVidia devoting a lot of silicon to ray tracing and increasing their margins, AMD has a much better chance at taking back market share in the consumer space.


Supplying strategic partners with technology critical to their own products is not equivalent to supplying Zen2-based Ryzen 3s instead of Zen1 Ryzen 3s to OEMs or Newegg.
But it's still all connected. The point is that it's not simply a matter of just pushing products into whatever market has the highest margins until they're saturated. That makes the most sense from the point of view of maximizing profits, but it's not that simple.

The rumors regarding the consoles suggest that the clock speeds for the CPU aren't that high, so it's entirely possible that the chiplets that go towards a console are a bin that wouldn't be too valuable for other markets.

You'll really struggle to find an example of where Intel or AMD have provided a premium product on more expensive silicon at a price that isn't superlinear to performance (relative to mainstream/low end products).
Probably because it never made sense to do something like that until now. Chips were monolithic so if you had a chip that could have 8-cores, you would want to try to sell as many of them as you could.

With Ryzen 3000, AMD can be faced with a situation where they have an 8-core chiplet that can clock extremely high which makes it valuable for gaming. Sure they could pair two such chiplets to make a beastly 16-core part, but TDP constraints will probably mean that two such chiplets can't reach the same sustained speeds at the same time.

They might also have some 6-core chiplets that don't clock very high or have garbage efficiency. They could put them together as a 12 core part that doesn't get very high clock speeds, but still might be useful to consumers who want more cores. It's conceivable that AMD could get about the same amount of money for both parts and that it's more profitable for them to sell those as a 12-core part than two 6-core parts since they would be bottom of the barrel R3s.

There's no guarantee that they do any of that, but it's a possibility. Switching away from a monolithic die changes things. Maybe we do just get the same thing as always because that's what the market is used to, but if AMD changed things up it would be because of the way they've changed their approach to CPU design.
 
Apr 4, 2017
117
167
86
#93
Honestly, I haven't tracked the 4-die chips for a while. Is this from before or after the fix that Level One talked about a couple weeks ago?
It was never an issue on Linux; on Windows it was an issue for some workloads before this fix, and still is for others.
 
Apr 27, 2000
11,175
690
126
#94
Not following the thought train.
The 2990WX has enough latency problems due to its lopsided mem controller situation that you may never notice that it is bandwidth-starved. Yes, it does "go like stink", but it might actually be faster - maybe - if the orphaned cores didn't have extra latency imposed by making an extra IF hop to a memory controller AND being forced to wait for the host core to finish making its own mem read/write requests (which is another problem people don't often think about; the mem controller from the properly-serviced die is being forced to service requests from two dice). The 2990WX has so many weird design trade-offs that it's hard to compare it to more-balanced package layouts.

If anyone really wanted to use TR2 to extrapolate data on whether 16c Zen2 might be bandwidth-starved, they would be better served running a 2950X with both mem controllers in single-channel mode. It still wouldn't exactly replicate Zen2, but it would be a lot closer.

I never said a 16c Zen2 was necessarily going to be "worse". It is possible that the 2990WX is starved for more bandwidth and that 16c Zen2 will have the same problem. If anything, the bandwidth-starvation may be more an issue on Zen2 due to higher clockspeeds and IPC.
 

Atari2600

Senior member
Nov 22, 2016
724
169
106
#95
The 2990WX has enough latency problems due to its lopsided mem controller situation that you may never notice that it is bandwidth-starved.
I'm still not getting you.

If the program has very "peaky" memory use that is synchronised across all cores, then yes, you might see a drawback in performance while the memory controller services all the requests; once they're serviced, it'll fly. In which case it will be, at absolute worst, no worse than the 2990WX.

If the program has constant high memory use (say CFD), then the 2990WX should reflect the problems already.



It is possible that the 2990WX is starved for more bandwidth and that 16c Zen2 will have the same problem.
In certain workloads, it *will* be bandwidth starved. But those are niche within a niche. For many prosumers, a 16C on AM4 would be a very good solution.


If anything, the bandwidth-starvation may be more an issue on Zen2 due to higher clockspeeds and IPC.
Agreed - but I am hopeful of the new Zen2 Memory Controller being able to hit much higher DDR4 speeds.
 
Apr 27, 2000
11,175
690
126
#96
I'm still not getting you.

If the program has very "peaky" memory use that is synchronised across all cores, then yes, you might see a drawback in performance while the memory controller services all the requests; once they're serviced, it'll fly. In which case it will be, at absolute worst, no worse than the 2990WX.

If the program has constant high memory use (say CFD), then the 2990WX should reflect the problems already.
Then maybe we just need to find someone with a 2950X, get them to go down to one DIMM per bank, and see what happens. Then maybe I will (or won't) be vindicated. Hell we can have them set it to the all-core turbo for a 2990WX and get them to run some Blender or CBR15 runs and compare it to 2990WX reviews, assuming we can get the RAM speed/timings to line up.
 

Atari2600

Senior member
Nov 22, 2016
724
169
106
#97
But it's still all connected. The point is that it's not simply a matter of just pushing products into whatever market has the highest margins until they're saturated. That makes the most sense from the point of view of maximizing profits, but it's not that simple.
It's not far off being that simple.

If AMD have to supply X chips for consoles, then they can devote X+% chips to that order. Anything left over can be either repurposed or sold to the console OEMs anyway later on.

If they have a shortage of Zen2 chiplets - they absolutely will be putting them into the most profitable products first. In which case do not expect to see Ryzen 3 or 5.

*I suppose, if yields were junk and they were harvesting loads - you might see R3/R5 as a means to get these semi-broken chips out the door.

The rumors regarding the consoles suggest that the clock speeds for the CPU aren't that high, so it's entirely possible that the chiplets that go towards a console are a bin that wouldn't be too valuable for other markets.
A lower speed with tighter power use is also the same bin as EPYC.
 

Atari2600

Senior member
Nov 22, 2016
724
169
106
#98
Then maybe we just need to find someone with a 2950X, get them to go down to one DIMM per bank, and see what happens. Then maybe I will (or won't) be vindicated. Hell we can have them set it to the all-core turbo for a 2990WX and get them to run some Blender or CBR15 runs and compare it to 2990WX reviews, assuming we can get the RAM speed/timings to line up.
I have one. It's busy, but if you want to give me an example of what to run, I'll see what I can do.

[If I can figure out how to make it run in 2-channel mode.]
 
Apr 27, 2000
11,175
690
126
#99
Making it run in two-channel mode should be easy. You have two banks of RAM (one on each side of the CPU socket), so you populate slot A1 on each side. Or however the OEM labels the slots. That will put each memory controller of each die into single-channel mode. That will cut your memory bandwidth in half without explicitly changing memory latency.

There are two series of tests you could run that might prove informative. One would be to test the 2950X against itself: keep your clockspeed the same, but cut memory bandwidth in half to see what happens as compared to your current configuration (which presumably uses all four RAM channels). That might give us some insights about how a hypothetical 16c Zen2 at about the same clockspeed (or perhaps 15% lower clocks) would fare with DDR4 at the same speed as yours.

Then you could lower clocks to match a 2990WX's all-core turbo (which is about where the 2990WX should run during any extended rendering workload). Try to match the memory speeds from a review of the 2990WX and compare your results. Assuming the 2990WX is not suffering from bandwidth starvation, results from your test run should be roughly half as fast as the 2990WX in any embarrassingly-parallel rendering workload. For reference, the AT review of the 2990WX in rendering tasks:

https://www.anandtech.com/show/13124/the-amd-threadripper-2990wx-and-2950x-review/8

If your 2950X, at the same clocks as a 2990WX, with only half the memory channels enabled, manages better than 50% of the performance of the 2990WX in any of those tests, then it should tell you something else is wrong with the 2990WX besides just memory bandwidth.

Honestly I think the first test idea would be more informative, though. I'd rather speculate about the performance a 2-channel DDR4 Zen2 than worry overly-much about performance bottlenecks of an existing CPU that will be EoLed as soon as Zen2 Threadripper hits the streets. So if you are pressed for time, I'd opt for the former.
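The decision rule behind the proposed comparison can be sketched as follows; the benchmark scores here are placeholders, not measurements:

```python
# Sketch of the test proposed above: a 2950X (16C, half its memory
# channels disabled) at 2990WX-matched clocks versus a 2990WX (32C).
# If the 2950X scores more than 50% of the 2990WX, something beyond
# raw memory bandwidth is limiting the 2990WX's scaling.
# Scores below are placeholders, not real benchmark results.

score_2990wx = 5000.0     # hypothetical multi-core render score
score_2950x_1ch = 2900.0  # hypothetical 2950X score, single-channel mode

ratio = score_2950x_1ch / score_2990wx
if ratio > 0.5:
    print(f"{ratio:.0%} of the 2990WX with half the cores: something "
          "other than bandwidth is hurting the 2990WX.")
else:
    print(f"{ratio:.0%}: consistent with plain bandwidth starvation.")
```

With these placeholder scores the ratio is 58%, i.e. the outcome that would point away from bandwidth as the 2990WX's bottleneck; real numbers would come from the suggested Blender/CBR15 runs.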

@Atari2600 btw if you choose to perform the above test with your 2950X, would you be willing to post the results in the speculation thread instead of the pricing thread? The speculation thread is being derailed again, and honestly this line of inquiry might be more on-topic there anyway. Thanks.
 
Last edited:

Topweasel

Diamond Member
Oct 19, 2000
4,742
345
136
It was never an issue for Linux, for Windows it was an issue before this fix for some workloads and still is an issue for others.
I'm just saying that it's being sold as gospel that it's memory latency, and I haven't seen anyone but Wendell testing it since the patch, even for the workloads that are still an issue. Are we absolutely sure it's memory latency and not Windows being Windows? It reminds me of game performance being blamed on CCX-to-CCX latency when Ryzen launched; sure, it didn't help, but there was a lot more wrong, including, again, Windows' scheduler. It became gospel because it was the one thing people could see a major difference in early on.
 
