Speculation: Ryzen 3000 series


What will Ryzen 3000 for AM4 look like?



Mopetar

Diamond Member
Jan 31, 2011
7,843
5,998
136
Even with perfect MT scaling it is still a matter of how fast each of those threads can finish the work for one frame; it only matters if the CPU can't process the volume of data fast enough. And games offload most of the heavy work to the GPU (shader compute) because GPUs are simply faster at it.

If you had perfect MT scaling, the CPU limits would be easier to overcome by adding more cores as that’s almost always easier than anything else.

If you had perfect scaling, you’d need something almost indescribably complex to wind up being limited by the CPU.
 

Mopetar

Diamond Member
Jan 31, 2011
7,843
5,998
136
True, but I want to add my gripe with DX12 and Vulkan: there has not been a single game or game engine that is DX12/Vulkan only.

That won’t happen for a while. Look at Steam hardware surveys and you can easily see why.

There are still loads of people who don’t have cards that would support such a game. So you have companies that aren’t in any hurry to drop the old engines.
 

AtenRa

Lifer
Feb 2, 2009
14,001
3,357
136
Experience over the last couple of years has shown that DX12 for some reason doesn't work, or is too hard, if even DICE and the BF series can't get it to work right. Besides that, not everything can be split into threads.

It's not just games; many applications are still single-threaded. A browser might use multiple processes/threads, but rendering a single page still depends on single-thread performance.



True, but I want to add my gripe with DX12 and Vulkan: there has not been a single game or game engine that is DX12/Vulkan only. The paradigms are so different that IMHO DX12 simply can't work correctly as long as the game also supports DX11; DX12 is simply tacked on. Even AotS has a DX11 path. You have to limit your game/engine to what can be done with DX11 and then replace that code with DX12. What should be done is pure DX12. Only then, if ever, will it shine.

Currently it's like building a race car on top of a truck platform. It's still a truck deep down, and it shows.

People are misguided about DX12 because the vast majority of reviews always use a 5 GHz high-performance CPU like a Core i7 or Core i9. People should see results with low-performance CPUs in order to really appreciate how good DX12 is.
 

Shivansps

Diamond Member
Sep 11, 2013
3,855
1,518
136
But both of those have happened.

It did not; game port performance is still awful on every system, to the point that the only game I could actually feel was properly optimised is Battlefield 1, which runs on almost everything.

As for the second, I'm still waiting for my RX 480 to catch up with the GTX 1060. No matter how many times people repeated that consoles would make games run better on AMD and that the RX 480 would improve over time with drivers, IT DID NOT HAPPEN.

If you had perfect MT scaling, the CPU limits would be easier to overcome by adding more cores as that’s almost always easier than anything else.

If you had perfect scaling, you’d need something almost indescribably complex to wind up being limited by the CPU.

You would need a large enough volume of work for it to be worth splitting into several tasks in the first place. You can't split 1+1=2 across 64 threads.
That's the only thing that can offset the ST advantage.
Games today don't have that kind of workload, and even if they did, they would offload it to the GPU.

Even if you could take any modern game that exists today and make it perfectly MT across 8 cores, every one of those 8 cores would have way too much idle time; a faster-ST 4-core would still be faster, AS LONG AS IT CAN KEEP UP with the amount of work to be done.

People are misguided about DX12 because the vast majority of reviews always use a 5 GHz high-performance CPU like a Core i7 or Core i9. People should see results with low-performance CPUs in order to really appreciate how good DX12 is.
The vast majority of people see DX12/Vulkan as something magical that will make games use all cores. That is false: it only allows the rendering engine to use more cores, and that doesn't mean it will be faster; to be faster takes a vast amount of work. Then you need a scene complex enough and/or a CPU slow enough to even notice.
And that's not even the worst part: most CPU time is spent on game logic before the render even starts.
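The point about serial game logic capping the frame can be sketched with a toy Amdahl's-law model (all numbers here are assumed for illustration, not measured from any game): a DX12/Vulkan-style engine can spread render submission over cores, but the serial logic portion still limits the frame rate.

```python
# Toy Amdahl's-law model of one frame: serial game logic plus render
# submission work that a DX12/Vulkan-style engine can spread across cores.
# The 8 ms figures are assumed for illustration, not measured.
def frame_time_ms(logic_ms: float, render_ms: float, cores: int) -> float:
    """Frame time when only the render work scales with core count."""
    return logic_ms + render_ms / cores

logic_ms, render_ms = 8.0, 8.0

for cores in (1, 2, 4, 8):
    t = frame_time_ms(logic_ms, render_ms, cores)
    print(f"{cores} core(s): {t:5.1f} ms/frame = {1000 / t:5.1f} FPS")
```

No matter how many cores the renderer uses, the serial 8 ms of logic caps this model at 125 FPS, which is also why a slow CPU shows the API's gains far more clearly than a 5 GHz one does.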
 
Last edited:
  • Like
Reactions: ryan20fun

realibrad

Lifer
Oct 18, 2013
12,337
898
126
It did not; game port performance is still awful on every system, to the point that the only game I could actually feel was properly optimised is Battlefield 1, which runs on almost everything.

As for the second, I'm still waiting for my RX 480 to catch up with the GTX 1060. No matter how many times people repeated that consoles would make games run better on AMD and that the RX 480 would improve over time with drivers, IT DID NOT HAPPEN.



You would need a large enough volume of work for it to be worth splitting into several tasks in the first place. You can't split 1+1=2 across 64 threads.
That's the only thing that can offset the ST advantage.
Games today don't have that kind of workload, and even if they did, they would offload it to the GPU.

Even if you could take any modern game that exists today and make it perfectly MT across 8 cores, every one of those 8 cores would have way too much idle time; a faster-ST 4-core would still be faster, AS LONG AS IT CAN KEEP UP with the amount of work to be done.


The vast majority of people see DX12/Vulkan as something magical that will make games use all cores. That is false: it only allows the rendering engine to use more cores, and that doesn't mean it will be faster; to be faster takes a vast amount of work. Then you need a scene complex enough and/or a CPU slow enough to even notice.
And that's not even the worst part: most CPU time is spent on game logic before the render even starts.


I thought the 480 had caught the 1060, and extended its lead in DX12?
 

Terzo

Platinum Member
Dec 13, 2005
2,589
27
91
Is there any news on a release date for this yet? For some reason I was assuming around April, but I recently saw an article from Tom's suggesting it probably won't be until Q3.

Is it likely that AMD will provide more definite release dates come CES?
 

jpiniero

Lifer
Oct 1, 2010
14,618
5,227
136
Is there any news on a release date for this yet? For some reason I was assuming around April, but I recently saw an article from Tom's suggesting it probably won't be until Q3.

Launch looks like it would be at Computex, actual release would presumably be a bit after it.
 

Terzo

Platinum Member
Dec 13, 2005
2,589
27
91
Damn, quite a bit later than I thought. I guess I assumed they'd match the release date of the 2000 series.
 
Feb 4, 2009
34,580
15,795
136
Damn, quite a bit later than I thought. I guess I assumed they'd match the release date of the 2000 series.

It’s unknown at this point. Safe bet is launch announcement at CES, then product on shelves sometime later.
Later could mean a week
Later could mean a few months
Later could mean some get release and others get released at a later date

You could copy me: I'm thinking of putting together a 2200G system, then upgrading to a 3000 chip when all the dust settles. That's what's great about AM4 right now; motherboards & memory should all work pretty seamlessly with the 3000 products.

*Provided AMD doesn't disappoint, which it has a nasty habit of doing
 

Terzo

Platinum Member
Dec 13, 2005
2,589
27
91
It’s unknown at this point. Safe bet is launch announcement at CES, then product on shelves sometime later.
Later could mean a week
Later could mean a few months
Later could mean some get release and others get released at a later date

You could copy me: I'm thinking of putting together a 2200G system, then upgrading to a 3000 chip when all the dust settles. That's what's great about AM4 right now; motherboards & memory should all work pretty seamlessly with the 3000 products.

*Provided AMD doesn't disappoint, which it has a nasty habit of doing

I definitely had considered that, but I don't want to deal with the hassle of swapping out CPUs and heatsinks, as well as trying to offload the 2200. Realistically I could just get a 2600 now and be set for a while, but my heart is pretty much set on Zen 2 in the belief that it will be an appreciable improvement over the current 2000-series offerings.
 
Feb 4, 2009
34,580
15,795
136
I definitely had considered that, but I don't want to deal with the hassle of swapping out CPUs and heatsinks, as well as trying to offload the 2200. Realistically I could just get a 2600 now and be set for a while, but my heart is pretty much set on Zen 2 in the belief that it will be an appreciable improvement over the current 2000-series offerings.

Got a micro center near you?
$80 2200G & cooler.
 

Mopetar

Diamond Member
Jan 31, 2011
7,843
5,998
136
You would need a large enough volume of work for it to be worth splitting into several tasks in the first place. You can't split 1+1=2 across 64 threads.
That's the only thing that can offset the ST advantage.
Games today don't have that kind of workload, and even if they did, they would offload it to the GPU.

I'm aware of that, but the argument is that single-threaded performance is already really good, so you can run millions of those little calculations that can't be parallelized on a single core for each frame without any problem. As an example, in the past a game might be running a lot of those little calculations for eight different AIs (computer-controlled players) and had to do them all on a single core/thread, which limited performance. If that code can experience perfect MT scaling, as you said, that means each of these individual AIs could run on its own core.

If we pretend we have a four core chip, each AI has to share a core with one other AI. Doubling the cores so that each AI can have its own core is much easier than doubling the ST performance of the existing cores (whether that's done with clock speed or IPC improvements doesn't matter) because you've assumed perfect MT scaling. You would have to see each individual AI get so complex that it completely saturates a core before you gain performance from better ST performance. However, no one would build anything that complicated to begin with (or at least they wouldn't have historically) because there was a distinct lack of hardware to support extremely multi-threaded code.

ST performance is important because we don't have perfect MT scaling, but if we did (or could get reasonably close to it) you'd be better off getting a 32-core Threadripper than a slightly faster i7 to improve performance. I'm just saying that arguing about clock speed or IPC improvements when discussing a hypothetical situation where really good MT scaling exists is silly, because throwing more cores at it is almost always the better answer until you hit the point where you end up bound on a single core for the frame rate you want (or the GPU can support), or you've run out of things (e.g. no extra AIs) for the additional cores to do. If you aren't, even a 32-core Threadripper at 3 GHz would be better exchanged for a 64-core Epyc at 2 GHz, since you get more out of it.
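The eight-AI example above can be sketched numerically (the 2 ms per-AI cost is an assumed figure, not from any real engine): if each AI's update is serial within itself but the AIs are independent, they run in waves of `cores` at a time, so doubling the core count halves the AI time, where doubling ST speed would be far harder to engineer.

```python
import math

# Sketch of the eight-AI example: each AI's per-frame update is serial
# within itself, but independent AIs can run in parallel (the "perfect
# MT scaling" assumption). The 2 ms per-AI cost is assumed, not measured.
def ai_time_ms(n_ais: int, per_ai_ms: float, cores: int) -> float:
    """AIs run in ceil(n/cores) waves; each wave costs one AI's update."""
    return math.ceil(n_ais / cores) * per_ai_ms

print(ai_time_ms(8, 2.0, 4))  # 4 cores: the AIs run in two waves
print(ai_time_ms(8, 2.0, 8))  # 8 cores: every AI gets its own core
```

Going from 4 to 8 cores cuts the AI portion of the frame in half, matching the argument that adding cores is the cheap lever once scaling is good.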
 
  • Like
Reactions: ub4ty

jpiniero

Lifer
Oct 1, 2010
14,618
5,227
136
Maybe they'll just skip the Zen2 tease part and launch it instead?

The Gigabyte leak suggesting the chipset launch at Computex seemed pretty legit to me, and of course releasing the chipset and the CPU at the same time seems pretty logical.
 

scannall

Golden Member
Jan 1, 2012
1,946
1,638
136
The Gigabyte leak suggesting the chipset launch at Computex seemed pretty legit to me, and of course releasing the chipset and the CPU at the same time seems pretty logical.
What I'm curious about is, since Epyc will have PCIe 4, will Ryzen? Should be doable without a socket change.
 

ub4ty

Senior member
Jun 21, 2017
749
898
96
What I'm curious about is, since Epyc will have PCIe 4, will Ryzen? Should be doable without a socket change.
I'm not interested in a platform w/o PCIe 4.0 + 7 nm going forward.
EPYC will have PCIe 4.0.
Ryzen and Threadripper had better as well. Otherwise, I have no interest.
PCIe 4.0 needs to be supported by your motherboard.
So if you want PCIe 4.0, even if the processor and the socket support it, you're going to need a new mobo.
PCIe 4.0 is backward compatible w/ 3.0.


I have a 14 nm/PCIe 3.0 Ryzen purchased in 2017. I need 7 nm/PCIe 4.0 to get me off this platform.
I am willing to sell off the CPU/mobo for it, as I paid little to nothing for them.
This is sort of why you don't go crazy buying the most high-end hardware of a series: the mid-tier of the upcoming big-update series obsoletes it relatively quickly.

Having paid about $100 for a mobo, it's fine if the 3rd gen obsoletes it.
I'd get $50 or so for it. Good deal.

PCIe 4.0 was never going to magically work on a PCIe 3.0 motherboard, so there is no reason to have any expectations about 3rd-gen Ryzen. I care more about them shooting for far more PCIe lanes than the current 32 offered on Ryzen, plus PCIe 4.0 and more I/O for Threadripper, than I care about maintaining a compatible socket. If they retain compatibility, it should be on the 8-core-and-below budget tier of the 7 nm line.

I want the platform to soar towards its potential, not be restricted by unrealistic backward compatibility.
 
Last edited:

Reinvented

Senior member
Oct 5, 2005
489
77
91
Ryzen 2 isn't going to mean much to me at all, especially if motherboard makers can't get their BIOSes sorted. ASRock has pretty much abandoned their flagship X300-series and B350/A320 boards; they couldn't even get them to work very well with Ryzen+. I do hope that Ryzen 2 proves to be a nice powerhouse at a reasonable price as well!
 

Insert_Nickname

Diamond Member
May 6, 2012
4,971
1,691
136
I'm not interested in a platform w/o PCIe 4.0 + 7 nm going forward.
EPYC will have PCIe 4.0.
Ryzen and Threadripper had better as well. Otherwise, I have no interest.
PCIe 4.0 needs to be supported by your motherboard.
So if you want PCIe 4.0, even if the processor and the socket support it, you're going to need a new mobo.
PCIe 4.0 is backward compatible w/ 3.0.

Don't forget you'll also need actual PCIe 4.0 devices to get any use from it. I doubt there'll be many in the first year, except a couple of flagship NVMe SSDs here and there.

Most expansion cards have barely moved past PCIe 2.0.
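For scale, the per-lane numbers work out as follows (nominal transfer rates and encoding overheads per generation; real-world throughput is lower once protocol overhead is counted):

```python
# Nominal per-lane bandwidth by PCIe generation: transfer rate (GT/s)
# times encoding efficiency, divided by 8 bits per byte.
GENERATIONS = {
    "2.0": (5.0, 8 / 10),      # 8b/10b encoding
    "3.0": (8.0, 128 / 130),   # 128b/130b encoding
    "4.0": (16.0, 128 / 130),  # same encoding, doubled rate
}

def lane_gbs(gen: str) -> float:
    """Usable GB/s per lane, before protocol overhead."""
    rate_gt, efficiency = GENERATIONS[gen]
    return rate_gt * efficiency / 8

for gen in GENERATIONS:
    print(f"PCIe {gen}: {lane_gbs(gen):.2f} GB/s per lane, "
          f"{4 * lane_gbs(gen):.2f} GB/s for an x4 NVMe drive")
```

An x4 PCIe 4.0 link carries roughly twice what the same slot does on 3.0 (~7.9 vs ~3.9 GB/s), which is why flagship NVMe SSDs are about the only devices expected to care early on.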
 

DrMrLordX

Lifer
Apr 27, 2000
21,637
10,855
136
Ryzen 2 isn't going to mean much to me at all, especially if motherboard makers can't get their BIOSes sorted. ASRock has pretty much abandoned their flagship X300-series and B350/A320 boards; they couldn't even get them to work very well with Ryzen+. I do hope that Ryzen 2 proves to be a nice powerhouse at a reasonable price as well!

While I agree that UEFI support for ASRock's X370 boards has been pretty bad, you'll notice that they shifted their UEFI support to X470 and B450 boards instead. So when the 5-series boards launch, you can bet that ASRock (and others) will prioritize support for those new boards. You will want to buy a new board.

If you are expecting good UEFI updates for X370 boards, eh, forget about it. It's not really gonna happen.