
Speculation: Zen 4 (EPYC 4 "Genoa", Ryzen 6000)

Page 2

Poll: What do you expect with Zen 4? (183 voters)

soresu

Golden Member
Dec 19, 2014
then why would any cloud provider want to assign high performance threads to low demand work?
Depends on the cloud target - Google Zync provides cloud rendering services for VFX/CG work, which is pretty high end stuff; I imagine there are other such high end cloud uses.

Besides which, if the work is so low demand, you could probably run it on the client you use to access the cloud in the first place.
 

soresu

I can't remember AMD being complacent and milking customers back in the warm and glorious days they led the market and Intel had to bribe their way into every OEM's cold, dead heart.
They were caught unawares by Core 2, despite its uArch predecessor Yonah already being quite competitive with K8.

If they had something waiting only a few months away, they would not have gutted Athlon X2 pricing to the extent that they did (I benefited from it, was a happy customer).
 

Atari2600

Golden Member
Nov 22, 2016
I can't remember AMD being complacent and milking customers back in the warm and glorious days they led the market and Intel had to bribe their way into every OEM's cold, dead heart.
They did get complacent.

Hector way overpaid for ATi and the Barcelona redesign was complacent. At best. [while it held its own in some areas due to the IMC, it did not advance AMD's cause further in any other area]


But, now... AMD are introducing fairly fundamental steps with every iteration and look like they have a rock solid roadmap that is not merely tweaking a bit of what came before.
 

moinmoin

Platinum Member
Jun 1, 2017
Besides which, if the work is so low demand, you could probably run it on the client you use to access the cloud in the first place.
That's not how the cloud is often used. With typical cloud servers you essentially book virtual instances of CPU threads, and depending on the service provider you pay for the CPU time actually used. But the availability is always there, even if idling. And in web serving (a common use case) idling is the rule; performance capacity is typically booked for the few exceptions where the site would otherwise be overloaded.
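To put rough numbers on that billing model (purely illustrative - the rates, vCPU count, and idle fraction below are invented, not any real provider's pricing):

```python
# Toy comparison of two cloud billing models, with made-up numbers:
#  - on-demand: pay only for vCPU time actually consumed
#  - reserved:  pay for the instance around the clock, idle or not

HOURS_PER_MONTH = 730  # average hours in a month

def on_demand_cost(busy_hours, vcpus=4, rate_per_vcpu_hour=0.05):
    """Cost when billed only for CPU time actually used."""
    return busy_hours * vcpus * rate_per_vcpu_hour

def reserved_cost(vcpus=4, rate_per_vcpu_hour=0.03):
    """Cost when the capacity is booked full-time for availability."""
    return HOURS_PER_MONTH * vcpus * rate_per_vcpu_hour

# A web service that idles 95% of the time, as described above:
busy = HOURS_PER_MONTH * 0.05
print(f"on-demand: ${on_demand_cost(busy):.2f}/month")
print(f"reserved:  ${reserved_cost():.2f}/month")
```

With these toy rates, a mostly idle service comes out far cheaper on-demand, while a fully loaded one would favor the reserved instance - which is exactly the trade-off being described.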
 

soresu

That's not how the cloud is often used. With typical cloud servers you essentially book virtual instances of CPU threads, and depending on the service provider you pay for the CPU time actually used. But the availability is always there, even if idling. And in web serving (a common use case) idling is the rule; performance capacity is typically booked for the few exceptions where the site would otherwise be overloaded.
Ah, I think we are talking about different cloud services entirely - I meant things like cloud office apps when I said low demand.
 

moinmoin

Ah, I think we are talking about different cloud services entirely - I meant things like cloud office apps when I said low demand.
Even in such cases you are essentially serving data stored in the cloud ("web app" as well as user files) which often is not a one-off on demand but a service idling most of the time.
 

soresu

As a sidenote, how do AMD actually differentiate generations of EPYC in branding?

Obviously Ryzen goes with 1xxx, 2xxx, 3xxx - but how has EPYC progressed in branding?
 

A///

Senior member
Feb 24, 2017
Barcelona redesign was complacent. At best. [while it held its own in some areas due to the IMC, it did not advance AMD's cause further in any other area]
Complacency is defined as sitting on your hands and doing little. A badly designed processor or overpaying for a graphics company isn't complacency. One is a design that didn't translate to silicon as well as it did on paper, and the other is a financial mistake, even though people railed against it. AMD would be in a tougher spot today if they had sold their graphics division off years ago, before their major troubles began.
 

soresu

Complacency is defined as sitting on your hands and doing little. A badly designed processor or overpaying for a graphics company isn't complacency. One is a design that didn't translate to silicon as well as it did on paper, and the other is a financial mistake, even though people railed against it. AMD would be in a tougher spot today if they had sold their graphics division off years ago, before their major troubles began.
It was not just a bad processor design; it was also far too late to compete effectively with the Core 2 onslaught that trashed AMD's pricing back then. Even when Agena landed, that TLB erratum made it iffy for server use, making any aspiring server customers wait for Deneb in 2009 (which was great, IMHO), by then a full six years after K8.
 

A///

It was not just a bad processor design; it was also far too late to compete effectively with the Core 2 onslaught that trashed AMD's pricing back then. Even when Agena landed, that TLB erratum made it iffy for server use, making any aspiring server customers wait for Deneb in 2009 (which was great, IMHO), by then a full six years after K8.
You're implying that had it not been late, its poor design would have competed with Core 2. In other words, if lateness is the condition on which you feel processors make or break, then every succeeding architecture from either company is late relative to the other, since there is no 1:1 unified design and manufacturing cycle keeping the two in step.

Intel's 10nm is late. Real late. They announced 10nm plans nearly a decade ago, in 2011. They announced 14nm in 2007. By that measure, Intel's incredible complacency, driven by their own pursuit of making as much money while doing as little as possible, plus not having a reliable answer to Zen even 3 years later, is a failure. And they will not have an answer for at least another 2 years, maybe 3.

And that is presuming Intel's next uarch is competitive.
 

soresu

You're implying that had it not been late, its poor design would have competed with Core 2. In other words, if lateness is the condition on which you feel processors make or break, then every succeeding architecture from either company is late relative to the other, since there is no 1:1 unified design and manufacturing cycle keeping the two in step.
From what I heard, barring the TLB fault and process problems it wasn't particularly bad - again, just somewhat late. I had the money to upgrade but decided to skip it for Deneb (excuse me, Phenom II, so catchy).
 

A///

From what I heard, barring the TLB fault and process problems it wasn't particularly bad - again, just somewhat late. I had the money to upgrade but decided to skip it for Deneb (excuse me, Phenom II, so catchy).
Sorry I edited while you posted. I'm not chewing you out. The language is just iffy. I think AMD had some great ideas at the time, but being too revolutionary costs you.

Intel's 10nm is late. Real late. They announced 10nm plans nearly a decade ago, in 2011. They announced 14nm in 2007. By that measure, Intel's incredible complacency, driven by their own pursuit of making as much money while doing as little as possible, plus not having a reliable answer to Zen even 3 years later, is a failure. And they will not have an answer for at least another 2 years, maybe 3.

And that is presuming Intel's next uarch is competitive.

I had the OG Core 2s after getting rid of my fire hazard P4. It took me a while to believe in Core 2 even though I was reading benches every day. I feel the same now about Zen 2 as I did then. I'll probably upgrade at Zen 3 because my i7 is getting long in the tooth.
 

soresu

Especially when you compare it with the cadence of AMD's current yearly improvements, the K8 era seems woefully lax - they made a great step forward with AMD64 and then wasted that lead, in my opinion; the Agena TLB erratum and Intel's dirty bribery perfidy with OEMs just exacerbated their financial blows.
 

soresu

I had the OG Core 2s after getting rid of my fire hazard P4.
Hahahahha me too.

I upgraded from my first personal (not hand-me-down) PC build, a 2.4 GHz P4, to a 3 GHz one, and it just ran hot as hell.

Pretty much my entire reason for first looking into aftermarket coolers - sadly to be repeated with my Bulldozer/Piledriver CPUs, which went into lava mode while running SWTOR (perhaps the worst optimised game ever, it would seem).
 

A///

Hahahahha me too.

I upgraded from my first personal (not hand-me-down) PC build, a 2.4 GHz P4, to a 3 GHz one, and it just ran hot as hell.

Pretty much my entire reason for first looking into aftermarket coolers - sadly to be repeated with my Bulldozer/Piledriver CPUs, which went into lava mode while running SWTOR (perhaps the worst optimised game ever, it would seem).
I had two P4s. Prior to that I had a P3, and before that a P1. My P3 was in the 500 MHz range, IIRC. I was aware of PD at the time, but the Core 2s had just come out months before. The OG Conroe 65 nm came out in the summer of '06, and I remember parts shopping in October or November of that year. I was cautious of it even though I'd read the writeups and examined the benchmarks. Fool me once, shame on you; fool me twice, shame on me, kind of thing, you know. Going from P4 to Conroe was a huge step up for me. I held and clocked that chip for a while before getting my hands on a higher E chip in 2011 out of an OEM build sold for scrap (busted PSU, silly owner).

Nowadays I could upgrade every year if my heart desired. Realistically, I prefer waiting, because the performance increase coupled with that bedazzling WOW factor is nostalgia. I've been tempted to get the 3900X at MSRP several times now, or the 3800X that was on sale yesterday. I'll wait until the 4000 series. Last on AM4, possibly, unless AMD change their minds, but it should last me a good deal of time.
 

soresu

Going from P4 to Conroe was a huge step up for me.
Yeah, it was such a step up because P4 was a step backwards from the P6 uArch, though I think not quite on the level BD was over K10.6 (Thuban/Phenom II X6).

It's weird that they didn't simply replace their desktop range with Yonah instead of carrying P4/NetBurst as long as they did.
 

A///

Especially when you compare it with the cadence of AMD's current yearly improvements, the K8 era seems woefully lax - they made a great step forward with AMD64 and then wasted that lead, in my opinion; the Agena TLB erratum and Intel's dirty bribery perfidy with OEMs just exacerbated their financial blows.
I hope that cadence keeps up, because it's impressive. If you discount their start on Zen about 7 years ago, it took them 3 years to match and overcome Intel in everything except gaming. I suspect Intel's Odyssey, which some suspect has failed but I'll hold my tongue on that, was or still is a way for Intel's marketing to push chips via the gaming slant. To my knowledge, pure gamers make up a small portion of DIY sales of the already small DIY sector. No idea on OEMs, but Dell has been selling Ryzens in their gaming computers. No idea how long, though.

I think the people who say they care about an extra 5-10 FPS, or in some terribly optimized games upwards of 40 FPS, are extreme cases. In other words, there's a greater likelihood of these people stating their opinions than of those who aren't too bothered by the difference.

Intel, if they wanted to bribe OEMs now, would have to spend several times over their total bribe amounts per OEM. It isn't financially feasible. Nor is an astroturfing campaign. I sincerely doubt Intel was invested in VCR because of that dubious report last year where they published their feelings on AMD. They'd been hitting other companies, but clearly they're amateurs, because their success rate, especially against AMD, was terrible.
 

soresu

I've been tempted to get the 3900X at MSRP several times now, or the 3800X that was on sale yesterday. I'll wait until the 4000 series. Last on AM4, possibly, unless AMD change their minds, but it should last me a good deal of time.
Depends what you are into I guess, for me moar cores is always better - video encoders, path tracing renderers and compositors are my main compute workloads of choice.
 

A///

Yeah, it was such a step up because P4 was a step backwards from the P6 uArch, though I think not quite on the level BD was over K10.6 (Thuban/Phenom II X6).

It's weird that they didn't simply replace their desktop range with Yonah instead of carrying P4/NetBurst as long as they did.
I remember someone explaining it to me a year ago, but it had to do with costs and worrying whether AMD had something better. Intel knew 10nm was a red-headed stepchild quite a way into their development of the node but decided to keep going because they were already invested in it. You're thinking, Intel's got a ton of money, what's the big deal? Shareholders. Time is money, and money is time.
 

A///

Depends what you are into I guess, for me moar cores is always better - video encoders, path tracing renderers and compositors are my main compute workloads of choice.
Mine is mostly VM work and faster compile times, among other things. $500 for the 3900X is a lot of money, but it does that work lickety-split, to quote an older coworker who scored one for home use. You could always wait in this era of AMD, where each gen gets an 8-15% IPC increase along with the benefits of a refined or mature node, but you may get stuck waiting "indefinitely." After Core matured in the 2010s, particularly with Sandy Bridge, you didn't need to upgrade for a long time. On the gaming front they, AMD, have made decent progress. There's a guy on youtube called thespyhood who's done generational comparisons of the most popular AMD Ryzen processors and there's a 15-20 fps gain on minimums and averages for most games.

I suspect Zen 3's single-thread performance will be greatly improved, to match Intel or nudge past it. I suspect multi-thread performance will improve yet again. I expect better clocks out of the box, better boosting, and all the other typical goodies people are theorizing about. My next upgrade after that will probably be Zen 5. Just because Intel hit a home run with Core doesn't mean their next-gen uArch will be amazing too. The bad uArchs in history outnumber the good ones. Keller is an excellent engineer, but placing too much emphasis on his work, when there were more hands-on people involved with Zen than him, is a disservice to all the others. However, Keller isn't a messiah who will solve Intel's problems. Intel's next gen could have a 40% IPC increase over today's 10th gen parts, but in 2022-2023 it could match what AMD has planned, or fall short. In other words, it may work out to Intel pulling their own AMD dark period.

As far as Mama Su/Dr. Lisa Su goes, she's an excellent leader and very likely a skilled engineer herself. But let's be real, she and the rest of AMD's management are out for Intel's blood, and rightfully so. They're probably going to pummel Intel for years.
 

soresu

Intel, if they wanted to bribe OEMs now, would have to spend several times over their total bribe amounts per OEM. It isn't financially feasible.
More than that, people and journos are watching out for it now, like when nVidia got caught with that iffy scheme a while ago. As an AMD buyer, I don't just feel like Intel cheated AMD with the bribery, I personally felt cheated (which I know is more than a little strange, but hey, tribalism), and I'm sure enough others felt similarly that it makes a good story for a journo to break if Intel gets caught at it.

Besides which with their current ongoing uArch security woes and 'Glue Gate', they really can't afford to risk bad press.
 

jpiniero

Diamond Member
Oct 1, 2010
To my knowledge, pure gamers make up a small portion of DIY sales of the already small DIY sector.
If anything, gaming is by far the biggest part of DIY - I can't emphasize that enough. And for good reason: pretty much everyone else is better off buying a prebuilt. Yes, DIY is small in the scheme of things. You do have the e-penis crowd, which can be quite profitable.

The vast overwhelming majority of the client market is boring corporate desktop and laptop machines that do little other than Office and a Web Browser.
 

soresu

There's a guy on youtube called thespyhood who's done generational comparisons of the most popular AMD Ryzen processors and there's a 15-20 fps gain on minimums and averages for most games.
It's still early to call final perf on Zen 2, given how many BIOS updates and OS scheduler tunings you tend to get after significant uArch changes.

Likewise, I'd say the unified L3 change for Zen 3 should be a doozy for those successive tuning updates on its own, even without whatever else comes with it.
 
