Discussion Zen 5 Speculation (EPYC Turin and Strix Point/Granite Ridge - Ryzen 9000)

Page 706

jdubs03

Golden Member
Oct 1, 2013
1,282
902
136
The dGPU uses as much as 110W measured at the mains; that's 75W peak left for the rest of the system, so about 68W at the DC adapter output, which translates to 50-55W for the CPU.
Based on the video, in performance mode, they're saying 65-80W on the CPU, which tracks with what I've seen for performance mode... around 80W max.

But in their article analysis, completely maxed out at 80W TDP, it gets 23236 pts in R23 multi and averages 102W (227 pts/W).
They also show a score of 22960 pts at 65W TDP (max power draw there not stated).

So to get that last bit definitely requires a lot of juice.

I think the key point is how well it does when it's set to 28W TDP; it's near its most efficient there. It's hard to tell what mode they used to get their battery-life results, but I presume performance. Would've thought it'd be higher.
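(For what it's worth, here's the back-of-the-envelope math on those figures as a quick C sketch; the scores and the 102W average are just the article numbers quoted above, nothing I've measured myself.)

```c
/* Back-of-the-envelope efficiency math using the figures quoted above.
   All numbers come from the article; nothing here is measured by me. */
#include <stdio.h>

int main(void) {
    double score_80w = 23236.0;   /* R23 multi score at 80W TDP */
    double avg_power = 102.0;     /* average power at 80W TDP (W) */
    double score_65w = 22960.0;   /* R23 multi score at 65W TDP */

    printf("80W TDP: %.0f pts / %.0f W = %.1f pts/W\n",
           score_80w, avg_power, score_80w / avg_power);
    printf("Score gain from 65W to 80W TDP: %.1f%%\n",
           (score_80w / score_65w - 1.0) * 100.0);
    return 0;
}
```

That works out to roughly 227.8 pts/W and only a ~1.2% score gain going from 65W to 80W TDP, which is exactly why that last bit of score costs so much juice.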
 
  • Like
Reactions: krawcmac

mostwanted002

Member
Jun 16, 2023
63
127
86
mostwanted002.page
Am I insane to say I am more interested in Strix Halo than the 9950X?

Like, the 9950X is so boring, whereas the Halo is fun and new.
Not at all.

I'm primarily waiting for it to upgrade my laptop, and it'll be so good for homelab applications. Best of both worlds on a single package, a low power footprint, and compact form factors.
 
  • Like
Reactions: Gideon

CouncilorIrissa

Senior member
Jul 28, 2023
731
2,692
106
Programs compiled targeting znver5 aren't as fast as they could be. But outside of benchmarks and research labs no one will be doing that. So for most people it means nothing.
How dare you forget Gentoo users compiling everything with -march=native and -mtune=native.
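(In case anyone actually wants to play along at home, here's a minimal sketch of what that flag changes. The file name and invocations are purely illustrative, and whether your GCC accepts -march=znver5 depends on its version; -march=native simply targets whatever the build host reports.)

```c
/* dot.c - toy loop whose codegen depends on the -march target.
 * Illustrative invocations (hypothetical, adjust to taste):
 *   gcc -O3 -march=x86-64-v3 dot.c -o dot_generic   # portable baseline
 *   gcc -O3 -march=native    dot.c -o dot_native    # tuned for the build host
 * The same scalar loop may be auto-vectorized with SSE, AVX2 or AVX-512
 * depending on which target the compiler is told to select and schedule for. */
#include <stddef.h>
#include <stdio.h>

static double dot(const double *a, const double *b, size_t n) {
    double sum = 0.0;
    for (size_t i = 0; i < n; i++)
        sum += a[i] * b[i];   /* candidate for FMA + vectorization at -O3 */
    return sum;
}

int main(void) {
    double a[4] = {1, 2, 3, 4}, b[4] = {4, 3, 2, 1};
    printf("%f\n", dot(a, b, 4));   /* prints 20.000000 */
    return 0;
}
```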
 

misuspita

Senior member
Jul 15, 2006
738
904
136
Am I insane to say I am more interested in Strix Halo than the 9950X?

Like, the 9950X is so boring, whereas the Halo is fun and new.
Oh no, I'm waiting to see it in something the size of this, or probably a bit bigger, and throw my money at it.

 

LightningZ71

Platinum Member
Mar 10, 2017
2,509
3,191
136
Re: GCC decoder intelligence...
Essentially, this means that the compiler isn't presenting the machine-language instructions to the CPU in a manner that is (near) optimal for its decoder layout. This can (theoretically) hurt its ability to efficiently, quickly, and intelligently convert the machine-language instructions it receives into actual work in the core itself. According to certain absent posters here who will not be named, modern out-of-order processors are completely immune to anything a compiler or programmer can throw at them, can adjust on the fly to these things, and the above has absolutely zero effect on processor performance. In reality, how the machine-language instructions are presented to the decoder can have a measurable effect on processor performance and efficiency.

Time and updates to GCC will show the truth. I suspect that it won't be a big change in MOST, but not all, cases.
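(As a concrete, if simplified, illustration that "how the instructions are presented" is something the toolchain and programmer can influence: branch-probability hints are one knob compilers already expose. This is generic GCC/Clang stuff, nothing znver5-specific, just to show code layout isn't entirely out of our hands.)

```c
/* Toy layout-hint example. With the hints, GCC/Clang typically move the
 * error path out of line, so the hot loop is a straight fall-through run
 * of instructions for the fetch/decode stages. Generic x86 advice only,
 * not a claim about Zen 5's decoder in particular. */
#include <stdio.h>
#include <stdlib.h>

#define likely(x)   __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)

static long sum_positive(const long *v, size_t n) {
    long sum = 0;
    for (size_t i = 0; i < n; i++) {
        if (unlikely(v[i] < 0)) {              /* cold: pushed out of the hot path */
            fprintf(stderr, "negative value at index %zu\n", i);
            exit(EXIT_FAILURE);
        }
        sum += v[i];                           /* hot: stays in the fall-through path */
    }
    return sum;
}

int main(void) {
    long v[] = {1, 2, 3, 4};
    printf("%ld\n", sum_positive(v, 4));       /* prints 10 */
    return 0;
}
```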
 

Timmah!

Golden Member
Jul 24, 2010
1,571
935
136
Am I insane to say I am interested in ST. halo than the 9950X?

Like the 9950X is so boring where the Halo is fun and new.
The 9950X is for sure not that exciting, as it's just a slightly faster 7950X, but I'm not sure what is so exciting about Halo either - it's pretty much a glorified APU, with an integrated GPU that's finally not completely useless. Still, if you are a desktop user, would you now get a laptop with Halo instead, because it's so "fun and new"?
Since APUs have been a thing for years, it actually begs the question of why we had to wait until now for one like Halo - I mean one that combines a top desktop CPU with a semi-competent GPU, instead of being gimped on one part or the other. Like APUs up until now being only a single-chiplet thing, or Intel's high-end desktop CPUs with their crappy GPUs.
 
  • Like
Reactions: Joe NYC

tsamolotoff

Senior member
May 19, 2019
256
510
136
That x265 benchmark result is insane, here's my previous 7950X (stock):

[Attachment 104292: 7950X stock x265 result]

I suspect my result was a little low, but still, that 9950X is 62% faster.
If you follow the settings that hwbot benchers use in this benchmark, your score would be much higher; for example, here's my result from a while back:


So, roughly 27% more compared to the previous-gen X3D.
 

LightningZ71

Platinum Member
Mar 10, 2017
2,509
3,191
136
The 9950X is for sure not that exciting, as it's just a slightly faster 7950X, but I'm not sure what is so exciting about Halo either - it's pretty much a glorified APU, with an integrated GPU that's finally not completely useless. Still, if you are a desktop user, would you now get a laptop with Halo instead, because it's so "fun and new"?
Since APUs have been a thing for years, it actually begs the question of why we had to wait until now for one like Halo - I mean one that combines a top desktop CPU with a semi-competent GPU, instead of being gimped on one part or the other. Like APUs up until now being only a single-chiplet thing, or Intel's high-end desktop CPUs with their crappy GPUs.
Why did it take this long? Packaging, process shrinks, and memory performance. Up until just a few years ago, you couldn't put the kind of memory packages around a regular CPU needed to feed these beasts, both with respect to individual package throughput and the lanes of data needed to get it into a processor. The closest we ever came, prior to Apple's M1 or Strix Halo, was Kaby Lake-G (not a true APU, just a package that combined a CPU, a GPU, and a memory IC, so it looked like one to the outside world), and that was an absolute market failure. It was also near impossible to pack enough circuits and transistors into a single die, or to pack enough dies close enough together, connected by data links fast enough and power-efficient enough, to make sense from a performance and power standpoint.

As for useless, I beg to differ. Since Rembrandt, and arguably Renoir, there's been enough APU performance to make 1080p laptops usable (maybe not ideal) for modern games. Intelligent scaling and sharpening have made it even better on the software side. For professional workloads, you can do a whole lot on modern APUs that used to be hard-restricted to external GPUs. Again, maybe not the fastest, but "good enough" for most things for most users.
 

MarkPost

Senior member
Mar 1, 2017
378
794
136
WTF is Cinebench 2003?
An ancient Cinebench version.
If you follow the settings that hwbot benchers use in this benchmark, your score would be much higher; for example, here's my result from a while back:

Ah ok, yeah, I didn't change the bench's default settings. So the 9950X would be 20-25% faster, not bad either.
 

JustViewing

Senior member
Aug 17, 2022
269
473
106
Programs compiled targeting znver5 aren't as fast as they could be. But outside of benchmarks and research labs no one will be doing that. So for most people it means nothing.
Why does the compiler have to support the 2x4 decoder? It is invisible to software. While you may fine-tune the code to get maximum throughput, it won't do much to improve performance. It is not like supporting a new instruction set; it is just an enhancement to the existing decode logic.
 
  • Like
Reactions: Nothingness

gdansk

Diamond Member
Feb 8, 2011
4,578
7,694
136
Why does the compiler have to support the 2x4 decoder? It is invisible to software. While you may fine-tune the code to get maximum throughput, it won't do much to improve performance. It is not like supporting a new instruction set; it is just an enhancement to the existing decode logic.
No one said it did. It doesn't, and that is why AMD didn't contribute anything to GCC for it. But in theory you can be a bit clever if you know that you can now chain likely-taken branches.
 

MS_AT

Senior member
Jul 15, 2024
870
1,767
96
Re: GCC decoder intelligence...
Essentially, this means that the compiler isn't presenting the machine-language instructions to the CPU in a manner that is (near) optimal for its decoder layout. This can (theoretically) hurt its ability to efficiently, quickly, and intelligently convert the machine-language instructions it receives into actual work in the core itself. According to certain absent posters here who will not be named, modern out-of-order processors are completely immune to anything a compiler or programmer can throw at them, can adjust on the fly to these things, and the above has absolutely zero effect on processor performance. In reality, how the machine-language instructions are presented to the decoder can have a measurable effect on processor performance and efficiency.

Time and updates to GCC will show the truth. I suspect that it won't be a big change in MOST, but not all, cases.
This won't be able to affect any existing software at all, only software compiled from now on. For most of the software you are using at the moment it completely doesn't matter. And if anyone is responsible for the lack of support, it's AMD. The Clang compiler still doesn't even have a dummy znver5 target, even though AMD's own closed-source compiler is based on Clang, so one could expect upstream Clang to get patches as soon as possible. Yet not even a dummy is present ;)
 

B-Riz

Golden Member
Feb 15, 2011
1,595
765
136
Pricing guesses here, since AMD marketing is referencing a Zen 2 (!) product as a surprise sales sweet spot (the 3700X).

Reasoning: I don't think Zen4 prices will go much lower in the next few weeks, and if Zen5 is too high, people will not buy it.

9950X $699
9900X $499
7800X3D $339 as of this post
9700X $299 - $329
9600X ~$249

I think the 9700X has to launch cheaper than what the 7800X3D is going for right now, if it is only a little slower in gaming.

Maybe 9950X and 9900X could be cheaper? Cannot believe 7950X is under $500 right now, lol.

Rumored pricing is:

9950X $599
9900X $449
9700X $359
9600X $279

I was not too far off; if this is true, it seems AMD is gonna get some sales with this launch! 😁
 
  • Like
Reactions: lightmanek

Timmah!

Golden Member
Jul 24, 2010
1,571
935
136
Why did it take this long? Packaging, process shrinks, and memory performance. Up until just a few years ago, you couldn't put the kind of memory packages around a regular CPU needed to feed these beasts, both with respect to individual package throughput and the lanes of data needed to get it into a processor. The closest we ever came, prior to Apple's M1 or Strix Halo, was Kaby Lake-G (not a true APU, just a package that combined a CPU, a GPU, and a memory IC, so it looked like one to the outside world), and that was an absolute market failure. It was also near impossible to pack enough circuits and transistors into a single die, or to pack enough dies close enough together, connected by data links fast enough and power-efficient enough, to make sense from a performance and power standpoint.

As for useless, I beg to differ. Since Rembrandt, and arguably Renoir, there's been enough APU performance to make 1080p laptops usable (maybe not ideal) for modern games. Intelligent scaling and sharpening have made it even better on the software side. For professional workloads, you can do a whole lot on modern APUs that used to be hard-restricted to external GPUs. Again, maybe not the fastest, but "good enough" for most things for most users.
Yeah, useless is a bit of a hyperbole; it was meant as nowhere near high-end desktop GPU performance. Obviously for non-demanding tasks and light gaming they were good enough. Then again, if they are good enough already, that makes Halo even less alluring, as the better GPU is pretty much the only thing that sets it apart from older APUs.
Regarding the reasons why not sooner, I'm a layman, so OK, I'll take your explanation; it may very well be. But IMO they could have done it as soon as they made the PS5/Xbox Series APUs; the only difference would be an additional CPU chiplet. And those APUs are what, Zen 2?