> The dGPU uses as much as 110W measured at the mains; that's 75W peak left for the rest of the system, so about 68W at the DC adapter output, which translates to 50-55W for the CPU.

Based on the video, in performance mode, they're saying 65-80W on the CPU, which tracks with what I've seen for performance mode... around 80W max.
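For anyone who wants to sanity-check that chain of numbers, here is a rough C sketch of the arithmetic. Only the 110W dGPU draw and the 50-55W CPU conclusion come from the post above; the ~185W total wall draw, ~90% adapter efficiency, and ~15W platform overhead are assumptions added purely to make the stated 75W / 68W figures line up:

```c
/* Back-of-envelope laptop power budget. Only the 110 W dGPU figure and
 * the 50-55 W CPU conclusion come from the thread; the wall total,
 * adapter efficiency and platform overhead are assumed values. */
#include <stdio.h>

int main(void)
{
    double wall_total   = 185.0;  /* assumed total draw at the mains, W */
    double dgpu_at_wall = 110.0;  /* dGPU draw at the mains (from the post) */
    double adapter_eff  = 0.90;   /* assumed AC-to-DC adapter efficiency */
    double platform     = 15.0;   /* assumed fans, display, VRM losses, memory, W */

    double rest_at_wall = wall_total - dgpu_at_wall;  /* 75 W, matches the post */
    double rest_dc      = rest_at_wall * adapter_eff; /* ~68 W at the DC output */
    double cpu_budget   = rest_dc - platform;         /* lands in the 50-55 W range */

    printf("rest of system at wall: %.0f W\n", rest_at_wall);
    printf("rest of system at DC:   %.1f W\n", rest_dc);
    printf("CPU budget:             %.1f W\n", cpu_budget);
    return 0;
}
```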
> I caught these posts on Xitter regarding Strix Halo. You can see where the CPU and GPU are located based on the hotspots. Also, die sizes.
> [attached images: Strix Halo hotspot and die-size shots]

STX Halo my beloved...
> STX Halo my beloved...

Am I insane to say I am more interested in Strix Halo than the 9950X?
Like the 9950X is so boring where the Halo is fun and new.

> Am I insane to say I am more interested in Strix Halo than the 9950X?
> Like the 9950X is so boring where the Halo is fun and new.

Not at all.
> Fresh Benchmarks of 9950X with water cooling:

WTF is Cinebench 2003?
> Woah, 450mm² of N3 and N4 silicon... this thing is yooge

https://videocardz.com/newz/amd-strix-halo-zen5-apu-leak-shows-307mm²-die-with-rdna3-5-graphics-compared-to-rtx-4070-80w
> What does that mean? Please explain like I am 5.

Programs compiled targeting znver5 aren't as fast as they could be. But outside of benchmarks and research labs, no one will be doing that, so for most people it means nothing.
> Programs compiled targeting znver5 aren't as fast as they could be. But outside of benchmarks and research labs, no one will be doing that, so for most people it means nothing.

How dare you forget Gentoo users compiling everything with -march=native and -mtune=native.
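For anyone wondering what "compiled targeting znver5" actually looks like in practice, here is a minimal sketch. It assumes a GCC new enough to accept -march=znver5 (that target appeared around GCC 14.1, as far as I know); the SAXPY kernel is just a stand-in for any auto-vectorizable hot loop:

```c
/* Build the same code two ways and diff the generated assembly:
 *   gcc -O3 -march=znver5    saxpy.c -o saxpy_zen5   (Zen 5 tuned; needs GCC >= 14.1)
 *   gcc -O3 -march=x86-64-v3 saxpy.c -o saxpy_v3     (generic AVX2 baseline)
 * The point upthread: almost nothing you run was built the first way. */
#include <stdio.h>

#define N 1024

/* simple SAXPY: at -O3 GCC auto-vectorizes this, and the vector width
 * and instruction selection depend on the -march target */
static void saxpy(float a, const float *x, float *y, int n)
{
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}

int main(void)
{
    static float x[N], y[N];
    for (int i = 0; i < N; i++) { x[i] = (float)i; y[i] = 1.0f; }
    saxpy(2.0f, x, y, N);
    printf("%f\n", (double)y[N - 1]);
    return 0;
}
```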
> Am I insane to say I am more interested in Strix Halo than the 9950X?
> Like the 9950X is so boring where the Halo is fun and new.

Oh no, I'm waiting to see it in something the size of this, or probably a bit bigger, and then I'll throw my money at it.
> Am I insane to say I am more interested in Strix Halo than the 9950X?
> Like the 9950X is so boring where the Halo is fun and new.

The 9950X is for sure not that exciting, as it's just a slightly faster 7950X, but I'm not sure what is so exciting about Halo either: it's pretty much a glorified APU, with an integrated GPU that's finally not completely useless. Still, if you are a desktop user, would you now get a laptop with Halo instead, because it's so "fun and new"?
> That x265 benchmark result is insane, here's my previous 7950X (stock):
> [attached image: 7950X x265 benchmark result]
> I suspect my result was a little low, but still, that 9950X is 62% faster.

If you follow the settings that hwbot benchers use in this benchmark, your score would be much higher. For example, my result from a while back:
> The 9950X is for sure not that exciting, as it's just a slightly faster 7950X, but I'm not sure what is so exciting about Halo either: it's pretty much a glorified APU, with an integrated GPU that's finally not completely useless. Still, if you are a desktop user, would you now get a laptop with Halo instead, because it's so "fun and new"?
> Since APUs have been a thing for years, it actually begs the question why we had to wait until now for one like Halo, i.e. one that combines a top desktop CPU with a semi-competent GPU instead of being gimped on one part or the other. Like APUs up until now being single-chiplet only, or Intel's high-end desktop CPUs with their crappy GPUs.

Why did it wait this long? Packaging, process shrinks, and memory performance. Up until just a few years ago, you couldn't put the kind of memory packages around a regular CPU needed to feed these beasts, both with respect to individual package throughput and the lanes of data needed to get it into the processor. The closest we ever came, prior to Apple's M1 or Strix Halo, was Kaby Lake-G (not a true APU, just a package that carried a CPU, a GPU, and a memory IC, so it looked like one to the outside world), and that was an absolute market failure. It was also near impossible to pack enough circuits and transistors into a single die, or to pack enough dies close enough together, connected by data links fast enough and power-efficient enough, to make sense from a performance and power standpoint.
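A quick back-of-envelope on that memory-feeding point. The 256-bit LPDDR5X-8000 configuration for Strix Halo comes from the leaks, not an official spec sheet; the other rows are just for scale:

```c
/* Peak theoretical memory bandwidth = transfer rate x bus width in bytes.
 * The Strix Halo configuration is leak-based, not confirmed. */
#include <stdio.h>

static double bw_gbs(double mega_transfers, int bus_bits)
{
    return mega_transfers * 1e6 * (bus_bits / 8) / 1e9;
}

int main(void)
{
    printf("Strix Halo (leaked), 256-bit LPDDR5X-8000: %.0f GB/s\n", bw_gbs(8000, 256));
    printf("Typical desktop, 128-bit DDR5-6000:        %.0f GB/s\n", bw_gbs(6000, 128));
    printf("Apple M1 Max, 512-bit LPDDR5-6400:         %.0f GB/s\n", bw_gbs(6400, 512));
    return 0;
}
```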
> WTF is Cinebench 2003?

An ancient Cinebench version.
> If you follow the settings that hwbot benchers use in this benchmark, your score would be much higher. For example, my result from a while back:
> [attached image: HWBOT x265 result screenshot]
> tsa's 216.14 fps score: HWBOT x265 Benchmark - 1080p with a Ryzen 9 7950X3D @ 5044 MHz (#47 worldwide, #2 in the hardware class) - hwbot.org

Ah ok, yeah, I didn't change the benchmark's default settings. So the 9950X would be 20-25% faster; not bad either.
> Programs compiled targeting znver5 aren't as fast as they could be. But outside of benchmarks and research labs, no one will be doing that, so for most people it means nothing.

Why does the compiler have to support the 2×4 decoder? It is invisible to software. You may fine-tune the code to get maximum throughput, but it won't do much to improve performance. It is not like supporting a new instruction set; it is just an enhancement to the existing decode logic.
> Why does the compiler have to support the 2×4 decoder? It is invisible to software. You may fine-tune the code to get maximum throughput, but it won't do much to improve performance. It is not like supporting a new instruction set; it is just an enhancement to the existing decode logic.

No one said it did. It doesn't, and that is why AMD didn't contribute anything to GCC for it. But in theory you can be a bit clever if you know that you can now chain likely-taken branches.
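For the curious, here is a toy C sketch of what "likely-taken" hints look like from the programmer's side. __builtin_expect is a long-standing GCC/Clang hint, nothing znver5-specific; whether a compiler can actually exploit such hints to lay out chains of taken branches for Zen 5's dual 4-wide decode clusters is exactly the open question here:

```c
/* Toy example of branch-likelihood hints. __builtin_expect only tells
 * the compiler which way a branch usually goes; any block-layout
 * decision based on it is entirely up to the compiler. */
#include <stdio.h>

#define likely(x)   __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)

static int process(int v)
{
    if (likely(v >= 0)) {       /* hot path: kept as the fall-through */
        if (likely(v < 1024))   /* a second likely-taken branch, chained */
            return v * 2;
        return v;               /* rarer path, can be moved out of line */
    }
    return 0;                   /* cold error path */
}

int main(void)
{
    long sum = 0;
    for (int i = 0; i < 1000; i++)  /* driver that keeps the hints truthful */
        sum += process(i);
    printf("%ld\n", sum);
    return 0;
}
```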
> Re: GCC decoder intelligence...
> Essentially, this means that the compiler isn't presenting the actual machine language instructions to the CPU in a manner that is (near) optimal for its instruction layout. This can (theoretically) hurt its ability to efficiently, quickly, and intelligently convert the machine language instructions it receives into actual work in the core itself. According to certain absent posters here who will not be named, modern out-of-order processors are completely immune to anything a compiler or programmer can throw at them and can adjust on the fly, so the above has absolutely zero effect on processor performance. In reality, how the machine language instructions are presented to the decoder can have a measurable effect on processor performance and efficiency.
> Time and updates to GCC will show the truth. I suspect that it won't be a big change in MOST, but not all, cases.

This won't affect any existing software at all, only software built from now on. For most of the software you are using at the moment it completely doesn't matter. And if anyone is responsible for the lack of support, it's AMD: the Clang compiler still doesn't even have a znver5 dummy target, even though AMD's own closed-source compiler is based on Clang, so one could expect upstream Clang to get patches as soon as possible. Yet not even a dummy target is present.
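If anyone wants to poke at the layout question themselves, here is a crude sketch: build the same toy loop with different GCC code-alignment flags (standard, long-standing options, nothing Zen 5 specific) and compare. Wall-clock deltas are only suggestive; properly isolating front-end effects needs performance counters rather than this:

```c
/* Crude probe for whether code placement shows up in a hot loop.
 * Build two binaries with different alignment and compare:
 *   gcc -O2 -falign-functions=64 -falign-loops=64 probe.c -o probe_aligned
 *   gcc -O2 -falign-functions=1  -falign-loops=1  probe.c -o probe_packed
 */
#include <stdio.h>
#include <time.h>

#define ITERS 200000000ULL

int main(void)
{
    struct timespec t0, t1;
    volatile unsigned long long acc = 0;  /* volatile keeps the loop alive */

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (unsigned long long i = 0; i < ITERS; i++) {
        if (i & 1)          /* small branchy body: the sort of code where  */
            acc += i;       /* fetch/decode alignment could plausibly show */
        else                /* up, if it shows up at all                   */
            acc ^= i;
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("acc=%llu  elapsed=%.3f s\n", (unsigned long long)acc, secs);
    return 0;
}
```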
Pricing guesses here, since AMD marketing is referencing a Zen 2 (!) product as a surprise sales sweet spot (the 3700X).
Reasoning: I don't think Zen4 prices will go much lower in the next few weeks, and if Zen5 is too high, people will not buy it.
9950X $699
9900X $499
7800X3D $339 as of this post
9700X $299 - $329
9600X ~$249
I think the 9700X has to launch cheaper than what the 7800X3D is going for right now if it is only a little slower in gaming.
Maybe the 9950X and 9900X could be cheaper? Can't believe the 7950X is under $500 right now, lol.
> Am I insane to say I am more interested in Strix Halo than the 9950X?

No, it's Medusa-proto, aka cool technical bits.
> Why did it wait this long? Packaging, process shrinks, and memory performance. Up until just a few years ago, you couldn't put the kind of memory packages around a regular CPU needed to feed these beasts, both with respect to individual package throughput and the lanes of data needed to get it into the processor. The closest we ever came, prior to Apple's M1 or Strix Halo, was Kaby Lake-G (not a true APU, just a package that carried a CPU, a GPU, and a memory IC, so it looked like one to the outside world), and that was an absolute market failure. It was also near impossible to pack enough circuits and transistors into a single die, or to pack enough dies close enough together, connected by data links fast enough and power-efficient enough, to make sense from a performance and power standpoint.
> As for useless, I beg to differ. Since Rembrandt, and arguably Renoir, there's been enough APU performance to make 1080p laptops usable (maybe not ideal) for modern games. Intelligent scaling and sharpening have made it even better on the software side. For professional workloads, you can do a whole lot on modern APUs that used to be hard-restricted to external GPUs. Again, maybe not the fastest, but "good enough" for most things for most users.

Yeah, "useless" was a bit of a hyperbole; I meant nowhere near high-end desktop GPU performance. Obviously for non-demanding tasks and light gaming they were good enough. Then again, if they are good enough already, that makes Halo even less alluring, as the better GPU is pretty much what sets it apart from older APUs.