> Sure, if you can afford a TR 3990X or 5995WX system.

Even a Zen 16-core is plenty fast. The 16-core was what broke Intel's back, because they used to keep the high core counts on their HEDT lineup. AMD ran a lorry over Intel's head, backed it up and ran it over again.

> If you want the most power efficient video encoder, you buy that AMD/Xilinx hardware. 1,500 bones gets you something more efficient than the two.

My main point was that GPU general compute µArch has limited uses where it is obviously better than a CPU for power efficiency.

> My main point was that GPU general compute µArch has limited uses where it is obviously better than a CPU for power efficiency.
> Video, audio and image codec ASIC blocks bolted onto the GPU design are even more limited, to the point that they cannot serve any function beyond the spec of each codec - although some such solutions do optimise the ASIC to prevent unnecessary function duplication between different codecs.
> ASICs give better perf/W in exchange for total reliance on the ODM's codec implementation, which varies wildly from pants to meh compared to what the best x264, x265 or libaom software solutions can offer for a given bitrate.
> Sadly, GPU general compute can't even run those complex programs without choking like a man trying to swallow a crushed-glass smoothie.
> I think an FPGA configuration could offer a nice middle ground, but I've yet to see such a system demonstrated - I'm not even sure what the AMD/Xilinx solution is actually using, as despite the Xilinx name they didn't take any pains to highlight it as FPGA-based hardware.

The Xilinx AMD card is not an FPGA to my knowledge, based on the material I've seen on the MA35D. Your example, while a sound theory, is a stretch: how many new codecs are on the horizon? H.265, despite its many flaws, is still sticking around, and the lazy, slow adoption of AV1 by the major players shows very few of them care to do the legwork. H.266/VVC is the next big jump, but who knows when that'll show up on the scene.

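For anyone wanting to put numbers on the ASIC-versus-software gap described above, a rough way to do it is to encode the same clip with a GPU's fixed-function HEVC block and with x265 at the same bitrate, then score both against the source with VMAF. The sketch below assumes an ffmpeg build with libx265 and libvmaf plus whichever hardware encoder the card exposes (hevc_nvenc here; hevc_amf or hevc_qsv would be the AMD/Intel equivalents); the clip path and bitrate are placeholders.

```python
import subprocess

SOURCE = "clip.y4m"   # placeholder reference clip
BITRATE = "4M"        # hold bitrate constant so only encoder quality differs

# One fixed-function hardware encoder and one software encoder.
# hevc_nvenc is NVIDIA's block; swap in hevc_amf (AMD) or hevc_qsv (Intel).
ENCODERS = {
    "asic": ["-c:v", "hevc_nvenc", "-b:v", BITRATE],
    "software": ["-c:v", "libx265", "-preset", "slow", "-b:v", BITRATE],
}

def encode(name, args):
    out = f"{name}.mp4"
    subprocess.run(["ffmpeg", "-y", "-i", SOURCE, *args, out], check=True)
    return out

def vmaf(distorted):
    # libvmaf logs a pooled "VMAF score: ..." line; pull the number out of it.
    result = subprocess.run(
        ["ffmpeg", "-i", distorted, "-i", SOURCE,
         "-lavfi", "libvmaf", "-f", "null", "-"],
        capture_output=True, text=True, check=True)
    score_line = [ln for ln in result.stderr.splitlines() if "VMAF score" in ln][-1]
    return float(score_line.split(":")[-1])

for name, args in ENCODERS.items():
    print(name, vmaf(encode(name, args)))
```
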
> Even a Zen 16-core is plenty fast

It's decent, sure, but I have a 3950X myself - even encoding just 1080p with x265 on reasonable settings, it isn't exactly blowing my eyebrows off.

> It's decent, sure, but I have a 3950X myself - even encoding just 1080p with x265 on reasonable settings, it isn't exactly blowing my eyebrows off.
> With libaom it's even worse 😭
> I can't even imagine how bad AV2 is going to be when it's first standardised 😅

The 3950X was good, but the 5950X was much better. This is the first time I'm reading of AV2. I'd read into it, but I'm dozing off after a few hot toddies for a sore throat. Must have overdone it with the ice-cold rosé last weekend.

> H.266/VVC is the next big jump

It's already finalised since July 2020.

> It's already finalised since July 2020.
> Its successor, ECM / h.267, is in the works as we speak.
> They are talking about a whopping 10x increase in decoder complexity, to the point that an ASIC is likely to be the only sensible way to go for consumers, with chunked compute on 64C+ server systems the only viable solution for encoding.
> I'd say we are well past the point of diminishing returns with that one.

It still needs to be adopted by the software side of the equation, and that'll be a while. Even then, it'll be a long time before either of them is adopted by the general public. Large-scale conversion and encoding outfits may adopt it. It's too complex a situation to say one thing or another about the future prospects.

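As a rough illustration of the "chunked compute" approach mentioned in the quote above, the sketch below splits a source file into segments, encodes them in parallel with x265, and stitches the results back together with ffmpeg's concat demuxer. Paths, chunk length and encoder settings are made-up assumptions, and it glosses over audio handling and the care needed to make chunk boundaries splice cleanly.

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

SOURCE = "master.mkv"   # placeholder input
CHUNK_SECONDS = 60      # rough chunk length; actual cuts land on keyframes
JOBS = 16               # parallel encodes - scale to the core count available

def split(src):
    # Cut the source video into chunks without re-encoding (keyframe-aligned).
    subprocess.run(["ffmpeg", "-y", "-i", src, "-map", "0:v", "-c", "copy",
                    "-f", "segment", "-segment_time", str(CHUNK_SECONDS),
                    "chunk_%04d.mkv"], check=True)
    return sorted(Path(".").glob("chunk_*.mkv"))

def encode(chunk):
    out = chunk.with_name(f"enc_{chunk.name}")
    subprocess.run(["ffmpeg", "-y", "-i", str(chunk),
                    "-c:v", "libx265", "-preset", "slow", "-crf", "22",
                    str(out)], check=True)
    return out

def concat(parts):
    # Stitch the independently encoded chunks back together without re-encoding.
    Path("parts.txt").write_text("".join(f"file '{p}'\n" for p in parts))
    subprocess.run(["ffmpeg", "-y", "-f", "concat", "-safe", "0",
                    "-i", "parts.txt", "-c", "copy", "final.mkv"], check=True)

if __name__ == "__main__":
    chunks = split(SOURCE)
    with ThreadPoolExecutor(max_workers=JOBS) as pool:
        concat(list(pool.map(encode, chunks)))
```
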
> This is the first time I'm reading of AV2.

AV2 has been in the works for years.

> It still needs to be adopted by the software side of the equation, and that'll be a while. Even then, it'll be a long time before either of them is adopted by the general public. Large-scale conversion and encoding outfits may adopt it. It's too complex a situation to say one thing or another about the future prospects.

I doubt that VVC will find much use outside of digital broadcast video transmission standards like ATSC or DVB, which both seem perfectly happy with proprietary standards no matter how problematic the patent encumbrance becomes.

> Hasn't every successive MPEG standard been around a 10x increase in complexity?

Encode-wise, yes - the complexity increase is usually pretty huge, and cripplingly so before the first optimised software codec implementations exist.

> So before h.267 arrives we may have reached a point where further video compression would benefit too small a percentage of smartphone customers for it to be worth implementing.

You misunderstand the greater benefits of compression for the content creation and delivery side, which are about bandwidth consumption for the big streamers.
At the end of the day what the consumers are taking in at the client end is a tiny drop in an ocean compared to what the streaming content creators are pushing out every minute worldwide.
Even a 5% decrease for the same quality is something they would get behind as long as the compute cost wasn't way out there beyond what they already have.
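A back-of-the-envelope calculation shows why even a small percentage matters at streaming scale; the input figures below are purely illustrative assumptions, not real numbers from any provider.

```python
# Back-of-the-envelope numbers - every input here is an illustrative assumption.
HOURS_PER_DAY = 200e6      # assumed hours of video a large streamer serves daily
AVG_MBPS = 5               # assumed average delivered bitrate
SAVING = 0.05              # a 5% bitrate reduction at the same visual quality

bytes_per_day = HOURS_PER_DAY * 3600 * AVG_MBPS * 1e6 / 8
print(f"Daily egress: {bytes_per_day / 1e15:.0f} PB")
print(f"Saved by a 5% cut: {bytes_per_day * SAVING / 1e15:.1f} PB per day")
```
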
> But Netflix et al can't dictate to customers that they must use hardware that supports a certain compression scheme. They can't shut off MPEG4 support tomorrow to reduce their bandwidth consumption. They don't have any way to encourage customers to connect to their service with something capable of using VVC.

Not quite the way it works - it's not an all or nothing problem.

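The "not all or nothing" point boils down to streamers keeping several encodes of each title and handing every client the most efficient one it reports support for, so a new codec saves bandwidth on capable devices without cutting anyone else off. A minimal sketch of that selection logic, with assumed codec names and preference order:

```python
# Sketch of per-client codec selection: nobody is cut off, but every client
# that can take the newer codec quietly saves bandwidth. Names and the
# preference order are assumptions for illustration.

PREFERENCE = ["av1", "hevc", "h264"]            # most efficient first
AVAILABLE_RENDITIONS = {"av1", "hevc", "h264"}  # encodes kept on the CDN

def pick_codec(client_supported):
    for codec in PREFERENCE:
        if codec in client_supported and codec in AVAILABLE_RENDITIONS:
            return codec
    raise ValueError("no playable rendition for this client")

print(pick_codec({"h264"}))          # old device -> h264
print(pick_codec({"h264", "av1"}))   # AV1-capable device -> av1
```
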
> AV2 has been in the works for years.
> The AOM people just don't talk about it much.
> Especially as AV1 is still less than ubiquitous for the time being, with Qualcomm and Apple's reticence to implement it.
> Here's a link to the main AV2 code git repo.

Snapdragon 8 Gen 2 has AV1 support.

> Snapdragon 8 Gen 2 has AV1 support.

Finally, and I think it's just decode support?

> Finally, and I think it's just decode support?

Yep. I think quite a bit has to change before we see AV1/AV2 encoders in mobile chips.

TUF A16 2023 | TUF A16 2025 Base Model | TUF A16 2025 Halo Model |
---|---|---|
AMD Ryzen 7 7735HS | AMD Ryzen 9 8850HX | AMD Ryzen 9 8950HX |
128-bit DDR5 16GB | 192-bit LPDDR5x 24GB | 256-bit LPDDR5x 32GB |
16" FHD+ 1920x1200 165Hz | 16" FHD+ 1920x1200 165Hz | 16" QHD+ 2560x1600 240Hz |
512GB SSD | at least 1TB SSD | at least 2TB SSD |
90 WHrs | 100 WHrs | 100 WHrs |
200W AC Adapter | ? | ? |
$799 | Estimated Price Range: $999 | $1,299 |
> RGT has brief info regarding the upcoming Sarlak platform in the ASUS TUF series. Based on my understanding of ASUS, here are my speculations on the upcoming TUF A16 series to let you guys know more about AMD's new platform.
>
> Base Model | Halo Model |
> ---|---|
> AMD Ryzen 9 8850HX | AMD Ryzen 9 8950HX |
> 6xZen5 + 8xZen5c | 8xZen5 + 8xZen5c |
> 32MB L3 Cache | 40MB L3 Cache |
> RDNA3+ 16CU, 2048ALU ? | RDNA3+ 20CU, 2560ALU with 32MB IC |
> 192-bit LPDDR5x 24GB | 256-bit LPDDR5x 48GB |
> 16" FHD+ 1920x1200 165Hz | 16" QHD+ 2560x1600 240Hz |
> at least 1TB SSD | at least 2TB SSD |
> Estimated Price Range: $999 - $1,299 | $1,299 - $1,599 |

Isn't that price for the Chinese market? International would be another $200-300? Hard to tell if it's good or not.

> Isn't that price for the Chinese market? International would be another $200-300? Hard to tell if it's good or not.

@TESKATLIPOKA Do the prices make sense to you?

I am confused about something else.
Is the base model Strix Point or Strix Halo (Sarlak)?
It supposedly has 6xZen5 + 8xZen5c, which looks like a cut-down Sarlak, but only 16CU and no IC looks like Strix Point.
Halo model has 32MB IC but only 20CU? What happened to 40CU IGP?
Then why use 192/256-bit LPDDR5x for only a 16-20CU IGP? It looks like overkill compared to Phoenix. Maybe higher clocks could explain it, but not really the extra 32MB IC in the 8950HX.
It looks like Zen5 has 4MB L3, but Zen5c only 1MB. Of course, it's a victim cache, so each core can use the whole cache.
Is 192-bit LPDDR5x 24GB for the base model 6* 32gbit chips or 3* 64gbit chips?
Is 256-bit LPDDR5x 48GB for the Halo model 8* 48gbit chips or 4* 96gbit chips?
BTW, 8533 Mbps modules from Samsung come only in 64, 96 and 128gbit densities. 7500 Mbps is available from 16gbit to 128gbit.
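For a quick sanity check on those bus-width and capacity questions, the sketch below works out peak LPDDR5X bandwidth for the rumoured bus widths and the chip counts that would add up to 24GB and 48GB; the data rates and die densities are taken from the posts above rather than any confirmed spec.

```python
# Quick arithmetic on the rumoured memory configurations. Data rates and die
# densities are taken from the discussion above, not confirmed specs.

def peak_bandwidth_gbs(bus_bits, mtps):
    """Peak bandwidth in GB/s for a given bus width and per-pin data rate."""
    return bus_bits * mtps * 1e6 / 8 / 1e9

def capacity_gb(chips, density_gbit):
    """Total capacity in GB from a number of dies of a given density."""
    return chips * density_gbit // 8

for bus, rate in [(128, 7500), (192, 7500), (256, 8533)]:
    print(f"{bus}-bit @ {rate} MT/s -> {peak_bandwidth_gbs(bus, rate):.0f} GB/s")

print(capacity_gb(6, 32), capacity_gb(3, 64))   # two ways to reach 24 GB
print(capacity_gb(8, 48), capacity_gb(4, 96))   # two ways to reach 48 GB
```
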
> 8950HX makes the 8850HX look pretty bad in comparison. Considering what you are getting for $1,599 with the 8950HX, the 8850HX shouldn't be one cent above $899.

Hmm, the issue is that at around $1,500 we have much better choices than a large iGPU. I have updated the table to include the ROG Strix G17 with Fire Range (12 Zen5 cores) and an RTX 4070 GPU. Based on current pricing, the upcoming G17 with full-fat Zen5 cores will offer much better graphics performance at a slightly higher price. Would you rather go for the RTX 4070 or the iGPU?