Bloomberg: Apple testing SoCs with 16 and 32 high-performance cores


mikegg

Golden Member
Jan 30, 2010
1,755
411
136
The current M1 chip has four high-performance processing cores and four power-saving cores. For its next-generation chip targeting MacBook Pro and iMac models, Apple is said to be working on designs with as many as 16 power cores and four efficiency cores.

Apple is also reportedly testing a chip design with as many as 32 high-performance cores for higher-end desktop computers planned for later in 2021, as well as a new half-sized Mac Pro planned to launch by 2022.

 

Heartbreaker

Diamond Member
Apr 3, 2006
4,226
5,228
136
Seems to be the consensus that the M1 hardware encoders produce mediocre output compared to software, which is pretty much consistent with most non-professional hardware encoders. The casual user who is going to encode their videos is going to get much value going from their camera/phone's hardware encoder to the M1 hardware encoder.

Consensus of what? I haven't really seen any testing, though I have seen a lot of people just assuming this.

Also, consumer HW encoders keep improving while people stick to their outdated assumptions.

NVenc was already at the point where it was the preferred option over software encoding for streaming.

I'd really like to see some extensive M1 HW vs software encoder testing.
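If someone wants to actually run that comparison, here's roughly what it could look like with ffmpeg; just a sketch, assuming an ffmpeg build with libx264, libvmaf and VideoToolbox support, with the input name and bitrate as placeholders:

```python
# Sketch: encode the same source with the M1 hardware encoder (VideoToolbox)
# and with software x264 at the same bitrate, then score both with VMAF.
# Assumes ffmpeg built with libx264, libvmaf and videotoolbox support.
import subprocess

SOURCE = "source.mov"  # placeholder test clip

def encode(codec_args, out_file):
    subprocess.run(["ffmpeg", "-y", "-i", SOURCE, *codec_args, out_file],
                   check=True)

def vmaf(encoded, reference):
    # libvmaf prints its score in the ffmpeg log output
    subprocess.run(["ffmpeg", "-i", encoded, "-i", reference,
                    "-lavfi", "libvmaf", "-f", "null", "-"], check=True)

# Hardware encode via Apple's VideoToolbox at a fixed average bitrate.
encode(["-c:v", "h264_videotoolbox", "-b:v", "6M"], "hw.mp4")
# Software encode via x264 at a matching bitrate for a fair comparison.
encode(["-c:v", "libx264", "-preset", "medium", "-b:v", "6M"], "sw.mp4")

for f in ("hw.mp4", "sw.mp4"):
    vmaf(f, SOURCE)
```

Matching bitrates and letting VMAF score both encodes against the source is about the fairest apples-to-apples setup short of a full rate-distortion curve.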
 

wlee15

Senior member
Jan 7, 2009
313
31
91
Consensus of what? I haven't really seen any testing, though I have seen a lot of people just assuming this.

Also, consumer HW encoders keep improving while people stick to their outdated assumptions.

NVenc was already at the point where it was the preferred option over software encoding for streaming.

I'd really like to see some extensive M1 HW vs software encoder testing.




 

Heartbreaker

Diamond Member
Apr 3, 2006
4,226
5,228
136


When I say I'd like to see extensive testing, I mean seeing the results of extensive testing by some trusted source like AnandTech, not some anonymous guy on a forum saying that by his eye it's no good.

One killer for me, the last time I looked, is that Apple VT is very limited: it only does average-bitrate encodes, not quality-factor-based encodes. I vastly prefer the latter. Unless you are streaming, it's kind of dumb to do ABR.
 

Hitman928

Diamond Member
Apr 15, 2012
5,244
7,792
136
When I say I'd like to see extensive testing, I mean seeing the results of extensive testing by some trusted source like AnandTech, not some anonymous guy on a forum saying that by his eye it's no good.

One killer for me, the last time I looked, is that Apple VT is very limited: it only does average-bitrate encodes, not quality-factor-based encodes. I vastly prefer the latter. Unless you are streaming, it's kind of dumb to do ABR.

ABR-only would explain the linked forum posts saying you had to use a very high bitrate (and thus very large files) to get the quality to similar levels as a software encode. Having to use higher bitrates to match quality is still true for NV/AMD/etc., but the example given in the linked post is pretty extreme. If you can only do ABR, then you'd have to use really high rates to make sure the quality holds up in complex scenes, which leads to a lot of bloat in the non-complex scenes.
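To make the two rate-control modes concrete, here's a quick sketch using ffmpeg's libx264; the flags are standard ffmpeg/x264 options, while the input name and the numbers are placeholders:

```python
# Sketch of the two rate-control modes, using ffmpeg's libx264 to illustrate
# (Apple VT reportedly only offers the ABR-style mode).
import subprocess

SOURCE = "source.mov"  # placeholder input

# Average bitrate (ABR): targets ~6 Mbit/s throughout, so simple scenes get
# more bits than they need while complex scenes can still starve.
subprocess.run(["ffmpeg", "-y", "-i", SOURCE,
                "-c:v", "libx264", "-b:v", "6M", "abr.mp4"], check=True)

# Constant quality (CRF): holds perceptual quality roughly constant and lets
# the bitrate, and therefore the file size, fall out of the content.
subprocess.run(["ffmpeg", "-y", "-i", SOURCE,
                "-c:v", "libx264", "-crf", "20", "crf.mp4"], check=True)
```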
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
None of the guys who criticized SPEC have access to it or know a lot about it. As an example, the link you gave proves Chester doesn't know what he is talking about:

He thinks the training input is what is being run. The mistake was spotted in the message just after the one you linked.

Yes, he was corrected on that point, but I think his overarching point was that the blender subtest is so small/simplistic as to not be representative of a real-world workload, due to the very high L1D cache hit rate. The Haswell-based Xeon used in that referenced paper, with 32KB of L1D cache, can manage what appears to be a 99% hit rate in the blender subtest. So a CPU with really big caches like the M1 seems like it would have a big advantage in the blender subtest, and several others.

That's my layman's understanding of what he was saying, but I could be wrong of course.
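For what it's worth, the hit-rate claim is checkable: on Linux, perf exposes generic L1D counters. A rough sketch follows; the event names work on most x86 setups but not every CPU/kernel combination, and the workload command is just a placeholder:

```python
# Sketch: check the L1D load hit rate of a workload with Linux perf.
import subprocess

workload = ["blender", "-b", "scene.blend", "-f", "1"]  # placeholder command

result = subprocess.run(
    ["perf", "stat", "-e", "L1-dcache-loads,L1-dcache-load-misses", *workload],
    capture_output=True, text=True,
)
print(result.stderr)  # perf writes its counters to stderr

# hit rate = 1 - (L1-dcache-load-misses / L1-dcache-loads); something like
# 99% would mean the working set effectively lives in a 32 KB L1D.
```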

What the other guy said about the x264 subtest was more indicative, though. No one in the real world is going to use the x264 codec without its SIMD assembly optimizations.
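That one is easy to demonstrate with the standalone x264 binary, which has a switch to disable its assembly; a sketch, with the input file as a placeholder:

```python
# Sketch: time x264 with and without its hand-written SIMD assembly.
# The x264 CLI's --no-asm switch disables all CPU optimizations, which
# approximates what the SPEC x264 subtest (plain C) ends up measuring.
import subprocess
import time

SOURCE = "input.y4m"  # placeholder raw video file

for extra, label in [([], "with asm"), (["--no-asm"], "no asm")]:
    start = time.perf_counter()
    subprocess.run(
        ["x264", *extra, "--preset", "medium", "-o", "/dev/null", SOURCE],
        check=True,
    )
    print(f"{label}: {time.perf_counter() - start:.1f}s")
```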
 

Heartbreaker

Diamond Member
Apr 3, 2006
4,226
5,228
136
ABR-only would explain the linked forum posts saying you had to use a very high bitrate (and thus very large files) to get the quality to similar levels as a software encode. Having to use higher bitrates to match quality is still true for NV/AMD/etc., but the example given in the linked post is pretty extreme. If you can only do ABR, then you'd have to use really high rates to make sure the quality holds up in complex scenes, which leads to a lot of bloat in the non-complex scenes.

True, which is why you wouldn't really choose ABR outside of streaming.

It would also put Apple behind other HW solutions like NVenc, which can use quality-based encoding, IIRC.
 

LightningZ71

Golden Member
Mar 10, 2017
1,627
1,898
136
Do we even know if AMX is a set of hardware instructions on the M1 cores, or is it a software programming interface where you target your application at compile time and the compiler translates that into either fixed-function unit programs, such as encoding on the GPU, or, if it calls for it, special instructions on the CPU core? This would allow a write-once, run-on-many-generations approach that is seamless for developers, yet can use whatever Apple decides to throw at it in hardware each generation.
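For what it's worth, Apple has never publicly documented AMX instructions; the sanctioned way to reach the unit is through the Accelerate framework, which already gives the write-once behavior you describe. A sketch of poking at it from Python; this assumes a numpy build linked against Accelerate, and many builds use OpenBLAS instead, which show_config() will reveal:

```python
# Sketch: on Apple silicon, BLAS calls routed through Accelerate are what
# reportedly get dispatched to the AMX units; application code never issues
# AMX instructions directly, matching the write-once model described above.
import time
import numpy as np

np.show_config()  # shows whether this numpy is linked against Accelerate

n = 4096
a = np.random.rand(n, n).astype(np.float32)
b = np.random.rand(n, n).astype(np.float32)

start = time.perf_counter()
c = a @ b  # dispatched to sgemm in whatever BLAS numpy was built against
elapsed = time.perf_counter() - start
print(f"{2 * n ** 3 / elapsed / 1e9:.1f} GFLOP/s")
```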
 

lobz

Platinum Member
Feb 10, 2017
2,057
2,856
136
1. For Zen 3 over Zen 2 it's workload dependent, 0% to 50% :p
2. Don't know, too hard; code quality, instruction mix etc. all matter.
I can't find anything that says Firestorm supports FMA, and if it does, on how many ports etc.
So if it does FMA on all ports, then in terms of absolute width per cycle executed it's a very, very slight advantage to Zen 2/3 (512-bit add + 512-bit mul, vs 512-bit mul, 768-bit add, can be 1024-bit add).
If it only does FADD or FMUL, then it's 512 bits of add or mul vs 2x FMA (512-bit add + 512-bit mul) + up to 2x FADD (512-bit add).
If the workload was only FADD or only FMUL, then it is equal, 512-bit vs 512-bit.
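To put concrete numbers on that width arithmetic, a throwaway sketch; the pipe configurations are the assumptions stated above, not confirmed specs:

```python
# Throwaway arithmetic for peak SIMD width per cycle under the assumptions
# above; pipe counts and widths are illustrative, not confirmed specs.

def peak_bits(pipes):
    """pipes: list of (width_bits, ops) where an FMA pipe counts as 2 ops
    (one add + one mul) and a plain FADD/FMUL pipe counts as 1."""
    return sum(width * ops for width, ops in pipes)

# Zen 2/3 FP, assuming 2x 256-bit FMA pipes plus 2x 256-bit FADD pipes:
zen = peak_bits([(256, 2), (256, 2), (256, 1), (256, 1)])

# Firestorm, if all four assumed 128-bit pipes can do FMA:
firestorm_fma = peak_bits([(128, 2)] * 4)
# Firestorm, if the pipes can only do FADD or FMUL:
firestorm_plain = peak_bits([(128, 1)] * 4)

print(f"Zen 2/3:             {zen} bits/cycle")              # 1536
print(f"Firestorm (all FMA): {firestorm_fma} bits/cycle")    # 1024
print(f"Firestorm (no FMA):  {firestorm_plain} bits/cycle")  # 512
```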

But application code and compiler optimisation can make a massive difference here, and in Apple's walled garden that can be an advantage, if what is offered in that garden is what you need.

Also, Andrei F is wrong on a few points in that review for the x86 side (usually in a way that's negative to x86); it was discussed at length on RWT.
Well, the guy is held up like an absolute god in the apple subforum here, even though the most impressive thing about him isn't his knowledge but his ego.
 

lobz

Platinum Member
Feb 10, 2017
2,057
2,856
136
It's also why drawing conclusions from such benchmarks about the general market is simply wrong.

And their presence in reviews is a problem, because the largest issue in benchmarking is relevance. To what extent can I use the benchmark results to predict my own results as a user? When a review contains benchmarks that are irrelevant to the overwhelming majority of users, it creates a false image of what to expect from new hardware.

Which, let's be honest, is often the point; the consumer-oriented tech press mostly acts as advertising for the industry it is covering.
You should work for Intel PR. Teach us, please, what the relevant benchmarks are.
 

lobz

Platinum Member
Feb 10, 2017
2,057
2,856
136
It would be interesting to hear good arguments instead of just criticizing other people; we all know how to do that, but it doesn't help in understanding CPUs.
Talk about taking things out of context. Your comment, just as it is, sounds so nice, like a true invocation of justice. However, when you hear the same false arguments from the very same people over and over and over and over again, and then a new thread comes along with an actually interesting topic, and all you see is the saaaaaaaaaame old freaking childish boasting about how X company/uarch/superhero just utterly trounces, crushes, destroys and decimates Y company/uarch/peasants, theoretically, in the future, you can't help but sound tired and confrontational, sometimes even off-topic, when you answer.

I suggest you either read back a lot of topics, or refrain from being too judgmental with your fellow posters :)