Question: x86 and ARM architectures comparison thread.


DavidC1

Platinum Member
Dec 29, 2023
2,093
3,215
106
If we go by this, yes, it is 36 mm², assuming they have not added much more L2; even if they have, it would only be a couple of extra mm², so around 38. To be fair, PTL is 42% more area on a slightly denser node, or roughly 1.36x more area iso-node, but the question is whether it delivers 35% more performance overall.
The difference won't be 42%, because M5 is monolithic while Pantherlake's GPU is not, so it costs a bit in terms of communication and IO resources. The chiplet era is not a free one, just a compromise as Moore's Law gains screech to a halt. Something like 10% of the die will be overhead, so it's more like a 30% difference.
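As a rough sanity check, here is the back-of-the-envelope version of that, as a sketch only: the 38 mm² figure and the ~10% chiplet/IO overhead are the assumptions from the posts above, not measured values.

```python
# Back-of-the-envelope die area comparison using the assumed figures above.
m5_area = 38.0                 # mm^2, M5 block estimate incl. extra L2 (assumption)
ptl_area = m5_area * 1.42      # the "42% more area" claim for Panther Lake
overhead = 0.10                # assumed fraction of PTL spent on chiplet/IO overhead

ptl_effective = ptl_area * (1.0 - overhead)   # area left for actual compute logic
ratio = ptl_effective / m5_area

print(f"PTL: {ptl_area:.1f} mm^2, effective: {ptl_effective:.1f} mm^2")
print(f"Effective area vs M5: {ratio:.2f}x (~{(ratio - 1) * 100:.0f}% more)")
# With ~10% overhead the 42% gap shrinks to roughly 28%, i.e. "like a 30% difference".
```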
Yeah, it's not the new 3nm one; it's the old N4 one, based on a 2-gen-old arch as well.
Also, with how much the PTL uncore improved, I doubt it's the core side doing the heavy lifting in being able to compete in total package/board power vs Qualcomm, but whatever lol.
That's a load comparison, so it's just benefiting from slightly faster performance, which has a greater effect if you downclock/undervolt it.
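To put a toy number on that: if you assume dynamic power scales roughly with f·V² and voltage tracks frequency in that range (so power goes roughly with f³), a small performance lead turns into a much larger power saving once you downclock. Illustrative assumptions only, not measured data.

```python
# Toy model: why a small perf lead grows into a bigger efficiency lead at lower clocks.
# Assumes dynamic power ~ f * V^2 with V roughly proportional to f, i.e. power ~ f^3.
perf_advantage = 1.10               # chip A is 10% faster than chip B at iso-clock (assumption)

freq_scale = 1.0 / perf_advantage   # chip A can clock ~9% lower and still match chip B
power_scale = freq_scale ** 3       # with power ~ f^3 the saving is much larger

print(f"frequency: {freq_scale:.3f}x -> power: {power_scale:.3f}x")
# ~0.75x power for the same work: a 10% perf edge becomes a ~25% power saving under
# load, which is why load comparisons flatter the slightly faster chip when it is
# downclocked/undervolted.
```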
If we go by this, yes, it is 36 mm², assuming they have not added much more L2; even if they have, it would only be a couple of extra mm², so around 38. To be fair, PTL is 42% more area on a slightly denser node, or roughly 1.36x more area iso-node, but the question is whether it delivers 35% more performance overall.
In Cyberpunk, M5 is comparable to Lunarlake and Strix Point. It's faster in Baldur's Gate. In Total War it's noticeably slower.
 
Last edited:
  • Like
Reactions: Tlh97 and 511

soresu

Diamond Member
Dec 19, 2014
4,239
3,741
136
A bit useless in this instance though, given that VVC/H.266, if not dead, is certainly only a pale shadow of what AVC/H.264 was already shaping up to be at this point in its lifecycle.

It's never even going to reach the adoption level of HEVC, which was also kneecapped by the same problem, albeit later in its lifecycle.

The various patent holders just can't seem to help themselves from shooting these standards in the meat and two veg over and over again by forming multiple patent pools and making the licensing far more confusing than it needs to be for wide adoption.

That's precisely why the Alliance for Open Media was formed and AV1 (soon AV2 also) was created.
 

Doug S

Diamond Member
Feb 8, 2020
3,789
6,713
136
A bit useless in this instance though, given that VVC/H.266, if not dead, is certainly only a pale shadow of what AVC/H.264 was already shaping up to be at this point in its lifecycle.

It's never even going to reach the adoption level of HEVC, which was also kneecapped by the same problem, albeit later in its lifecycle.

The various patent holders just can't seem to help themselves from shooting these standards in the meat and two veg over and over again by forming multiple patent pools and making the licensing far more confusing than it needs to be for wide adoption.

That's precisely why the Alliance for Open Media was formed and AV1 (soon AV2 also) was created.

There is no guarantee that the alliance doesn't suffer from the same issues, because they can't guarantee that every patent holder relevant to the standard is part of their alliance. Even ignoring a patent holder who decides they aren't being adequately compensated, pulls out of the alliance, and forms a second pool (which is what hit MPEG), they can't guarantee those standards don't include patented technology from outside their alliance. All it takes is one or two patents that some outside firm discovers it owns, and that a court agrees are valid, and that firm could upend everything by demanding fees outside of the pool. It wouldn't be bound by FRAND, so it could charge whatever it likes, play favorites by charging different companies different rates, etc.

So yeah, AV1/AV2 looks like the better option today, but because of the way the patent system works, someone holding a patent that ended up in AV2's technology, without those implementing it realizing, could simply sit on it until the standard achieves wide adoption, then start suing everyone and look to collect billions in back royalties.
 

soresu

Diamond Member
Dec 19, 2014
4,239
3,741
136
There is no guarantee that the alliance doesn't suffer from the same issues, because they can't guarantee that every patent holder relevant to the standard is part of their alliance
If you are referring to patent troll collectives like Sisvel, then no, it's impossible to protect it 100% from that.

But given that the joining agreement basically requires members to dedicate legal resources and patents to the cause, combined with the sheer number of members holding a lot of relevant patents, it's certainly not a trivial effort to patent troll them, and the gains will likely not be worth the effort, time, and legal fees of doing so.
 

Doug S

Diamond Member
Feb 8, 2020
3,789
6,713
136
If you are referring to patent troll collectives like Sisvel, then no, it's impossible to protect it 100% from that.

But given that the joining agreement basically requires members to dedicate legal resources and patents to the cause, combined with the sheer number of members holding a lot of relevant patents, it's certainly not a trivial effort to patent troll them, and the gains will likely not be worth the effort, time, and legal fees of doing so.

A collective effort protects against another industry player, but it changes nothing for patent trolls. They don't sell products, so having more companies willing to go after someone doesn't dissuade them. Nor do the combined legal resources; it isn't as if an Apple or a Google is short of funds to fight a patent troll and is mounting a substandard defense because it isn't spending enough. All this does is spread the same amount of legal defense money across more companies, because they are all agreeing to chip in.

There is no structure that would act to hinder patent trolls who own a valid patent (or a patent that looks valid enough for them to be willing to roll the dice on a court case). Collective legal defense doesn't hurt the patent troll. The only thing that would affect them is changes to the law, but it is hard to come up with changes that make life harder for a patent troll without also making life harder for the legitimate little-guy innovator trying to stop the trillion-dollar megacaps from simply stealing their IP and outlasting them in court. It is already hard enough for them that some sell out to patent trolls because they don't have the resources to fight big tech.
 

Nothingness

Diamond Member
Jul 3, 2013
3,364
2,455
136
They're using powermetrics. The Apple power consumption numbers aren't comparable to the x86 numbers.
How did they measure power for the x86 machines? As I previously wrote, I only trust power at the wall; after all, that is what the machines I run actually consume.
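For reference, the usual software-side number on Linux comes from the RAPL package counter, and that is exactly the kind of figure that will not match the wall. A minimal sketch, assuming a machine that exposes RAPL through the powercap sysfs interface and permission to read it; it only sees what the package reports about itself, not VRM, board, or PSU losses.

```python
# Minimal sketch: average CPU package power from the Linux RAPL powercap interface.
# This reports package power only, so it will always read lower than a wall meter.
import time

RAPL = "/sys/class/powercap/intel-rapl:0"   # package-0 domain; path may differ per system

def read_uj(path):
    with open(path) as f:
        return int(f.read())

def package_power(interval_s=5.0):
    max_uj = read_uj(f"{RAPL}/max_energy_range_uj")
    e0 = read_uj(f"{RAPL}/energy_uj")
    time.sleep(interval_s)
    e1 = read_uj(f"{RAPL}/energy_uj")
    delta = (e1 - e0) % max_uj              # handle counter wraparound
    return delta / 1e6 / interval_s         # microjoules over seconds -> watts

if __name__ == "__main__":
    print(f"Package power: {package_power():.1f} W (not wall power)")
```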
 

511

Diamond Member
Jul 12, 2024
5,302
4,721
106
How did they measure power for the x86 machines? As I previously wrote, I only trust power at the wall; after all, that is what the machines I run actually consume.
Don't know. Phoronix usually measures at the wall for servers, but I don't know about the laptops and desktops.
 

jdubs03

Golden Member
Oct 1, 2013
1,437
1,006
136
Don't know. Phoronix usually measures at the wall for servers, but I don't know about the laptops and desktops.
Is there any reason to suspect otherwise?

Seems like a good idea to maintain the same methodology for any of those device types.
 

MS_AT

Senior member
Jul 15, 2024
923
1,847
96
Haven't we gone over that one some time ago? Are there any more details, like which compilers were used? After all, while Phoronix does run a lot of tests...
View attachment 136633

Saw this on Twitter. You can really see Apple's advantage in integer applications. Not bad for a base M4, around Core Ultra 245K levels of performance.
Do you have a source other than Twitter? I mean, I would be interested in which compilers were actually being used. I remember a year ago we also had a sensation there, but under scrutiny it was shown that the comparison wasn't exactly apples to apples, as macOS was using Clang while Linux was using GCC.

For example, at the beginning of last year I did this small benchmark comparing the compilation time of the x64 LLVM backend, using the LLVM build scripts on Windows. Everything was compiled on the same machine with flags matching each other as closely as possible. While I wasn't able to dig up the GCC results done as part of the same benchmark, it was noticeably slower than clang-cl. Now, the purpose of this table is not to nitpick any specific version, etc., but just to show that the compiler, the compiler version, and the way it is configured matter ;)
[Attachment 1768387114160.png: compile-time comparison table]
(Edit: had to replace table with a picture as formatting went nuts)

So, not knowing which compilers were used and how they were configured makes it hard to tell how much of the performance can be attributed to a superior CPU and how much to a differently configured compiler ;)
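For what it's worth, the bare minimum I would want recorded next to any such number is the exact compiler, its version, and the flags. A rough sketch of how that could be captured alongside a timed build; the compiler choice, the flags, and example.cpp below are placeholders for illustration, not what Phoronix actually does.

```python
# Sketch: time a build and record exactly which compiler and flags produced it,
# so that results like the table above can actually be compared.
import subprocess
import time

compiler = "clang++"                     # or "g++"; the whole point is to say which
flags = ["-O2", "-march=native"]         # placeholder flags
cmd = [compiler, *flags, "-c", "example.cpp", "-o", "example.o"]   # hypothetical file

version = subprocess.run([compiler, "--version"],
                         capture_output=True, text=True).stdout.splitlines()[0]

start = time.perf_counter()
subprocess.run(cmd, check=True)
elapsed = time.perf_counter() - start

print(version)
print("flags:", " ".join(flags))
print(f"compile time: {elapsed:.2f} s")
```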
 
Last edited: