Just something to think about, but MS and Qualcomm are still at least nominally pushing Windows on ARM (WARM), and there is an off chance that AMD may get into the mix through their semi-custom unit. How that helps gaming on Apple hardware is dubious, but at least we aren't talking about targeting an entirely different OS and uarch, just an entirely different OS.
Not unless they (Qualcomm; you say Microsoft, but I haven't researched that, and I doubt Microsoft is pushing anything other than DirectX) manage to drum up a 9-10 figure gaming business. AMD isn't going to touch niche APIs of any kind unless they show staying power.
There's a thing called MoltenVK.
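For context, MoltenVK implements the standard Vulkan API on top of Metal, so ordinary Vulkan code runs on macOS largely unchanged. Here is a minimal sketch of what that looks like (the app name is made up, and it assumes the Vulkan SDK or MoltenVK is installed and linked; this is just an illustration, not a full renderer):

```c
#include <stdio.h>
#include <vulkan/vulkan.h>

int main(void) {
    /* Extensions a windowed app would typically enable on macOS; these
       strings are standard Vulkan extension names that MoltenVK supports. */
    const char *extensions[] = {
        "VK_KHR_surface",
        "VK_EXT_metal_surface",   /* surface backed by a CAMetalLayer */
    };

    VkApplicationInfo app = {
        .sType = VK_STRUCTURE_TYPE_APPLICATION_INFO,
        .pApplicationName = "vulkan-over-metal-demo",   /* hypothetical name */
        .apiVersion = VK_API_VERSION_1_1,
    };

    VkInstanceCreateInfo info = {
        .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
        .pApplicationInfo = &app,
        .enabledExtensionCount = 2,
        .ppEnabledExtensionNames = extensions,
    };

    /* This is plain Vulkan; on macOS the MoltenVK layer translates the
       calls to Metal underneath. */
    VkInstance instance;
    if (vkCreateInstance(&info, NULL, &instance) != VK_SUCCESS) {
        fprintf(stderr, "vkCreateInstance failed\n");
        return 1;
    }

    uint32_t gpu_count = 0;
    vkEnumeratePhysicalDevices(instance, &gpu_count, NULL);
    printf("Vulkan (via MoltenVK) reports %u GPU(s)\n", gpu_count);

    vkDestroyInstance(instance, NULL);
    return 0;
}
```

The point is that the application code stays vendor-neutral Vulkan; the Metal translation happens below the API boundary.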
So apparently the new MacBooks have a "High Power Mode". I wonder if that kicks in only on mains power or if it's a setting you toggle somewhere. Perhaps enabling it gives you another nudge in performance.
Are you saying they lied on their own charts? They claimed "plugged in" performance was the same as "on battery" performance. My prediction: once the real reviews roll in, most of you guys will disappear or stick around to move the goalposts. Note that I'm pretty vendor-neutral here, but when I see marketing calling the shots, I know what is really going on...
OK so now there seem to be enough scores to wonder why the M1 Max scales so poorly over M1 Pro. Is it a limitation in Geekbench, a limitation in macOS or the driver, or something in the hardware itself that inhibits the same scaling observed from M1 to M1 Pro?
If it is a real limit in something Apple controls, they obviously need to address it before releasing the Mac Pro, as a further 4x in GPU cores might translate into even less of a relative gain.
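To put numbers on what "scales poorly" means: the Max has twice the GPU cores of the Pro, so perfect scaling would double the compute score. A quick sketch of the arithmetic (the scores below are placeholders, not measured results):

```c
#include <stdio.h>

int main(void) {
    /* Placeholder scores -- substitute real Geekbench Compute numbers.
       Cores: M1 Pro tops out at a 16-core GPU, M1 Max at 32 cores. */
    double pro_cores = 16.0, max_cores = 32.0;
    double pro_score = 1.00, max_score = 1.60;   /* hypothetical 1.6x uplift */

    double ideal_speedup  = max_cores / pro_cores;           /* 2.0x */
    double actual_speedup = max_score / pro_score;
    double efficiency     = actual_speedup / ideal_speedup;  /* 1.0 = perfect */

    printf("ideal %.1fx, actual %.2fx, scaling efficiency %.0f%%\n",
           ideal_speedup, actual_speedup, efficiency * 100.0);
    return 0;
}
```

If that efficiency drops well below 100% going from Pro to Max, stacking another 4x on top for a Mac Pro would compound the problem.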
I wouldn't be shocked if it is a driver issue. After all, this is a lot more performance than Apple has ever had available before, so some initial immaturity in the driver would not be surprising. We've seen similar behavior at the launch of a new GPU from AMD or Nvidia more than a few times, and they have a lot more experience with high-performance GPUs than Apple does.
It's a limitation in "Apple products". Geekbench has had limitations in the past, but the OpenCL test has proven to scale well beyond this level (it is several years old at this point and still scales on GPUs faster than a 3090). Apple has a slow GPU, slow drivers, or both (I'm betting on the hardware; I know from experience). A bunch of you folks are expecting Apple to perform miracles.
RTX 3060 mobile performance IS a miracle in a Mac product! It will NOT set global performance records. It will set SOME, but not many, efficiency records. 99% of the records it sets are due to NVIDIA being 2-3 node generations behind Apple. If NVIDIA had GPUs on TSMC 5nm, they would be ahead. This is a fact. I'm not an NVIDIA fan; I actually only own stock in AMD (and a small amount of Apple shares), but NVIDIA is very competitive when it comes to perf/watt. There is a reason why AMD, on a superior process, has trouble beating NVIDIA: NVIDIA is a giant and can throw tons of money at R&D. Apple has made a good first try, but with both CPU and GPU performance, most of you aren't seeing the bigger picture:
It is easy to build an okay product on a superior node and claim a win. What happens next will determine who wins or loses. Thus far, Apple has several losses, including software compatibility, lack of high refresh rate displays, lack of mainstream API support, etc. Their only wins, IMO, are on their own in-house operating system in a closed ecosystem, which I don't personally consider wins because they don't change the mainstream market. They need to step up to the plate if they want to claim real wins. Neither Windows nor Linux (GUI applications) currently runs on Mac hardware without being emulated. Get back to us when they do. If their emulated solutions can outperform cutting-edge native x86 solutions, or if they figure out how to get all the rest of the software in the world to run faster than it does on an x86 CPU, I'll be all ears. As of now, MY company is ditching Macs because the fastest server CPU is x86, not ARM.