Discussion Apple Silicon SoC thread


Eug

Lifer
Mar 11, 2000
23,587
1,000
126
M1
5 nm
Unified memory architecture - LPDDR4X
16 billion transistors

8-core CPU

4 high-performance cores
192 KB instruction cache
128 KB data cache
Shared 12 MB L2 cache

4 high-efficiency cores
128 KB instruction cache
64 KB data cache
Shared 4 MB L2 cache
(Apple claims the 4 high-efficiency cores alone perform like a dual-core Intel MacBook Air)

8-core iGPU (but there is a 7-core variant, likely with one inactive core)
128 execution units
Up to 24576 concurrent threads
2.6 Teraflops
82 Gigatexels/s
41 gigapixels/s
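
(A rough sanity check on the 2.6 Teraflops figure above; the 8 FP32 ALUs per execution unit and the ~1.28 GHz GPU clock are assumptions widely reported elsewhere, not part of Apple's spec sheet.)

```python
# Rough FP32 throughput estimate for the 8-core M1 GPU.
# Assumptions (not in Apple's spec sheet): 8 FP32 ALUs per execution unit
# and a ~1.278 GHz GPU clock; an FMA counts as 2 FLOPs per cycle.
execution_units = 128
alus_per_eu = 8              # assumed
clock_ghz = 1.278            # assumed
flops_per_alu_per_cycle = 2  # fused multiply-add

tflops = execution_units * alus_per_eu * clock_ghz * flops_per_alu_per_cycle / 1000
print(f"~{tflops:.2f} TFLOPS")  # ~2.62, in line with the 2.6 TFLOPS claim
```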

16-core neural engine
Secure Enclave
USB 4

Products:
$999 ($899 edu) 13" MacBook Air (fanless) - 18 hour video playback battery life
$699 Mac mini (with fan)
$1299 ($1199 edu) 13" MacBook Pro (with fan) - 20 hour video playback battery life

Memory options are 8 GB and 16 GB. No 32 GB option (unless you go Intel).

It should be noted that the M1 chip in these three Macs is the same (aside from GPU core count). Basically, Apple is taking the same approach with these chips as it does with the iPhones and iPads: just one SKU (excluding the X variants), which is the same across all iDevices (aside from maybe slight clock speed differences occasionally).

EDIT:


M1 Pro 8-core CPU (6+2), 14-core GPU
M1 Pro 10-core CPU (8+2), 14-core GPU
M1 Pro 10-core CPU (8+2), 16-core GPU
M1 Max 10-core CPU (8+2), 24-core GPU
M1 Max 10-core CPU (8+2), 32-core GPU

M1 Pro and M1 Max discussion here:


M1 Ultra discussion here:


M2 discussion here:


M2
Second-generation 5 nm
Unified memory architecture - LPDDR5, up to 24 GB and 100 GB/s
20 billion transistors

8-core CPU

4 high-performance cores
192 KB instruction cache
128 KB data cache
Shared 16 MB L2 cache

4 high-efficiency cores
128 KB instruction cache
64 KB data cache
Shared 4 MB L2 cache

10-core iGPU (but there is an 8-core variant)
3.6 Teraflops

16-core neural engine
Secure Enclave
USB 4

Hardware acceleration for 8K H.264, H.265 (HEVC), and ProRes

M3 Family discussion here:

 
Last edited:

Staples

Diamond Member
Oct 28, 2001
4,952
119
106
My two iPad Air 2s will continue to get full support until fall 2021, so that would be 7 years. They are both still working fine, in use every day.
I had the iPad Air (original). The last iOS it supports is iOS 12, which came out exactly 5 years after the iPad's release. Maybe they are getting better at support. If the iPad were an open platform, I'd be able to install Linux or whatever (doubt I'd want to). Anyway, since MacBooks were based on x86, I can run Windows on old ones. But that will no longer be an option with this ARM switch.
 

Hitman928

Diamond Member
Apr 15, 2012
5,262
7,890
136
Looking into Cinebench a bit more...

So much of the discussion about the M1 is obfuscated by the fact that Intel and AMD have very different definitions of TDP, and Apple doesn't use it at all. It's hard to find power draw numbers for a lot of this stuff, so I'm curious if others here can test this directly.

We saw Andrei's numbers that Cinebench R23 caused the M1 CPU cores to pull 15w. M1 can apparently run hotter elsewhere, but the Cinebench numbers at least are based on 15w power draw.

I've been trying to find similar power draw figures for Renoir but it's challenging. Notebookcheck purportedly has Median Power Draw figures for each chip running Cinebench R15 and reports the following:

- 4700U - 6874 MC on R23 - 38w median power consumption on R15
- 4800U - 10156 MC on R23 - 49.5w median power consumption on R15

Those feel high to me. Is it possible that R15 simply allowed much higher boost clocks? Can anyone check power draw on R23? Are these numbers just wrong? If these power numbers are similar on the most recent Cinebench version for AMD, that is rather important for comparing Renoir vs. M1, I imagine.

EDIT - I think that must be package power, so the proper M1 comparison is closer to 20w?
Some more details here: https://www.ultrabookreview.com/41494-lenovo-ideapad-7-slim-review/

Looks like in performance mode the 4800u draws a bit over 30w (and reaches 107 degrees C) for a few runs, and then levels out at around 26-27w. So yes, the M1 is running at roughly half the power.

EDIT - I should have said this is the Lenovo Slim 7. Obviously, temperature and operating frequency are largely determined by the manufacturer as well.

The difficult thing with AMD and Intel chips is that they have a configurable TDP, and even within that TDP they have boost states which will exceed the TDP for short durations. So you can have a 4700u configured to use 28W, or have it configured to use 15W. You can have one configured to 15W that will boost to 28W for a short time but settle at 15W long-term. You can also have it configured to 15W and not allowed to boost above that. All of these are possible, which makes it very hard to compare. That's why you see graphs like this that have different configurations:

[Chart: Cinebench multi-core scores vs. package power for the Lenovo IdeaPad Slim 7 (4800U) at different power configurations]


The red line is where a 15W TDP is strictly enforced on the 4800u and would be the closest match to the M1 in Cinebench in terms of power use. The 4800u is actually using less power here, as it is using 14.3W package power whereas the M1 is using 15.3W package power (obviously not a large difference, just pointing it out). As you can see, when going from ~25W to ~15W, the 4800u loses just under 20% of its performance. If you apply that to the Cinebench R23 results from computerbase, you get a 4800u scoring ~8200 points when restricted to 15W. This would put it just ahead of the 7833 score Andrei got from the Mac Mini. Of course, these are rough calculations and it would be better to have a controlled comparison, but that's what I can estimate based upon the best data I could find.
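
Here's that arithmetic as a rough sketch; the 10156-point baseline is the R23 multi-core figure quoted earlier in the thread, and the ~20% loss is read off the chart above, so treat the output as an estimate only.

```python
# Rough estimate: scale the 4800U's ~25W Cinebench R23 multi-core score
# down to a strict 15W limit using the ~20% loss seen in the chart above.
score_25w = 10156     # R23 MC baseline quoted above (exact source figure may differ slightly)
scaling_loss = 0.19   # "just under 20%" when dropping from ~25W to ~15W
m1_score = 7833       # Andrei's Mac Mini result

score_15w_est = score_25w * (1 - scaling_loss)
print(f"Estimated 4800U @ 15W: ~{score_15w_est:.0f}")  # ~8200
print(f"M1 (Mac Mini):          {m1_score}")           # slightly behind the estimate
```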
 
Last edited:

Doug S

Platinum Member
Feb 8, 2020
2,261
3,513
136
I/O costs power, that's a fact. Powering large DRAM buses over long distances requires a lot of power. That is a fact. It's not "nonsense" just because you don't like hearing it. A wide assortment of I/O isn't "legacy junk" just because you don't use it.

  • PCIe is not "legacy junk". No PCIe = no NVMe drives, no dGPUs, no ExpressCard, no PCIe-over-Thunderbolt passthrough to external enclosures.
  • No NVMe or SATA = no user-serviceable internal storage options. No way to expand internal storage; you are stuck with what you got. Thanks to the integrated SSD controller, if something happens to your SoC (say a CPU/GPU/DRAM failure), then all of your data is basically as good as gone unless you find someone capable of desoldering the NAND and somehow extracting the raw data and reassembling it into something coherent.
  • Integrated RAM = not possible to upgrade RAM at all. Stuck with what you got. Many PC notebooks today do have at least some RAM soldered, but almost all offer at least one slot to expand.
  • No integrated HD audio (besides the one 3.5mm port) means I'm forced to use external USB audio devices if I want to do basic line-in recording or use a wired mic.
  • No integrated display PHYs = I'm forced to use dongles and USB-C/Thunderbolt hubs if I want to connect multiple monitors.
  • Limited I/O ports = piles of pricey dongles and adapters to hook up your peripherals.

Sure, this stuff works for users who treat their PCs like phones, and for many existing Apple users. But all of that is a non-starter for many, many PC users.

You keep acting like this is an attack on your person. I'm pointing out that Apple has made some serious sacrifices and design compromises to get the M1's power usage down and its performance up. I'm not saying those compromises are "wrong", but they are compromises. You can't just handwave it all away and say it's a non-issue when you're trying to draw comparisons to Intel and AMD systems whose ecosystems and customers rely on that I/O.


What you're really complaining about here isn't the absence of capabilities in the M1 silicon, but the absence of ports on the Macs that contain it. They could have added a dedicated DP port, a digital audio port, and maybe an eSATA port, but they chose not to on these entry-level Macs. That's all packaging: as you say, you can do these things via adapters, so the hardware is capable; it just lacks a place for you to plug into.

There will be new generations of chips coming down the road for Macs in the midrange and high-end categories which may add new capabilities, and those Macs may add additional ports you can plug things into without dongles. I'm sure some will have expandable RAM - the limit for LPDDR4X/LPDDR5 is in the 96 to 128 GB range, which is obviously a non-starter at the high end, so they'll almost certainly have to use DIMMs there.

Though specifically, I wouldn't hold my breath for dGPU support. It looks like Apple intends to scale their own GPU all the way to the Mac Pro over the next couple of years. Whether the lack of dGPU support matters will depend on whether they are successful in scaling their GPU (personally, I'm sure they wouldn't make that choice without already knowing it will work).
 
  • Like
Reactions: IntelCeleron

Doug S

Platinum Member
Feb 8, 2020
2,261
3,513
136
My iPhone 6s and iPhone SE will also likely lose support in fall 2021, so that would be 6 years for those.

No they won't. Even if iOS 15 doesn't support them, they'll continue to get security updates via iOS 14 for some time. They are still delivering security updates for iOS 12 for the 5S and 6 - so that's 7+ years of support for the 5S and still counting...
 

Eug

Lifer
Mar 11, 2000
23,587
1,000
126
I had the iPad Air (original). The last iOS it supports is iOS 12, which came out exactly 5 years after the iPad's release. Maybe they are getting better at support. If the iPad were an open platform, I'd be able to install Linux or whatever (doubt I'd want to). Anyway, since MacBooks were based on x86, I can run Windows on old ones. But that will no longer be an option with this ARM switch.
I specifically avoided the iPad Air, because I didn't think it would be a good idea to get the last iPad with just 1 GB RAM. It turns out I was correct.

FWIW, I'm running macOS 10.15 Catalina on my 2008 MacBook and 2009 MacBook Pro, and OS X 10.11 El Capitan on my 2008 Mac Pro*. I also have Windows 10 on the Mac Pro, but for some reason sleep stopped working properly. When it wakes up, it performs like it is running a single core 300 MHz CPU. I've tried changing a bazillion settings and nothing solves it. I note that some Windows PCs had the same problem so it's not just a Mac thing. Plus there are some other weird bugs in Windows 10. So I stick with El Capitan on the Mac Pro.

To put it another way, I get what you're saying, but in the real world running Windows 10 on old Intel Macs which don't specifically support Windows 10 is often problematic. You still need the Boot Camp drivers from Apple, because the Windows 10 installer doesn't automatically support the hardware in those Macs. If you don't have specific support on your Mac for those drivers, then you may run into various problems. You can try running older Boot Camp drivers, but often they don't work or else cause big problems, so you end up having to install a mish-mash of different Boot Camp driver components and/or OEM drivers.

*P.S. The Mac Pro was found by the side of the road for garbage pickup. A guy driving by saw it and picked it up. He tried to boot it up, but it failed, so he put it up for sale for CAD$100/US$75. I went and checked it out and confirmed it would not properly boot, but did see a screen with graphics anomalies, and figured it was due to a dead GPU. Turns out I was right. I stuck a different video card in it and now it works perfectly - MacPro2,1 dual Xeon X5365 with 8 cores at 3.0 GHz. :) Interestingly, it has a very late firmware revision that had never been seen before by the Mac Pro firmware hacker guys, probably because that particular one was released just weeks before the MacPro3,1 model was released.
 
Last edited:

lobz

Platinum Member
Feb 10, 2017
2,057
2,856
136
GB5 doesn't do any meaningful work, and neither does Cinebench. That Cinema4D is good for something is not an argument against GB5, because some of its subtests do meaningful work like the compiler benchmark. More people run compilers than run Cinema4D, so that subtest alone is more valuable than Cinebench.
Then compare those subtests only and make statements accordingly - meaning about those workloads. Don't get me wrong, I like benchmarks in general, because I like long empirical arguments; not everything is always about practicality. That said, many benchmarks are weighted so woefully that they're just not suitable to base such broad statements on, like "this or that chip is the best", beyond "this or that chip is the best in this particular benchmark" - of whatever use that may be... my 2 cents
 

IvanKaramazov

Member
Jun 29, 2020
56
102
66
The red line is where a 15W TDP is strictly enforced on the 4800u and would be the closest match to the M1 in Cinebench in terms of power use. The 4800u is actually using less power here as it is using 14.3W package power whereas the M1 is using 15.3W package power (obviously not a large difference, just pointing it out). As you can see, when going from ~25W to ~15W, the 4800u loses just under 20% of its performance. If you apply that to the Cinebench R23 results from computerbase, you get a 4800u scoring ~8200 points when restricted to 15W. This would put it just ahead of the 7833 score Andrei got from the Mac Mini. Of course all of this is rough calculations and it would be better to have a controlled comparison, but that's what I can calculate based upon the best data I could find.

Yep, I actually did that same calculation myself. It's a bit hand-wavey but it does suggest the 4800u might still slightly surpass the M1 at the same wattage. Out of curiosity, what's the deal with the 4800u v 4700u and down? Are these just insanely binned parts? They're extremely different performance-wise, with the former toe-to-toe with the 4900HS while the latter is a good bit slower than the M1. The 4800u basically can't be purchased anywhere, and GB for example only has three pages (!) of user submitted benches. That's way, way less than the individual M1 chips already have.

They make perfect sense for developers who want to test their iOS apps without using the slow translation layer they had to use on x86 Macs. A quick Google search claimed there were 1.3 million iOS developers worldwide this June, and macOS is required for developing iOS apps, so other than a few maybe using a Hackintosh or running macOS in a VM, they will all be using a Mac. Even if there were ZERO other benefits to doing this, that alone justifies making it possible.

Beyond that though, if there is an app that exists on iOS but not the Mac that does what you need, even if it isn't (today) 100% compatible with all features like going full screen on a laptop/desktop, that's better than not having access to that app.

This is often overlooked. I've seen a lot of people say the M1 machines are worse for development because you can't natively test x86 apps on them, and you can't run Windows at all. That's true, but isn't the vast majority of software development (and money) these days in mobile and web? I would think the ability to run iOS apps natively for testing would be hugely beneficial for development. The real wildcard is Android I imagine; if one could find a way to run Android apps natively on Apple Silicon that would be huge.
 
  • Like
Reactions: Tlh97 and scannall

Heartbreaker

Diamond Member
Apr 3, 2006
4,227
5,228
136
This is often overlooked. I've seen a lot of people say the M1 machines are worse for development because you can't natively test x86 apps on them, and you can't run Windows at all. That's true, but isn't the vast majority of software development (and money) these days in mobile and web? I would think the ability to run iOS apps natively for testing would be hugely beneficial for development. The real wildcard is Android I imagine; if one could find a way to run Android apps natively on Apple Silicon that would be huge.


Then it should have been a developer feature.

Not a source of shovel-ware for the Mac.

When I have seen it mentioned in reviews, it's only been as a negative. It goes against Apple's long-standing ethos of only doing things that can be done well.
 

Eug

Lifer
Mar 11, 2000
23,587
1,000
126
I agree the ability to test iOS / iPadOS apps on Macs is useful for developers. However, Apple also said this was a feature for users as well, cuz well, it is user-facing. It's not some hidden developer option.
 

Eug

Lifer
Mar 11, 2000
23,587
1,000
126
My suspicions about the multi-media acceleration seem to be correct, if I'm understanding this right. For a long time people have said some video editing applications run much better on the iPad Pros than they did on the Macs. Some attributed this to cut down software specifically optimized for A series chips and iPadOS, but I thought in addition to that it had to be because of very specific non-core hardware acceleration silicon Apple had added for this purpose.

Well, the native apps on M1 are now doing things that can't be done on a $15000 Mac Pro. Specifically, real-time playback and fast scrubbing of certain complex video edits using certain file formats can be jerky on the Mac Pro, but on M1 with the same data it is smooth as butter.

I don't use the professional video editors on Mac, but I had noticed that my 2017 iPad Pro with its 3-year-old A10X was doing things in LumaFusion that people with recent MacBook Pros were having performance problems with, and yet my 2nd gen iPad Pro is noticeably worse than the 3rd gen 2018 iPad Pros at the same actions. That means that even though it worked OK on my iPad Pro, it was even better on the later iPad Pros.

It turns out that Final Cut on these new M1 machines is now able to do the same things with ease, whereas it was more problematic on the Intel MacBook Pros.

So, as much as we like to marvel about the new M1 CPU cores, I think for many end users doing multimedia content creation, the stuff outside the CPU is probably just as important for them, if not more important.

BTW, I wonder when Intel truly started to believe Apple would leave them. Back in the A7 era, when Apple launched the first 64-bit ARM phone/tablet chip? Probably much earlier actually.
 

Heartbreaker

Diamond Member
Apr 3, 2006
4,227
5,228
136
Nothing to be suspicious about. Media acceleration has been an obvious feature on Apple products for some time.

In fact, even Macs with T2 chips get quite a boost.

Most of the operations that geeks spend all day in bun fights over - arguing which high-core-count CPUs are best for encoding/rendering - are largely academic arguments about work that is best done on more dedicated hardware.

Dedicated media encoders are much faster and less power-hungry than CPU cores. 3D rendering (which almost no one does) is significantly faster on video cards. Machine learning/AI is much faster on dedicated ML cores, or GPUs after that.

Then you are fast running out of use cases that benefit significantly from high core counts.
 
  • Like
Reactions: shady28

shady28

Platinum Member
Apr 11, 2004
2,520
397
126
My suspicions about the multi-media acceleration seem to be correct, if I'm understanding this right. For a long time people have said some video editing applications run much better on the iPad Pros than they did on the Macs. Some attributed this to cut down software specifically optimized for A series chips and iPadOS, but I thought in addition to that it had to be because of very specific non-core hardware acceleration silicon Apple had added for this purpose.

Well, the native apps on M1 are now doing things that can't be done on a $15000 Mac Pro. Specifically, real-time playback and fast scrubbing of certain complex video edits using certain file formats can be jerky on the Mac Pro, but on M1 with the same data it is smooth as butter.

I don't use the professional video editors on Mac, but I had noticed that my 2017 iPad Pro with 3 year-old A10X was doing things in LumaFusion that people with recent MacBook Pros were having performance problems with, yet my 2nd gen iPad Pro is noticeably worse than the 3rd gen 2018 iPad Pros with the same actions. That means that even though it worked OK on my iPad Pro, it was even better on the later iPad Pros.

It turns out that Final Cut on these new M1 machines is now able to do the same things with ease, whereas it was more problematic on the Intel MacBook Pros.

So, as much as we like to marvel about the new M1 CPU cores, I think for many end users doing multimedia content creation, the stuff outside the CPU is probably just as important for them, if not more important.

BTW, I wonder when Intel truly started to believe Apple would leave them. Back in the A7 era, when Apple launched the first 64-bit ARM phone/tablet chip? Probably much earlier actually.


Yep, this has been a problem with these tech-head sites' reviews for some time. They try to isolate the CPU itself for testing, but this is a very 1990s approach.

Encoding is the #1 reason cited for big multi-core chips, but except in very specific cases where the built-in encoder on a GPU or iGPU doesn't support the target format, only a masochist would use their CPU for this.
 

name99

Senior member
Sep 11, 2010
404
303
136
What you're really complaining about here isn't the absence of capabilities in the M1 silicon, but the absence of ports on the Macs that contain it. They could have added a dedicated DP port, a digital audio port, and maybe an eSATA port, but they chose not to on these entry-level Macs. That's all packaging: as you say, you can do these things via adapters, so the hardware is capable; it just lacks a place for you to plug into.

There will be new generations of chips coming down the road for Macs in the midrange and high-end categories which may add new capabilities, and those Macs may add additional ports you can plug things into without dongles. I'm sure some will have expandable RAM - the limit for LPDDR4X/LPDDR5 is in the 96 to 128 GB range, which is obviously a non-starter at the high end, so they'll almost certainly have to use DIMMs there.

It's not clear to me that "traditional" external DRAM is an optimal solution going forward. Consider an alternative like Gen-Z, which allows for a wider range of DRAM-like alternatives (traditional DRAM, but also Optane and other persistent DRAM equivalents). These solutions give you much more flexibility and larger capacity, at the cost of slightly higher latency -- which may not matter if all the hot data/code can be kept in on-SoC DRAM, and all this other storage is basically being used (with or without persistence) as the equivalent of a fast hard drive.
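
As a toy illustration of why the extra latency might not hurt much if the hot set stays on-package (all numbers below are made up for illustration; nothing here is a published Apple or Gen-Z figure):

```python
# Toy two-tier memory model: small, fast on-SoC DRAM in front of a larger,
# slower fabric-attached (Gen-Z/CXL-style) pool. Numbers are illustrative only.
def avg_latency_ns(hit_rate, fast_ns, slow_ns):
    """Blended access latency when hot data usually hits the on-package tier."""
    return hit_rate * fast_ns + (1 - hit_rate) * slow_ns

on_soc_ns = 100    # assumed on-package DRAM latency
far_pool_ns = 350  # assumed fabric-attached memory latency

for hit_rate in (0.90, 0.99, 0.999):
    print(f"hit rate {hit_rate:.1%}: ~{avg_latency_ns(hit_rate, on_soc_ns, far_pool_ns):.0f} ns")
# With most hot data resident on-package, the blended latency stays close to
# the fast tier even though the far pool is several times slower.
```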

To me this feels like a very Apple-style solution: something that every technical person knows is the way things should be done, but no other company can really push forward because of the co-ordination problem -- everyone else is scared to go first, because for it to work all the pieces (HW, OS, some of the SW use cases) need to be in place simultaneously, and no one else (Intel? HP? MS?) can do that.

This whole space is a mess right now if you look at the details, with Gen-Z, OMI, and CXL all being conceptually possible ways to do this. I make no claims as to which of these particular options is best, generically or for Apple in particular; just that alternatives to "lots and lots of DRAM sockets" do now exist, and are probably a better solution, going forward, for the bulk of Apple's pro users.
(Once, of course, they get over the usual, annual hysteria that something familiar is being changed, and work out of their system the traditional rants about Apple price gouging, closed gardens, and similar nonsense.)
 

jeanlain

Member
Oct 26, 2020
149
122
86
Yep, I actually did that same calculation myself. It's a bit hand-wavey but it does suggest the 4800u might still slightly surpass the M1 at the same wattage. Out of curiosity, what's the deal with the 4800u v 4700u and down? Are these just insanely binned parts?
FWIW, a macrumors poster tested their 4700U laptop constrained to 15W on R23 multicore. The resulting score is ~4800. But I'm not sure whether the battery-saving feature limits the max power of the CPU package to 15W, or of the whole laptop, or something in between.
 
  • Like
Reactions: Entropyq3

name99

Senior member
Sep 11, 2010
404
303
136
  • Like
Reactions: SarahKerrigan

name99

Senior member
Sep 11, 2010
404
303
136
Then it should have been a developer feature.

Not a source of shovel-ware for the Mac.

When I have seen it mentioned in reviews, it's only been as a negative. It goes against Apple's long-standing ethos of only doing things that can be done well.

Presumably one could create a VM on the M1, and then run as much of Android as one feels like on that VM...

If there's any commercial value in doing this, I expect it will be done. But not by Apple.
 
  • Like
Reactions: IvanKaramazov

IvanKaramazov

Member
Jun 29, 2020
56
102
66
Most of the operations that geeks spend all day in bun fights over - arguing which high-core-count CPUs are best for encoding/rendering - are largely academic arguments about work that is best done on more dedicated hardware.

Dedicated media encoders are much faster and less power-hungry than CPU cores. 3D rendering (which almost no one does) is significantly faster on video cards. Machine learning/AI is much faster on dedicated ML cores, or GPUs after that.

Then you are fast running out of use cases that benefit significantly from high core counts.


Yep, this has been a problem with these tech-head sites' reviews for some time. They try to isolate the CPU itself for testing, but this is a very 1990s approach.

Encoding is the #1 reason cited for big multi-core chips, but except in very specific cases where the built-in encoder on a GPU or iGPU doesn't support the target format, only a masochist would use their CPU for this.

Spot on.

I personally would be all over an 8x4 Apple Silicon CPU in an iMac or MBP16, and I'm sure it would be plenty fast. But at the end of the day, I expect most users would actually benefit more from Apple using all that extra space to massively increase the number of GPU cores, combined of course with faster unified memory to feed them. An M1 with twice the GPU oomph would be quite something.
 

shady28

Platinum Member
Apr 11, 2004
2,520
397
126
I'd really like to see the M1 running Mac OS x86 in a VM.

And Windows.

So I'm not doing anything illegal, as I do own a Mac; however, a lot of developers do it like this, especially if they use Xamarin like I do. My curiosity is whether it could be rearranged a bit - running macOS x86 in a VM on an M1. That would allow some degree of testing on both platforms.


 

Hitman928

Diamond Member
Apr 15, 2012
5,262
7,890
136
FWIW, a macrumors poster tested their 4700U laptop constrained to 15W on R23 multicore. The resulting score is ~4800. But I'm not sure whether the battery-saving feature limits the max power of the CPU package to 15W, or of the whole laptop, or something in between.

If he is running battery saver, then the CPU is probably restricted to under 10W, not 15W. Either that, or there is something seriously wrong with his system, because that low a score just doesn't make sense. To back that up: if you look at the computerbase chart, a 4700u at 15W scores a hair under 7000 points, which is right where you would expect it to be based upon my earlier calculation.
 

name99

Senior member
Sep 11, 2010
404
303
136
Spot on.

I personally would be all over an 8x4 Apple Silicon CPU in an iMac or MBP16, and I'm sure it would be plenty fast. But at the end of the day, I expect most users would actually benefit more from Apple using all that extra space to massively increase the number of GPU cores, combined of course with faster unified memory to feed them. An M1 with twice the GPU oomph would be quite something.

It's 4+4 and 8+4.
8+4 makes sense; 8x4 means ???
 
Last edited:

Staples

Diamond Member
Oct 28, 2001
4,952
119
106
Nothing to be suspicious about. Media acceleration has been an obvious feature on Apple products for some time.

In fact, even Macs with T2 chips get quite a boost.

Most of the operations that geeks spend all day in bun fights over - arguing which high-core-count CPUs are best for encoding/rendering - are largely academic arguments about work that is best done on more dedicated hardware.

Dedicated media encoders are much faster and less power-hungry than CPU cores. 3D rendering (which almost no one does) is significantly faster on video cards. Machine learning/AI is much faster on dedicated ML cores, or GPUs after that.
Media accelerators are in modern PCs too. Intel Quick Sync, NVIDIA NVENC and whatever AMD calls theirs. I've run older computers (like 3rd gen Intel systems) without GPUs and man does just normal use in Windows show how much media and 2D accelerator hardware can make a huge difference.
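
For a concrete sense of how that's used in practice, here's a sketch that picks a hardware H.264 encoder through ffmpeg; the encoder names are the standard ffmpeg ones, but which of them are actually available depends on your ffmpeg build and hardware.

```python
# Sketch: prefer a hardware H.264 encoder, fall back to software x264.
# Assumes ffmpeg is on PATH; encoder availability depends on the build and GPU/iGPU.
import subprocess

HW_ENCODERS = [
    "h264_videotoolbox",  # Apple (including the M1 media engine)
    "h264_nvenc",         # NVIDIA NVENC
    "h264_qsv",           # Intel Quick Sync
    "h264_amf",           # AMD
]

def pick_encoder():
    """Return the first hardware encoder this ffmpeg build lists, else libx264."""
    listed = subprocess.run(["ffmpeg", "-hide_banner", "-encoders"],
                            capture_output=True, text=True).stdout
    return next((enc for enc in HW_ENCODERS if enc in listed), "libx264")

def encode(src, dst):
    subprocess.run(["ffmpeg", "-y", "-i", src, "-c:v", pick_encoder(), dst], check=True)

# encode("input.mov", "output.mp4")
```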
 
  • Like
Reactions: Tlh97

insertcarehere

Senior member
Jan 17, 2013
639
607
136
If he is running battery saver, then the CPU is probably restricted to under 10W, not 15W. Either that, or there is something seriously wrong with his system, because that low a score just doesn't make sense. To back that up: if you look at the computerbase chart, a 4700u at 15W scores a hair under 7000 points, which is right where you would expect it to be based upon my earlier calculation.
The Computerbase chart is just a compilation of Cinebench R23 scores from a forum post, with different people's devices at various settings. Just looking through many of the 4800u/4700u @ 15w scores, at least some of them were done with the laptops in some sort of "performance" mode, where the APU will draw significantly over 15w.
 

Hitman928

Diamond Member
Apr 15, 2012
5,262
7,890
136
The Computerbase chart is just a compilation of Cinebench R23 scores from a forum post, with different people's devices at various settings. Just looking through many of the 4800u/4700u @ 15w scores, at least some of them were done with the laptops in some sort of "performance" mode, where the APU will draw significantly over 15w.

Yes, there isn't much control over the results, and I may be overestimating the 4700u results due to performance modes being activated. But even then, you're looking at ~6000 points worst case (if the performance mode enables greater than 25W sustained). That 4800-point result from the Mac forum just doesn't make sense, though, unless it is running at a TDP that is significantly under 15W (I'm guessing at most it is at 10W TDP).