The graph I posted, with 4-8 threads simultaneously occupied on an Android phone, includes every workload needed to render a page, so there's much more work for the CPU to do than just one JavaScript thread.
Web browsing is multi-threaded.
Yeah, since this touches a subject I'm a bit more familiar with, I'll add my 2 cents.
Modern browsers are actually quite multi-threaded.
JavaScript on a single page in a single tab is indeed single-threaded. But only if:
- That page doesn't use Web Workers, which let you run work on other threads (see the sketch after this list)
- Doesn't use WebAssembly libraries (WebAssembly does support multi-threading now)
- Doesn't have nested iframes from pages on other domains (which in both Chromium and Firefox run in a separate process, let alone a separate thread)
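To make the Web Worker point concrete, here's a minimal sketch (file names and the workload are just placeholders) of pushing CPU-bound work onto another thread while the page stays responsive:

```js
// main.js - runs on the page's main thread
const worker = new Worker('worker.js');

worker.onmessage = (event) => {
  console.log('Result computed off the main thread:', event.data);
};

// Hand the heavy work off without blocking rendering or input
worker.postMessage({ n: 1e8 });
```

```js
// worker.js - runs on a separate thread
self.onmessage = (event) => {
  const { n } = event.data;
  let sum = 0;
  for (let i = 0; i < n; i++) sum += i; // CPU-bound loop that would otherwise jank the UI
  self.postMessage(sum);
};
```

(WASM threads build on the same Worker machinery, just with shared memory via SharedArrayBuffer instead of message passing.)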
And JavaScript is just a small part of the list of things that browsers do to actually deliver your page.
I'll take Firefox as an example (as they have nice blog posts explaining this stuff that I'll link below), but many things from here apply to Chrome as well:
- Composition is done off the main thread and can be quite taxing
- (and in fact the GPU is used quite extensively nowadays as well, see Firefox's WebRender for instance)
- CSS layout can be highly parallel if the engine is designed for it
- HTML parsing can be parallel (though that comes from a different engine not currently used in Firefox, the Gecko engine supports it as well)
- And yet again, Chrome and Firefox use a separate process for every domain you have open (for Firefox this is very recent, see Project Fission)
And all of that doesn't even account for the fact that JavaScript itself is not just interpreted but also JIT (Just-In-Time) compiled, meaning the hot parts of your code get compiled to native code to speed them up. That happens on the fly and is also done off the main thread.
Browsers aren't parallel in the sense that, say, Cinebench is (they don't tax all your cores to 100% with embarrassingly parallel workloads), but they absolutely do use multiple threads.
Just try running your browser in a virtual machine with a single vCPU; it won't be a pleasant exercise even with a fast processor and a hypervisor with minimal overhead.
In fact, just going from a 2C/4T CPU to a 4C/8T CPU is often a noticeable speedup while browsing (if you also have other stuff open). Less so with more cores, but it's still there.
The reason for that misconception is simple: benchmarks.
Old benchmarks (Octane, Kraken, etc.) are the worst, testing fringe, 100% atypical JavaScript functionality (that sometimes gets special paths in browsers just to look good). For instance, Firefox totally redesigned their JS engine (Warp) and got real-life page-load improvements of 12-20% on JS-heavy sites (Google Docs, Reddit, Netflix), while those same benchmarks showed significant regressions (despite the actual browsing experience improving greatly).
Speedometer 2.0 is much better, because it at least renders an actual JavaScript SPA (in different JS frameworks), but it's still very simplistic and skips stuff that most real websites actually do: no embedded Twitter/Facebook frames, no Google Analytics, no ads (separate domains) and no adblock (which can be quite taxing on ad-heavy sites), no SVG or huge image rendering, no worker threads reading/writing IndexedDB (a separate thread).
All of these things benefit from more cores (at least more than 1-2) in the real world, but aren't shown in any benchmarks.
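To illustrate that last category, here's a rough sketch (worker file name, DB name and schema are all made up) of the worker-plus-IndexedDB pattern real sites use, which never even touches the main thread:

```js
// cache-worker.js - hypothetical worker; IndexedDB is available off the main thread
const open = indexedDB.open('page-cache', 1);

open.onupgradeneeded = () => {
  open.result.createObjectStore('entries', { keyPath: 'key' });
};

open.onsuccess = () => {
  const db = open.result;
  self.onmessage = (event) => {
    // Persist whatever the page sends us; the write happens on this worker thread
    const tx = db.transaction('entries', 'readwrite');
    tx.objectStore('entries').put(event.data);
    tx.oncomplete = () => self.postMessage({ stored: event.data.key });
  };
};
```

The main thread just does `new Worker('cache-worker.js').postMessage({ key: 'article-42', body: someHtml })` and carries on rendering; that's exactly the kind of multi-core-friendly work Speedometer never measures.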