mikeymikec
Lifer
I wonder how they came up with a battery life test for a browser.
On one hand, one might say "run a looping series of benchmarks and see which identical machine lasts longest", but if one browser loads each benchmark more quickly, it might fit more benchmarks into the time it takes to empty the battery (and so end up emptying the battery faster than the others). Another browser may perform poorly on every benchmark (and/or take longer to load each one due to design inefficiencies that don't necessarily mean high resource usage), but because it doesn't push the machine half as hard, the battery lasts longer.
Another way to run such a test would be to come up with, say, 10 tests to benchmark each browser, then supply an aggregate score at the end. That would reduce the potential advantage a slow-loading browser gains from the system cooling down between benchmarks.
The problem with either approach I've considered is that it's similar to the problem of processor benchmarking. Processor X may take longer than processor Y to finish a task, but use much less energy in the process. Reviewers end up showing separate benchmarks to illustrate these different facets of a processor's abilities.
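The time-versus-energy trade-off above boils down to simple arithmetic: energy consumed is average power draw multiplied by the time spent on the task. A quick sketch with made-up numbers (these figures are purely illustrative, not real measurements) shows how the slower chip can still win on battery:

```python
def energy_joules(power_watts, seconds):
    """Energy consumed = average power draw x time to finish the task."""
    return power_watts * seconds

# Processor Y finishes the task faster...
time_y, power_y = 10.0, 30.0   # 10 s at 30 W (hypothetical figures)
# ...but processor X draws far less power while it works.
time_x, power_x = 15.0, 12.0   # 15 s at 12 W (hypothetical figures)

print(energy_joules(power_y, time_y))  # 300.0 J
print(energy_joules(power_x, time_x))  # 180.0 J: slower, yet easier on the battery
```

That's why a single "fastest to finish" number can point the opposite way from a battery-life test, and why reviewers report both.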
My problem with older versions of IE was that they took longer than Firefox to load each page, so more time was spent watching a blank screen. This was partly because Firefox renders a page as each piece arrives, while IE waited for most of the page to finish downloading before showing anything. Newer versions of IE don't do this, but I think it took until IE9 for IE to quickly load about:blank and be ready to use, rather than sitting unresponsive, saying "connecting". Nowadays I think it's more a question of aesthetic preference, and the problem there is that THEY ALL LOOK THE SAME (because everyone is trying to copy Google Chrome).
I don't like Google Chrome's simplification of the UI, so I stay with Firefox with the classic menus at the top of the window.