Sloppy "marketing lingo" writing from many mobile tech sites is pissing me off...

AndroAsc

Junior Member
Jun 21, 2012
I'm going on a rant here, and this pent up frustration has been building up for a while. I follow a number of mobile tech sites, to keep updated about the latest Android offerings.

One thing that pisses me off is the "marketing lingo" kind of sloppy writing that almost all their articles have. There are never any technical details on new devices; I always have to check pdadb.net to get the details (which aren't always correct). Those stupid writers keep on using phrases like "1.2GHz dual core processor", "1.5GHz quad core processor"... for fuck's sake, that's like not writing anything at all.

First, they rarely specify the SoC. Second, they almost never talk about the architecture details. Dual core processor... which generation? Cortex-A9? Krait? A15? And then the GPU... it's almost always left out, as if the only tech specification that matters is the god damn CPU. There's this stupid mentality that goes "Oooo... QUAD CORE CPU, so it must be good! 1.5GHz is higher than 1.0GHz so it must be good." That's the kind of ignorant crap I expect from non-tech sites. Hello! I'd take a dual core A15 with a strong GPU over a quad core A9 with a shitty GPU any day, especially since most apps aren't coded to use all 4 cores.

I'll name a few offending sites - androidauthority.com, gigaom.com, talkandroid.com. These are some of the largest Android mobile tech sites, and the quality of writing is god damned appalling.

And keep up the good work, AnandTech. While the mobile/smartphone articles may be less frequent, the quality is always top notch.
 

BenSkywalker

Diamond Member
Oct 9, 1999
Heh, I read your post and thought it was aimed squarely at the shockingly pathetic excuses for PR releases AT has been doing lately. Guess you have *MUCH* lower standards than your post makes it seem.

Lately, if it isn't an article apologizing for Apple or Intel in the mobile market on AnandTech, it doesn't get done.
 

stlc8tr

Golden Member
Jan 5, 2011
Heh, I read your post and thought it was aimed squarely at the shockingly pathetic excuses for PR releases AT has been doing lately. Guess you have *MUCH* lower standards than your post makes it seem.

Lately, if it isn't an article apologizing for Apple or Intel in the mobile market on AnandTech, it doesn't get done.

So which sites make the grade in your view?
 

BenSkywalker

Diamond Member
Oct 9, 1999
So which sites make the grade in your view?

In the UP space? None of them cut it. I use forum feedback more than actual reviews. Frequently I find myself having to look over a dozen reviews of a new device to get a *single* CPU bench; it is shockingly bad.
 

Bateluer

Lifer
Jun 23, 2001
I'll name a few offending sites - androidauthority.com, gigaom.com, talkandroid.com. These are some of the largest Android mobile tech sites, and the quality of writing is god damned appalling.

While I agree with your sentiments, and it annoys the crap out of me when even the manufacturer lists 'Dual Core CPU' on their official spec sheets, the sites you listed are minor, bit players.
 

magomago

Lifer
Sep 28, 2002
In the UP space? None of them cut it. I use forum feedback more than actual reviews. Frequently I find myself having to look over a dozen reviews of a new device to get a *single* CPU bench; it is shockingly bad.

Let me take a counter position for a moment:

How do we know that the benchmarks (ie: test methods) for a phone are even valid? What is the purpose of running 'GLgears' all day long?

Synthetic benchmarks simply never represent real world usage; as a result, I ignore them for the PC. For the PC, I scroll immediately to the video game benchmarks or the actual application benchmarks if I'm interested. The proof: you often see big differences in synthetic benchmarks, only to see those differences shrink to marginal when running actual applications.

As a result, half of the content of most CPU reviews is typically useless...yet you have people erroneously making decisions based on synthetic performance.

What I've noticed AT has tried to do for phones is introduce simulated-use benchmarks, especially to gauge battery life; however, the assumption driving these benchmarks is somewhat unclear, as there is no actual proof that they represent accelerated human usage of a phone.

I actually find it a breath of fresh air to read about people using the phones all day and reporting how the battery life stacks up, or running their favorite apps and assessing how smooth they are. Yes, it's much more subjective, but I get a FAR better impression of the phone.

I want to know what my phone can do, not how much e-peen it will give me. Does it have trouble with the YouTube app? Is there perceived stuttering or lag in the most common games or apps? Does it run its OS smoothly (an issue for Android phones, it seems)? How is the battery after a day's worth of use? Did they improve the workflow at all?

Knowing that Phone X lasts 8 hours whereas Phone Y lasts 8.75 is meaningless when, with average use, they both land at about 25% at the end of the day and need to be recharged. Usage patterns vary so much from person to person that battery deltas of +/- 1 hour don't actually mean much.
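To put a number on that example (my own arithmetic, not from the post): the gap between those two hypothetical battery figures is under ten percent, well inside the day-to-day variance of real usage.

```python
# Relative difference between the two hypothetical battery figures
# used in the post (Phone X: 8 h, Phone Y: 8.75 h).
phone_x_hours = 8.0
phone_y_hours = 8.75

rel_gain = (phone_y_hours - phone_x_hours) / phone_x_hours
print(f"Phone Y lasts {rel_gain:.1%} longer")  # → 9.4% longer
```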

So to sum it up: I'm very happy that smartphone reviews focus on using the damned thing instead of coming up with numbers that have a dubious rationale.
 

Geekbabe

Moderator Emeritus, Elite Member
Oct 16, 1999
www.theshoppinqueen.com
A lot of sites are geared toward non-tech readers. My focus is primarily on mothers, a busy group who are interested in what a gadget can do to make their lives easier & how technology can help them share & stay connected with friends & family more easily.
They are not, for the most part, interested in wading through a 1,500-word post loaded with specs & terminology that puts them to sleep. They want to know: how easy is it to use, and what features does this thing have that are compelling enough to buy it?

I give my readers what they want while also providing links to more in-depth info about specs, should they desire it. Regular people buy lots of tech devices & deserve to be given the info they want & can use.
 

BenSkywalker

Diamond Member
Oct 9, 1999
Synthetic benchmarks simply never represent real world usage; as a result, I ignore them for the PC.

What would you say are the top processors for the PC right now? I want to test your assertion; I don't agree with it at all, but I can get a list from you and we can see how it compares to the bench charts.

The proof: you often see big differences in synthetic benchmarks, only to see those differences shrink to marginal when running actual applications.

Most people don't push their phone very hard; the majority of the time they spend waiting is because they have some Class 2 microSD card or something. But if you want to load up a 1GB PDF on your phone, raw processor power becomes really useful in a hurry. This is the type of thing benches give us a *much* better picture of than how phones look and feel to the typical iOS user.

Does it have trouble with the youtube app? Is there perceived stuttering or lag with the most common games or apps? Does it run its OS smoothly (an issue for Android phones it seems)? How is the battery after a day's worth of use? Did they improve the workflow at all?

Then it sounds like you should be quite pleased with USA Today and the NY Times. This is a tech site, I expect a *hell* of a lot more :)

A lot of sites are geared for non-tech readers, my focus is primarily on mothers, a busy group who are interested in what a gadget can do to make their lives easier & how technology can help them share & stay connected with friends & family more easily.

This I think is a very valid approach, and I am not trying to belittle your goals at all. What I have a *HUGE* problem with is a site that will run a full page discussing the delta variance on a display but refuse to post a single CPU bench. I have no issues whatsoever with people writing quality reviews for the typical user. I have a major issue with supposed tech sites waxing poetic about whatever singular spec a company told them to focus on while ignoring the main factors people who come to tech sites care about.
 

Crono

Lifer
Aug 8, 2001
Solution in two words: Brian Klug

The more people give attention to thorough reviews like AnandTech does, the less (hopefully) we will see of shallow and quick reviews from the generic "tech" and mobile sites.
 

magomago

Lifer
Sep 28, 2002
What would you say are the top processors now for the PC? I want to test your assertion, I don't agree with it at all but I can get a list from you and we can see how it compares to bench charts.

Now? Oh, it's easily Ivy Bridge. Get out of town if you want to argue AMD. But real benchmarks clearly show that Intel outperforms AMD on almost everything, ranging from "the difference doesn't matter" territory [ie: 15% faster when both games nail a minimum of something like 80fps] to "geez, wow, it's that much faster".

I was thinking more of the generations prior to what we've seen with the Core series.

Most people don't push their phone very hard, the majority of time they spend waiting is because they have some class 2 uSD card or something, if you want to load up a GB pdf on your phone, raw processor power becomes real useful in a hurry. This is the type of thing benches give us a *much* better picture of then how they look and feel to the typical iOS user.

I would be okay with loading up PDFs, because I think that falls under real usage; I see co-workers do it all the time on their smartphones. I'm assuming you are joking about loading up something on the order of 1GB.

Then it sounds like you should be quite pleased with USA Today and the NY Times. This is a tech site, I expect a *hell* of a lot more :)
Actually, I'm most pleased with The Verge's reviews, as well as Phandroid's. Their reviews come off way more professionally than reviewers insisting on running benchmarks over and over in freezers. The reviews could still be better, as I'll highlight shortly.

This I think is a very valid approach, and I am not trying to belittle your goals at all. What I have a *HUGE* problem with is a site that will run a full page discussing the delta variance on a display but refuse to post a single CPU bench. I have no issues whatsoever with people writing quality reviews for the typical user. I have a major issue with supposed tech sites waxing poetic about whatever singular spec a company told them to focus on while ignoring the main factors people who come to tech sites care about.

My main concern is that it starts a trend where everyone now has to run the same benchmarks, benchmarks that don't really describe the phone's performance.

I'm not opposed to benchmarks at all. Benchmarks that truly reflected actual smartphone usage would be extremely welcome. But instead, what I've seen come from AT are simulated usage tests (dubious as to whether they are a true simulation), or traditional benchmarks where interpreting the numbers and applying them to the phone is equally dubious. That is why The Verge's approach feels like a breath of fresh air.
The easy answer is to rely on benchmarks of actual apps. But in the smartphone world, most actual apps aren't very demanding, so those benchmarks would be useless.
We also don't have insight (unless I'm wrong here, so correct me) into which apps are "CPU intensive" (and intensive in what way?) or which apps are "GPU intensive", the way we do with PC benchmarks (ie: WoW/SC2 are very CPU intensive), making it even more difficult to benchmark a phone with real applications.
What is a bigger pain is that it seems only recently, with Android 4 & the Nexus 4, are reviewers saying that the interface is actually truly smooth. Yet we don't see any of this reflected in AT's benchmarks. Why not? This is where numbers fall short, and a carefully considered qualitative assessment does far better (ie: not USA Today; I've read game reviews there that don't actually tell me what the game is about).
I think creative ways of measuring real usage (loading a PDF wouldn't be a bad idea, measuring the time it takes to render and display) should be the way to go, once:
- a common pool of apps/tools can be collected;
- we can validate that they represent the majority of the most common actions;
- we can develop good methods to measure performance that reflect actual usage;
- we can state when a numerical difference corresponds to a real, perceivable difference;
- and we can validate the overall results by applying them to an existing pool of phones where a lot of qualitative knowledge already exists.

It's a lot more difficult than throwing up numbers for a bunch of applications, but that is because it requires a lot more brainpower.
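To make that concrete, even a crude real-usage test reduces to a simple timing harness: warm-up runs to settle caches, repeated measurement, and a median rather than a single number. This is my own minimal sketch (not anything AT or The Verge actually runs), with a stand-in workload where a real test would render a PDF page on the device:

```python
import statistics
import time

def time_task(task, warmup=3, runs=10):
    """Median wall-clock time of a repeatable task, in seconds."""
    for _ in range(warmup):
        task()  # warm-up runs are discarded: caches, JIT, etc. settle here
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        task()
        samples.append(time.perf_counter() - start)
    # Median resists one-off spikes (GC pauses, background work)
    return statistics.median(samples)

# Stand-in CPU-bound workload; a real harness would time something
# like rendering a page of a large PDF on the phone itself.
median_s = time_task(lambda: sum(i * i for i in range(100_000)))
print(f"median: {median_s * 1000:.2f} ms")
```

The hard part, as the list above says, isn't the timing loop; it's validating that the workload represents what people actually do with the phone.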


I think we both want the same end goal, but I'll stop short of chasing that goal if I know we can't achieve it properly, and I'm willing to settle for a qualitative analysis that does a good job of reviewing the phone. You, on the other hand, are willing to settle for something that only partially satisfies it, just so we can get some quantitative information.

I can't say what the best approach is. I just hope that the benchmarks that become ubiquitous don't suck :p
 

dagamer34

Platinum Member
Aug 15, 2005
I'm not opposed to benchmarks at all. Benchmarks that truly reflected actual smartphone usage would be extremely welcome... That is why The Verge's approach feels like a breath of fresh air... I can't say what the best approach is. I just hope that the benchmarks that become ubiquitous don't suck :p

Eh... are you really arguing that the totally subjective approach of The Verge is more valid long-term than the objective approach of AnandTech? The five-year-old AnandTech review of the original iPhone is the only one still worth reading today precisely because it uses measurable stats instead of "feel", which varies from one person to the next.

Some stuff shouldn't be objectively measured because beauty is in the eye of the beholder, which is why I've almost never seen a significant portion of a review dedicated to "OMG, this phone looks ugly!": it's meaningless. Also, when a website like The Verge has to review 50 phones, they start nitpicking stuff that the average person absolutely will not care about.

And until The Verge starts using a standardized camera test like Brian Klug does, I will absolutely ignore any comments they make about camera performance. They are literally making shit up at that point.