Average of abs value vs RMS

Status
Not open for further replies.

TecHNooB

Diamond Member
So I've always wondered about the obsession with RMS values outside the realm of power. I was told by a coworker that the RMS converges statistically and the average of the absolute values does not. Why is this true?

Clarification on absolute value: for a data sequence, the absolute values are taken of the deviations from the mean of the data (i.e., the data are zero-centered first).
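To make the two statistics in the question concrete, here is a minimal sketch computing both on the same zero-centered data (the sample values are made up for illustration):

```python
import numpy as np

# Hypothetical sample; both statistics are computed on deviations from the mean.
x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
dev = x - x.mean()                 # zero-centered deviations

rms = np.sqrt(np.mean(dev ** 2))   # root-mean-square deviation (population std)
mad = np.mean(np.abs(dev))         # mean of the absolute deviations

print(rms)  # 2.0
print(mad)  # 1.5
```

For Gaussian-looking data the two numbers differ by a constant factor, but they respond differently to the shape of the data, which is what the replies below discuss.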
 
I'm an EE, so I don't really know what people outside of electricity use RMS for, but the reason you don't use the mean of the absolute values is that it is not "statistically robust", i.e. it is "greatly" affected by outliers. I don't know anything about its convergence and whatnot.
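Sensitivity to a single outlier is easy to check numerically for either statistic. A quick sketch, with illustrative data only:

```python
import numpy as np

# Same data with and without one large outlier; see how each statistic moves.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
x_out = np.append(x, 100.0)

def rms(a):
    """Root-mean-square of the values."""
    return np.sqrt(np.mean(a ** 2))

def mean_abs(a):
    """Mean of the absolute values."""
    return np.mean(np.abs(a))

print(rms(x), rms(x_out))            # RMS before and after the outlier
print(mean_abs(x), mean_abs(x_out))  # mean |x| before and after the outlier
```

Running this and comparing the before/after ratios shows directly how strongly each measure reacts when one extreme value enters the sample.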
 
One of the problems with the mean of absolute values, however, is the difficulty of algebraic analysis. RMS (or at least the mean of the squares) is continuous and smooth, which means that it can be manipulated (e.g. differentiated) straightforwardly.

The absolute value operator is not smooth; in particular, its derivative is undefined at zero. This causes significant problems with any algebra that depends on calculus: any equations you derive algebraically may not be valid if the input data cross zero.
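The non-smoothness is easy to see with a central finite-difference derivative of x² versus |x| on a few points straddling zero (the grid and step size here are arbitrary choices for illustration):

```python
import numpy as np

# Central finite differences of x**2 and |x| near zero.
h = 1e-6
xs = np.array([-0.1, -0.01, 0.01, 0.1])

d_sq = ((xs + h) ** 2 - (xs - h) ** 2) / (2 * h)     # ≈ 2x, varies smoothly
d_abs = (np.abs(xs + h) - np.abs(xs - h)) / (2 * h)  # jumps from -1 to +1 at 0

print(d_sq)   # ≈ [-0.2, -0.02, 0.02, 0.2]
print(d_abs)  # ≈ [-1, -1, 1, 1]
```

The derivative of x² passes smoothly through zero, while the derivative of |x| jumps discontinuously from −1 to +1, which is exactly what breaks calculus-based derivations at the origin.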

This problem of algebraic analysis is one of the reasons why simple algebraic solutions to least-squares fitting problems exist (e.g. linear regression), while no comparably simple solutions are available for least-sum-of-absolute-errors fitting.
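The closed-form least-squares solution mentioned above is just the normal equations; a minimal sketch with made-up data (the values are chosen so the exact line is y = 2x + 1):

```python
import numpy as np

# Least-squares line fit via the normal equations: (A^T A) p = A^T y.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 5.0, 7.0])   # exactly y = 2x + 1

A = np.column_stack([x, np.ones_like(x)])          # design matrix [x, 1]
slope, intercept = np.linalg.solve(A.T @ A, A.T @ y)

print(slope, intercept)  # slope ≈ 2.0, intercept ≈ 1.0
```

No analogous one-line formula exists for minimizing the sum of absolute errors; that problem has to be solved iteratively (e.g. as a linear program), precisely because |x| cannot be differentiated and set to zero the way the squared error can.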
 
It's been a while since I thought about this, but I think the answer is related to convexity. The problem of optimizing parameters based on RMS is a quadratic problem, which has lots of favorable properties. The absolute value of the error, on the other hand, is not quadratic and not even strictly convex (though I believe it is still convex).
 
Least-squares problems in general have nice mathematical properties and you can often find exact formulas for the solution. L2 minimization amounts to solving a linear equation, while L1 (absolute value) minimization involves linear programming.
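The simplest instance of this contrast is fitting a single constant c to the data: the L2 optimum is the mean, and the L1 optimum is the median. A brute-force scan over candidate values (data and grid are illustrative) makes this visible without any optimization library:

```python
import numpy as np

# Fit one constant c to the data under L2 and L1 cost, by scanning a grid.
data = np.array([1.0, 2.0, 3.0, 4.0, 10.0])
cs = np.linspace(0, 12, 24001)

l2_cost = ((data[None, :] - cs[:, None]) ** 2).sum(axis=1)
l1_cost = np.abs(data[None, :] - cs[:, None]).sum(axis=1)

c_l2 = cs[np.argmin(l2_cost)]   # minimizer of the sum of squares
c_l1 = cs[np.argmin(l1_cost)]   # minimizer of the sum of absolute errors

print(c_l2)  # ≈ 4.0, the mean (pulled toward the outlier at 10)
print(c_l1)  # ≈ 3.0, the median
```

The mean falls out of a linear equation (set the derivative of the quadratic cost to zero), while the median is the solution of the corresponding linear-programming-style problem, matching the L2/L1 distinction above.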

They're both convex (which means they both have basically good properties), but strict convexity gives L2 problems a unique solution, while L1 problems generally lack one.
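The non-uniqueness of the L1 solution shows up already with an even number of data points: every constant between the two middle values minimizes the sum of absolute deviations, while the sum of squares has a single minimizer at the mean. A small check with made-up numbers:

```python
import numpy as np

# Even-sized sample: the two middle values are 2 and 6.
data = np.array([1.0, 2.0, 6.0, 7.0])

def l1_cost(c):
    """Sum of absolute deviations from c."""
    return np.abs(data - c).sum()

def l2_cost(c):
    """Sum of squared deviations from c."""
    return ((data - c) ** 2).sum()

# Every c in [2, 6] gives the same L1 cost: the L1 minimizer is not unique...
print(l1_cost(2.0), l1_cost(4.0), l1_cost(6.0))  # 10.0 10.0 10.0
# ...while the L2 cost has a unique minimum at the mean (4.0).
print(l2_cost(3.9), l2_cost(4.0), l2_cost(4.1))
```

The flat bottom of the L1 cost is exactly the failure of strict convexity described above; the L2 cost is strictly convex, so its minimizer is unique.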
 