
smooth approximations to max and abs

eLiu

Diamond Member
Is there a standard way of doing this? I need to be able to calculate the 1st and 2nd derivatives of this function. It's being used inside a Newton solver loop, which is why the higher derivatives are needed. This is the original implementation; I'm wondering if there's something better:

idea: abs(x) \approx x*x/(sign(x)*x+eps), where eps is a small number (e.g. 1e-8)
void
SmoothAbs(double x, double eps, double *y, double *y_x, double *y_xx)
{
  double signx;

  /* Sign function */
  if (x > 0.0)
    signx = 1.0;
  else if (x < 0.0)
    signx = -1.0;
  else
    signx = 0.0;

  /* Smooth absolute value and its first two derivatives */
  *y   = x*x / (signx*x + eps);
  *y_x = (2.0*x - signx*(*y)) / (signx*x + eps);

  if (y_xx != NULL)
    *y_xx = 2.0*(1.0 - signx*(*y_x)) / (signx*x + eps);
}

Max uses the following:
*y = 0.5*( sqrt((x-c)^2 + eps^2) + x + c )

void
PXSmoothMax(double x, double c, double eps,
            double *y, double *y_x, double *y_c,
            double *y_xx, double *y_xc, double *y_cc)
{
  double tmp, tmp_x, tmp_c, tmp_xx, tmp_xc, tmp_cc, xpc;

  tmp    = (x-c)*(x-c) + eps*eps;
  tmp_x  =  2.0*(x-c);
  tmp_c  = -2.0*(x-c);
  tmp_xx =  2.0;
  tmp_xc = -2.0;
  tmp_cc =  2.0;

  xpc = x + c;

  if ((x < c) && (x+c < -1.0e8)) {

    /* Protect against subtraction of very large numbers */

    *y = c;

    if (y_x  != NULL) *y_x  = 0.0;
    if (y_c  != NULL) *y_c  = 0.0;
    if (y_xx != NULL) *y_xx = 0.0;
    if (y_xc != NULL) *y_xc = 0.0;
    if (y_cc != NULL) *y_cc = 0.0;

  } else {
    *y = 0.5*( sqrt(tmp) + xpc );

    if (y_x  != NULL) *y_x  = 0.5*( 0.5*tmp_x/sqrt(tmp) + 1.0 );
    if (y_c  != NULL) *y_c  = 0.5*( 0.5*tmp_c/sqrt(tmp) + 1.0 );
    if (y_xx != NULL) *y_xx = 0.5*( 0.5*tmp_xx/sqrt(tmp) - 0.25*tmp_x*tmp_x/(tmp*sqrt(tmp)) );
    if (y_xc != NULL) *y_xc = 0.5*( 0.5*tmp_xc/sqrt(tmp) - 0.25*tmp_x*tmp_c/(tmp*sqrt(tmp)) );
    if (y_cc != NULL) *y_cc = 0.5*( 0.5*tmp_cc/sqrt(tmp) - 0.25*tmp_c*tmp_c/(tmp*sqrt(tmp)) );
  }

}
 
Define "better". Faster? Less complex? I see some speed optimization tweaks in PXSmoothMax, but I'm not sure if that's what you're looking for.

Is what you're using now working? I would expect that for functions with discontinuous derivatives like this, Newton's method might not converge, even with the derivative smoothing. I imagine in particular that the range of guaranteed convergence might be proportional to eps, making the approximation more or less worthless.
 
Ken: "better" = more robust with respect to the Newton solver; sorry for the lack of clarity.

Well, currently it works in the sense that the solver converges, but it's VERY sensitive to what value of epsilon is used.

CycloWizard: it's part of a numerical solver for PDEs. We're solving: M*U + R(U) + Source = 0, where the source is specified (terms dependent on U are discretized into R), M is the mass matrix (known), U is the vector of states, and R(U) is the residual (finite element weak form) evaluated at U. R is generally nonlinear (e.g. Navier-Stokes), so we use Newton iteration to "invert" that operator.

My main complaint about these two functions is that they're dimensionally inconsistent: eps is added to quantities that carry the units of x. It seems more reasonable to do something like:
abs(x) \approx x*x/(sign(x)*x*(1+eps))
max(x,c) \approx 0.5*((x-c)*(1+eps) + (x+c))

Those particular examples fail because they lack continuous derivatives, but the point stands: with expressions like x*x/(sign(x)*x + eps), different values of x get different amounts of "smoothing".
 