
Can you solve this equation?

While talking about blackbody radiation, we came across the following equation:
e^-x = 1-x/3

The professor claimed that the equation cannot be solved nicely for x (i.e. you have to resort to graphical analysis). Is he correct?

The answer is about x=2.82, BTW.

EDIT: x=0 is obviously also a solution
 
Yeah, there's a name for that type of equation but I can't remember it. We went over it in one of my modern physics classes and I wrote a little Fortran program to solve it.
 
Originally posted by: JohnCU
There are infinite answers, it is a function of x, put in a different value of x and get a different answer each time.
Um, no it isn't. x^2=2x+3. Is that a function of x with an infinite number of answers? No, no it isn't.
 
You cannot solve nicely for x without resorting to some extreme measures.

But, your professor is incorrect that you "have to resort to graphical analysis". There are plenty of good numerical methods that give more accurate results in much less time.

There are two answers to that specific problem. Why not just use x=0 if you need just one of the answers?
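A bracketing method like bisection, for example, pins down the nonzero root with nothing but sign checks (a minimal Python sketch; the bracket [1, 4] and the tolerance are my own choices):

```python
import math

def bisect(f, lo, hi, tol=1e-10):
    """Repeatedly halve the bracket [lo, hi] until it is narrower than tol."""
    flo = f(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) * flo <= 0:     # sign change on the left half: root is there
            hi = mid
        else:                     # otherwise the root is in the right half
            lo, flo = mid, f(mid)
    return 0.5 * (lo + hi)

# Nonzero root of e^-x = 1 - x/3, i.e. f(x) = e^-x - 1 + x/3 = 0
root = bisect(lambda x: math.exp(-x) - 1 + x / 3, 1.0, 4.0)
print(root)  # about 2.8214
```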
 
The x is both a power and a coefficient...you would have to take the xth root on the left, and do the same on the right. I really can't explain why it isn't practical except to say that it'd be a mathematical mess to do.

I can tell you, however, why it is not integrable😀

When you integrate something that is raised to an x power, there is no "n+1" available, since "n" should be a number😀
 
Originally posted by: dullard
You cannot solve nicely for x without resorting to some extreme measures.

But, your professor is incorrect that you "have to resort to graphical analysis". There are plenty of good numerical methods that give more accurate results in much less time.

There are two answers to that specific problem. Why not just use x=0 if you need just one of the answers?

Dullard is right, this is the kind of equation you should use some numerical method algorithm to solve.
 
The function is differentiable, so you could use Newton's method to get a good approximation of the nonzero solution.
 
Originally posted by: JohnCU
There are infinite answers, it is a function of x, put in a different value of x and get a different answer each time.

no.

There are two functions of x, and they want the value(s) of x that make them equal (graphically, the intersection points).
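You can sanity-check that intersection numerically in a couple of lines (using the approximate root quoted earlier in the thread):

```python
import math

x = 2.8214
lhs = math.exp(-x)   # left side:  e^-x
rhs = 1 - x / 3      # right side: 1 - x/3
print(lhs, rhs)      # both about 0.0595
```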
 
Use newton's method, you can do it by hand and it converges quickly if you start with a decent approximation

to use it, let
f(x) = e^(-x)-1+x/3
f'(x) = 1/3 - e^(-x)

x[n+1] = x[n] - f(x[n])/f'(x[n])

where x[n] is the nth iteration

so if you pick x = 2 as an initial guess
x[1] = 2
x[2] = 3
x[3] = 2.8244
x[4] = 2.8214

Voila. There are probably methods that converge more quickly but this one is really easy to work out with a scientific calculator 🙂
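For anyone who'd rather script it than punch a calculator, the same Newton iteration is a few lines of Python (the initial guess and stopping test are my own choices):

```python
import math

def f(x):
    return math.exp(-x) - 1 + x / 3

def fprime(x):
    return 1 / 3 - math.exp(-x)

x = 2.0                      # initial guess
for _ in range(10):
    step = f(x) / fprime(x)
    x -= step                # Newton update: x <- x - f(x)/f'(x)
    if abs(step) < 1e-12:    # stop once the correction is negligible
        break
print(x)  # about 2.8214
```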
 
Originally posted by: RaynorWolfcastle
There are probably methods that converge more quickly but this one is really easy to work out with a scientific calculator 🙂
Like I and others above said, numerical methods can solve this quite well. If all you have is a calculator, then there are better methods than Newton's method for simple problems. I personally like the method of successive substitution.

Start with this:
e^-x = 1 - x/3

Rearrange:
x = 3*(1 - e^-x)

Type in a guess (such as 2) into your calculator. Hit enter, equals, or whatever.

Next type in: 3*(1-e^-ANS) where ANS is your calculator button for the last thing displayed.

Hit enter or equals or whatever your calculator uses to repeat the last calculation. Keep pressing that one button until you are converged. In this case you get:
2
2.5939
2.7758
2.8131
2.8199
2.8211
2.8213
2.8214
And the answer never changes from there (unless you go to way more digits). This is the easiest to do with a calculator. No long calculations, no derivatives, etc. The function doesn't even need to be differentiable. Of course, it can diverge with complex problems. And of course it took a few more iterations than what you did. With a TI-86, this method only required a total of 20 keystrokes and virtually no math knowledge at all. 😉
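The same button-pressing loop, scripted (a sketch; the starting guess and iteration count are mine):

```python
import math

x = 2.0                            # initial guess, same as above
for _ in range(30):
    x = 3 * (1 - math.exp(-x))     # successive substitution: x <- 3(1 - e^-x)
print(x)  # about 2.8214
```

Each pass plays the role of one press of the equals key; thirty passes is far more than needed here.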
 
Originally posted by: RaynorWolfcastle
Use newton's method, you can do it by hand and it converges quickly if you start with a decent approximation

to use it, let
f(x) = e^(-x)-1+x/3
f'(x) = 1/3 - e^(-x)

x[n+1] = x[n] - f(x[n])/f'(x[n])

where x[n] is the nth iteration

so if you pick x = 2 as an initial guess
x[1] = 2
x[2] = 3
x[3] = 2.8244
x[4] = 2.8214

Voila. There are probably methods that converge more quickly but this one is really easy to work out with a scientific calculator 🙂

Indeed. For a problem like this, where f/f' is well behaved (specifically, f' doesn't get near 0, so the quotient won't blow up), Newton's Method is pretty much preferred.

It has quadratic convergence here (i.e. the number of correct digits roughly doubles each iteration).

There are faster methods that take advantage of higher-order terms of the Taylor series expansion (Newton uses up to the linear term), such as Halley's method, which takes the quadratic term as well. Though the denominator in Halley's method looks like 2[f']^2 - f*f''...so if that term is ~0, you get big problems & convergence falls to linear. The payoff is that the convergence is cubic: the number of correct digits roughly triples per iteration (once it kicks in, after the first 2 or so iterations if your initial guess isn't "good" enough).

But in general, Newton's is preferred among numerical folks b/c it's fast + requires relatively few function evals. By comparison, Halley's method requires something like 2 times the number of function evaluations, in addition to the constraints on f, f', and f''.

In general, there's a family called Householder's methods that yields the general formula for starting from an n-th order Taylor approximation. Householder with n=1 is Newton, n=2 is Halley, and so forth.

edit: oh yeah, these are some of the more famous superlinear root-finders. Other nifty algorithms include the secant method, which has a convergence rate equal to the golden ratio (I thought that was cute when I derived it), or inverse quadratic interpolation (better than secant, worse than Newton, riskier than both). There's also the fixed-point method (what dullard has), which is the slowest of them all (linear convergence)...but there are cute ways of speeding it up.
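For the curious, here's what Halley's method looks like on this problem (a Python sketch; note the f'' term in the denominator):

```python
import math

def halley(f, df, d2f, x, tol=1e-12, maxit=20):
    """Halley's method: x <- x - 2 f f' / (2 f'^2 - f f'')."""
    for _ in range(maxit):
        fx, dfx, d2fx = f(x), df(x), d2f(x)
        step = 2 * fx * dfx / (2 * dfx**2 - fx * d2fx)
        x -= step
        if abs(step) < tol:
            break
    return x

# f(x) = e^-x - 1 + x/3, with f'(x) = 1/3 - e^-x and f''(x) = e^-x
root = halley(lambda x: math.exp(-x) - 1 + x / 3,
              lambda x: 1 / 3 - math.exp(-x),
              lambda x: math.exp(-x),
              2.0)
print(root)  # about 2.8214
```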
 
Originally posted by: eLiu
There are faster methods that take advantage of higher order terms of the Taylor Series expansion (Newton uses up to the linear term), such as the Halley Method, which takes the quadratic term as well.
Now I'm going to have to call you out for overkill here. If you really want to go to complex problems, yes you can do it and get a great answer in just a few iterations. But really, why do an iterative solution if you are starting with a Taylor Series? Why not explicitly solve for x if you are doing the Taylor Series expansion? Try this instead:

Guess x = 3. Use a Taylor series around x=3:
e^-x ~= e^(-3)*(1+3-x)

Thus
e^(-3)*(1+3-x) ~= 1-x/3

Solve for x:
x~=[(1+3)*e^-3-1]/[e^-3-1/3] = 2.82

No iteration required. Although you could iterate once and replace the 3s with 2.82 in the equation to get a very accurate explicit solution (the first 7 digits are correct).

With two terms in the Taylor series, it is even more accurate! Note: referring to my earlier post, this is an "extreme" measure.
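That arithmetic is short enough to check in a couple of lines (a sketch of the same one-shot linearization):

```python
import math

a = 3.0  # expansion point
# Linearize: e^-x ~= e^-a * (1 + a - x), then solve e^-a*(1+a-x) = 1 - x/3 for x
x = ((1 + a) * math.exp(-a) - 1) / (math.exp(-a) - 1 / 3)
print(x)  # about 2.8244

# One refinement: expand around the new estimate instead of 3
a = x
x = ((1 + a) * math.exp(-a) - 1) / (math.exp(-a) - 1 / 3)
print(x)  # about 2.8214
```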
 
Originally posted by: dullard
Originally posted by: eLiu
There are faster methods that take advantage of higher order terms of the Taylor Series expansion (Newton uses up to the linear term), such as the Halley Method, which takes the quadratic term as well.
Now I'm going to have to call you out for overkill here. If you really want to go to complex problems, yes you can do it and get a great answer in just a few iterations. But really, why do an iterative solution if you are starting with a Taylor Series? Why not explicitly solve for x if you are doing the Taylor Series expansion? Try this instead:

Guess x = 3. Use a Taylor series around x=3:
e^-x ~= e^(-3)*(1+3-x)

Thus
e^(-3)*(1+3-x) ~= 1-x/3

Solve for x:
x~=[(1+3)*e^-3-1]/[e^-3-1/3] = 2.82

No iteration required. Although you could iterate once and replace the 3s with 2.82 in the equation to get a very accurate explicit solution (the first 7 digits are correct).

With two terms in the Taylor's series, it is even more accurate! Note: referring to my earlier post, this is an "extreme" measure.

Well, like I said in my earlier post, Newton's method is one of the most popular (and overused, I might add) fixed-point methods. I pointed out the other methods because they're natural extensions of Newton's method...however their complexity (in function evaluations & constraints on f, f', f'', etc.) is prohibitive for real-world use. It's kind of one of those cases where the fancy general formula is nifty, but not useful.

And a reason for not solving the Taylor series explicitly is basically b/c we can write a 5-line program that will run Newton's method for any f & f' we throw at it (even if the situation doesn't call for Newton, lol). But solving an equation by hand every time can be annoying...especially if you (like me) make lots of careless mistakes 🙁

But yeah I definitely didn't mean to suggest the use of the more complicated methods. I've never seen an example of those being used in practice...ever, lol.
 