
Zero to the zeroth power

lcoy

Junior Member
Someone told me 0^0 was undefined but I always thought that any number to the zeroth power was 1. So what does 0^0 equal?
 
it is undefined, much like 0 times infinity and 1 raised to the power of infinity, and so on. however, this article shows you what the value "should" be... although that's more fun with limits...
 
Nice site for arguing that it should be 1...


However, for a counter-argument, (just a simple one)
y = sin x / x is still undefined at x = 0, regardless of what the limit is. (The limit is 1.)
This is for the simple reason that you cannot divide by 0. It makes absolutely no sense.
We certainly don't define the value of sin x / x to be 1 when x = 0; and the limit in this case is a two-sided limit - a stronger argument, perhaps, than a one-sided limit.

Now, (x^12)/(x^12) is x^0 using a simple rule from basic algebra (subtract the exponents). However, this function is not defined when x = 0.
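The sin x / x point is easy to see numerically; a quick Python sketch (sample points chosen arbitrarily):

```python
import math

def f(x):
    # sin(x)/x: undefined at x = 0 even though the two-sided limit there is 1
    return math.sin(x) / x

# Approaching 0 from either side, the values tend to 1...
print(f(0.1))     # ~0.9983
print(f(-0.001))  # ~0.9999998
# ...but the function itself is still undefined at 0:
try:
    f(0)
except ZeroDivisionError:
    print("undefined at x = 0")
```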
 
Hi,

Originally posted by: lcoy
Someone told me 0^0 was undefined but I always thought that any number to the zeroth power was 1. So what does 0^0 equal?


But the same person probably told you that zero to any positive power is 0; and what they should have said was:
1. any non-zero number to the zeroth power is 1.
2. zero to any strictly positive power is 0


Leaving 0*0 undefined.


There are weak arguments for defining 0*0 as 1, because x*0 = 1 whether x is positive or negative (so long as it's not zero), so this preserves some sort of continuity.

By contrast 0*x is zero, for all strictly positive x, but infinite for all strictly negative x, so there is a discontinuity there anyway.

But these arguments are weak - they amount to a whole series of special-case pleadings.


The truth is deeper. 0*0 is defined as a limit. This means it is undefined in abstract, in isolation, but potentially defined if it represents the answer to some actual question where the variable x approaches zero.


I.e:
Lt {x -> 0} x*0 = 1
Lt {x -> 0, for positive x} 0*x = 0
Lt {x -> 0, for negative x} 0*x = infinity (or maybe undefined)
Lt {x -> 0, for positive x} x*x = 1 (but I can't prove this)
Lt {x -> 0, for negative x} x*x is definitely undefined
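For what it's worth, the first, second, and fourth of these limits are easy to watch numerically (Python below, which uses ** rather than * for power; sample points chosen arbitrarily):

```python
# Watch x**0, 0**x and x**x as x -> 0 through positive values
for x in [0.1, 0.01, 0.001, 1e-6]:
    print(f"x = {x}: x**0 = {x**0}, 0**x = {0**x}, x**x = {x**x:.6f}")
# x**0 is identically 1, 0**x is identically 0, and x**x creeps up toward 1
```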



Peter
 
Originally posted by: lcoy
Someone told me 0^0 was undefined but I always thought that any number to the zeroth power was 1. So what does 0^0 equal?
It is most convenient to define it to be 1. You can write polynomials nicely that way.
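This is the convention Python's ** operator adopts (0**0 == 1), which lets a naive polynomial evaluator work at x = 0 with no special case for the constant term; a small sketch:

```python
# With 0**0 == 1, sum(a_k * x**k) evaluates correctly even at x = 0.
def poly(coeffs, x):
    """Evaluate the polynomial sum of coeffs[k] * x**k."""
    return sum(a * x**k for k, a in enumerate(coeffs))

p = [5, 3, 2]        # 5 + 3x + 2x^2
print(poly(p, 0))    # 5  -- the constant term; relies on 0**0 == 1
print(poly(p, 2))    # 19 -- 5 + 6 + 8
```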
 
Originally posted by: pcy
Lt {x -> 0, for positive x} x*x = 1 (but I can't prove this)
d/dx (x ln x)=lnx+1<0 for small x, so x ln x is decreasing for small x, so tends to l in (-inf, 0] as x->0.
Consider a sequence x_(n+1)=x_n/e tending to 0 with y_n=x_n ln x_n. y_n tends to l also.
y_(n+1)=x_n ((ln x_n) -1)/e=(y_n-x_n)/e
(x_n,y_n)->(0,l)
(x_(n+1),y_(n+1))=f(x_n,y_n), f given above, f continuous.
f(0,l)=(0,l)
So l=(l-0)/e, so l=0.

So e^(x ln x)=x^x tends to e^0=1.
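A numerical illustration of the conclusion (not a substitute for the proof): since x^x = e^(x ln x) and x ln x -> 0, the values of x^x approach 1. In Python (sample points chosen arbitrarily):

```python
import math

# x**x = e**(x * ln x); as x -> 0+, x * ln(x) -> 0, so x**x -> e**0 = 1
for x in [0.1, 1e-3, 1e-6, 1e-9]:
    print(f"x = {x:g}: x*ln(x) = {x * math.log(x):.3e}, x**x = {x**x:.9f}")
```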
 
I think they're just trying to give a logical extension to define 0^0 based on pre-existing theory. The set theory analogy was interesting. As was the concept of extending the definition to the limit of x^x to preserve continuity. The examples you guys gave were a nice read, too.

It doesn't seem like it can be proven, so evaluating the arguments for defining 0^0=1 as you would a proof doesn't really make a whole lot of sense. What is probably a better point to argue is whether the extension would be appropriate in certain applied settings. Calculus limits generally make pretty decent approximations in practice. I wouldn't use it in algebra, but in analysis it shouldn't make a difference. Like using deleted residuals to get t-statistics.
 
Hi CSMR,


Originally posted by: CSMR
Originally posted by: pcy
Lt {x -> 0, for positive x} x*x = 1 (but I can't prove this)
d/dx (x ln x)=lnx+1<0 for small x, so x ln x is decreasing for small x, so tends to l in (-inf, 0] as x->0.
Consider a sequence x_(n+1)=x_n/e tending to 0 with y_n=x_n ln x_n. y_n tends to l also.
y_(n+1)=x_n ((ln x_n) -1)/e=(y_n-x_n)/e
(x_n,y_n)->(0,l)
(x_(n+1),y_(n+1))=f(x_n,y_n), f given above, f continuous.
f(0,l)=(0,l)
So l=(l-0)/e, so l=0.

So e^(x ln x)=x^x tends to e^0=1.


Hmmm.


I agree x ln x is important, because x*x = e*(x ln x)
And I agree that d/dx (x ln x) = 1 + ln x
With the result that d/dx x*x = x*x . (1 + ln x)

I looked at that to see how x*x behaved close to zero; and the fact that the derivative tends to minus infinity did discourage me from thinking that any totally satisfactory value for 0*0 exists in general.


However after that I lose your proof.
I also agree that 1+ln x is (much) < 0 for small x, so x ln x is decreasing for small x; and clearly x ln x is -ve for small +ve x

But I'm not sure what the log of a -ve number is, so the idea that
x ln x tends to l in (-inf, 0] as x->0.
worries me.

After that I'm totally lost because I don't know what your _ means.


For what it's worth, I am also trying to figure out why the derivation for d/dx x*x using the known formula for d/dx x*n is invalid, i.e:

d/dx x*x = x . x*(x-1) . d/dx x = x*x = rubbish
since this depends only on d/dx f(g(x)) = df/dg . dg/dx



I'm missing something here.



Peter
 
_ denotes a subscript. It's common notation. I find that speaking mathematics in text is easiest using LaTeX code. I haven't tried it (because I'm at work), but if you're having problems using the Generalized Power Rule to differentiate x^x, you may try using the definition of derivative....

(Meanwhile, I'll buy huge amounts of stock in Georgia Pacific....😀)
 
Yes subscripts make things messy in text.

d/dx x^x = d/dx e^(x ln x) = (d/dx (x ln x)) e^(x ln x) = (1 + ln x)(x^x)
For what it's worth.

d/dx f(x,x)=f1(x,x)+f2(x,x). Here f(x,y)=x^y. You are missing the second part for which you need to work out partial d/dy x^y. For that it's best to use logs.
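If it helps, the closed form d/dx x^x = (1 + ln x) x^x can be sanity-checked against a central finite difference (Python; step size h and test point chosen arbitrarily):

```python
import math

def d_xx(x):
    # closed form: d/dx x**x = (1 + ln x) * x**x
    return (1 + math.log(x)) * x**x

def d_numeric(x, h=1e-7):
    # central finite difference, as an independent sanity check
    return ((x + h)**(x + h) - (x - h)**(x - h)) / (2 * h)

x = 0.5
print(d_xx(x), d_numeric(x))  # the two should agree closely
```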

Now as for my proof:
x ln x is decreasing, and I have looked at its values y_n for a particular sequence x_n which tends to 0. I show the sequence tends to a limit which has to equal the limit of x ln x since x ln x is monotonic.

And I'm just working with positive x so no logs of negative numbers.
 
Hi,


All sorts of notation issues here... I'll work through it.


That's how I derived the correct value for d/dx x*x also


I tried the general definition of a derivative, i.e. Lt {delta -> 0} (f(x+delta) - f(x))/delta, and for x*x it's very messy.

You are right: the erroneous derivation treats x*x as if it were f(g(x)), when it's actually f(x,x).


I clearly mis-understood this then:

d/dx (x ln x)=lnx+1<0 for small x, so x ln x is decreasing for small x, so tends to l in (-inf, 0] as x->0.

as I read this, x is in the interval (-inf, 0], when you presumably meant x ln x in (-inf, 0]




Peter

 
Hi,


Yes... as soon as I re-wrote that proof in familiar notation it was clear - valid and elegant. Thanks.


And treating x*x as a*b and differentiating it w.r.t. a and b, with both defined as x, also gave the correct derivative.


So Lt { x -> 0 for positive x} x*x is proven to be 1


I still think that 0*0 in a vacuum is undefined though, because we don't, in general, know how the two variables are linked.


Peter


 
Hi,

^ is the symbol for "boolean and" in propositional calculus, so it's not a good choice for power.

By contrast, * is the symbol used for "raised to the power of" in the computer languages I use.


If we are restricted to this one font, any choice will upset someone.



Peter
 
?? and what computer languages would those be?
It's been a long time since I did much programming, but I'm pretty sure that * means multiply in:
Ada, Pascal, Fortran, Basic, C, and in Mathematica's programming language.
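For what it's worth, Python splits the roles yet a third way - * multiplies, ** exponentiates, and ^ is bitwise XOR - and it takes 0^0 (in its notation, 0**0) to be 1:

```python
import math

# Python: * is multiplication, ** is power, ^ is bitwise XOR
print(2 * 3)   # 6  -- multiplication
print(2 ** 3)  # 8  -- exponentiation
print(2 ^ 3)   # 1  -- bitwise XOR, not power!

# And for the thread's actual question:
print(0 ** 0)          # 1   (integer convention)
print(math.pow(0, 0))  # 1.0 (C's pow() likewise returns 1)
```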
 
Originally posted by: inspire
It doesn't seem like it can be proven, so evaluating the arguments for defining 0^0=1 as you would a proof doesn't really make a whole lot of sense. What is probably a better point to argue is whether the extension would be appropriate in certain applied settings. Calculus limits generally make pretty decent approximations in practice. I wouldn't use it in algebra, but in analysis it shouldn't make a difference. Like using deleted residuals to get t-statistics.

This makes a lot of sense, and was what I was thinking of while I read the other posts. It really depends on where you're using it:

"Exponentiation: x^0 = 1, except that the case x = 0 may be left undefined in some contexts. For all positive real x, 0^x = 0."

Further, 0^x for any negative real x is undefined outright (it amounts to division by zero); taken as a limit, it is positive infinity for any even integer exponent, and undefined for any odd one.



 
In simple words, the expression 0^0 is an example of an unusual kind of mathematics. The value x^y can be examined by watching what happens as each variable is changed independently. Keeping x fixed at 0 and shrinking y toward 0 through the positive reals, the value of the expression remains 0: the square root of 0 is 0, and the cube root and fourth root are all 0.

Likewise, any x^0 is 1. Shrinking x to an infinitesimal value should continue to yield the number 1.

So, 0^0 is an expression whose value depends on how you calculate it.

Rather like the expression 4 x 3 + 2. Depending on whether multiplication or addition is performed first, the results are different. By convention, we agree that multiplication precedes addition, so we are really saying (4 x 3) + 2.

Likewise, we have to agree first on the variable which takes precedence in calculating an exponent.

Does this make sense?
 
Originally posted by: bwanaaa
In simple words, the expression 0^0 is an example of an unusual kind of mathematics. The value x^y can be examined by watching what happens as each variable is changed independently. Keeping x fixed at 0 and shrinking y toward 0 through the positive reals, the value of the expression remains 0: the square root of 0 is 0, and the cube root and fourth root are all 0.

Likewise, any x^0 is 1. Shrinking x to an infinitesimal value should continue to yield the number 1.

So, 0^0 is an expression whose value depends on how you calculate it.

Rather like the expression 4 x 3 + 2. Depending on whether multiplication or addition is performed first, the results are different. By convention, we agree that multiplication precedes addition, so we are really saying (4 x 3) + 2.

Likewise, we have to agree first on the variable which takes precedence in calculating an exponent.

Does this make sense?

Said mathematically, 0^0 is an expression whose value is undefined. We can't define it mathematically in any one way because any definition would be inconsistent with the current axioms. The convention you're talking about - the order of operations - is more of a notational convention than a strict axiom, since multiplication of real numbers is simply nested addition.

The 'shrinking' you allude to is, conceptually speaking, equivalent to taking a limit. So, what you've been contemplating seems to be the difference between:

Lim_{x->0} x^0

and

Lim_{y->0} 0^y

in the context of

Lim_{x->0, y->0} x^y

But, these are limits; they don't necessarily imply that the function can be defined at the limit point. However, the existence of a limit can lend itself to approximation via removable discontinuities.
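Numerically, the path-dependence of x^y near (0, 0) is easy to see (Python, with ** for power; sample points chosen arbitrarily):

```python
# The two one-variable limits above give different answers:
xs = [0.1, 0.01, 0.001]
print([x ** 0 for x in xs])  # path along y = 0: always 1
print([0 ** y for y in xs])  # path along x = 0: always 0
# So the joint limit of x**y as (x, y) -> (0, 0) depends on the path taken;
# along y = x, for instance, it tends to 1:
print([x ** x for x in xs])
```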

Thus, the problem is slightly more complicated than simply defining an 'order of precedence' when taking the limit. We can't mathematically define something based on its limit.
 
Originally posted by: inspire
Said mathematically, 0^0 is an expression whose value is undefined. We can't define it mathematically in any one way because any definition would be inconsistent with the current axioms.
What axiom is 0^0=1 inconsistent with?
 