
sum of reciprocals

toughwimp11

Senior member
so i'm trying to create a program in DrScheme that can basically add the reciprocals of the first n natural numbers. so if n=3 then it should return 1/1 + 1/2 + 1/3 ≈ 1.833333.
now i'm supposed to do this program iteratively, and i've figured out how to do that, but it runs VERY slow. when n=10000000 it takes a couple of minutes. it should run in under 10 seconds.

a hint that was given was: 1/i is approximately the integral of 1/x from i-1/2 to i+1/2.
but i don't really know how to apply that

can anyone help?
 
It looks like they expect you to estimate the sum rather than compute it directly. Plugging that integral into the sum gives a telescoping expression: everything but the first and last terms cancels, so you get log(n+1/2) - log(1/2) for the whole thing.

Asymptotic methods can be used to improve that estimate to log(n+1/2)+gamma, where the latter is the Euler gamma constant.
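To see how these two estimates compare in practice, here's a quick sketch (Python just for illustration, since the math is language-agnostic; `EULER_GAMMA` is a hard-coded value of Euler's constant, not something computed in the thread):

```python
import math

# Hard-coded value of the Euler-Mascheroni constant gamma.
# (Note: math.gamma() is the Gamma *function*, not this constant.)
EULER_GAMMA = 0.5772156649015329

def harmonic_direct(n):
    """Direct sum 1/1 + 1/2 + ... + 1/n (slow for huge n)."""
    total = 0.0
    for i in range(1, n + 1):
        total += 1.0 / i
    return total

def harmonic_telescoped(n):
    """Telescoped integral estimate: log(n + 1/2) - log(1/2)."""
    return math.log(n + 0.5) - math.log(0.5)

def harmonic_improved(n):
    """Improved estimate: log(n + 1/2) + gamma."""
    return math.log(n + 0.5) + EULER_GAMMA
```

For n = 1000 the telescoped estimate is off by about 0.116 (it converges to log(2) - gamma), while the gamma-corrected one is off by only a few parts in 10^8.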
 
The sum of the first n reciprocals of the natural numbers is called a harmonic number: H_n = sum(x=1 to n of 1/x). These grow logarithmically; the difference between H_n and log(n) converges to a constant.

Now, if you remember anything about calculus, you should know that an integral gives the area under a curve, like a summation, except that the pieces are infinitesimally thin. integral(1/x dx) = ln(x), and the interval from (i-1/2) to (i+1/2) is 1 unit wide, like one step of the summation; hence the approximation. That's your hint.

ln(i+1/2) - ln(i-1/2) = ln((i+1/2)/(i-1/2)), but also remember that ln((i-1)+1/2) = ln(i-1/2); in other words, the bounds of consecutive terms overlap. In the end you should get a fairly simple expression by applying basic facts about logarithms. Maybe write out the terms for i, i-1, i-2 on paper and see what happens.

NB: when I tested the approximation, it is about 0.1 off from the actual computation. But it only takes me about 2 seconds to compute the summation, and that's without compiling it. So maybe you are just doing something wrong there? Are you sure you wrote it iteratively? You must use TAIL recursion: no computation can happen after the recursive call returns; the call must directly return the result.

(define (f x r)
  (if (= x 0)
      r
      (f (- x 1) (+ r (* 2.0 x)))))  ; tail call: nothing left to do after f returns
(print (f 10000000 0))
(newline)

completes in less than a second when I load it into MzScheme (DrScheme's compiler). It adds all the numbers from 1 to x, multiplied by 2.0. The answer to your problem should be very similar. Notice how the recursive call to f is the last thing done in the body of the function; the result is accumulated in the parameter r.
 
Have we tuned into the Homework Channel again? 🙂

Iteratively:

double result = 0;
for (int i = 1; i <= N; i++)
    result += 1.0 / (double)i;

Hardly "highly technical", or is it?
 
Oops, I wrote down the wrong n. The RIGHT n = 1000000000. I did tail recursion and it still takes a really long time. I think that CP5670 is on to something, but the answer cannot have an error that exceeds 1e-12.
 
You should get pretty good accuracy with the second formula I posted, at least for large n, since the error in that decays like 1/n. (note that the logs are all base e)

If you still need better accuracy than that, there are better approximations available based on the gamma function's asymptotic series. The first few terms are log(n) + gamma + 1/(2n) - 1/(12n^2) + 1/(120n^4), which is more than enough.
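That series is easy to check numerically; a small sketch (Python for illustration; the gamma value is hard-coded):

```python
import math

EULER_GAMMA = 0.5772156649015329  # hard-coded Euler's constant

def harmonic_asymptotic(n):
    """First terms of the asymptotic series for H_n:
    log(n) + gamma + 1/(2n) - 1/(12n^2) + 1/(120n^4).
    The error decays like 1/n^6."""
    return (math.log(n) + EULER_GAMMA + 1.0 / (2.0 * n)
            - 1.0 / (12.0 * n**2) + 1.0 / (120.0 * n**4))

def harmonic_direct(n):
    """Reference: direct summation."""
    return sum(1.0 / i for i in range(1, n + 1))
```

Already at n = 100 the two agree to better than 1e-12, which is within the tolerance mentioned earlier in the thread.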
 
Is there any way to find what the error actually is (I assume not, since if we could calculate the true value we wouldn't need an approximation) or do we just use gamma to correct for it until we are reasonably certain the error shouldn't be more than the correction for n is sufficiently large?
 
The error decays rapidly as i increases, so use the approximation once i > 1000000 or some other large number.
For smaller values of i, sum conventionally.
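That hybrid approach might look like this (a Python sketch; the default cutoff and the hard-coded gamma value are my choices, not from the thread):

```python
import math

EULER_GAMMA = 0.5772156649015329  # hard-coded Euler's constant

def h_estimate(m):
    """Asymptotic estimate of H_m; accurate for large m."""
    return (math.log(m) + EULER_GAMMA + 1.0 / (2.0 * m)
            - 1.0 / (12.0 * m * m))

def harmonic_hybrid(n, cutoff=10**6):
    """Sum 1/i directly up to cutoff, then add the estimated
    tail H_n - H_cutoff from the asymptotic formula."""
    if n <= cutoff:
        return sum(1.0 / i for i in range(1, n + 1))
    head = sum(1.0 / i for i in range(1, cutoff + 1))
    return head + (h_estimate(n) - h_estimate(cutoff))
```

The tail is a difference of two estimates, so the constant-sized errors cancel and only the rapidly decaying terms remain.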
 
Originally posted by: Peter
Have we tuned into the Homework Channel again? 🙂

Iteratively:

double result = 0;
for (int i = 1; i <= N; i++)
    result += 1.0 / (double)i;

Hardly "highly technical", or is it?


What's wrong with this approach?

 
Originally posted by: Special K
Originally posted by: Peter
Have we tuned into the Homework Channel again? 🙂

Iteratively:

double result = 0;
for (int i = 1; i <= N; i++)
    result += 1.0 / (double)i;

Hardly "highly technical", or is it?


What's wrong with this approach?

it works too slow in DrScheme (like 5 minutes)
 
Originally posted by: toughwimp11
a hint that was given was: 1/i is approximately the integral of 1/x from i-1/2 to i+1/2.
but i dont really know how to apply that

Doesn't this hint make the calculation pretty trivial? If:

1/1 + 1/2 + 1/3 ≈
Int(1/x from 0.5 to 1.5) + Int(1/x from 1.5 to 2.5) + Int(1/x from 2.5 to 3.5) =
Int(1/x from 0.5 to 3.5)

This you can solve directly and make a simple one-line program to compute the integral.
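As a one-liner it would be something like this (Python for illustration):

```python
import math

def harmonic_integral_estimate(n):
    """The hint's estimate: integral of 1/x from 1/2 to n + 1/2,
    i.e. ln(n + 1/2) - ln(1/2)."""
    return math.log(n + 0.5) - math.log(0.5)
```

For n = 3 this gives ln(7) ≈ 1.946 versus the true 1.833..., so it captures the growth but carries a constant-sized error.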
 
another problem i've run across is that i'm not exactly sure what the actual value should be. doing it iteratively in drscheme gives me one number, while matlab and java give me another number. and all these numbers are different from the log(x) + gamma + ... number. (btw: these numbers differ at about the 10th or 11th digit, which is very close.)
 
Originally posted by: toughwimp11
no, that's not what that means. in fact, i'm kinda confused as to what it means, but it doesn't mean that

Let's try a couple examples:
1/1 = 1
Integral of (1/x from 0.5 to 1.5) = ln(1.5)-ln(0.5) = 1.10

1/2 = 0.5
Integral of (1/x from 1.5 to 2.5) = ln(2.5)-ln(1.5) = 0.51


So the approximation is ok if you're willing to take that level of error. So why do you say the hint doesn't mean what I think it means?

 
Originally posted by: toughwimp11
because the error is too great, the gamma thing works quite nicely until about the 11th digit

What's your error bound? It seems that if they give you that hint, for n = something huge, you can just stick with that estimation.
 
Make sure you're using at least double precision for everything. As I said, the error can be proved to decay like 1/n (or 1/n^6 for the thing in my second post), so if you're getting a bigger error than that then it's something wrong on the computational side and not with the estimate itself.
 
Originally posted by: toughwimp11
another problem i've run across is that i'm not exactly sure what the actual value should be. doing it iteratively in drscheme gives me one number, while matlab and java give me another number. and all these numbers are different from the log(x) + gamma + ... number. (btw: these numbers differ at about the 10th or 11th digit, which is very close.)
As CP5670 mentioned, this sounds like rounding error due to the use of different variable types in different languages. I'm not familiar with DrScheme, but I know that getting the precision you are looking for is going to be difficult or impossible even with double precision, which is pretty standard (and is the default for MATLAB at least). The roundoff error will likely cause even the exact solution to deviate from the true solution by more than your allowable error if double precision is used. I don't have my computer with me right now, so I can't test it to make sure, but I know there are relatively straightforward methods for combining two doubles to obtain quad precision. Again, I don't have the papers on this computer that would help, but I'll have them tomorrow. 😛

The best thing to do in this case for now is to try it for some simple cases with relatively few n and make sure that your computation is correct. Then do it for larger n (say, n=10^3, 10^4, 10^5) and compare it with a lookup table.

This problem has been examined in detail for quite some time, so you can always poke around online for help. I'll give you a hint: the series is called a harmonic series and you're looking for partial sums.
 
There is no exactness in numerical math (i.e., math using finite-precision numbers).

Creating and arranging algorithms to minimize the inherent error is an entire subject in CS (numerical analysis). Fun it is not, but very useful.
 