- Jul 23, 2006
I've been playing with some C++, and I was working on a routine that calculates all the primes below some number. I've been doing this using a Sieve of Eratosthenes: I declare an array of integers called 'numberPool' that contains all the numbers that might be prime, and if a number is deemed non-prime (because it's a multiple of a prime), that number's value is set to zero.
The routine I wrote works fine at 10,000 and 100,000, correctly returning all the primes below those limits. However, when I get up to around 500,000 or 1,000,000, the program crashes.
I've narrowed the problem down to the line:
int numberPool[ceiling + 1]; // ceiling is an arg passed to the function, in this case 1,000,000.
However, I found that the problem went away when I used dynamic memory allocation instead, replacing the above line with:
int *numberPool;
numberPool = new int[ceiling + 1];
This way works fine for larger values such as one million.
My question is: why does dynamic memory allocation not crash the program where automatic (stack) allocation does? As far as I knew, the two should accomplish the same thing.
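For reference, here is a minimal sketch of the sieve as described, using `std::vector` so the number pool lives on the heap rather than the stack. The function name `primesBelow` and the exact marking scheme are my own assumptions about the routine, not the original code:

```cpp
#include <vector>

// Sieve of Eratosthenes sketch: numberPool[i] stays equal to i while i is
// still a prime candidate, and is set to 0 once i is ruled out as a
// multiple of a smaller prime. Returns the primes up to `ceiling`.
std::vector<int> primesBelow(int ceiling) {
    // std::vector allocates its storage on the heap, so large ceilings
    // (e.g. 1,000,000) don't overflow the stack the way a local array can.
    std::vector<int> numberPool(ceiling + 1);
    for (int i = 0; i <= ceiling; ++i) numberPool[i] = i;
    numberPool[0] = 0;
    if (ceiling >= 1) numberPool[1] = 0;  // 0 and 1 are not prime

    for (int i = 2; i * i <= ceiling; ++i) {
        if (numberPool[i] == 0) continue;          // already struck out
        for (int j = i * i; j <= ceiling; j += i)  // strike multiples of i
            numberPool[j] = 0;
    }

    // Collect the survivors.
    std::vector<int> primes;
    for (int i = 2; i <= ceiling; ++i)
        if (numberPool[i] != 0) primes.push_back(i);
    return primes;
}
```

Using `std::vector` also avoids the manual `new[]`/`delete[]` pairing, since the vector frees its storage automatically when it goes out of scope.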
