- Sep 7, 2001
Hello.
I'm a hobbyist programmer and have stumbled across a curious problem that I hope someone can give me some insight on.
I wrote an app in C++ using g++ on my OS X 10.6 MacBook. It's command-line only. It reads some numbers from a small TXT file, does a lot of number crunching in two threads (one per core), and uses a lot of memory (dozens to hundreds of MB, depending on the input), to which it makes many frequent short writes in a very random pattern. The program doesn't touch the hard drive or network until it dumps out a TIFF file at the very end.
I've tweaked it, and it performs well on my old 2.16 GHz Core 2 Duo MacBook.
I compiled the same code, with the same g++ options, on a newer HP laptop running 64-bit Fedora 17. It has a faster Core 2 Duo CPU and a larger quantity of faster RAM than the MacBook, yet it executes at about 1/4 the speed of my slower MacBook. (In both cases nothing is running in the background to affect performance.)
I compile with: "g++ -o -ltiff -lm -lpthreads program.o program.cc -march=core2 -O3"
Then execute with: ./program.o input.txt
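In case I'm mangling the command from memory, what I intend is the conventional ordering: the output name right after -o, libraries after the source/object files that use them, and (as I understand it) -lpthread rather than -lpthreads on Linux:

```shell
g++ -O3 -march=core2 -o program program.cc -ltiff -lm -lpthread
./program input.txt
```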
I've tried a number of variations on the Linux machine, such as dropping -march=core2 and -O3 (or trying -O1 and -O2). These changed the run time on the Linux machine by up to 20%, but in the best case it still runs at 1/4 the speed of the same code on the slower MacBook. In one test case the MacBook completed the run in 1 min 19 s while the Linux machine took 6 min (same source code, same parameter file).
The Linux system is a 2.4 GHz Core 2 Duo with 1066 MHz FSB & RAM; the MacBook is a 2.16 GHz Core 2 Duo with 667 MHz FSB & RAM.
I know clock speed alone doesn't say much, but I would expect at least comparable performance, so something seems wrong for the faster system to perform so much more slowly. I've tried both Fedora 17 and Ubuntu, with similarly slow results.
Any thoughts or suggestions? I'm not a Linux guru so there may be some factor I overlooked or didn't set optimally.
Thanks in advance!
-William Milberry
