I am writing a data analysis program against a SQL Server database which is quite large — some 30 million rows of data.
I have gotten the program runtime down from 7 hours to 12 minutes by indexing the database.
I would like to reduce the runtime dramatically more (down to 1 minute if possible).
The options, as far as I understand them, are:
1. Multithread the program (CPU usage is currently around 50%)
2. Purchase 2x Opteron dual-core CPUs
3. Purchase a separate computer to act as the database server (right now the database and the program run on the same computer)
How do you analyze where the bottleneck is and decide what to optimize? I don't want to go out and spend more money on CPUs when it seems like the bottleneck is database access time rather than the computations, but I don't have much experience with this.
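One cheap way to answer this before buying hardware is to time the two phases separately: how long the program spends waiting on the database versus how long it spends computing. Below is a minimal sketch of that idea in Python — it uses an in-memory SQLite table as a stand-in for the real SQL Server database (the table name, query, and computation are all hypothetical placeholders for the actual workload):

```python
import sqlite3
import time

# Stand-in for the real SQL Server database: an in-memory SQLite table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE data (id INTEGER, value REAL)")
conn.executemany("INSERT INTO data VALUES (?, ?)",
                 [(i, i * 0.5) for i in range(10000)])

t0 = time.perf_counter()
rows = conn.execute("SELECT id, value FROM data").fetchall()  # database-access phase
t1 = time.perf_counter()
total = sum(v for _, v in rows)                               # computation phase
t2 = time.perf_counter()

db_time, cpu_time = t1 - t0, t2 - t1
print(f"db: {db_time:.4f}s  compute: {cpu_time:.4f}s")
# If db_time dominates, the bottleneck is database/disk access and faster
# CPUs won't help much; if cpu_time dominates, more cores or threads will.
```

The same split applies with any timer (e.g. Stopwatch in .NET) around the real queries: if the database phase dominates, option 3 (or better indexing/disk) is the place to spend, not CPUs.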
Any thoughts/suggestions/recommendations?
System specs are:
a. Athlon dual-core 4800+
b. 4 GB RAM
c. 2x WD Raptor 10,000 RPM SATA HDDs in RAID 0