Hello Anandtech members:biggrin:
I am an engineering student at Purdue University, currently working on a research project on the fatigue behavior of loudspeakers. I am interested in building a Linux workstation that will run the CalculiX FEA software to solve a fully coupled system model (acoustic, mechanical, thermal, electromagnetic). The simulations will be compared against the collected experimental data.
In addition, these models will be embedded in a multi-objective genetic algorithm to search for the global optimum of a particular system.
Being a full-time student, I do not have a large budget; we'll see how much money I can scrape together. I recently put the vast majority of my loudspeaker drivers up for sale to finance this project. I should be able to manage $1500-2000, MAYBE $4000.
I am aware that FEA software (e.g., CalculiX) is VERY memory intensive. If the system's RAM can accommodate the entire sparse matrix, it can be solved in-core. However, if RAM is insufficient, the sparse matrix is written to a file and must be streamed through the disk I/O subsystem (an out-of-core solve). I believe such a workload is memory-bandwidth limited rather than CPU limited.
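To get a feel for the in-core vs. out-of-core threshold, here is a rough back-of-envelope sketch. The DOF count, nonzeros per row, and fill-in factor below are illustrative assumptions, not CalculiX-specific figures; actual fill-in depends heavily on the mesh and the solver's ordering.

```python
# Rough, hypothetical memory estimate for an in-core sparse direct solve.
# All numbers here are illustrative assumptions, not CalculiX internals.

def factor_memory_gb(n_dof, avg_nonzeros_per_row, fill_in_factor=20.0):
    """Estimate memory (GB) to hold the factorized matrix in-core.

    n_dof: degrees of freedom in the model
    avg_nonzeros_per_row: nonzeros per row of the assembled matrix
    fill_in_factor: assumed growth of the nonzero count during factorization
    """
    bytes_per_entry = 8  # double precision
    nnz_after_fill = n_dof * avg_nonzeros_per_row * fill_in_factor
    return nnz_after_fill * bytes_per_entry / 1e9

# Example: a hypothetical 2M-DOF model with ~60 nonzeros/row assembled
print(round(factor_memory_gb(2_000_000, 60), 1))  # -> 19.2 (GB)
```

If an estimate like this lands near or above installed RAM, the solve spills out-of-core and disk bandwidth dominates.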
I have found very little information regarding specific hardware recommendations for running CalculiX. I expect this is because model complexity and the desired runtime strongly influence the hardware requirements.
I've spoken with Dr. Crossley, a faculty member in Purdue's Aerospace Engineering department who teaches a graduate-level course on Multidisciplinary Design Optimization. He indicated that very advanced models may take a significant amount of time (i.e., weeks) to solve.
If the system simply has to be solved once, processing time will be a minor inconvenience. However, if the system has to be solved 1,000,000 times (initial population = 1000, number of generations = 1000), advanced models may quickly approach impracticality. I would like to avoid this, if at all possible.
Since a genetic algorithm is involved, minimizing the per-solve runtime will obviously be essential.
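The arithmetic behind that concern can be sketched quickly. The 10-second per-solve time below is a placeholder assumption; the real figure would have to be measured on a representative model.

```python
# Back-of-envelope total runtime for the GA described above.
# seconds_per_solve is a placeholder; benchmark it on a real model.

def total_runtime_days(population, generations, seconds_per_solve):
    """Wall-clock days for population * generations sequential FEA solves."""
    total_solves = population * generations
    return total_solves * seconds_per_solve / 86400.0  # seconds per day

# 1000 individuals x 1000 generations at an assumed 10 s per solve:
print(round(total_runtime_days(1000, 1000, 10.0), 1))  # -> 115.7 (days)
```

Even at a very modest 10 s per solve, a million evaluations is months of sequential compute, which is why per-solve time (and parallelism across the population) matters so much here.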
I am not sure whether 36 GB or 72 GB is the appropriate amount of memory for the aforementioned application.
Any thoughts?
Best Regards,
Thadman:biggrin:
