Intel's LCC on HEDT Should Be Dead


tamz_msc

Diamond Member
Jan 5, 2017
I understand his point. In many use cases a $1000 price difference for the CPU simply doesn't matter, and realistically companies buy finished servers, not CPUs. So we need to see that the actual servers are cheaper. But back to his main point: $1000 is nothing once you include the costs of the full server, the required infrastructure, employees, and software.
It's one thing to argue that, since software costs disproportionately more than hardware, it is reasonable to assume one might agree to spend a little extra on hardware that is proven to deliver higher performance.

It is an entirely different thing to use that fact to argue that the higher-performing hardware would in turn allow a linear increase in productivity, as reflected in the numbers he cooks up. That muddles the fact that CFD engineers are paid salaries, not wages, and that they aren't operating their tools (hardware and software) as if working on an assembly line.

dullard

Elite Member
May 21, 2001
It's one thing to argue that, since software costs disproportionately more than hardware, it is reasonable to assume one might agree to spend a little extra on hardware that is proven to deliver higher performance.

It is an entirely different thing to use that fact to argue that the higher-performing hardware would in turn allow a linear increase in productivity, as reflected in the numbers he cooks up. That muddles the fact that CFD engineers are paid salaries, not wages, and that they aren't operating their tools (hardware and software) as if working on an assembly line.
Your first paragraph is one of my major points. The software, engineer time, and computer memory are where the money goes. The CPU is so minimal a cost that it usually isn't even a blip in the decision. The fact that CPU cost is nearly meaningless on HEDT and servers is an important lesson that few people understand. If we want AMD to thrive, we want them to value their processors properly.
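A back-of-envelope sketch of that point (all figures below are round numbers I assumed for illustration, not quotes from any vendor or the thread):

```python
# Illustrative only: CPU price as a share of the total cost of owning
# and operating an engineering workstation. Every figure is an assumed
# round number, not real pricing data.

def cpu_cost_share(cpu=3000, rest_of_system=7000, software_per_year=20000,
                   salary_per_year=120000, years=3):
    """Fraction of multi-year total cost attributable to the CPU alone."""
    total = cpu + rest_of_system + years * (software_per_year + salary_per_year)
    return cpu / total

share = cpu_cost_share()
print(f"CPU is {share:.1%} of the 3-year total")  # well under 1%
```

Under these assumptions the CPU is roughly 0.7% of the three-year total, so even doubling its price barely moves the needle.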

I won't even bother going into your link, where the EPYC machine had double the memory, double the memory channels, more cores, and a faster turbo than the Xeon machine. If you aren't going to compare like for like, why bother? At least they did one thing right: turning off hyperthreading, which often destroys CFD performance.

Having done CFD for about a decade and having worked with many CFD engineers, I can say we are far closer to working on an assembly line than you describe. CFD isn't so much a science as an art. The solvers tend to home in on local minima/maxima rather than the global ones, especially at the start of a calculation. If you aren't there watching and guiding the solution as it goes, you most often get diverging results that lead to meaningless answers (some number goes to infinity and the whole calculation crashes). The second most common outcome is a collection of different local minima/maxima with no clear answer as to which is the right one. So the engineer babysits the calculation intensely to guide it toward the global minimum/maximum. Yes, you can go to a meeting while it crunches away. But I personally don't measure worker output by how many meetings were accomplished.
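A toy illustration of that babysitting (not real CFD — just a made-up fixed-point iteration with an under-relaxation knob of the kind real solvers expose; the function and factors are invented for the demo):

```python
# Toy demo: a fixed-point iteration that blows up when run naively but
# converges once under-relaxed -- the sort of parameter an engineer
# tweaks mid-run to drag a diverging solve back to an answer.
# g, x0, alpha, and steps are all made-up demo values.

def iterate(g, x0, alpha=1.0, steps=20):
    """Under-relaxed fixed-point iteration: x <- (1-alpha)*x + alpha*g(x)."""
    x = x0
    for _ in range(steps):
        x = (1 - alpha) * x + alpha * g(x)
    return x

g = lambda x: 3.0 - 2.0 * x       # fixed point at x = 1, but |g'| = 2 > 1

naive = iterate(g, x0=0.5, alpha=1.0)     # diverges: error doubles each step
relaxed = iterate(g, x0=0.5, alpha=0.4)   # converges to ~1.0
print(naive, relaxed)
```

With full steps (alpha = 1.0) the error doubles every iteration and "some number goes to infinity"; with alpha = 0.4 the effective map contracts and the iterate settles on the fixed point.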

StefanR5R

Elite Member
Dec 10, 2016
Intel just showcased a 28-core CPU
So while the enthusiasts are still arguing that LCC on HEDT should be dead, signs are that the recent separation between the server socket and the workstation socket may itself be dead within a few months, through abandonment of the latter. (Possible socket re-unification aside, we can rely on Intel to keep carefully segmenting the market between HEDT on one hand and WS/server on the other.)

[Edit: On the other hand, considering Intel's track record WRT HEDT product lines, they might keep the existing LGA 2066 lines and add non-ECC LGA 3647 as yet another line...]

tamz_msc

Diamond Member
Jan 5, 2017
Having done CFD for about a decade and having worked with many CFD engineers, I can say we are far closer to working on an assembly line than you describe. CFD isn't so much a science as an art. The solvers tend to home in on local minima/maxima rather than the global ones, especially at the start of a calculation. If you aren't there watching and guiding the solution as it goes, you most often get diverging results that lead to meaningless answers (some number goes to infinity and the whole calculation crashes). The second most common outcome is a collection of different local minima/maxima with no clear answer as to which is the right one. So the engineer babysits the calculation intensely to guide it toward the global minimum/maximum. Yes, you can go to a meeting while it crunches away. But I personally don't measure worker output by how many meetings were accomplished.
That is true for a number of applications, not just CFD; ML is another example. But the situation you describe exposes a pitfall in your argument: engineers spend more time figuring out what the results of the computations mean than simply running computations in the hope of getting an answer. They home in on the desired solution by tweaking parameters, fine-tuning initial conditions, checking boundary values, and so on, which means someone with experience and insight can choose a set of values that arrives at the result in much less time than other sets would. The actual wall time your hardware spends crunching numbers is therefore only part of the overall engineering man-hours spent on a given problem, and whether a 30% faster piece of hardware matters depends on how much time it actually saves in proportion to the total time spent on the problem.
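That proportionality argument can be put in Amdahl's-law form. The 20% compute fraction below is purely an assumed figure for illustration:

```python
# Amdahl-style sketch: only the fraction of project time spent actually
# crunching numbers benefits from faster hardware. The 0.2 compute
# fraction and 1.3x hardware speedup are assumed illustration values.

def overall_speedup(compute_fraction, hw_speedup):
    """Overall project speedup when only compute_fraction of the time scales."""
    return 1.0 / ((1 - compute_fraction) + compute_fraction / hw_speedup)

# Solver wall time = 20% of total engineering time, hardware 30% faster:
print(f"{overall_speedup(0.2, 1.3):.3f}x overall")  # ~1.048x, not 1.3x
```

So under these assumptions a 30% faster CPU shaves only about 5% off the whole effort, which is the sense in which the wall-clock gain can get lost in the engineering man-hours.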

dullard

Elite Member
May 21, 2001
That is true for a number of applications, not just CFD; ML is another example. But the situation you describe exposes a pitfall in your argument: engineers spend more time figuring out what the results of the computations mean than simply running computations in the hope of getting an answer. They home in on the desired solution by tweaking parameters, fine-tuning initial conditions, checking boundary values, and so on, which means someone with experience and insight can choose a set of values that arrives at the result in much less time than other sets would. The actual wall time your hardware spends crunching numbers is therefore only part of the overall engineering man-hours spent on a given problem, and whether a 30% faster piece of hardware matters depends on how much time it actually saves in proportion to the total time spent on the problem.
In my experience, close to 80% of your productive time is spent watching and waiting for results. Often you just stop at the next iteration, turn off a problematic equation (or a set of problematic cells), restart the calculation, watch whether it converges past the pain point, turn the equation back on, and rinse and repeat. Once it is stable, you can leave it overnight or over the weekend and cross your fingers. True, it's not 100%. But it isn't set-and-forget either.