Intel's LCC on HEDT Should Be Dead


moonbogg

Lifer
Jan 8, 2011
10,635
3,095
136
Intel has been sandbagging on core count for years. They got to hex-core all the way back in 2010, and mainstream sockets got quad cores back in 2009. They should have moved on years ago. I bet they had a prototype hex-core mainstream-socket Haswell (or at least had it on a drawing board), but AMD just wasn't competitive at the time.

And 14nm has shown how they've also been throwing away gains from each node. Tick-tock looks wasteful in hindsight, although I know there was always that race toward lower- and lower-powered chips, so the node shrinks provided the biggest gains there. Still, I can't help but view Intel's current problems in an odd light. It seems they skipped past so much refinement they could have done, and money they could have extracted from each node, but they just threw it all away in their race to the bottom, where they now seem destined to crash and burn (seriously).
AMD has a cheaper, more efficient way to make CPUs with more than 4 cores. It wasn't perfect at first, but it's improving with each release. Pretty soon those latency issues will be forgotten, and the clock speed increases combined with lots of healthy, cost-effective cores will completely overshadow any initial issues with their new strategy. I bet Intel almost wishes they could just go back a few nodes, start refinement back then, and extract more revenue from each node. They treated each node like it was disposable, extracting the most benefit with a couple of sloppy squeezes and then tossing it in the trash like a batch of partially drained lemons. They could have extracted more along the way and treated the shrinks like the finite resource they are, rather than foolishly filling their cup and tossing the leftovers.
Intel tried their best to gank us with another $600 low-core-count HEDT chip in the 7820X, but many knew better. Things were different this time, and they will never be the same again. Also, if anyone got truly scammed this round, it was the i7-7800X owners: $400 for another HEDT 6-core, but this time you get thermal paste, crappy temps, and congratulations, 'cause you've also been mesh-gimped like a mother to the point where no overclock can save you.
 

ehume

Golden Member
Nov 6, 2009
1,511
73
91
I think 'scammed' may be incorrect, although I remember my father's comment about us being "on the wrong end of a business formula."

I think instead the right word for 7820X owners is 'disappointed.'
 

SPBHM

Diamond Member
Sep 12, 2012
5,056
409
126
The reviews were not all that positive; it was released after Ryzen, so it looked overpriced against $3xx CPUs at launch, and it was slower for gaming than Broadwell-E in many tests...
 

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
25,627
14,618
136
I just checked again. For motherboard/CPU/memory/AIO cooling/SSD, the 7980XE is $3050; for the 1950X it's $2033. If I just pick motherboard/CPU, it's like $2200 vs $1300, or almost 100% more, other components being the same.

So for 90% more, you get what, 30% more performance? Yes, I would say the socket 2066 platform is on its deathbed.
 

jpiniero

Lifer
Oct 1, 2010
14,675
5,300
136
I just checked again. For motherboard/CPU/memory/AIO cooling/SSD, the 7980XE is $3050; for the 1950X it's $2033. If I just pick motherboard/CPU, it's like $2200 vs $1300, or almost 100% more, other components being the same.

So for 90% more, you get what, 30% more performance? Yes, I would say the socket 2066 platform is on its deathbed.

That's the thing about the e-penis crowd... they don't care about value.
 

WingZero30

Member
May 1, 2017
29
9
36
Engadget did an interview with Gregory Bryant, who is general manager of Intel's Client Computing Group. In the interview, Gregory discussed and outlined some of the strategy and his vision for the future of computing that he would be presenting at Computex.

http://www.engadget.com/amp/2018/06/01/intel-computex-2018-preview/

However, one thing stood out from the article, possibly regarding the next HEDT flagship chip:

"While he couldn't get into specifics, Bryant said we can expect an even more impressive chip announcement than last year's 18-core."

I wonder what it would be?? :eek:

 

WingZero30

Member
May 1, 2017
29
9
36
Yeah, I think the prospect of a 7nm 24-core Zen 2 Threadripper in 2019 has definitely spooked Intel, and given their current troubles with 10nm, it would not be surprising if Intel goes all out on HEDT too, utilising Socket 3647 and releasing a 28-core HEDT flagship chip to try to maintain the performance lead.
 

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
25,627
14,618
136
Things are more like today than they ever have been before.
I am not sure what you mean.

BUT, here is what I see. Starting way back, I see Intel as the only real price/performance leader, with no one close. Then AMD strikes back and becomes a competitor. Then Intel comes back and is the only real price/performance leader again. Then (last year) AMD comes back stronger than ever as the price/performance leader and, in a few places, the performance leader.

So to me, this is all about back and forth, and I'm glad to see it, as it means competition and lower prices for good hardware.

And if you all haven't noticed, I always buy the best bang/buck, no favoritism, but at the MOMENT it's AMD.
 

beginner99

Diamond Member
Jun 2, 2009
5,211
1,582
136
That's essentially what Intel did with Skylake-X and the 8700K

I'd argue that anyone with internet access and basic Google skills could have known, even before the Skylake-X release, that there would be a 6-core mainstream part. If you aren't doing your homework as a consumer, that's on you.

Part of that homework would have also been knowing that the 8700K and 7820X have two completely different use cases: the 8700K is for high-end gaming, and the HEDT 7820X is for workstation usage or someone who requires a lot of PCIe connectivity.
 

dullard

Elite Member
May 21, 2001
25,113
3,487
126
I just checked again. For motherboard/CPU/memory/AIO cooling/SSD, the 7980XE is $3050; for the 1950X it's $2033. If I just pick motherboard/CPU, it's like $2200 vs $1300, or almost 100% more, other components being the same.

So for 90% more, you get what, 30% more performance? Yes, I would say the socket 2066 platform is on its deathbed.
Suppose you pay an engineer whose salary + benefits cost you $100k/year, and the software on that machine costs $1k/year/core. What does that 30% faster get you? $35,400/year more work done. I'd gladly pay $1000 more for a CPU to get $35,400/year more work done.

Yes, Intel costs more. We get that. But businesses don't care about that minuscule cost difference. If AMD wants to survive long term, they need to learn that lesson too.
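For reference, the $35,400 figure follows from the numbers quoted in this exchange; here is a quick back-of-the-envelope sketch (assuming the 18-core 7980XE and the salary, licensing, and 30% figures above):

```python
# Back-of-the-envelope ROI sketch using the figures quoted above:
# $100k/year salary + benefits, $1k/core/year licensing, an 18-core
# 7980XE, and the 30% performance delta from the earlier post.

salary = 100_000          # $/year, engineer salary + benefits
license_per_core = 1_000  # $/year per core, software licensing
cores = 18                # Core i9-7980XE core count
speedup = 0.30            # claimed performance advantage

annual_cost = salary + license_per_core * cores  # $118,000/year per seat
extra_work = speedup * annual_cost               # value of 30% more throughput

print(f"Annual cost per seat: ${annual_cost:,}")    # $118,000
print(f"Extra work per year:  ${extra_work:,.0f}")  # $35,400
```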
 

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
25,627
14,618
136
Suppose you pay an engineer whose salary + benefits cost you $100k/year, and the software on that machine costs $1k/year/core. What does that 30% faster get you? $35,400/year more work done. I'd gladly pay $1000 more for a CPU to get $35,400/year more work done.

Yes, Intel costs more. We get that. But businesses don't care about that minuscule cost difference. If AMD wants to survive long term, they need to learn that lesson too.
Except that it's a business, so overclocking is out, and the 30% now disappears. Oh, and it sucks more power, and costs more to cool as well (AC), so now you are losing money.

I also doubt your math, but no sense in arguing, since your mind is made up.
 

LTC8K6

Lifer
Mar 10, 2004
28,520
1,575
126
Yeah, I think the prospect of a 7nm 24-core Zen 2 Threadripper in 2019 has definitely spooked Intel, and given their current troubles with 10nm, it would not be surprising if Intel goes all out on HEDT too, utilising Socket 3647 and releasing a 28-core HEDT flagship chip to try to maintain the performance lead.
But Intel will very likely get 10nm going properly in 2019, and if they solve the issues, then even though the node is so late it almost died, it will probably be very good.
 

dullard

Elite Member
May 21, 2001
25,113
3,487
126
Except that it's a business, so overclocking is out, and the 30% now disappears. Oh, and it sucks more power, and costs more to cool as well (AC), so now you are losing money.

I also doubt your math, but no sense in arguing, since your mind is made up.
You seem to be highly biased against Intel right now, but there is no reason to insult me by claiming that my mind is made up. Also, there is no reason for you to threadcrap either (bringing up AMD vs Intel HCC in an LCC thread).

There is no need to overclock. If you don't believe my numbers, then give me the numbers that you wish to work with. The 30% number was your own number; I just went with it.

The median aerospace engineer salary is $107k, not including any bonus or benefits. I rounded to a simple $100k. https://www.google.com/search?q=aer...2j69i57j0l3.4263j1j4&sourceid=chrome&ie=UTF-8
Automotive engineer salaries are a bit lower, but after benefits will often be about $100k.

Sorry that I can't give you a direct price for professional engineering software, since it is not available online. Feel free to request quotes and post them here. Or you can use one of many forum posts:
https://www.cfd-online.com/Forums/cfx/117027-license-costs.html#post424443
For additional computational power, HPC pack per 8 cores: £15,000
That works out to $2500/core.

I'm sorry that I don't have direct benchmarks for your two off-topic processors. But if you allow me to go even more off-topic into Xeon vs EPYC, the best EPYC (7601) here gets 99.2 points per core at the highest tested configuration (128 cores), while a humble Xeon 2695v4 gets 113.7 points/core (72 cores) and 111.1 points/core (144 cores) in aerospace software. It isn't your 30%, but there is quite a difference, and that is a 2-year-old Xeon chip. https://www.ansys.com/solutions/sol...elease-18/external-flow-over-an-aircraft-wing

Here the 2-year-old Xeon is more than double the speed per core (72 cores of Xeon 2695v4 is faster than 128 cores of EPYC 7601) in automotive software: https://www.ansys.com/solutions/sol...t-benchmarks-release-18/vehicle-exhaust-model
And that doesn't even include the massive extra licensing cost for all those extra EPYC cores.
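To put a rough number on that licensing point, here is a minimal sketch using the HPC pack quote above (£15,000 per 8 cores, roughly the $2,500/core cited) and the core counts of the two ANSYS configurations:

```python
# Rough sketch of the extra licensing cost implied by the quote above:
# an HPC pack runs GBP 15,000 per 8 cores (~$2,500/core per the post),
# and the EPYC 7601 result used 128 cores vs. 72 for the Xeon E5-2695 v4.

usd_per_core = 2_500  # $/core, from the "$2500/core" figure above
epyc_cores = 128      # EPYC 7601 configuration in the ANSYS results
xeon_cores = 72       # Xeon E5-2695 v4 configuration

extra_cores = epyc_cores - xeon_cores
extra_license = extra_cores * usd_per_core

print(f"Extra licensed cores: {extra_cores}")       # 56
print(f"Extra licensing cost: ${extra_license:,}")  # $140,000
```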

Finally, if you don't understand how a business with 9 engineers can do the same work as the same business with 10 engineers if the software that they wait days/weeks/months to get answers from runs 10% to 122% faster, then I don't know how to talk with you.
 

Topweasel

Diamond Member
Oct 19, 2000
5,436
1,655
136
But Intel will very likely get 10nm going properly in 2019, and if they solve the issues, even though it's so late it almost died, it will probably be very good.

Well, there is a question of how many on-a-dime turns they can make. None of them are unreasonable.

When we look at X299, it's apparent that at the start they didn't plan on using HCC until pretty late, and then only with the 12c version (so they could bin the most mediocre chips for it). There need to be a couple of mindset changes, and those decisions probably already needed to have been made.

So even then, looking at '19, Intel has to focus on getting these right:

8c+ consumer chips
12c+ LCC
20c+ HCC
40c+ XCC

The last two are likely; Intel was always going to seek high core counts on their higher-end server dies. But are they going to make an LCC that's competitive core-count-wise with Ryzen 3? Or even a consumer chip that is? We think we know an 8-core Coffee Lake is in the works, but is that where Cannon Lake stops? If it is, then they have an uphill battle. Personally, I think the Starship roadmap was based on a lack of 7nm information, and they wanted to account for bad yields. That means Zen 2 could be up to 16 cores, making TR3 a 32c beast and EPYC 2 a 64c monster.

Some of this is academic if they get EMIB up and running. But the silence on that front is almost deafening.

So while I am confident Intel will work out the fab issue, if they didn't already decide two years ago to fight AMD on cores at most levels, Zen 2 and Zen 3 will have a wide berth to shake things up.

If they haven't, then it's going to get desperate. That includes, as the guy above mentioned, using 3647 for HEDT. I mean, in the end, Intel can pull an AMD and just not feel they have to compete in that market. It's not like it makes Intel that much money (tons of margin, though). But they are a company that absolutely refuses to leave any door open, even if they can't avoid it.
 

tamz_msc

Diamond Member
Jan 5, 2017
3,826
3,654
136
You seem to be highly biased against Intel right now, but there is no reason to insult me by claiming that my mind is made up. Also, there is no reason for you to threadcrap either (bringing up AMD vs Intel HCC in an LCC thread).

There is no need to overclock. If you don't believe my numbers, then give me the numbers that you wish to work with. The 30% number was your own number; I just went with it.

The median aerospace engineer salary is $107k, not including any bonus or benefits. I rounded to a simple $100k. https://www.google.com/search?q=aer...2j69i57j0l3.4263j1j4&sourceid=chrome&ie=UTF-8
Automotive engineer salaries are a bit lower, but after benefits will often be about $100k.

Sorry that I can't give you a direct price for professional engineering software, since it is not available online. Feel free to request quotes and post them here. Or you can use one of many forum posts:
https://www.cfd-online.com/Forums/cfx/117027-license-costs.html#post424443
For additional computational power, HPC pack per 8 cores: £15,000

That works out to $2500/core.

I'm sorry that I don't have direct benchmarks for your two off-topic processors. But if you allow me to go even more off-topic into Xeon vs EPYC, the best EPYC (7601) here gets 99.2 points per core at the highest tested configuration (128 cores), while a humble Xeon 2695v4 gets 113.7 points/core (72 cores) and 111.1 points/core (144 cores) in aerospace software. It isn't your 30%, but there is quite a difference, and that is a 2-year-old Xeon chip. https://www.ansys.com/solutions/sol...elease-18/external-flow-over-an-aircraft-wing

Here the 2-year-old Xeon is more than double the speed per core (72 cores of Xeon 2695v4 is faster than 128 cores of EPYC 7601) in automotive software: https://www.ansys.com/solutions/sol...t-benchmarks-release-18/vehicle-exhaust-model
And that doesn't even include the massive extra licensing cost for all those extra EPYC cores.

Finally, if you don't understand how a business with 9 engineers can do the same work as the same business with 10 engineers if the software that they wait days/weeks/months to get answers from runs 10% to 122% faster, then I don't know how to talk with you.
As usual, your typical use-case scenario involves proprietary software licensed on a per-core basis, and your particular example is a piece of software that might very well suit the requirements of a certain industry, but whose developers display the usual time lag in keeping their optimization up to date with new CPU architectures as they emerge.

It might very well be the case that one can get different results even in Fluent, depending on what image is being portrayed with the benchmark results, as I found user-run benchmarks that portray a completely different picture.

Of course, in applications where no proprietary software on per-core licensing is used, it's going to be up to the user to determine which products give the most performance.
 

Topweasel

Diamond Member
Oct 19, 2000
5,436
1,655
136
Suppose you pay an engineer whose salary + benefits cost you $100k/year, and the software on that machine costs $1k/year/core. What does that 30% faster get you? $35,400/year more work done. I'd gladly pay $1000 more for a CPU to get $35,400/year more work done.

Yes, Intel costs more. We get that. But businesses don't care about that minuscule cost difference. If AMD wants to survive long term, they need to learn that lesson too.

But then you have to look at scale. It's one of the reasons a lot of cloud services look to buying AMD solutions. It's why we have bean counters. If there is a significant price difference and only a middle-of-the-road performance difference (30% is too high, but I would suggest it's not even close to that big a difference), then buying the cheaper solution means you can get more systems, or more funds for employees in the long run. Companies don't have infinite funds.

Also, the paper math doesn't account for when the actual work would be held up. For example, at my company there are MT workloads that our engineers need better, faster systems for. But when push comes to shove, if I were to calculate the actual impact: they only run those jobs once a week, twice at most as they get down to crunch time. In fact, most of their day-to-day work would run better with more cores rather than faster cores. Even if, say, an 8c system would do that one task 50% faster than a 16c solution, the 16c would let them run more day-to-day tools throughout the week, which would have a larger impact on productivity. So one gives them maybe 30 minutes a week in reduced downtime; the other lets them be more productive the rest of the time. People always want to attach a hard number to everything, but unless you are dealing with protein folding or high-level modeling (not talking the CGI kind), the numbers are never 1:1. In your use case, that $35k could quickly become $5k or $1k, depending on how often they actually run the task and whether 30% in CPU speed actually means 30% in the task.
 

StefanR5R

Elite Member
Dec 10, 2016
5,583
7,979
136
if you don't understand how a business with 9 engineers can do the same work as the same business with 10 engineers if the software that they wait days/weeks/months to get answers from runs 10% to 122% faster, then I don't know how to talk with you.
I am confused. First, why are compute servers suddenly on topic in this thread? Second, you seem to argue that engineers are paid to wait for answers from the computer, or that a CFD solver performs the work of an engineer.

If the engineering firm is able to run the computationally intensive parts of their projects 10 % faster, then they can consider acquiring 10 % more projects in the same time frame, for which they will also need to employ 10 % more engineers.

Or speaking of CFD, 10 % faster computation would enable them to increase their model resolution by (at most) 2.4 % in each dimension without change in solver time (with an explicit solver; not sure about the scaling of implicit solvers).
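A minimal sketch of where that 2.4 % figure comes from, assuming the usual explicit-solver cost scaling of resolution to the fourth power (three spatial dimensions plus a CFL-limited time step):

```python
# For an explicit 3D CFD solver, compute cost scales roughly as N**4:
# N**3 cells, and the CFL condition forces ~N time steps. A 10% compute
# speedup therefore buys a 1.1**(1/4) increase in per-dimension resolution.

speedup = 1.10               # 10% faster computation
res_gain = speedup ** (1/4)  # per-dimension resolution factor

print(f"Resolution increase per dimension: {res_gain - 1:.1%}")  # ~2.4%
```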

PS,
the FLUENT implicit solver benchmark which you linked to takes a nose dive in scaling efficiency on the white-box EPYC server(s) already when running on a single machine but across two sockets, while it still scales well to a 16-node Cray cluster. I conclude that this benchmark says very little about processor performance, and much about the performance of the MPI software + hardware stack. Though I suspect the Cray Aries network is going to cost you.

--------
@tamz_msc, thanks for the interesting link.
 

beginner99

Diamond Member
Jun 2, 2009
5,211
1,582
136
As usual, your typical use-case scenario involves proprietary software licensed on a per-core basis, and your particular example is a piece of software that might very well suit the requirements of a certain industry, but whose developers display the usual time lag in keeping their optimization up to date with new CPU architectures as they emerge.

It might very well be the case that one can get different results even in Fluent, depending on what image is being portrayed with the benchmark results, as I found user-run benchmarks that portray a completely different picture.

Of course, in applications where no proprietary software on per-core licensing is used, it's going to be up to the user to determine which products give the most performance.

I understand his point. In many use cases a $1000 price difference for the CPU simply doesn't matter, and realistically companies buy finished servers, not CPUs, so we would need to see whether the actual servers are cheaper. But back to his main point: $1000 is nothing once you include the costs of the full server, required infrastructure, employees, and software.
 

coercitiv

Diamond Member
Jan 24, 2014
6,253
12,171
136
you seem to argue that engineers are paid to wait for answers from the computer, or that a CFD solver performs the work of an engineer.
His argument basically boils down to the computing machine (HW & SW) becoming a money-printing machine limited only by CPU and software performance. In fact, the only limit on linearly increasing profits seems to be personnel and licensing costs, which supposedly follow the law of diminishing returns, while CPU performance does not.

the FLUENT implicit solver benchmark which you linked to takes a nose dive in scaling efficiency on the white-box EPYC server(s) already when running on a single machine but across two sockets, while it still scales well to a 16-node Cray cluster. I conclude that this benchmark says very little about processor performance, and much about the performance of the MPI software + hardware stack. Though I suspect the Cray Aries network is going to cost you.
Actually, it takes its first nose dive while still on the first socket (from 16 to 32 cores), suggesting software optimization is direly needed.