14nm 6th Time Over: Intel Readies 10-core "Comet Lake" Die to Preempt "Zen 2" AM4

Page 14 of the discussion thread on the AnandTech forums.

LTC8K6

Lifer
Mar 10, 2004
28,520
1,575
126
If it's late 2021 then who cares?

That's a long time from now, and the whole CPU / desktop / console world will have changed a lot.

It's only interesting if it's coming soon, imo.
 

PotatoWithEarsOnSide

Senior member
Feb 23, 2017
664
701
106
They'll be 2 years too late to 7nm by 2021. AMD will be on a more mature process by then, and will almost certainly have optimized.
If Intel are relying on 7nm to get back in the game, then it's time to stop trying to stem the leak in the ship and just jump overboard already.
 

jpiniero

Lifer
Oct 1, 2010
14,509
5,159
136
They'll be 2 years too late to 7nm by 2021. AMD will be on a more mature process by then, and will almost certainly have optimized.
If Intel are relying on 7nm to get back in the game, then it's time to stop trying to stem the leak in the ship and just jump overboard already.

It shouldn't be that bad. TSMC is saying they don't expect much in the way of revenue for their 5 nm node until the 2H of 2020, and of course that means you wouldn't expect to see anything from AMD until 2021. It's also not that big of a shrink; Intel's 7 should be decently smaller. Now if they don't end up shipping product until 2022 or later, that would be a problem.
 

Thunder 57

Platinum Member
Aug 19, 2007
2,647
3,706
136
It's been a core war since 4 cores. Clocks are yester-year. Anything between 3 GHz and 4.7 GHz is comfy space.
There are no diminishing returns because there's something called memory stalls, which most clock-crazed individuals seemingly know nothing about. Your CPU doesn't operate on magic; it has to fetch things from memory. Those fetches take time, and the CPU can't do anything until it gets the data. The big issue is people writing crappy software, because everything is about leveraging open source code they know nothing about, or frameworks... many of them horrible for multi-core processing. And please don't mention gaming and higher FPS as the driver for higher clocks. IPC is variable based on the instructions. There's no such thing as an IPC measure, another pop-science data point for people with no formal education in computer architecture. The future is moar cores, corelets, a range of different cores, and a heterogeneous execution environment that punts tasks around to the compute device that can handle them most efficiently. No one is talking about frequency anymore, and IPC has to be one of the most ridiculous terms ever popularized by e-celeb reviewers. AMD's new architecture is aiming towards heterogeneous computing, which is what they laid out years ago as their ultimate vision, and they're leaps and bounds ahead of Intel, which has been crippling I/O and cutting corners in branch look-ahead and other areas of their CPUs for years.

OK, so there's a lot in there. First, I guess it wasn't clear I meant AMD needs more clocks. A 4.2 base, 4.5 ACT, and 4.8 max turbo would do quite a bit to help while still being in the "comfy space" as you put it.

Of course CPUs don't operate on magic. And of course memory can stall them. Look at the 2990WX. The 2950X can outperform it when the 2990WX is starved for memory. Handbrake is particularly brutal, as I recall.

IPC is most certainly a thing. It just is not a fixed number. It depends on what the first bottleneck is, whether that be memory, ALUs, cache latency, any number of things. Clock a 2600 and 7700 at the same 4.0GHz, and the 7700 will be faster. It's not magic. There are design choices that affect performance, which people will commonly call IPC. You may not like the term, but it makes for easier discussion.
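The "IPC is a ratio, not a fixed number" point can be put in a toy model. All the numbers below are hypothetical, purely for illustration, not measured figures for the 2600 or 7700:

```python
# Toy model: runtime depends on both the clock and how many
# instructions the core actually retires per cycle on a workload.
def runtime_s(instructions, ipc, freq_hz):
    """Seconds to retire `instructions` at `ipc` instructions/cycle."""
    return instructions / (ipc * freq_hz)

work = 40e9  # 40 billion instructions (hypothetical workload)

# Same 4.0 GHz clock, different effective IPC on this workload
# (illustrative numbers only):
t_a = runtime_s(work, ipc=1.2, freq_hz=4.0e9)
t_b = runtime_s(work, ipc=1.5, freq_hz=4.0e9)

assert t_b < t_a  # higher effective IPC wins at equal clocks
```

Change the workload (more cache misses, different instruction mix) and the effective IPC changes with it, which is exactly why it isn't a single number per chip.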

If you look at my post history, you would see I am among the last person who will say gaming and higher FPS drive anything. A big pet peeve of mine is when people post a question asking about hardware, and the people who reply instantly assume that they want to know if it is any good for gaming. I mean, if all you care about is gaming, buy a console. A PC should be thought of as a multipurpose tool. That doesn't mean people can't have gaming rigs. It's just annoying when someone asks about a part and you see a reply like "x won't matter for gaming" or the like.

Heterogeneous computing, what happened there? I thought AMD was all on board with HSA. They have had it since what, Kaveri? I understand it never went anywhere because the Con cores weren't great. Now that AMD has a good CPU and GPU (at lower power levels), why is HSA never talked about anymore? Maybe we'll see a bigger push with Zen 2, as 7nm gives more breathing room as far as die size goes to include a GPU on more parts. We shall see.
 

CHADBOGA

Platinum Member
Mar 31, 2009
2,135
832
136
If you look at my post history, you would see I am among the last person who will say gaming and higher FPS drive anything. A big pet peeve of mine is when people post a question asking about hardware, and the people who reply instantly assume that they want to know if it is any good for gaming. I mean, if all you care about is gaming, buy a console. A PC should be thought of as a multipurpose tool. That doesn't mean people can't have gaming rigs. It's just annoying when someone asks about a part and you see a reply like "x won't matter for gaming" or the like.
I can always wait a few more seconds or minutes for a video to render/decode/whatever, what I can't put up with is dropped frames in games.

That's why pretty much only gaming performance matters to me.

Heterogeneous computing, what happened there? I thought AMD was all on board with HSA. They have had it since what, Kaveri? I understand it never went anywhere because the Con cores weren't great. Now that AMD has a good CPU and GPU (at lower power levels), why is HSA never talked about anymore? Maybe we'll see a bigger push with Zen 2, as 7nm gives more breathing room as far as die size goes to include a GPU on more parts. We shall see.
HSA has always been a pipe dream and the only people who ever took it seriously were AMD partisans who hoped it would give AMD an edge over Intel.
 

DrMrLordX

Lifer
Apr 27, 2000
21,582
10,785
136
HSA has always been a pipe dream and the only people who ever took it seriously were AMD partisans who hoped it would give AMD an edge over Intel.

HSA is still out there, though at this point I'm fairly certain it's connected to AMD's OpenCL 2.0 driver stack (which is, to the best of my knowledge, still only fully-available on Linux). It is possible that native HSA code may offer more capabilities than can be enjoyed with OpenCL 2.0 alone, but of that I am not certain.

In any case you can still use HSA/OpenCL 2.0 (including SVM features, etc) with Raven Ridge if you so choose. You are pretty much hand-coding your own application at that point. Unless you're using the spreadsheet program in LibreOffice.
 

Topweasel

Diamond Member
Oct 19, 2000
5,436
1,654
136
So what you’re saying is that CPU performance doesn’t matter for gamers? News to me.

Depends on how you look at it. As it is right now, with games developed for PowerPC (X360/PS3) or 8 Cat cores (XB1/PS4), the 8 Cat cores gave us MT gaming, but utilization in the ports is still too low because at its heart a game still runs with the CPU power those 8 Cat cores provide. That, and all optimization for PC has been for 4c/8t CPUs at most. Things like AI are severely limited without compute power, and we are only now getting back to where we were with F.E.A.R.
 

ub4ty

Senior member
Jun 21, 2017
749
898
96
OK, so there's a lot in there. First, I guess it wasn't clear I meant AMD needs more clocks. A 4.2 base, 4.5 ACT, and 4.8 max turbo would do quite a bit to help while still being in the "comfy space" as you put it.

Of course CPUs don't operate on magic. And of course memory can stall them. Look at the 2990WX. The 2950X can outperform it when the 2990WX is starved for memory. Handbrake is particularly brutal, as I recall.

IPC is most certainly a thing. It just is not a fixed number. It depends on what the first bottleneck is, whether that be memory, ALUs, cache latency, any number of things. Clock a 2600 and 7700 at the same 4.0GHz, and the 7700 will be faster. It's not magic. There are design choices that affect performance, which people will commonly call IPC. You may not like the term, but it makes for easier discussion.

If you look at my post history, you would see I am among the last person who will say gaming and higher FPS drive anything. A big pet peeve of mine is when people post a question asking about hardware, and the people who reply instantly assume that they want to know if it is any good for gaming. I mean, if all you care about is gaming, buy a console. A PC should be thought of as a multipurpose tool. That doesn't mean people can't have gaming rigs. It's just annoying when someone asks about a part and you see a reply like "x won't matter for gaming" or the like.

Heterogeneous computing, what happened there? I thought AMD was all on board with HSA. They have had it since what, Kaveri? I understand it never went anywhere because the Con cores weren't great. Now that AMD has a good CPU and GPU (at lower power levels), why is HSA never talked about anymore? Maybe we'll see a bigger push with Zen 2, as 7nm gives more breathing room as far as die size goes to include a GPU on more parts. We shall see.

Good all-around reply!
I don't feel AMD needs more clocks. Nothing in my intensive workflows on an 8-core @ 3.7 GHz or a 16-core @ 3.7 GHz is screaming for higher clocks. I'd like higher memory clocks and lower latency, of course, to improve efficiency in more cases. I'd like lower-latency storage as well, which is why NVMe is so great... Intel having done nothing to expand the I/O plane on their processors is why every single one of their CPUs is a write-off.

But no, I couldn't care less about higher CPU clocks. The primary case where that is an obsession is gaming. People who conduct billions of dollars of computational business run on processors that are clocked under 3 GHz.

System memory and storage are the bottlenecks, not the CPU. It has been that way for quite some time, while 2-core and 4-core processors have achieved clocks north of 4 GHz. The real speedup came with higher-clocked RAM, SSDs, and NVMe.

Again :
http://www.brendangregg.com/blog/2017-05-09/cpu-utilization-is-wrong.html
IPC = Inter-process communication
and
IPC = Instructions per cycle.

Outside of this is idiocy, as no one who uses IPC to refer to instructions per clock does any in-depth analysis beyond 'some application completed in 10 sec on this processor vs. 20 sec on that processor.'
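The memory-stall argument above can also be sketched as a toy model (all numbers hypothetical): when fixed-latency memory stalls dominate, raising the core clock barely moves total runtime.

```python
def runtime_s(compute_cycles, stall_s, freq_hz):
    """Total time = compute cycles at the core clock plus memory-stall
    time, which does not shrink when the core clock rises."""
    return compute_cycles / freq_hz + stall_s

# Hypothetical memory-bound job: 1e9 compute cycles, 0.8 s stalled on DRAM.
t_37 = runtime_s(1e9, 0.8, 3.7e9)  # at 3.7 GHz
t_47 = runtime_s(1e9, 0.8, 4.7e9)  # at 4.7 GHz

speedup = t_37 / t_47  # ~1.06: a 27% clock bump buys only ~5-6% here
```

Shrinking the stall term (faster RAM, lower-latency storage) moves the needle far more than clocks in this regime, which is the point about memory and NVMe above.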

HSA is a thing. NVMe is a part of it. I don't think people appreciate what already exists.
My GPU, CPU, and a remote computer can access NVME storage directly. I can write a program that does so.
Rome separates I/O completely from compute cores. The future doesn't happen in one leap. It happens in steps and stages.

The majority of startups and quick-money chasers are too busy playing with and leveraging inefficient framework code from Python to understand what's going on at the lower levels. The game has already changed, and CPU clocks have little to nothing to do with it.

Intel's pain hasn't even begun. I have an Intel machine that can clock about 800 MHz higher than my Ryzen, and yet my Ryzen kicks the snot out of it because it has lower latency and faster memory. It's not about the clocks at this stage. It's about storage/system memory and having that available to any device. AMD is there as of next year. Intel's head is in the dirt.
 
Last edited:
Aug 11, 2008
10,451
642
126
Good all-around reply!
I don't feel AMD needs more clocks. Nothing in my intensive workflows on an 8-core @ 3.7 GHz or a 16-core @ 3.7 GHz is screaming for higher clocks. I'd like higher memory clocks and lower latency, of course, to improve efficiency in more cases. I'd like lower-latency storage as well, which is why NVMe is so great... Intel having done nothing to expand the I/O plane on their processors is why every single one of their CPUs is a write-off.

But no, I couldn't care less about higher CPU clocks. The primary case where that is an obsession is gaming. People who conduct billions of dollars of computational business run on processors that are clocked under 3 GHz.

System memory and storage are the bottlenecks, not the CPU. It has been that way for quite some time, while 2-core and 4-core processors have achieved clocks north of 4 GHz. The real speedup came with higher-clocked RAM, SSDs, and NVMe.

Again :
http://www.brendangregg.com/blog/2017-05-09/cpu-utilization-is-wrong.html
IPC = Inter-process communication
and
IPC = Instructions per cycle.

Outside of this is idiocy, as no one who uses IPC to refer to instructions per clock does any in-depth analysis beyond 'some application completed in 10 sec on this processor vs. 20 sec on that processor.'

HSA is a thing. NVMe is a part of it. I don't think people appreciate what already exists.
My GPU, CPU, and a remote computer can access NVME storage directly. I can write a program that does so.
Rome separates I/O completely from compute cores. The future doesn't happen in one leap. It happens in steps and stages.

The majority of startups and quick-money chasers are too busy playing with and leveraging inefficient framework code from Python to understand what's going on at the lower levels. The game has already changed, and CPU clocks have little to nothing to do with it.

Intel's pain hasn't even begun. I have an Intel machine that can clock about 800 MHz higher than my Ryzen, and yet my Ryzen kicks the snot out of it because it has lower latency and faster memory. It's not about the clocks at this stage. It's about storage/system memory and having that available to any device. AMD is there as of next year. Intel's head is in the dirt.
Highly oversimplified. Clockspeed is only one metric, of course. But all else being equal, would you not prefer faster clocks? If it doesn't matter, why don't you downclock all your CPUs to 1 GHz and save some energy?
 

ub4ty

Senior member
Jun 21, 2017
749
898
96
Highly oversimplified. Clockspeed is only one metric, of course. But all else being equal, would you not prefer faster clocks? If it doesn't matter, why don't you downclock all your CPUs to 1 GHz and save some energy?
Nothing is over-simplified about what I posted. In fact, I provided far more detailed information about digging into CPU performance than the run-of-the-mill IPC reference. I would prefer more cores and lower clocks for my particular workflow. This is evidenced in my choice of an 8-core and a 16-core processor at 3.7 GHz over a 4-core at 5 GHz. There is no equality: you can have more cores at lower clocks, or fewer cores at higher clocks. The future has been more cores and lower clocks for some time. I could have bought an Intel and shot for 4.7 GHz; I bought a 16-core AMD processor instead. Next will be the purchase of a 32-core processor with even lower clocks, and my workflow will increase, not decrease, in performance. I suggest you take a read of http://www.brendangregg.com/blog/2017-05-09/cpu-utilization-is-wrong.html
if you think my framing was simple. It isn't.

What you're really fighting in the upper tier of CPU speeds on the consumer desktop is crappy software.
 

Spartak

Senior member
Jul 4, 2015
353
266
136
Nothing is over-simplified about what I posted. In fact, I provided far more detailed information about digging into CPU performance than the run-of-the-mill IPC reference. I would prefer more cores and lower clocks for my particular workflow. This is evidenced in my choice of an 8-core and a 16-core processor at 3.7 GHz over a 4-core at 5 GHz. There is no equality: you can have more cores at lower clocks, or fewer cores at higher clocks. The future has been more cores and lower clocks for some time. I could have bought an Intel and shot for 4.7 GHz; I bought a 16-core AMD processor instead. Next will be the purchase of a 32-core processor with even lower clocks, and my workflow will increase, not decrease, in performance. I suggest you take a read of http://www.brendangregg.com/blog/2017-05-09/cpu-utilization-is-wrong.html
if you think my framing was simple. It isn't.

What you're really fighting in the upper tier of CPU speeds on the consumer desktop is crappy software.

Not everything can be perfectly multithreaded, and claiming that everything that doesn't multithread nicely is crappy software is a simplification.

Also, you make blanket statements in which your own usage is the golden standard that applies to everyone. You know, even if it's crappily coded, if an app you use is the industry standard without any alternative, have fun with your processors running lower than they can. The rest of us professionals will look at the processor that gives us the best possible performance.

My usage is CAD, and whether it's coded crappily or not, many programs are poorly multithreaded. So single-core performance is king.

You use an awful lot of words to 'prove' what everybody here already knows: for code that can be properly multithreaded, it's better to have more cores running at the optimal power/performance range of the chip.

But with today's high-core-count chip designs, having one or two cores run at the highest possible frequency is always a performance win and won't take that much extra power. It's the all-core boost state, for instance, where the 9900 breaks the 95W TDP, not the single/dual-core boost state.
 
Last edited:

Spartak

Senior member
Jul 4, 2015
353
266
136
All the while, you're literally trying to claim there's this massive group of people who can afford $300+ processors but need to skimp on their motherboard to the tune of $30.... :D.

B350/B450 boards cost less because they use cheaper parts and have fewer features: FACT.
Flagship processor but bargain-bin mobo = a bad build: FACT.

For someone pretending to know the facts, please explain to me why my ASRock AB350 M-ITX had completely identical VRMs to the X370 M-ITX? As others have stated, VRM choice is made by the motherboard manufacturer and does not depend on the chipset.
 

DrMrLordX

Lifer
Apr 27, 2000
21,582
10,785
136
For someone pretending to know the facts, please explain to me why my ASRock AB350 M-ITX had completely identical VRMs to the X370 M-ITX? As others have stated, VRM choice is made by the motherboard manufacturer and does not depend on the chipset.

Because they're both M-ITX boards. M-ITX gives you less PCB real estate to work with, so a layout with fewer VRM phases is intrinsically friendlier to that form factor. It's also cheaper, and ASRock can reasonably assume that anyone building in a cramped case will have enough trouble keeping processors cool that they won't want to throw a lot of power through the socket anyway. A robust VRM capable of 200-250W or more of output makes little sense for octo-core CPUs when your cooling solution and case probably top out between 65W and 95W. If you were to try something like a 2700X in that scenario, it wouldn't boost very far above its base clock from the temps alone.
 
  • Like
Reactions: ub4ty

Spartak

Senior member
Jul 4, 2015
353
266
136
The point is that the mobo manufacturer decides what the VRM setup is for a particular board / chipset depending on a host of considerations, like the ones you mention for mini-ITX.
Making blanket statements like ub4ty's is not only factually wrong, but it doesn't really prove anything. Any enthusiast looking for maximum performance will know what type of board to pick. Any enthusiast with other considerations will investigate their options as well.

This whole 'Intel TDP' scandal is just silly. I for one am glad Intel gives you full control over PL1 and PL2 states. Especially for SFF enthusiasts that's a godsend not available on AMD boards to my knowledge.
 
Aug 11, 2008
10,451
642
126
Nothing is over-simplified about what I posted. In fact, I provided far more detailed information about digging into CPU performance than the run-of-the-mill IPC reference. I would prefer more cores and lower clocks for my particular workflow. This is evidenced in my choice of an 8-core and a 16-core processor at 3.7 GHz over a 4-core at 5 GHz. There is no equality: you can have more cores at lower clocks, or fewer cores at higher clocks. The future has been more cores and lower clocks for some time. I could have bought an Intel and shot for 4.7 GHz; I bought a 16-core AMD processor instead. Next will be the purchase of a 32-core processor with even lower clocks, and my workflow will increase, not decrease, in performance. I suggest you take a read of http://www.brendangregg.com/blog/2017-05-09/cpu-utilization-is-wrong.html
if you think my framing was simple. It isn't.

What you're really fighting in the upper tier of CPU speeds on the consumer desktop is crappy software.
Ok, if you could have those 8 cores, or 16 cores, or however many you have, at 5.0 GHz instead of 3.7, at the same price and power consumption, you would do it, of course? So, as I said, it is not as simple as "clockspeed doesn't matter." For your workload, you are willing to accept the compromise of lower clockspeed for more cores. BTW, Intel does make more than 4-core processors, you know, right? And of course, we go back to the time-immemorial AMD argument that the software is at fault for demanding high clockspeed. Still, the software "is what it is," so that is and always has been a lame excuse for AMD's low IPC (Vishera) and lack of clockspeed (Zen).
 
  • Like
Reactions: Zucker2k

DrMrLordX

Lifer
Apr 27, 2000
21,582
10,785
136
Making blanket statements like ub4ty's is not only factually wrong, but it doesn't really prove anything.

Okay.

How many Z370 boards are suitable for a 9900k running with the "full" 210W PL2? How many B350 boards are suitable for a 2700x (NO throttling)? Be specific, since you don't want blanket statements.
 

Spartak

Senior member
Jul 4, 2015
353
266
136
Okay.

How many Z370 boards are suitable for a 9900k running with the "full" 210W PL2? How many B350 boards are suitable for a 2700x (NO throttling)? Be specific, since you don't want blanket statements.

The blanket statement regarding B350/X370 was that VRMs are tied to the chipset. I already proved this isn't the case.

Regarding your second question, please specify the actual statement to which I need to respond. May I also remind you that it's the motherboard manufacturers themselves who are violating the spec by setting the PL2 state to 210W.
One would assume that motherboards whose factory settings violate the formal specification would be able to run at those factory settings. Which motherboard manufacturer would violate the official specification on a board they designed themselves without providing VRMs suited to their own default setting? This is too silly for words.

Also, you fail to understand the difference between the PL2 state, which was set to 210W (I believe on ASUS boards), and the actual full load, which is a 160W all-core turbo. Changing the turbo settings doesn't count; that's a factory overclock beyond spec.
 

ub4ty

Senior member
Jun 21, 2017
749
898
96
Not everything can be perfectly multithreaded, and claiming that everything that doesn't multithread nicely is crappy software is a simplification.

Also, you make blanket statements in which your own usage is the golden standard that applies to everyone. You know, even if it's crappily coded, if an app you use is the industry standard without any alternative, have fun with your processors running lower than they can. The rest of us professionals will look at the processor that gives us the best possible performance.

My usage is CAD, and whether it's coded crappily or not, many programs are poorly multithreaded. So single-core performance is king.

You use an awful lot of words to 'prove' what everybody here already knows: for code that can be properly multithreaded, it's better to have more cores running at the optimal power/performance range of the chip.

But with today's high-core-count chip designs, having one or two cores run at the highest possible frequency is always a performance win and won't take that much extra power. It's the all-core boost state, for instance, where the 9900 breaks the 95W TDP, not the single/dual-core boost state.
AutoCAD is crappy software. The forums and elsewhere have been filled for years with complaints about how horribly it runs.
Thanks for highlighting this and reflecting on the accuracy of my post. Monopolistic, overpriced nonsense... a match made in heaven with Intel (an "industry standard").

9900... LOL, says it all.
No reason to rehash old arguments. I recall linking to a dissection of AutoCAD performance done by a professional and picking apart your common-themed pro-Intel post some time ago. Essentially, the major workflows that people conduct in AutoCAD benefit largely from more cores, not higher single-core performance. It's 2018, not 1990, after all.

I proved you wrong in detail on AutoCAD some time ago, yet here you are again with the same pro-Intel "single core is everything" praise. Some things never change. If you knew the details of what I highlighted (again), your comments wouldn't be the same.

Nothing I posted was an oversimplification. It was a detailing of convenient facts you chose to ignore, the sort of thing you learn in basic computer architecture.
 
Last edited:

ub4ty

Senior member
Jun 21, 2017
749
898
96
For someone pretending to know the facts, please explain to me why my ASRock AB350 M-ITX had completely identical VRMs to the X370 M-ITX? As others have stated, VRM choice is made by the motherboard manufacturer and does not depend on the chipset.
2700X... M-ITX.
2700X... B350.
Seems you answered your own question...
If you want to run a flagship processor, stop playing with meme-level motherboards.
 
  • Like
Reactions: spursindonesia

ub4ty

Senior member
Jun 21, 2017
749
898
96
The point is that the mobo manufacturer decides what the VRM setup is for a particular board / chipset depending on a host of considerations, like the ones you mention for mini-ITX.
Making blanket statements like ub4ty's is not only factually wrong, but it doesn't really prove anything. Any enthusiast looking for maximum performance will know what type of board to pick. Any enthusiast with other considerations will investigate their options as well.

This whole 'Intel TDP' scandal is just silly. I for one am glad Intel gives you full control over PL1 and PL2 states. Especially for SFF enthusiasts that's a godsend not available on AMD boards to my knowledge.
I try to ensure my statements are factual and provide supporting data/facts. A B350 is not an X370 motherboard, and an X370 Mini-ITX is not an X370 ATX motherboard. You asked why a B350 mobo has the same VRMs as an X370 Mini-ITX? Answer it yourself: such motherboards are priced and meant for lower-end processors that don't consume much power (DUH). Stating this fact is now a 'blanket statement'? What in the world is going on in modern society?

Any enthusiast looking for maximum performance will know what type of board to pick.
Clearly not, when people are suggesting AMD will have done something wrong if you can't run the most high-end Ryzen 3 series processor on a B350 or X370 Mini-ITX motherboard. Hat tip to ASRock for making a mini-ITX Threadripper motherboard because the SFF meme wouldn't die. This cut-down board, along with the resulting heat density, is an absolute riot:
(image: ASRock X399M Taichi motherboard)

as are the shoehorn builds people make with it.

This whole 'Intel TDP' scandal is just silly. I for one am glad Intel gives you full control over PL1 and PL2 states. Especially for SFF enthusiasts that's a godsend not available on AMD boards to my knowledge.
SFF with high-end components that produce lots of heat and draw lots of power = meme.
Every generation of builders has its meme segment... comedy for days.
 
  • Like
Reactions: lightmanek

ub4ty

Senior member
Jun 21, 2017
749
898
96
Ok, if you could have those 8 cores, or 16 cores, or however many you have at 5.0 ghz instead of 3.7, at the same price and power comsumption, you would do it, of course?
at the same price and power consumption...
at the same price and power consumption...


https://en.wikipedia.org/wiki/False_dilemma
If I could ride a magical unicorn across the universe at light speed, I would too. Sadly, I can't, and sadly you will never have such a scenario. So, back in reality: if you want higher clocks, you will have higher power consumption due to physics. The higher the clocks, the higher the power inefficiency. So even though I could run my processors at 4 GHz and higher, 3.7 GHz is the ceiling for me before drastic inefficiencies set in. That's where I draw the line, at a sensible point before full-on idiocy. The focus at that point is on writing software and utilizing the hardware to its fullest for a good number of years before sensibly upgrading. In no time, my powerhouse is obsoleted by the new generation's mid-tier, which is why I never go overboard on hardware specs or silly cooling solutions.
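The "higher clocks cost disproportionate power" claim is usually stated through the classic CMOS dynamic-power approximation, P ≈ C·V²·f: since voltage has to rise to sustain higher frequency near the top of the curve, power grows much faster than the clock. A sketch with made-up numbers (the capacitance and voltages are hypothetical, not figures for any real chip):

```python
def dynamic_power_w(cap_farads, volts, freq_hz):
    """Classic CMOS dynamic-power approximation: P = C * V^2 * f."""
    return cap_farads * volts**2 * freq_hz

C_EFF = 1e-9  # hypothetical effective switched capacitance, farads

p_low = dynamic_power_w(C_EFF, 1.10, 3.7e9)   # comfy point on the curve
p_high = dynamic_power_w(C_EFF, 1.35, 4.7e9)  # higher clock needs more V

ratio = p_high / p_low  # ~1.9: ~27% more clock for ~91% more power
```

The shape, not the exact numbers, is the point: once frequency scaling demands more voltage, efficiency falls off a cliff, which is why the sensible ceiling sits below the maximum achievable clock.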

So as I said, it is not so simple as clockspeed doesnt matter,.
At modern-day clocks it most certainly is, my friend. Outside of playing around with gaming and having the most uber rig and FPS, the focus is on core count, memory technology, and I/O. The big innovations happened in memory/storage. There's nothing to rant about 24/7 there, because it's (typically) plug-and-play hardware. Meanwhile, on CPUs: muh MHz... We're literally hitting the limits of physics on clocks, and people are still ranting about muh MHz, roasting alive for FPS prizes.

For your workload, you are willing to accept the compromise of lower clockspeed for more cores.
For my workload, I focus on what's sensible. Always have and always will in computing. I'm not paying double to Shintel for a clock-speed crown. I won't pay double to AMD either, which is why I didn't buy their X-series processors. It's just not sensible.
For my workload, I was previously on a quad core that clocks into the 4 GHz range, higher than both of my AMD processors even though it came out years before. Yet I'm on a lower-clocked setup, because clocks don't matter for what I do, or for anyone running a serious workload when you have cores to throw at it. A serious workload is what enterprise faces. They're in the 2-3 GHz range. Their focus is on power efficiency, storage technology, and in-memory software. Meanwhile, in the meme segment of the consumer market, we have people ranting about muh clock speeds. "We haz more serious workloads than Google." No, meme-level workflows/software and gaming drive such commentary. It's not impressing anyone who deals with serious enterprise hardware, software, and workflows.

BTW, Intel does make more than 4-core processors, you know, right? And of course, we go back to the time-immemorial AMD argument that the software is at fault for demanding high clockspeed. Still, the software "is what it is," so that is and always has been a lame excuse for AMD's low IPC (Vishera) and lack of clockspeed (Zen).
Yeah, I still have my 4-core Intel. I play goofy games on it from time to time.
And yeah, I know AMD made an 8-core that cost half as much as I paid for my 4-core Intel.
I bought one. Then AMD came out with a 16-core. I bought one of those too.
There's a 32-core Epyc, but I decided I'll hold off until the 64-core Rome to buy an enterprise rig. I'll likely sell off my AMD rigs and consolidate at that point.

Gaming... given the crappy state of the industry, I'll probably still be using my 4-core 4 GHz+ Intel for many more years, having a blast. Oh, and I have a NUC too, which I toss in my bookbag from time to time... a stellar product from Intel. I love the Skull NUCs, but Intel has lost their mind on pricing with them, so they're a quick write-off.

So, an interesting revelation. I own both intel and AMD machines. I was a long time Intel builder. And yet here we are....
AMD began kicking Intel's butt, and since I'm not a brand shill, I jumped ship to the best valued hardware platform and will do so whenever it happens, again and again and again.
 
Last edited:

ub4ty

Senior member
Jun 21, 2017
749
898
96
The blanket statement regarding B350/X370 was that VRMs are tied to the chipset. I already proved this isn't the case.

Regarding your second question, please specify the actual statement to which I need to respond. May I also remind you that it's the motherboard manufacturers themselves who are violating the spec by setting the PL2 state to 210W.
One would assume that a motherboard whose factory setting violates the formal specification would at least be able to run at that factory setting. Which motherboard manufacturer would violate the official specification on a board they designed themselves without providing VRMs suitable for their own default setting? This is too silly for words.

Also, you fail to understand the difference between the PL2 state, which I believe was set to 210W on Asus boards, and the actual full load, which is 160W all-core turbo. Changing the turbo settings doesn't count; that's a factory overclock beyond spec.
Cheap motherboard = cheap components.
You didn't prove or disprove anything.
X370-ITX != X370-ATX.
B350 = bargain basement motherboard.
You get what you pay for.
If you buy a Ferrari, don't expect stellar performance from the cheapest tires at Walmart. That was the point (clearly).
 
Last edited:
Aug 11, 2008
10,451
642
126
at the same price and power consumption...


https://en.wikipedia.org/wiki/False_dilemma
If I could ride a magical unicorn across the universe at light speed, I would too. Sadly I can't, and sadly you will never have such a scenario. So, back in reality: if you want higher clocks, you will have higher power consumption due to physics. The higher the clocks, the worse the power efficiency. So even though I could run my processors at 4 GHz and higher, 3.7 GHz is the ceiling for me before drastic inefficiencies. That's where I draw the line... at a sensible point before full-on idiocy. The focus at that point is on writing software and utilizing the hardware to its fullest... for a good number of years before sensibly upgrading. In no time my powerhouse is obsoleted by the new gen's mid tier, which is why I never go all-out on hardware specs or silly cooling solutions.
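The physics claim can be put in rough numbers. Dynamic CMOS power scales as P ≈ C·V²·f, and voltage has to rise with frequency near the top of the curve, so power grows much faster than clock speed. A toy sketch, where the effective capacitance and the V/f points are made-up illustrative values, not measurements of any real chip:

```python
def dynamic_power(c_eff_nf, voltage, freq_ghz):
    """Dynamic CMOS power P = C * V^2 * f, with C in nF and f in GHz -> watts."""
    return c_eff_nf * 1e-9 * voltage ** 2 * freq_ghz * 1e9

# Hypothetical V/f curve: voltage climbs as clocks approach the wall.
points = [(3.0, 1.00), (3.7, 1.15), (4.3, 1.35), (4.7, 1.45)]
base_f, base_v = points[0]
base_p = dynamic_power(5.0, base_v, base_f)

for f, v in points:
    p = dynamic_power(5.0, v, f)
    print(f"{f:.1f} GHz @ {v:.2f} V -> {p:5.1f} W "
          f"({p / base_p:.2f}x power for {f / base_f:.2f}x clock)")
```

With these made-up numbers, going from 3.0 GHz to 4.7 GHz (about 1.57x the clock) costs over 3x the dynamic power, which is exactly the kind of efficiency cliff being described.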


At modern day clocks it most certainly is, my friend. Outside of playing around with gaming and having the most uber rig and FPS, the focus is on core count, memory technology, and I/O. The big innovations happened in memory/storage. There's nothing to rant about 24/7 there because it's plug-and-play hardware (typically). Meanwhile, on CPUs: muh MHz... We're literally hitting the limits of physics on clocks and people are still ranting about muh MHz. Roasting alive for FPS prizes.


OK, whatever you say. Obviously, nobody's use case is valid except yours, and you are happy in your black and white universe where there are no shades of gray to confuse the issue.
 

Topweasel

Diamond Member
Oct 19, 2000
5,436
1,654
136
For someone pretending to know the facts please explain me why my Asrock AB350 M-ITX had completely identical VRM's to the X370 M-ITX? As others have stated VRM choice is made by the motherboard manufacturer and not depending on the chipset.

It's not worth it: an 8 core CPU, regardless of the actual board you choose, if it has a B350 chipset on it, is struggling or doesn't work. And if it does work, it will be dead in a couple of years. Doesn't matter what the actual VRM situation is or what the tolerances of the VRMs are. /s

What I will note and give to them is that most boards with that chipset probably are not expecting water cooling. An H80 or H100 is kind of the default cooler nowadays for guys looking for decent cooling for their CPUs. My guess is that with simple passive heatsinks, overworked VRMs can lead to power throttling if these boards aren't getting air pushed over them by the CPU cooler. This is where I agree with Ub4ty. I can get behind getting a 350/450 and a 1700 or 2700. I can even understand getting a 1700X or 2700X and using the OEM cooler or a 212+-class cooler. What I don't understand is someone getting a decently expensive CPU and a really expensive cooler and then saving $20 on the motherboard, the most important choice imho for a system. But this is something they do all the time.

But to your point: it's another reason why I don't like fear-mongering blanket statements. Many B350 boards have similar or better layouts than some X370 boards. That doesn't stop some of the B350s from being bad buys, or manufacturers designing them for the 65W CPUs and not the 90W CPUs. But as a whole, I don't think this problem, at least as sold by these two, really exists.
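A back-of-envelope calculation shows why phase count and MOSFET quality matter more than the chipset name on the box. At around 140W and 1.35V, an 8 core pulls on the order of 100A through the VRM, and the I²R conduction loss each phase must dissipate falls off quickly as phases are added. All component values here are illustrative guesses, not figures for any specific board:

```python
def phase_conduction_loss(cpu_watts, vcore, phases, r_on_mohm):
    """I^2 * R conduction loss per VRM phase in watts, ignoring switching
    and inductor losses. Inputs: package power (W), core voltage (V),
    phase count, and per-phase on-resistance (milliohms)."""
    total_current = cpu_watts / vcore      # e.g. 140 W / 1.35 V ~ 104 A
    per_phase = total_current / phases
    return per_phase ** 2 * (r_on_mohm / 1000.0)

# Hypothetical cheap 4-phase vs. better 6- and 8-phase arrangements.
for phases, r_on in [(4, 8.0), (6, 5.0), (8, 4.0)]:
    loss = phase_conduction_loss(140, 1.35, phases, r_on)
    print(f"{phases} phases @ {r_on} mOhm: {loss:4.1f} W dissipated per phase")
```

Roughly 5W per phase on the cheap 4-phase layout versus well under 1W per phase on the 8-phase one (switching losses ignored entirely) is the difference between heatsinks that need airflow and ones that barely get warm.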
 

DrMrLordX

Lifer
Apr 27, 2000
21,582
10,785
136
The blanket statement regarding B350/X370 was that VRM's are tied to the chipset. I already proved this isnt the case.

No, you didn't. All you proved is that people can skimp on X370 boards. A bad X370 board does not mean that B350s actually have good VRM configs. There might be some B350s out there that can handle one, and if you want a hint, I think @Markfw actually had an ASRock B350 that could handle a 2700X without thermal throttling. MAYBE. But he had a first-gen Ryzen at around 3.8-3.9 GHz on it, so I'm not so sure. Still, that's one board that might fit the bill.

List the other B350s that can handle a 2700x without overheating/throttling. I'm waiting.

Also, you fail to understand the difference between the PL2 state which was set to 210W I believe on Asus boards, and the actual full load which is 160W all core turbo. Changing the turbo settings doesnt count, that's a factory overclock beyond spec.

I think more than one board OEM set their default PL2 to 210W for reviewers. The reviewers had to dial it back to get within Intel spec (160W). Unless the end user is skilled enough to a) know about the problem in the first place and b) know how to change the setting, they're going to treat the PL2 setting as the "stock" performance for their chip. If all you run is defaults, the board is going to shoot for that power limit anyway. If it can't get there, you're leaving something on the table as far as the end user is concerned, right?

Is there even one Z370 board out there that can hit PL2 without the VRMs throttling it back? How about the 160W "all core turbo" power limit? I'll bet there's at least one that can do it.
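The PL1/PL2 distinction being argued over can be sketched as a toy model: the package may draw up to PL2 while a moving average of power stays under PL1, and once the average catches up, the chip falls back to the sustained limit. This is a simplification of Intel's actual weighted-window algorithm; the 210W PL2 is the figure from this thread, while the 95W PL1 and 28s tau are typical published values for this class of chip, used here purely for illustration:

```python
def simulate_turbo(pl1, pl2, tau, demand_watts, dt=1.0, seconds=60):
    """Return per-interval package power for a constant demand under a
    simplified PL1/PL2 scheme (exponential moving average stands in for
    Intel's weighted window)."""
    avg = 0.0   # moving average of package power; assume idle before the burst
    draw = []
    for _ in range(int(seconds / dt)):
        # Bursting to PL2 is allowed only while the average is under PL1.
        p = min(demand_watts, pl2 if avg < pl1 else pl1)
        avg += (dt / tau) * (p - avg)   # EMA update toward current draw
        draw.append(p)
    return draw

trace = simulate_turbo(pl1=95, pl2=210, tau=28, demand_watts=250)
```

The trace starts at the 210W burst limit and settles at 95W sustained once the average catches up. The point of the VRM argument is the start of that curve: a board whose VRMs can't actually deliver PL2 never reaches the burst it advertises as stock.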
 
Last edited: