Article "AMD CTO Mark Papermaster: More Cores Coming in the 'Era of a Slowed Moore's Law'" - @ Tom's


Markfw

Moderator Emeritus, Elite Member
May 16, 2002
25,139
14,157
136
That setup is quite comparable to the environment I have to work in. That's why I used a bring-your-own-PC program that IT supported a few years back to get a 4C 'workstation' laptop rather than the 2C ultrabooks that were being provided. The difference between them was as notable as you'd expect.

But that's also why I'd wholeheartedly disagree with the claim that 4 cores isn't adequate for non-productivity workloads. Even with all the corporate bloatware installed, the current 4C thin-and-light laptops have no issues running all the office applications, an e-mail client, and other non-productivity programs without a hitch. Sure, once you run a threaded workload more cores would help, but if that's a regular work task then it shouldn't be getting done on a 4C thin-and-light laptop in the first place.

Here's hoping that software continues to progress. But AMD's age-old "moar cores" marketing stance isn't going to magically make it happen. There are workloads which are threaded, and there are those which never will be within the current computing paradigm. Pushing the standard up to 4C is a good thing for consumers. Pushing it beyond that will do little more than raise prices.
One problem with the "bring your own PC" program in healthcare is that if there is a security breach on a personal computer (likely), then potentially MILLIONS of people's HIPAA data is compromised, and the lawsuit that ensues can cost the company millions. That's why personal PCs in the workplace quite often are not allowed, and for good reason. Also, that approved work PC is laden with software to keep breaches out, hence the requirement for more than 2 or even 4 cores.
 

coercitiv

Diamond Member
Jan 24, 2014
5,966
10,899
136
You are asking the wrong question.
Measure the OEE of your staff and machines and you will find the bottleneck.
OEE is not about when machines work, but about when they don't.
The post you replied to with the OEE argument already addressed idle power consumption. Here it is again, with the relevant parts bolded for clarity.
All these power numbers are greatly inflated by power-saving tech that is disabled or only partially enabled by default on all desktop platforms, both Intel and AMD.

A Skylake-based CPU - SKL, KBL, CFL - will only consume ~2W at idle when properly configured for low power consumption and running on the iGPU. That number can jump to 10W+ when sleep states are partially disabled (a common default config) and a dGPU is present. The simple presence of an enabled dGPU pushes CPU package power consumption up by a few watts. (I'm talking about the CPU; the dGPU's idle power consumption comes on top of that.)

We're getting bigger and bigger CPU TDPs because it's worth it, because multiple cores and the software to take advantage of that increases productivity in compute intensive environments. We have more choices than ever before to build productivity machines, from fully fledged workstations to NUCs the size of an open palm - 6C/12T CPUs with 25W TDP. Complaining about the status quo only highlights the tech bubble some of us live in.
We have computers that consume less power than they ever did, both idle and under load; we have computers that can compute more than ever (and more efficiently per joule than they ever did). All we have to do is buy the right tool for the job.
 

Atari2600

Golden Member
Nov 22, 2016
1,409
1,655
136
You are asking the wrong question.
Measure the OEE of your staff and machines and you will find the bottleneck.
OEE is not about when machines work, but about when they don't.

You are asking the wrong question.

OEE is not about when machines work, but about when they stop the staff working.

Say you have machine A using 400W at the wall in use and 100W idle, in use for 5 × 8-hour days per week and idle the remainder.
Overall weekly power consumed ~30 kWh.
Currently in the UK, electricity is around 15p per kWh. So that is ~£4.50 per week.


Then you have machine B using 200W at the wall in use and 0W idle. Same use conditions as above, but processing power is half that of machine A.
Overall weekly power consumed ~8 kWh. At the same electricity price as above, that is ~£1.20 per week.


Currently in the UK, the average wage is ~£28k/year, which is ~£540/week or ~£13.50/hr.

So if your staff spend 15 mins longer waiting on machine B to do something (relative to machine A) every week, then you are at a net loss: that quarter hour of wages (~£3.40) outweighs the ~£3.30 of electricity saved.
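
Back-of-envelope, the same sums as a quick Python sketch (figures from above; the exact kWh come out slightly below the rounded values):

Code:
# Sketch of the weekly cost comparison above
HOURS_IN_USE = 5 * 8             # hours per week in use
HOURS_IDLE = 168 - HOURS_IN_USE  # rest of the week idle
PRICE_PER_KWH = 0.15             # GBP, ~15p per kWh
WAGE_PER_HOUR = 13.5             # GBP, from the ~28k/year figure

def weekly_power_cost(active_w, idle_w):
    kwh = (active_w * HOURS_IN_USE + idle_w * HOURS_IDLE) / 1000
    return kwh * PRICE_PER_KWH

cost_a = weekly_power_cost(400, 100)   # ~4.32 GBP (rounds up to ~4.50 above)
cost_b = weekly_power_cost(200, 0)     # ~1.20 GBP
saving = cost_a - cost_b               # ~3.12 GBP per week

# Extra waiting per week that wipes out the electricity saving
break_even_minutes = saving / WAGE_PER_HOUR * 60   # ~14 minutes
print(f"Saving £{saving:.2f}/week; break-even wait ~{break_even_minutes:.0f} min/week")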




But anywayz, most workspaces are not limited by the computer. They are limited by the utterly insane interfaces that are forced upon users.

If any manager with more than 2 brain cells ever sat down and worked out their costs, every employee would have at least 2x Samsung 32R59C monitors on their desk. At a total cost of £600, a productivity improvement of just 2% (relative to the usual single 20" dross many offices have) would see those monitors paid for in around a year.
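
The monitor maths sketched the same way (2% is an assumed gain, valued against the ~£28k salary above):

Code:
SALARY_PER_YEAR = 28_000    # GBP, the UK average used above
MONITOR_COST = 600          # GBP for 2x Samsung 32R59C
PRODUCTIVITY_GAIN = 0.02    # assumed 2% improvement

value_per_year = SALARY_PER_YEAR * PRODUCTIVITY_GAIN   # ~560 GBP/year
payback_years = MONITOR_COST / value_per_year          # ~1.07 years
print(f"Monitors pay for themselves in ~{payback_years:.1f} years")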
 

TheGiant

Senior member
Jun 12, 2017
748
353
106
The post you replied to with the OEE argument already addressed idle power consumption. Here it is again, with the relevant parts bolded for clarity.

We have computers that consume less power than they ever did, both idle and under load; we have computers that can compute more than ever (and more efficiently per joule than they ever did). All we have to do is buy the right tool for the job.
It seems you don't understand OEE.
When a computer idles, it doesn't produce anything - it is unused, so OEE goes down.
A computer is not just a computer: it's also its place in the building, the desk, the lights, the electrical kit around it, the licenses... lots of expenses.

With the age of moar coarz, computers become industrial machines and will require an industrial level of resource planning, which is an advantage if it's used properly and can be more effective than local computers.
But employees don't see it that way - of course they get it, and if they don't, they bitch.
OEE (labour and computer) in content creation is one of the best parameters to measure.
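
For reference, the textbook OEE formula is availability × performance × quality; a rough sketch with made-up numbers shows how idle time alone drags it down:

Code:
# OEE = Availability x Performance x Quality (illustrative numbers only)
def oee(availability, performance, quality):
    return availability * performance * quality

availability = 4 / 8   # machine productive only 4 hours of an 8-hour shift
performance = 0.90     # runs at 90% of theoretical throughput when in use
quality = 0.95         # 95% of output usable on the first pass

print(f"OEE = {oee(availability, performance, quality):.0%}")   # ~43%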

You are asking the wrong question.

OEE is not about when machines work, but about when they stop the staff working.

Say you have machine A using 400W at the wall in use and 100W idle, in use for 5 × 8-hour days per week and idle the remainder.
Overall weekly power consumed ~30 kWh.
Currently in the UK, electricity is around 15p per kWh. So that is ~£4.50 per week.


Then you have machine B using 200W at the wall in use and 0W idle. Same use conditions as above, but processing power is half that of machine A.
Overall weekly power consumed ~8 kWh. At the same electricity price as above, that is ~£1.20 per week.


Currently in the UK, the average wage is ~£28k/year, which is ~£540/week or ~£13.50/hr.

So if your staff spend 15 mins longer waiting on machine B to do something (relative to machine A) every week, then you are at a net loss: that quarter hour of wages (~£3.40) outweighs the ~£3.30 of electricity saved.




But anywayz, most workspaces are not limited by the computer. They are limited by the utterly insane interfaces that are forced upon users.

If any manager with more than 2 brain cells ever sat down and worked out their costs, every employee would have at least 2x Samsung 32R59C monitors on their desk. At a total cost of £600, a productivity improvement of just 2% (relative to the usual single 20" dross many offices have) would see those monitors paid for in around a year.
This is exactly what I said: measure your labour and machine OEE and decide.
I was addressing the typical thinking that buying a new, faster machine means producing more results more cheaply - my guess is that's 95-99% wrong.
The biggest influence is First Pass Yield, especially in services or content creation.
Are people waiting for the machine because it is slow, or because first pass yield is low and we need a faster machine to redo errors and catastrophic task management faster?
But that requires people to look at themselves instead of always saying "because they didn't... insert something" or "it is slow".
Not one company I own started out with a real need for new machines rather than a need to fix bad or missing process and performance management.
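
To put illustrative numbers on that (mine, not measured anywhere): if every failed pass has to be redone, effective output is roughly raw speed × first pass yield, so a faster machine with a sloppy process can still lose to a slower one with a solid process:

Code:
# Effective good output = raw throughput x first pass yield (failed passes get redone)
def effective_rate(tasks_per_hour, first_pass_yield):
    return tasks_per_hour * first_pass_yield

fast_low_fpy = effective_rate(12, 0.60)    # fast machine, sloppy process: 7.2 good tasks/h
slow_high_fpy = effective_rate(8, 0.95)    # slower machine, solid process: 7.6 good tasks/h
print(fast_low_fpy, slow_high_fpy)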
 

Atari2600

Golden Member
Nov 22, 2016
1,409
1,655
136
My experience in engineering is that most toolsets and workflows are ad hoc and cobbled together.

Very few companies (i.e. none that I am aware of) devote enough resources to fixing their workflows. If they did, they would get far more done faster, cheaper and to better quality. But the bean counters are too quick to wail about the cost of developing proper tools aligned to processes - they aren't bright enough to realise the cost of not doing so.

I can see why it can't be perfectly optimal as no two problems are ever exactly the same - but that does not excuse the shambles that currently passes as processes and tools in many places.


But regardless of the workflow, it'll be a very small niche of computer-based work that won't benefit from a significant increase in interface width (i.e. going from a 20" 1280x1024 to 2x 32" 3840x2160).
 

rbk123

Senior member
Aug 22, 2006
743
345
136
But the bean counters are too quick to wail about the cost of developing proper tools aligned to processes - they aren't bright enough to realise the cost of not doing so.
Not that they aren't bright enough, it's more that it is very difficult to measure, so they fall back on the easy numbers. Same with the hidden costs of low-quality offshore developers: the balance sheet is easy to display, but the increased cost from poor-quality code and missed requirements is very difficult to measure. And lastly, the same for HR: the cost of turnover from rehiring and retraining replacements always exceeds the cost of just paying the employee more (for those who leave for salary reasons), but it is very difficult to measure.
 

Atari2600

Golden Member
Nov 22, 2016
1,409
1,655
136
Not that they aren't bright enough, it's more that it is very difficult to measure

It's very difficult for their little brains to measure. It's relatively simple for a non-bean counter to measure.


Workflow A is crap. It takes B minutes when it really should be done in C minutes. There are D people doing that E times per week.

Therefore, every year an amount F - roughly D × E × (B − C) minutes of wages per week, over 52 weeks - is wasted on unproductive time. That same F could be spent refining the tools and processes for workflow A.

Then a dev team is approached and asked to estimate the scope of modifying toolsets to change B to C.

Same as any big problem - break it down into chunks, scope the chunks and then at a point where they might be better amalgamated, do so.
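
As a sketch with invented figures (B, C, D, E are whatever your own measurements say; the wage rate reuses the ~£13.50/hr from earlier):

Code:
# Hypothetical numbers purely to show the shape of the calculation
B = 45                       # minutes workflow A actually takes
C = 30                       # minutes it should take with proper tooling
D = 20                       # people doing it
E = 5                        # times per person per week
WAGE_PER_MINUTE = 13.5 / 60  # GBP

wasted_minutes_per_week = D * E * (B - C)            # 1,500 min/week
F = wasted_minutes_per_week * WAGE_PER_MINUTE * 52   # ~17,550 GBP/year
print(f"F = ~£{F:,.0f} per year available to spend on fixing workflow A")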



As for offshore costs: it's not hard if they'd ever bother their holes to do a proper post-mortem on every project and then gather statistics on rework cost. But they never want to do that, as it'd show them up for the incompetent fools they are for suggesting the offshoring in the first place.
 

moinmoin

Diamond Member
Jun 1, 2017
4,731
7,249
136
But anywayz, most workspaces are not limited by the computer. They are limited by the utterly insane interfaces that are forced upon users.
My experience in engineering is that most toolsets and workflows are ad hoc and cobbled together.
Both spot on. As somebody working in IT, leading tech for a couple of startups, it repeatedly blows my mind how many simple workflow optimizations just fly over the responsible people's heads. It's such a constant uphill battle that nowadays I first focus on optimizing all of my own work for them before having the fun of hand-holding them through self-inflicted workflows that technically aren't even part of my job description.
 

amrnuke

Golden Member
Apr 24, 2019
1,181
1,772
136
It sounds like the IT systems I use are not so dissimilar to yours, yet not only does everything run fine on a 4-core Dell i5-6500 @ 3.2GHz, we even have a move to switch people over to Dell desktop-replacement laptops that dock into a setup at people's desks.
I'm very glad that you guys selected the right chip for your needs.

So did our IT department.

I don't see the need to continue arguing about this. I think the end result is that there ARE a lot of situations - perhaps millions of end users in corporate environments - where people would benefit from the use of more than 4 threads. And there are a lot where 4 threads is enough.
 

rbk123

Senior member
Aug 22, 2006
743
345
136
It's very difficult for their little brains to measure. It's relatively simple for a non-bean counter to measure.
No, it is very difficult to quantify and then translate that into dollars (for all of those cases) because it's so subjective and for many other reasons. It can be done to a certain extent, but it's much harder and riskier, so inevitably they choose the easy/safe route.
 

Atari2600

Golden Member
Nov 22, 2016
1,409
1,655
136
No, it is very difficult to quantify and then translate that into dollars (for all of those cases) because it's so subjective and for many other reasons. It can be done to a certain extent, but it's much harder and riskier, so inevitably they choose the easy/safe route.

Like anything complex, you start to attach error bars and confidence to it.

They can't even try because they don't know how.

Put an executive team of good engineers (with maybe a few scientists thrown in) in charge instead of accountants (or, especially, MBAs, who are uniquely pea-brained) and you might get something done.
 

rbk123

Senior member
Aug 22, 2006
743
345
136
Exactly. However, when you work in finance, assumptions and tolerances only work for budget planning, not for actuals. Engineers being in charge only happens in small firms. Corporations are too large and bureaucratic, and finance leads rather than follows, so they always go with black/white.