Speculation: Ryzen 4000 series/Zen 3

Page 78

moinmoin

Diamond Member
Jun 1, 2017
4,934
7,619
136
What will be interesting is if and when AMD are in a position to strike out and take on some of the special projects that Intel does and keeps failing at.

If AMD were able to succeed in an area outside of their core competency where Intel has failed or is failing.
The huge difference is that Intel acts like a major foundry that tries to apply its (formerly) leading foundry tech to enter different markets.

AMD acts more like ARM in that they try to push their CPU and GPU designs into different markets. And the good thing for AMD is that with semi-custom they already started doing exactly that, years before they recovered in their primary market with Zen.

But... what if he had a team that helped him do all that?

:laughing: :laughing:
Oh I'm sure there are more people who deserve credit. If you can name some I'm all ears. :smile:
I guess easy picks would be all the employees made Corporate Fellows over the years; does anybody happen to know if there's a complete list of them anywhere?
 

lobz

Platinum Member
Feb 10, 2017
2,057
2,856
136
The huge difference is that Intel acts like a major foundry that tries to apply its (formerly) leading foundry tech to enter different markets.

AMD acts more like ARM in that they try to push their CPU and GPU designs into different markets. And the good thing for AMD is that with semi-custom they already started doing exactly that, years before they recovered in their primary market with Zen.


Oh I'm sure there are more people who deserve credit. If you can name some I'm all ears. :smile:
I guess easy picks would be all the employees made Corporate Fellows over the years; does anybody happen to know if there's a complete list of them anywhere?
Oh it was obviously Jim Keller, who secretly organized a team around Mark Papermaster from outside of AMD, just to get them to hire him and the other team members of Keller's choice. Then one morning Papermaster looked in the mirror and said: "I'm not sure how or when it happened, but damn I assembled a pretty good team, didn't I? Too bad everyone will recognize Keller's genius exclusively..... but maybe, maybe some free thinkers realize how much harder it is to get the right people working together!" Little did he know. About all this. :laughing:
Same goes for Keller though. After he mistyped one more 0 at the end of a number in an Excel sheet and accidentally made Zen's L3 caches 10x bigger than originally designed by his careful planning and unparalleled microarchitectural expertise, randomly finding +52% IPC instead of just +6%, he immediately opened LinkedIn and put AMD and Zen back into his CV, which he had deleted the day before out of embarrassment.

I inherently know all the true stories, because I'm the ninja janitor in every company at the right time, and I'll write a book!
 

Richie Rich

Senior member
Jul 28, 2019
470
229
76
It's just dumb rockstar fallacy stuff. People like Jim Keller are managers of managers of managers.
This is a truly communistic opinion (all the hard work is done by the poor worker while the bloody capitalist steals all the money and credit) when somebody like you tries to devalue people like Jim Keller in such a terrible way. All of you who like these communist lies should be ashamed. Engineers who became vice-presidents are the most valuable people (Jobs, Musk etc). Another big thing is moral integrity: Keller left AMD when managers killed his projects. Simply, he didn't want to be part of a sinking company. And he was damn right once (K10 was bad, and BD worse). I don't give any credit to people like Mike Clark who were probably participating in such a terrible 2xALU Dozer uarch (actively supporting the company's destruction). There is a big difference between a merely smart engineer and a true leader. It's called capacity for existential flexibility: the ability to abandon and stop a normal project and put all resources into something new (like Jobs with the Macintosh and iPhone). When you play it safe you can go bankrupt very easily (Sculley brought Apple almost to bankruptcy, AMD with K10 lost the performance lead and the server market, Kodak played it safe into bankruptcy).
If AMD put all their resources into K12 they would probably be bankrupt right now.
We know you had no clue that the server market is 7x smaller than the smartphone market. You are the right economy expert to predict AMD's bankruptcy. No wonder you have no clue again. AMD has had huge debt for almost its whole history. Investors could have pushed AMD into bankruptcy after the BD failure very easily. But they didn't, because they wanted their money back, and the only way to achieve that was to make AMD great again (invest a lot of money into successful future products like Zen and its successor). There is no way that AMD would have gone into bankruptcy due to an ambitious K12 project.


The K12 is a kind of mystery but some estimations can be made:
  • Keller was pushing the K12 x86/ARM project because he knew Apple was going to develop a 6xALU high-IPC uarch (they might have done some simulations already while he was there).
  • He also knew the power efficiency of those chips and how it compares to x86, because there are not many guys who have worked on x86 and ARM at such a level.
  • Server multi-core CPUs work around 3 GHz (the 64c EPYC has a 2.25 GHz base clock) due to TDP limitations. He knew that a 6xALU power-efficient ARM core running at 3 GHz is a big threat to x86. K12, as a competitor to this, had to have 6xALUs too.
  • The two variants of K12, the x86 and ARM versions, would be faster at different code due to the ISA. The x86 version could probably achieve higher IPC at clocks >4 GHz thanks to more compressed instructions, while being more power hungry due to the decoder (CFD, FEM and other hard-to-thread apps). The ARM version would suit high-core-count, low-clocked servers well. They would be able to penetrate a much wider market and become the ARM server leader (in 2015 it looked silly because all those slow in-order Cortex cores were totally useless, but today, with Amazon's 64-core Graviton2 and the 80-core eMAG, all monolithic and based on Cortex A76 with IPC similar to Skylake... now it looks pretty scary for x86).
  • We know Zen3 is a completely new uarch whose development started at the same time as K12 (2014). Zen2 was also started in 2014, so it's very strange that AMD would start development of three architectures at the same time. It's highly probable that AMD can develop only two uarchs at once: one a simple evolution (Zen2), the other a completely new uarch based on technology brought from Apple (K12: 6xALU, unified INT and FPU ports). One year later K12 was canceled and Keller left AMD.

So Zen3 might be:
  1. the renamed K12 x86 version (with the ARM version canceled), so still very wide and powerful (6xALUs). Current leaks don't show this, so it's less probable. Also, canceling the ARM version in favor of x86 (to make it even more powerful) is a weak reason to leave a company.
  2. a very conservative re-design based on K12 technology (crippled like his original K8, which is why he left AMD again).
We'll see.
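The TDP argument in the third bullet can be put into rough numbers. The sketch below assumes dynamic power grows roughly with f³ (frequency times the square of the voltage needed to reach it); the power budget and per-core constant are illustrative assumptions, not AMD figures:

```python
# Within a fixed power budget, many slower cores can beat a few
# faster ones, because per-core power rises much faster than clock.
BUDGET_W = 225.0               # watts; roughly the 64c EPYC TDP class
POWER_PER_CORE_AT_1GHZ = 0.35  # watts; hypothetical core at 1 GHz

def cores_affordable(freq_ghz):
    """How many cores fit in the budget at a given clock."""
    per_core_w = POWER_PER_CORE_AT_1GHZ * freq_ghz ** 3
    return int(BUDGET_W // per_core_w)

for f in (2.25, 3.0, 4.5):
    n = cores_affordable(f)
    print(f"{f} GHz: ~{n} cores, ~{n * f:.0f} aggregate core-GHz")
```

Even with these made-up constants, the aggregate core-GHz at 2.25 GHz dwarfs what a 4.5 GHz part can fit in the same envelope, which is the pressure toward wide, moderately clocked server cores.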


You've already been warned this week about keeping this thread Ryzen centric. Your failure to follow the warning is going to earn you additional sanction.

AT Moderator ElFenix
 
Last edited by a moderator:

CHADBOGA

Platinum Member
Mar 31, 2009
2,135
832
136
This is a truly communistic opinion (all the hard work is done by the poor worker while the bloody capitalist steals all the money and credit) when somebody like you tries to devalue people like Jim Keller in such a terrible way. All of you who like these communist lies should be ashamed. Engineers who became vice-presidents are the most valuable people (Jobs, Musk etc). Another big thing is moral integrity: Keller left AMD when managers killed his projects. Simply, he didn't want to be part of a sinking company. And he was damn right once (K10 was bad, and BD worse). I don't give any credit to people like Mike Clark who were probably participating in such a terrible 2xALU Dozer uarch (actively supporting the company's destruction). There is a big difference between a merely smart engineer and a true leader. It's called capacity for existential flexibility: the ability to abandon and stop a normal project and put all resources into something new (like Jobs with the Macintosh and iPhone). When you play it safe you can go bankrupt very easily (Sculley brought Apple almost to bankruptcy, AMD with K10 lost the performance lead and the server market, Kodak played it safe into bankruptcy).
We know you had no clue that the server market is 7x smaller than the smartphone market. You are the right economy expert to predict AMD's bankruptcy. No wonder you have no clue again. AMD has had huge debt for almost its whole history. Investors could have pushed AMD into bankruptcy after the BD failure very easily. But they didn't, because they wanted their money back, and the only way to achieve that was to make AMD great again (invest a lot of money into successful future products like Zen and its successor). There is no way that AMD would have gone into bankruptcy due to an ambitious K12 project.


The K12 is a kind of mystery but some estimations can be made:
  • Keller was pushing the K12 x86/ARM project because he knew Apple was going to develop a 6xALU high-IPC uarch (they might have done some simulations already while he was there).
  • He also knew the power efficiency of those chips and how it compares to x86, because there are not many guys who have worked on x86 and ARM at such a level.
  • Server multi-core CPUs work around 3 GHz (the 64c EPYC has a 2.25 GHz base clock) due to TDP limitations. He knew that a 6xALU power-efficient ARM core running at 3 GHz is a big threat to x86. K12, as a competitor to this, had to have 6xALUs too.
  • The two variants of K12, the x86 and ARM versions, would be faster at different code due to the ISA. The x86 version could probably achieve higher IPC at clocks >4 GHz thanks to more compressed instructions, while being more power hungry due to the decoder (CFD, FEM and other hard-to-thread apps). The ARM version would suit high-core-count, low-clocked servers well. They would be able to penetrate a much wider market and become the ARM server leader (in 2015 it looked silly because all those slow in-order Cortex cores were totally useless, but today, with Amazon's 64-core Graviton2 and the 80-core eMAG, all monolithic and based on Cortex A76 with IPC similar to Skylake... now it looks pretty scary for x86).
  • We know Zen3 is a completely new uarch whose development started at the same time as K12 (2014). Zen2 was also started in 2014, so it's very strange that AMD would start development of three architectures at the same time. It's highly probable that AMD can develop only two uarchs at once: one a simple evolution (Zen2), the other a completely new uarch based on technology brought from Apple (K12: 6xALU, unified INT and FPU ports). One year later K12 was canceled and Keller left AMD.

So Zen3 might be:
  1. the renamed K12 x86 version (with the ARM version canceled), so still very wide and powerful (6xALUs). Current leaks don't show this, so it's less probable. Also, canceling the ARM version in favor of x86 (to make it even more powerful) is a weak reason to leave a company.
  2. a very conservative re-design based on K12 technology (crippled like his original K8, which is why he left AMD again).
We'll see.

Do you think the K12 had SMT4?
 

ElFenix

Elite Member
Super Moderator
Mar 20, 2000
102,414
8,356
126
Jim Keller is human. He is not Midas; everything he touches does not turn to gold. The guy is very talented, but he has been sensationalized to the point of being annoying. Besides, since he is now at Intel, shouldn't people be worrying about Intel instead? Well, they should be, since Intel is the only real competitor in x86. And why should AMD push ARM and give ARM more credibility? It is not in their best interests at this point, if ever.
Don't respond to posts that have gotten warnings for going off topic.

AT Moderator ElFenix



Do you think the K12 had SMT4?
Don't bait him.

AT Moderator ElFenix
 

Richie Rich

Senior member
Jul 28, 2019
470
229
76
Do you think the K12 had SMT4?
It's definitely on their technology roadmap. IMHO the original K12 had SMT4 (maybe that's where the rumor comes from). If Zen 3 is a crippled K12, then it depends how much it was reduced. Anyway, SMT4 doesn't cost many transistors (ca. 5%), so IMHO it's one of the features that might survive. I also consider SMT4 a conservative feature, effectively reducing IPC/thread and hence helping to deal with some bottlenecks (especially latency and speculative execution depth).
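The trade-off described in that post (lower IPC per thread, higher total throughput because stalls get hidden) can be sketched with a toy utilization model. The issue width and per-thread busy fraction below are illustrative assumptions, not AMD figures:

```python
# Toy SMT model: each thread independently has work to issue only a
# fraction of cycles (the rest it is stalled, e.g. on cache misses);
# with more threads, the core finds issuable work more often.

def core_throughput(threads, issue_width=6, busy_fraction=0.4):
    """Expected sustained IPC for the whole core."""
    # probability that at least one thread can issue this cycle
    utilization = 1 - (1 - busy_fraction) ** threads
    return issue_width * utilization

for t in (1, 2, 4):
    total = core_throughput(t)
    print(f"SMT{t}: core IPC ~{total:.2f}, per-thread ~{total / t:.2f}")
```

Total core IPC rises with thread count while per-thread IPC falls, which is exactly the "conservative feature" framing: SMT4 buys throughput by tolerating latency, not by making any single thread faster.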
 

lobz

Platinum Member
Feb 10, 2017
2,057
2,856
136
This is a truly communistic opinion (all the hard work is done by the poor worker while the bloody capitalist steals all the money and credit) when somebody like you tries to devalue people like Jim Keller in such a terrible way. All of you who like these communist lies should be ashamed.

You've already been warned this week about keeping this thread Ryzen centric. Your failure to follow the warning is going to earn you additional sanction.

AT Moderator ElFenix
Edit: never mind, I just realized that I replied to a post with an off-topic warning.
So I'm editing now to acknowledge you.

Zen 3 is an ARM design crippled into a scrawny x86 embarrassment. Now that is speculation at its best. Couldn't be more faithful to the title of the thread.
 
Last edited:
  • Like
Reactions: amd6502

Ajay

Lifer
Jan 8, 2001
15,332
7,792
136
I'm telling you guys, it's just easier to put him on ignore than to answer his magical thinking.
 

DisEnchantment

Golden Member
Mar 3, 2017
1,590
5,722
136
Very interesting patent: using the CPU's conventional cache to store micro-ops evicted from the micro-op cache.
With the massive L2/L3 on N7+/N5 this could bring some tangible gains.

METHOD AND APPARATUS FOR VIRTUALIZING THE MICRO-OP CACHE

Abstract
Systems, apparatuses, and methods for virtualizing a micro-operation cache are disclosed. A processor includes at least a micro-operation cache, a conventional cache subsystem, a decode unit, and control logic. The decode unit decodes instructions into micro-operations which are then stored in the micro-operation cache. The micro-operation cache has limited capacity for storing micro-operations. When new micro-operations are decoded from pending instructions, existing micro-operations are evicted from the micro-operation cache to make room for the new micro-operations. Rather than being discarded, micro-operations evicted from the micro-operation cache are stored in the conventional cache subsystem. This prevents the original instruction from having to be decoded again on subsequent executions. When the control logic determines that micro-operations for one or more fetched instructions are stored in either the micro-operation cache or the conventional cache subsystem, the control logic causes the decode unit to transition to a reduced-power state.
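The mechanism in the abstract can be played with in a minimal toy model (the class and parameter names below are my own invention; the real design is of course far more involved): a small LRU uop cache spills evicted entries into a stand-in for the conventional cache, so the decoder only has to wake up when an instruction misses in both.

```python
from collections import OrderedDict

class VirtualizedUopCache:
    """Toy model of the patent's idea: uops evicted from the small
    uop cache spill into a larger conventional cache instead of
    being discarded, keeping the decoder powered down on re-use."""

    def __init__(self, uop_capacity=8, l2_capacity=64):
        self.uop_cache = OrderedDict()   # small, fast, LRU order
        self.l2 = OrderedDict()          # stands in for the cache subsystem
        self.uop_capacity = uop_capacity
        self.l2_capacity = l2_capacity
        self.decode_count = 0            # times the decoder had to wake up

    def fetch(self, pc):
        if pc in self.uop_cache:         # uop cache hit: decoder stays idle
            self.uop_cache.move_to_end(pc)
            return self.uop_cache[pc]
        if pc in self.l2:                # hit on previously evicted uops
            uops = self.l2.pop(pc)       # promote back into the uop cache
        else:
            self.decode_count += 1       # miss everywhere: decode again
            uops = f"uops({pc})"
        if len(self.uop_cache) >= self.uop_capacity:
            old_pc, old_uops = self.uop_cache.popitem(last=False)
            self.l2[old_pc] = old_uops   # spill the eviction, don't drop it
            if len(self.l2) > self.l2_capacity:
                self.l2.popitem(last=False)
        self.uop_cache[pc] = uops
        return uops

# A loop twice the size of the uop cache: without spilling, every
# iteration would re-decode; with spilling, only the first pass does.
c = VirtualizedUopCache(uop_capacity=8)
for _ in range(10):
    for pc in range(16):
        c.fetch(pc)
print(c.decode_count)
```

In this toy run the decoder fires 16 times total instead of 160, which is the power-saving angle the abstract emphasizes; whether it also wins on latency depends on how fast the conventional cache path is, as discussed below.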
 

Olikan

Platinum Member
Sep 23, 2011
2,023
275
126
Very interesting patent: using the CPU's conventional cache to store micro-ops evicted from the micro-op cache.
With the massive L2/L3 on N7+/N5 this could bring some tangible gains.

Wow... the Zen2 uop cache is already the biggest around, and it isn't enough for AMD.

The latencies of normal caches just seem too high for this patent to work well... Will AMD split its L2 cache into I/D? Or will the L1i cache go back to 64KB? Or both?
 

moinmoin

Diamond Member
Jun 1, 2017
4,934
7,619
136
Wow... the Zen2 uop cache is already the biggest around, and it isn't enough for AMD.

The latencies of normal caches just seem too high for this patent to work well... Will AMD split its L2 cache into I/D? Or will the L1i cache go back to 64KB? Or both?
The difference in latency between decoding code to uops and just fetching already decoded uops from L2$ will be interesting indeed. The patent makes it seem more like a power saving measure though, mentioning "the control logic causes the decode unit to transition to a reduced-power state".

Personally I wonder if replacing all occurrences of code in memory with their decoded uop counterparts would lead to a measurable performance gain. That way any code that has run at least once (or been preloaded) would not need to be decoded again, regardless of the uop cache size.

In any case this is an interesting development since the fewer times the decoder is actually fired up the closer an x86 chip can theoretically get to ARM-like efficiency.
 

DisEnchantment

Golden Member
Mar 3, 2017
1,590
5,722
136
A member of the technical staff at AMD posted an MR for family 19h Milan/Zen3. (See LinkedIn profile: https://www.linkedin.com/in/yazenghannam)



Changes
  • No functional changes; mainly reuse of the existing Family 17h functions
  • Machine type and memory subsystem for the new CPU
  • New PCI IDs
  • Addition of a new Load Store unit bank type. Could this mean an updated load/store subsystem?


[AMD Official Use Only - Internal Distribution Only]

> -----Original Message-----
> From: linux-edac-owner@xxxxxxxxxxxxxxx <linux-edac-owner@xxxxxxxxxxxxxxx> On Behalf Of Borislav Petkov
> Sent: Thursday, January 16, 2020 10:51 AM
> To: Ghannam, Yazen <Yazen.Ghannam@xxxxxxx>
> Cc: linux-edac@xxxxxxxxxxxxxxx; linux-kernel@xxxxxxxxxxxxxxx; bp@xxxxxxx; tony.luck@xxxxxxxxx; x86@xxxxxxxxxx
> Subject: Re: [PATCH 1/5] x86/MCE/AMD, EDAC/mce_amd: Add new Load Store unit McaType
>
> On Fri, Jan 10, 2020 at 01:56:47AM +0000, Yazen Ghannam wrote:
> > From: Yazen Ghannam <yazen.ghannam@xxxxxxx>
> >
> > Future SMCA systems may see a new version of the Load Store unit bank
> ^^^^^^^^
>
> Yah, you've been hanging around with hw people too much. I can just as
> well reply: "well, I'll apply the patch when I see future SMCA systems"
> :)
>

Yes, you "may" be right. :)

> All I'm saying is, forget those "may" formulations when it comes to
> kernel patches. :)
>

Seriously though, I'll work on it. Thanks!

-Yazen
Very nice.

The next step is to look out for LLVM changes, but I bet that is going to be a bit farther out, since they will not want to disclose the latencies, costs etc. for the various operations until the last moment.

Also from @KOMACHI_ENSAKA
 
Last edited:

DisEnchantment

Golden Member
Mar 3, 2017
1,590
5,722
136
For info,
GCC patches for Zen2 appeared 7 months before the product launched.

Zen2 FPU changes were also visible in this line from that patch.

Regarding GCC and LLVM I think AMD might be withholding stuff right now. But with regard to the kernel, the MCA and EDAC patches are being posted early.
I believe this was the cause of the MCE in Linux 5.3 when Phoronix was reviewing Rome last year.
Good thing they are adding it early now.
 

lobz

Platinum Member
Feb 10, 2017
2,057
2,856
136
Wow... the Zen2 uop cache is already the biggest around, and it isn't enough for AMD.

The latencies of normal caches just seem too high for this patent to work well... Will AMD split its L2 cache into I/D? Or will the L1i cache go back to 64KB? Or both?
I know it's OT, but reading your signature has probably been the most eye-opening experience I had in the past months.
 
Last edited:
  • Haha
Reactions: Saylick

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
The benefits of the uop cache are threefold.

1. You save power because you don't need to have the fetch and decode logic active. Since CPUs are power constrained nowadays, you can generally translate that into more performance.
2. Performance can be increased because a hit means you get to skip the extra stages needed for decode. This was the idea behind the Trace Cache in Pentium 4: it was 20 stages on a Trace Cache hit, and 20-something otherwise.
3. The uop cache's throughput can be greater than the decoder's. Typically it's about 6-wide, while decode throughput is less.

For Sandy Bridge they were talking about 14-19 stages: 19 regularly, and 14 with a uop cache hit. Now the gains aren't as straightforward as just reducing the number of pipeline stages, because the size is still quite small. The 1.5k-uop cache in SNB is roughly equivalent to a 6KB I-cache.

Personally I wonder if replacing all occurrences of code in memory with the decoded uops counterpart would lead to a measurable performance gain.

Not that straightforward, because decoded instructions are larger than non-decoded ones. They don't say it's a 1.5KB uop cache; they say it holds 1.5k instructions.

AMD said they had to reduce the L1 I-cache size to make up for the increased uop cache. That should give you a rough idea.

Potentially faster and lower power, but at a much reduced capacity. It's always about trade-offs.
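Point 2 above is easy to quantify with the Sandy Bridge figures quoted in the same post (14 front-end stages on a uop cache hit, 19 otherwise). The hit rates below are illustrative assumptions, just to show how the average moves:

```python
# Expected front-end depth as a function of uop cache hit rate,
# using the Sandy Bridge stage counts quoted above.
HIT_STAGES, MISS_STAGES = 14, 19

def expected_depth(hit_rate):
    """Average pipeline depth seen by fetched instructions."""
    return hit_rate * HIT_STAGES + (1 - hit_rate) * MISS_STAGES

for rate in (0.0, 0.5, 0.8, 1.0):
    print(f"hit rate {rate:.0%}: {expected_depth(rate):.1f} stages")
```

The averaged depth only matters on branch mispredicts (that is when the pipeline refills from scratch), which is why the gains are not as straightforward as the raw stage counts suggest.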
 

maddie

Diamond Member
Jul 18, 2010
4,723
4,628
136
The benefits of the uop cache are threefold.

1. You save power because you don't need to have the fetch and decode logic active. Since CPUs are power constrained nowadays, you can generally translate that into more performance.
2. Performance can be increased because a hit means you get to skip the extra stages needed for decode. This was the idea behind the Trace Cache in Pentium 4: it was 20 stages on a Trace Cache hit, and 20-something otherwise.
3. The uop cache's throughput can be greater than the decoder's. Typically it's about 6-wide, while decode throughput is less.

For Sandy Bridge they were talking about 14-19 stages: 19 regularly, and 14 with a uop cache hit. Now the gains aren't as straightforward as just reducing the number of pipeline stages, because the size is still quite small. The 1.5k-uop cache in SNB is roughly equivalent to a 6KB I-cache.



Not that straightforward, because decoded instructions are larger than non-decoded ones. They don't say it's a 1.5KB uop cache; they say it holds 1.5k instructions.

AMD said they had to reduce the L1 I-cache size to make up for the increased uop cache. That should give you a rough idea.

Potentially faster and lower power, but at a much reduced capacity. It's always about trade-offs.
The patent says cache subsystem. Seeing that data uses L1, L2 & L3 anyhow, would storing the evicted uops really mean such a big reduction in throughput?

Another question: how much space would be needed for a most-used-instructions uop store? In other words, could we have a design where the decoder is hardly ever used after some amount of run time? Any idea of the potential power savings?
 

moinmoin

Diamond Member
Jun 1, 2017
4,934
7,619
136
What makes everything even more fuzzy is that there are several representations. Instructions are first translated into one or two macro-ops, which are a fixed-length, uniform representation of the instructions fitted to the backend (i.e. in Zen 1 AVX256 instructions resulted in two macro-ops, in Zen 2 in one). Only these are then translated into micro-ops, which are then dispatched. On top of that, at all three stages ((macro-)op cache, micro-op queue and dispatch unit), multiple instructions or branches can be fused under specific conditions, again reducing the size and cycles needed.

See Wikichip on Zen 2 for more.
Shorter discussion on uop-fusion in Zen 1.

I'd imagine especially for fused ops at the different stages there's a lot of room for optimizing for common use cases (like more extensive fusing based on branch prediction for simpler loops etc.) without us necessarily ever getting to know them as these are purely internal representations that the world technically never needs to know about (and much already isn't known).
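The multi-stage translation described above can be sketched with a toy model. The fusion rule here mirrors the publicly documented flag-setting-op + conditional-branch fusion; the actual internal formats are proprietary, so this is purely illustrative:

```python
# Toy front-end pass: x86 instructions -> macro-ops, fusing a
# cmp/test with the conditional jump that follows it, so the pair
# occupies one slot in the (macro-)op cache and issues together.

def to_macro_ops(instructions):
    macro_ops, i = [], 0
    while i < len(instructions):
        cur = instructions[i]
        nxt = instructions[i + 1] if i + 1 < len(instructions) else None
        # fuse a flag-setting op with the following conditional branch
        if cur.split()[0] in ("cmp", "test") and nxt and nxt.startswith("j"):
            macro_ops.append(f"{cur} + {nxt}")
            i += 2
        else:
            macro_ops.append(cur)
            i += 1
    return macro_ops

loop = ["add eax, 1", "cmp eax, 10", "jne loop"]
print(to_macro_ops(loop))   # the cmp/jne pair collapses into one op
```

Three instructions become two macro-ops, which is exactly the kind of internal-only optimization that never has to be disclosed: the ISA-visible behavior is unchanged.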

The patent says cache subsystem. Seeing that data uses L1, L2 & L3 anyhow, would storing the evicted uops really mean such a big reduction in throughput?
Especially considering the ridiculous size of the L3$, I think the decisive factor for feasibility is latency. Both L1$ and L2$ are much faster, but also respectively smaller, so there is the trade-off mentioned.

What I'd like to see is a more global branch predictor that can detect commonly used branches/loops that are then kept and replayed in a simplified, optimized uop representation.
 

itsmydamnation

Platinum Member
Feb 6, 2011
2,744
3,078
136
AMD said they had to reduce the L1 I-cache size to make up for the increased uop cache. That should give you a rough idea.

Potentially faster and lower power, but at a much reduced capacity. It's always about trade-offs.
I think this was likely floor-plan limited, as we have now learnt from AMD/Forrest that Zen2 was more of a tick than a tock. It will be interesting to see what size the L1i is in Zen3.
 

Zstream

Diamond Member
Oct 24, 2005
3,396
277
136
This just means to me that Zen 3 will be even faster at multi-threaded apps. When uops are commonly evicted, it's simpler in terms of power to fetch them back than to decode them through the pipeline again.

This is one benefit that will be greatly improved as the years go by. It's what I believe Toshiba has announced in its fight to get closer to quantum computing: simply store a bunch of pre-calculated math and pull it down.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
I think this was likely floor-plan limited, as we have now learnt from AMD/Forrest that Zen2 was more of a tick than a tock. It will be interesting to see what size the L1i is in Zen3.

There's no such distinction nowadays; it's become very blurry. It's become necessary to significantly modify all parts of the chip, including the uarch, whether it's a new process or not. Neither Intel nor AMD has the choice to just do a pure shrink now. They'd just fall behind.

Intel's Tick/Tock, which once sounded like a genius approach, seems like child's play compared to how fast the ARM vendors are moving. One of their guys said concept to production took 3 years!

They are just doing... better.
 

cytg111

Lifer
Mar 17, 2008
23,049
12,719
136
It's just dumb rockstar fallacy stuff. People like Jim Keller are managers of managers of managers. Their job is to set the focus, drive the people/program processes, and manage the high-level targets. Even people like Mike Clark have teams of highly skilled architects doing the work and coming up with the ideas. This one-person stuff is so juvenile you can tell which people have never worked on big technology programs/projects.

Upper layers of management (people/technical/etc) have a massive effect on how effectively teams deliver products to specification, but they don't do the work or come up with the ideas; they have far too much to manage to focus on the millions and millions of specifics. People at that level who involve themselves in specifics (outside of specific issue resolution/decisions) are bad managers, and I think we can see from Jim's illustrious record over many high-level, high-importance roles that he is not a bad manager! Keeping everything moving in the same direction on the same timescale, making the choices to add or cull, triaging issues and resources: that stuff, over millions of man-hours of effort, is the really hard part.

The other dumb thing about @Richie Rich's statements is that the only really new thing in the Zen core at a functional-block level is the uop cache, which Apple didn't do; everything else has a very clear AMD legacy. Just look at the FPU and how FMA4 works, how the SMU is a straight evolution of Excavator's, the ALU latencies, the AGUs working like they always did, the separate FPU, etc. What changed was the target and focus.
As a software architect driving semi-large projects, I don't have a single developer on my team whose role I couldn't fill myself. Having a tech management role requires you to be bleeding edge yourself.
 
  • Like
Reactions: Richie Rich

itsmydamnation

Platinum Member
Feb 6, 2011
2,744
3,078
136
As a software architect driving semi-large projects, I don't have a single developer on my team whose role I couldn't fill myself. Having a tech management role requires you to be bleeding edge yourself.
Being an infrastructure/network/gateway architect, I'm going to beg to disagree :) (based on experience with developers, of course :) ). I never said he couldn't do those jobs, but his job isn't to do those jobs, and if he fiddles with things at that level because of his preferences it will end in pain; just like with Dirk, you end up with Bulldozer.
 

Veradun

Senior member
Jul 29, 2016
564
780
136
Being an infrastructure/network/gateway architect, I'm going to beg to disagree :) (based on experience with developers, of course :) ). I never said he couldn't do those jobs, but his job isn't to do those jobs, and if he fiddles with things at that level because of his preferences it will end in pain; just like with Dirk, you end up with Bulldozer.
Agree, it's called micromanagement and it's inherently bad.
 
  • Like
Reactions: DisEnchantment

cytg111

Lifer
Mar 17, 2008
23,049
12,719
136
Being an infrastructure/network/gateway architect, I'm going to beg to disagree :) (based on experience with developers, of course :) ). I never said he couldn't do those jobs, but his job isn't to do those jobs, and if he fiddles with things at that level because of his preferences it will end in pain; just like with Dirk, you end up with Bulldozer.
There is no substitute for brains. If you don't listen you won't hear the good ideas. That being said, if I were to adhere to every "good" suggestion we'd be rocking 30 different languages and a multitude of other semi-custom procedures, each a single point of failure in itself.
Hardware may be different. I'm in over my head; I'm out :)
 

moinmoin

Diamond Member
Jun 1, 2017
4,934
7,619
136
I think we all agree it's always a good thing when higher-ups are familiar with the matters they are supposed to manage and decide about. :wink:

In that regard it's interesting how much AMD is focused on recruiting and promoting senior staff with a background in science and engineering:
  • Nazar Zaidi (LinkedIn): Senior Vice President of Cores, Server SoC and Systems IP Engineering; Ph.D. in electrical engineering from the University of Texas at Austin
  • Andrej Zdravkovic (LinkedIn): Senior Vice President of Software Development; Bachelor and Master of Science degrees in electrical engineering
  • Spencer Pan: Senior Vice President of Greater China Sales and President of AMD Greater China; Bachelor and Master degrees in electronic engineering
  • Jane Roney (LinkedIn): Senior Vice President of Business Operations; Bachelor of Science in physics and math
  • Daniel (Dan) McNamara (LinkedIn): Senior Vice President and General Manager, Server Business Unit; Bachelor and Master of Science degrees in electrical engineering
  • Joshua Friedrich (LinkedIn): Corporate Vice President; Bachelor of Electrical and Electronics Engineering

Table from planet3dnow.de