Article [SA] "Samsung kills off a major silicon design program" ... RIP Mongoose?


NTMBK

Lifer
Nov 14, 2011
10,232
5,012
136

Story is behind a paywall, but speculation and rumours on Twitter suggest that it might be the Austin CPU team that has been shut down. If true, sad to see a CPU architecture bite the dust. NVidia's CPU seems to have gone embedded only, Samsung's is dead, Qualcomm are just tweaking ARM designs instead of making their own... are Apple the only ones left making a fully custom CPU in phones?
 

soresu

Platinum Member
Dec 19, 2014
2,656
1,857
136
It's a fact. How else would Valve be able to make their ACO shader compiler compatible with several AMD architectures? Valve would be one of the last places I'd expect to see specialists in shader compilers ...
You have the wrong end of the stick here - Valve is dedicated to improving the Linux/*nix gaming scene, so they have hired driver developers, some from the existing Mesa open source community, though I wouldn't be surprised if they had branched out further.

Valve are far from the company that released HL2: Episode 2 and Portal 2 so long ago - at this point it's barely permissible to call them a games developer.

They have become a platform provider, which means investing in hardware (Vive), and software (Steam OS, ACO, DXVK/Proton/Wine).

As to what ACO is compatible with, I suggest you dig a little deeper.

For now it only works with the RADV Vulkan driver, not AMDVLK Vulkan nor RadeonSI OpenGL - and it only works fully with Polaris and Vega (though V2 compatibility is not stated, to my knowledge).

Navi support is only partial at the moment, though it is obviously a priority now that ACO is committed to Mesa master.

As to how Valve could do it: again, they are using established driver devs, and AMD tend to document their GPUs' uArch and ISA more openly than nVidia, though I could be wrong on the documentation part.
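For anyone who wants to poke at it themselves - a minimal sketch (mine, not anything from Valve or the Mesa docs beyond the env var itself) of opting in to ACO on a Mesa 19.3-era system with RADV. RADV reads RADV_PERFTEST when the Vulkan instance/device is created, so the variable has to be set before Vulkan is initialised:

```c
/* Minimal sketch (illustrative): opting in to the experimental ACO
 * backend on a Mesa/RADV system of this era. */
#include <stdio.h>
#include <stdlib.h>
#include <vulkan/vulkan.h>

int main(void) {
    /* "aco" was the opt-in switch while ACO was still experimental. */
    setenv("RADV_PERFTEST", "aco", 1);

    VkApplicationInfo app = {
        .sType = VK_STRUCTURE_TYPE_APPLICATION_INFO,
        .apiVersion = VK_API_VERSION_1_1,
    };
    VkInstanceCreateInfo ci = {
        .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
        .pApplicationInfo = &app,
    };
    VkInstance inst;
    if (vkCreateInstance(&ci, NULL, &inst) != VK_SUCCESS) {
        fprintf(stderr, "no Vulkan instance\n");
        return 1;
    }
    /* ...create a device and pipelines here; the compiler choice shows
     * up when RADV compiles shaders for that device... */
    vkDestroyInstance(inst, NULL);
    return 0;
}
```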
 

soresu

Platinum Member
Dec 19, 2014
2,656
1,857
136
That being said, it's not a bad idea for AMD to go with a maintenance-friendly strategy, and it could potentially pay off if the industry decides to one day converge on AMD's architecture ...
AMD has already gone for a degree of similarity when designing RDNA - it can consume code written for GCN without a drastic amount of effort, which presumably helped them keep the next gen Sony and MS contracts, as backwards compatibility should be a doddle for the previous gen consoles at least (and given emulators likely already exist for PS4 and XB1 code, they will run even better on the next gen).
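To make the compatibility angle concrete, one visible piece of it is wave size - a sketch of how you'd observe it (assumes a Vulkan 1.1 driver; the `phys` handle comes from the usual vkEnumeratePhysicalDevices dance, and the GCN/RDNA comments are my own gloss):

```c
/* Sketch: query the subgroup (wave) width the driver reports. */
#include <stdio.h>
#include <vulkan/vulkan.h>

void print_subgroup_width(VkPhysicalDevice phys) {
    VkPhysicalDeviceSubgroupProperties sub = {
        .sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_SUBGROUP_PROPERTIES,
    };
    VkPhysicalDeviceProperties2 props = {
        .sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_PROPERTIES_2,
        .pNext = &sub,
    };
    vkGetPhysicalDeviceProperties2(phys, &props);
    /* GCN parts report 64 here; RDNA hardware can execute in wave32 or
     * wave64, which is part of why GCN-targeted shaders still map onto it. */
    printf("subgroup size: %u\n", sub.subgroupSize);
}
```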
 

ThatBuzzkiller

Golden Member
Nov 14, 2014
1,120
260
136

Umm, yes? Have you even looked at examples of VLIW code? Practically all of them are capable of multi-issuing instructions ...

You have the wrong end of the stick here - Valve is dedicated to improving the Linux/*nix gaming scene, so they have hired driver developers, some from the existing Mesa open source community, though I wouldn't be surprised if they had branched out further.

Valve are far from the company that released HL2: Episode 2 and Portal 2 so long ago - at this point it's barely permissible to call them a games developer.

They have become a platform provider, which means investing in hardware (Vive), and software (Steam OS, ACO, DXVK/Proton/Wine).

As to what ACO is compatible with, I suggest you dig a little deeper.

For now it only works with the RADV Vulkan driver, not AMDVLK Vulkan nor RadeonSI OpenGL - and it only works fully with Polaris and Vega (though V2 compatibility is not stated, to my knowledge).

Navi support is only partial at the moment, though it is obviously a priority now that ACO is committed to Mesa master.

As to how Valve could do it: again, they are using established driver devs, and AMD tend to document their GPUs' uArch and ISA more openly than nVidia, though I could be wrong on the documentation part.

Not really, because how else can a company with fewer than 500 employees afford to create an entirely new shader compiler?

As for ACO, nobody really cares about driver stack compatibility, since AMD's proprietary shader compiler is likely superior to the community project. ACO has very good hardware compatibility, which is arguably the bigger challenge, but the fact that they were able to overcome it that easily means that AMD is nowhere near as aggressive as Nvidia in terms of breaking binary compatibility. And then there's the fact that AMD's architecture doesn't have anywhere near as complex scheduling rules as Nvidia's, so compiler designers for AMD have it very easy compared to the Nvidia folks ...

If you take a look at AMD's open source LLVM shader compiler, very little code actually changes from generation to generation, which seems to suggest that GCN generations are largely binary compatible with each other ...

Documentation isn't the only part that's important, since Intel has better open source documentation than AMD does, but you don't see anybody producing a 3rd party shader compiler for Intel GPUs now, do you?

AMD has already gone for a degree of similarity when designing RDNA - it can consume code written for GCN without a drastic amount of effort, which presumably helped them keep the next gen Sony and MS contracts, as backwards compatibility should be a doddle for the previous gen consoles at least (and given emulators likely already exist for PS4 and XB1 code, they will run even better on the next gen).

It's mostly because AMD is trying to get the industry to converge on their architecture so that they have very little driver maintenance! Why else does AMD badly want to go low level? It's advantageous to them if developers keep relying on AMD-specific HW behaviour, since it'll be a performance win for them without AMD having to do much work at all to make the software hit the fast paths in their drivers ...

AMD's thought process is like this: why should they do driver work when they can act as if they have the x86 equivalent of GPUs? (DX12/Vulkan are starting to become fashionable, so it's only a matter of time before it becomes fashionable to use intermediate representations or shader extensions that closely match the RDNA ISA.)
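For what it's worth, Vulkan already works on the IR model: the app hands the driver portable SPIR-V, and the driver's own compiler (ACO, AMD's proprietary one, whatever) lowers it to the native ISA, typically when the pipeline is built. A minimal sketch, assuming a created VkDevice `dev` and a `shader.spv` produced by any front end (glslang, dxc, ...):

```c
/* Sketch: hand the driver a SPIR-V blob; ISA codegen happens inside
 * the driver, usually at pipeline creation rather than here. */
#include <stdio.h>
#include <stdlib.h>
#include <vulkan/vulkan.h>

VkShaderModule load_spirv(VkDevice dev, const char *path) {
    FILE *f = fopen(path, "rb");
    if (!f) return VK_NULL_HANDLE;
    fseek(f, 0, SEEK_END);
    long size = ftell(f);
    fseek(f, 0, SEEK_SET);

    uint32_t *words = malloc((size_t)size);
    if (!words || fread(words, 1, (size_t)size, f) != (size_t)size) {
        fclose(f);
        free(words);
        return VK_NULL_HANDLE;
    }
    fclose(f);

    VkShaderModuleCreateInfo ci = {
        .sType = VK_STRUCTURE_TYPE_SHADER_MODULE_CREATE_INFO,
        .codeSize = (size_t)size,   /* size in bytes */
        .pCode = words,             /* portable SPIR-V words */
    };
    VkShaderModule mod = VK_NULL_HANDLE;
    vkCreateShaderModule(dev, &ci, NULL, &mod);
    free(words);
    return mod;
}
```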
 

ThatBuzzkiller

Golden Member
Nov 14, 2014
1,120
260
136
Instruction bundling isn't unique to VLIW, nor is nV's execution model anything like it.
Please stop.

Prove otherwise or show evidence to the contrary ...

The capability to multi-issue instructions per bundle is EXCLUSIVE to VLIW architectures ...

Nvidia's most popular GPUs are VLIW by the nature of their SASS, deal with it ...
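For the spectators, a toy model (mine, purely illustrative, no real ISA) of what "multi-issue per bundle" means in a VLIW design: the compiler packs independent ops into fixed slots, and every slot in a bundle issues in the same cycle, with the hardware checking nothing. Whether nVidia's SASS control words make its GPUs "VLIW" is exactly what's being argued here - superscalar hardware reaches the same multi-issue result by discovering independence at runtime instead.

```c
/* Toy VLIW model: one op per functional-unit slot, all slots in a
 * bundle issue in the same cycle; independence is the compiler's job. */
#include <stdio.h>

typedef struct {
    int alu_op;  /* e.g. add; 0 means the slot is a no-op */
    int mul_op;  /* e.g. multiply */
    int mem_op;  /* e.g. load */
} Bundle;

int main(void) {
    Bundle program[2] = {
        { 1, 1, 1 },  /* cycle 0: ALU + MUL + LOAD issue together */
        { 1, 0, 0 },  /* cycle 1: only the ALU slot is used */
    };
    for (int cycle = 0; cycle < 2; ++cycle) {
        Bundle b = program[cycle];
        int issued = (b.alu_op != 0) + (b.mul_op != 0) + (b.mem_op != 0);
        printf("cycle %d: %d op(s) issued from one bundle\n", cycle, issued);
    }
    return 0;
}
```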
 

soresu

Platinum Member
Dec 19, 2014
2,656
1,857
136
Not really, because how else can a company with fewer than 500 employees afford to create an entirely new shader compiler?
By hiring established driver developers (including the RADV head dev) - I literally just went over this.

As for the afford part, you do realise they own Steam, right?

They made $4.3 billion in 2017.

Pause and think about that.

Their first party games are basically an afterthought now - their revenue stream is 100% Steam.
Documentation isn't the only part that's important, since Intel has better open source documentation than AMD does, but you don't see anybody producing a 3rd party shader compiler for Intel GPUs now, do you?
Intel is doing pretty well on their own over the last few years - they literally just announced another GPU compiler project of their own called IBC, and they have plenty of other work coming forward as part of their OneAPI efforts.

Unlike AMD, Intel has pleeeeenty of money to afford driver dev efforts, so while I imagine that Valve is certainly in contact with them, they aren't as concerned - not to mention there is less wasted potential with current Intel iGPUs compared to AMD discrete cards.

The hiring of Raja Koduri will ensure that Intel drivers are well addressed - he did so well at AMD in that department with such a meager budget that he can't fail given Intel's deeper pockets. I wouldn't even be surprised to find him in communication with the people at Valve about ongoing work.
As for ACO, nobody really cares about driver stack compatibility, since AMD's proprietary shader compiler is likely superior to the community project. ACO has very good hardware compatibility, which is arguably the bigger challenge, but the fact that they were able to overcome it that easily means that AMD is nowhere near as aggressive as Nvidia in terms of breaking binary compatibility. And then there's the fact that AMD's architecture doesn't have anywhere near as complex scheduling rules as Nvidia's, so compiler designers for AMD have it very easy compared to the Nvidia folks ...
No, no, no. AND NO.

I literally just told you that ACO hardware compatibility is low, only encompassing two generations with a high Vulkan CTS pass rate.

It will certainly get better in time - it was only announced very recently.
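If you want to sanity-check which hardware you're on before experimenting with ACO, stock Vulkan enumeration is enough. A minimal sketch - the exact deviceName strings (e.g. "AMD RADV POLARIS10") are whatever your driver reports, not something I'm guaranteeing:

```c
/* Sketch: list Vulkan physical devices and their vendor IDs. */
#include <stdio.h>
#include <vulkan/vulkan.h>

int main(void) {
    VkInstanceCreateInfo ci = { .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO };
    VkInstance inst;
    if (vkCreateInstance(&ci, NULL, &inst) != VK_SUCCESS) return 1;

    uint32_t n = 0;
    vkEnumeratePhysicalDevices(inst, &n, NULL);
    VkPhysicalDevice devs[8];
    if (n > 8) n = 8;
    vkEnumeratePhysicalDevices(inst, &n, devs);

    for (uint32_t i = 0; i < n; ++i) {
        VkPhysicalDeviceProperties p;
        vkGetPhysicalDeviceProperties(devs[i], &p);
        printf("%s (vendor 0x%04x)\n", p.deviceName, p.vendorID);
    }
    vkDestroyInstance(inst, NULL);
    return 0;
}
```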

As for the breaking binary compatibility part, I would argue that AMD's GCN/SI architecture (while having significant FLOPS to FPS issues) was well designed from the start to be extensible with minimum breaking changes to an established code base.

nVidia's, on the other hand, while very optimised in what it does, was not well designed for extensibility - hence the oft-broken binary compatibility you mentioned.

Basically AMD paid for easy extensibility with a loss of efficiency - which given their lesser budget for driver development was probably better for them.

nVidia also has more money to make aggressive hardware changes in this fashion.

Make that a LOT more money - 10x more. People often underestimate just how much of a gulf this is for AMD to cross; it forces them (or it did) to take the most budget-efficient approach possible in design.
 

soresu

Platinum Member
Dec 19, 2014
2,656
1,857
136
AMD is throwing dies right and left now, so budget efficiency is out of the window.
Hence the 'or it did' part - I figure now that Zen and RDNA are out of the box, they will ramp up operations as much as they dare. 'Go big or go broke', as they say.

Though Su's strategy remains quite shrewd even now - there are still very few indicators of when the 3950X and Navi 12/14 are landing.
 

dark zero

Platinum Member
Jun 2, 2015
2,655
138
106
Oh no, no, definitely no, for phone vendors have even less devrel than the likes of Google.
The phone game market is either gacha or proper gameplay coated in ludicrous amounts of gacha.
Wait... isn't that the current situation of most PC games too?
At this pace, a crash might happen in the end.
 

dark zero

Platinum Member
Jun 2, 2015
2,655
138
106
And now, thinking about it... with Huawei down and Samsung having issues... Android won't survive with two of its pillars down, so Apple is the winner in the end...
 
Mar 11, 2004
23,070
5,546
146
And now, thinking about it... with Huawei down and Samsung having issues... Android won't survive with two of its pillars down, so Apple is the winner in the end...

Down in what way? You know Huawei is still working with ARM, right? What Samsung issues? That they decided to ditch their half-baked attempt at making custom ARM cores? Chances are they'll just use the stock ones, so I don't think this even means they're stopping making their own SoCs like they've been doing.

Android won't resist what? It's not like Samsung is stopping Android phones, so no clue what you're even talking about there (although what has been talked about is that going after Huawei might actually be most hurtful for Google). Apple already was the winner (not sure if it has changed, but wasn't Apple getting something like 90% of the profits made in smartphones and tablets?), but I'm not sure that Apple would want to be the sole company, as it'd open them up to lots of regulatory hurt (imagine in the 90s if Microsoft had even more control over the software and also completely controlled the hardware).
 

soresu

Platinum Member
Dec 19, 2014
2,656
1,857
136
Down in what way? You know Huawei is still working with ARM, right? What Samsung issues? That they decided to ditch their half-baked attempt at making custom ARM cores? Chances are they'll just use the stock ones, so I don't think this even means they're stopping making their own SoCs like they've been doing.
I have my suspicions that they (Samsung) believe they may receive a better reception for Exynos in the US if they use stock ARM instead - I think the US is one of the few markets where they substitute a Snapdragon for the Exynos, so I can imagine that they want to make it all Samsung SoCs even if it means abandoning Mongoose.

After all, Mongoose was only barely keeping a gen ahead on perf while being inferior in perf/watt.
 

beginner99

Diamond Member
Jun 2, 2009
5,210
1,580
136
I have my suspicions that they (Samsung) believe they may receive a better reception for Exynos in the US if they use stock ARM instead - I think the US is one of the few markets where they substitute a Snapdragon for the Exynos, so I can imagine that they want to make it all Samsung SoCs even if it means abandoning Mongoose.

After all, Mongoose was only barely keeping a gen ahead on perf while being inferior in perf/watt.

AFAIK the US got Snapdragon because some networks use CDMA, which Samsung's modem doesn't support and which isn't used in the rest of the world.