> NVidia GPUs aren't VLIW.

Yes, they actually are, according to some guys who reverse engineered the SASS (Shader ASSembly) for the Kepler/Maxwell/Pascal ISAs. Nvidia doesn't publicly document its ISAs, but if you disassemble the .cubin files with nvdisasm it's pretty clear as day that their GPUs use a VLIW-style instruction encoding scheme. Even Volta/Turing still use control codes, but I guess it's anything to win benchmarks ...
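For anyone who wants to poke at this themselves, here is a minimal sketch of the workflow, assuming a CUDA toolkit is installed; the saxpy kernel, the file names and the sm_61 target are just placeholders picked for the example:

Code:
// saxpy.cu - a throwaway kernel, used here only as something to disassemble.
__global__ void saxpy(int n, float a, const float* x, float* y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

// Build a Pascal-class (sm_61) cubin and disassemble it with the stock tools:
//   nvcc -arch=sm_61 -cubin saxpy.cu -o saxpy.cubin
//   nvdisasm saxpy.cubin

Swap sm_61 for whichever architecture you want to look at; the point is just that nvdisasm shows you the actual machine code rather than any intermediate representation.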
> How so? Adreno GPUs and even most popular Nvidia GPUs currently in use today use VLIW as well ...

Those 2 companies have years of experience in that area, though.
> nVidia's current custom ARMv8-A core Carmel is clearly aimed at AI/ML applications rather than general use. I doubt we'll see it in a Shield someday; it trades blows with A76, but A77 was announced 4 months ago, and A78 will add at least 15% IPC on top of that in 2020-21.

Carmel trades blows with the A75, and with worse efficiency (though that can be attributed to the older process node), as per Anandtech's review of it.
Speaking of NVIDIA, they really should consider taking on Qualcomm in laptop/2-in-1 Windows devices. It is a bad thing for others if Qualcomm manages to monopolise Windows on ARM.

This makes sense, as ARM's reference designs have outstripped everyone but Apple in the CPU (well, ARM CPUs, although ARM has started pushing Intel and AMD). Nvidia, who desperately needs a CPU now that AMD and Intel are making GPUs (which means they'll be able to start squeezing Nvidia out), hasn't had any luck getting traction on their CPU designs (of course, they took a more drastic approach). But Qualcomm couldn't either, so it was becoming kinda futile. On top of that, custom ARM designs were in many ways fracturing the market, causing software-related issues that prevent them from maximizing development (it's why all the ARM GPUs have been chasing the feature set of the PC GPUs - meet the standards to maximize compatibility). With ARM growing outside of mobile, it kinda needs to converge some to aid software development.
I'd guess another elephant in the room is IP issues. I think that was the major impetus behind the AMD GPU deal, but as with that, if they work with others they can pool R&D money and Samsung can trim some fat. Now Samsung lets others do the grunt work of the chip design, and they can focus on the production engineering. Seems like a smart move for Samsung.
Heck, maybe AMD would absorb the team and put them to work developing the Samsung designs (pairing ARM reference cores with AMD GPUs)? I think AMD could make the case, especially for their semi-custom unit, as not having a track record with ARM very possibly cost them the Nintendo Switch deal (and there are definitely going to be others that want ARM designs as streaming and VR/AR begin to grow).
Seemingly the team would be complete, so AMD wouldn't need to do much more than give them some resources (money mostly, and if they mostly use the ARM reference stuff then ARM will be doing most of the chip design work). And since it's still some time before the Samsung chips will be showing up, it shouldn't upset anything currently (i.e. steal production from AMD's other stuff), while giving AMD some help in using Samsung's fabs. AMD could possibly work a deal with Samsung to get the team for cheap due to their established deal.
> Speaking of NVIDIA, they really should consider taking on Qualcomm in laptop/2-in-1 Windows devices. It is a bad thing for others if Qualcomm manages to monopolise Windows on ARM.

Windows on ARM essentially runs on any ARMv8 system, provided it has a proper UEFI and drivers.
> Those 2 companies have years of experience in that area, though.

Adreno I'm not too sure about, since Qualcomm doesn't release any information about their ISA, but relatively recent Nvidia GPUs like Maxwell/Pascal definitely do use VLIW-style instruction encoding for their control codes, since instructions are organized into 'bundles' of 4 ...
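To put a shape on that claim: third-party reverse-engineering work on Maxwell/Pascal (Scott Gray's maxas project is the usual reference) describes each group as one 64-bit control word followed by three 64-bit instructions, with the control word carrying the scheduling information (stall counts, yield, dependency barriers, operand reuse flags) for those three slots. A purely illustrative sketch of that layout - the struct and field names here are mine, not anything NVIDIA documents:

Code:
#include <cstdint>

// Illustrative only: the Maxwell/Pascal "1 control word + 3 instructions"
// grouping as described by third-party reverse engineering (maxas).
// The type and field names below are invented for this sketch.
struct InstructionGroup {
    std::uint64_t control;  // packed scheduling info for the three slots:
                            // stall counts, yield bit, read/write barrier
                            // indices, operand reuse flags
    std::uint64_t insn[3];  // three 64-bit SASS instructions governed by it
};

static_assert(sizeof(InstructionGroup) == 32,
              "four 64-bit words per group, i.e. 32 bytes");

Whether you call that VLIW or just compiler-scheduled issue is basically what this thread is arguing about: the three instructions still issue as ordinary individual instructions, the control word just moves the scheduling bookkeeping from hardware into the compiler/assembler.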
Also, I'm not altogether sure that either Adreno or nVidia currently use a VLIW-based architecture. Adreno has doubtless gone through evolutions, as ARM's Mali has (Utgard -> Midgard (VLIW) -> Bifrost (scalar) -> Valhall (scalar)); I would be surprised if Adreno, at least, still used the same basic architecture they bought from AMD/ATI at this point.
> We don't know what's inside Adreno, and nV stuff is anything but VLIW.

Not true for the latter; to them VLIW is actually an advantage, since they can afford to keep doing per-game specific hacks in their drivers to win benchmarks while also winning at power efficiency ...
> Nvidia's driver stack is probably well over 50 million lines of code by now.

That is its own problem; even with modularised code, that quantity would be a mess to administer - though no doubt AMD's is little different, despite their assertions that GCN's scalar design is easier to code for.
> Control codes?

It's pretty similar stuff to Intel's Itanium ...
> That is its own problem; even with modularised code, that quantity would be a mess to administer - though no doubt AMD's is little different, despite their assertions that GCN's scalar design is easier to code for.

AMD's GCN architecture pretty much is easier for compiler designers, but that doesn't mean it translates to the highest-performance strategy, since AMD's hardware trades power efficiency for scheduling simplifications ...
> nV GPUs are nothing VLIW, just read their execution model.

Quite a few modern NV GPUs are VLIW. What exactly are you reading about? If it's high-level material such as PTX, then that is not useful information as to how Nvidia's SASS actually works under the hood inside their hardware. Kepler/Maxwell/Pascal are all VLIW and nothing is going to change that ...
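If you want to see the gap between the two representations yourself, the toolkit's binary utilities make it visible. A small sketch, reusing the saxpy.cu placeholder from earlier; the compute_61/sm_61 targets are again just examples:

Code:
// Embed both PTX (virtual ISA, compute_61) and SASS (hardware ISA, sm_61)
// in a single object file, then dump each one:
//   nvcc -gencode arch=compute_61,code=compute_61 \
//        -gencode arch=compute_61,code=sm_61 -c saxpy.cu -o saxpy.o
//
//   cuobjdump --dump-ptx  saxpy.o    (the portable, documented representation)
//   cuobjdump --dump-sass saxpy.o    (what the GPU actually executes)

Comparing the two dumps side by side is the quickest way to see how much gets rearranged between PTX and the final SASS.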
> Quite a few modern NV GPUs are VLIW.

None of them are.
> ... how Nvidia's SASS actually works under the hood inside their hardware.

Something something whitepapers made by some people.
> Kepler/Maxwell/Pascal are all VLIW and nothing is going to change that ...

None of them are whatever-slot VLIW.
> Do you mean companies should just bet on what currently works? That's not a good way to increase revenues, and it means they'd already be late.

No, they don't bet against the biggest x86 server fistfight since Opteron DC.
> Do you mean companies should just bet on what currently works? That's not a good way to increase revenues, and it means they'd already be late.

ARM servers looked like a good bet a few years ago, when it looked like x86 was going to turn into an Intel monopoly and big customers wanted a real competitor. But now AMD is competitive again, and migrating your stack to AMD is a damn sight easier than migrating to ARM.
> No, they don't bet against the biggest x86 server fistfight since Opteron DC.

Now go back 20 years and apply your claim to the market. With your logic we would have no x86 servers now. Of course the past does not always repeat itself, and various things are different here, but still, if we follow your reasoning, nothing will ever change in the server market.
> ARM servers looked like a good bet a few years ago, when it looked like x86 was going to turn into an Intel monopoly and big customers wanted a real competitor. But now AMD is competitive again, and migrating your stack to AMD is a damn sight easier than migrating to ARM.

That's a very good point (and I somehow agree with @Yotsugi). My point is just that if you don't take risks, you can be sure things will not change. And bets in the server market are long-term ones; it took Intel many years to get really strong here (and to stop being mocked by the RISC mobs).
> Now go back 20 years and apply your claim to the market.

ARM servers are not fighting big iron.
> My point is just that if you don't take risks, you can be sure things will not change.

The risks could've been taken like 3-4 years ago.
> AMD just dealing with a couple of variants of GCN ...

Six variants - Southern Islands, Sea Islands, Volcanic Islands, Polaris, Vega, and Vega 2/Vega 7nm/Radeon VII.
> But now AMD is competitive again, and migrating your stack to AMD is a damn sight easier than migrating to ARM.

A lot of the stack migration problems stemmed from the lack of adequate ARM ports of various software, which admittedly is still not at parity with x86, but it's a great leap from where it was only a few years ago - the efforts of Linaro have not been limited solely to ARM on Android by any means.
> None of them are.
> Something something whitepapers made by some people.
> None of them are whatever-slot VLIW.

Those whitepapers are written by computer engineering/science specialists, so if anyone knows about the architecture of Nvidia GPUs aside from Nvidia employees themselves, it would be them ...
> Six variants - Southern Islands, Sea Islands, Volcanic Islands, Polaris, Vega, and Vega 2/Vega 7nm/Radeon VII.

It's a fact. How else would Valve be able to make their ACO shader compiler compatible with several AMD architectures? Valve would be one of the last places I'd expect to see specialists in shader compilers ...
Arcturus could be a 7th if rumours/driver code holds true.
I don't know where people have been getting the idea that AMD have been lax with uArch/ISA changes - sure, they have had trouble putting out a full lineup for each generation (to be expected from a cash-strapped company), but there has certainly been serious iteration in the last decade.