AMD Unveils World's First Hardware-Based Virtualized GPU Solution at VMworld 2015


zir_blazer

Golden Member
Jun 6, 2013
I think only Fiji and newer GCN 1.2 parts. But yes, their marketing is not taking advantage of this.

Imagine a cloud server that streams gaming services. You could have heaps of gamer streams from one Fiji, via parallel execution, with no added *latency* (that's the key here, to maintain smooth gaming performance/response) for the games being rendered. It's an INSANE technology and they aren't even making noise about it or selling it.

One of the biggest hurdles to cloud gaming is the added latency of the internet plus graphics latency; gaming is not as smooth or responsive.
Google Radeon Sky. They were "launched" around two years ago and were the original competition to NVIDIA's GRID. As far as I know, they were merely a paper launch with no actual software to make them work as AMD intended. I never heard of them again.

Fiji has the 4 GB limit. Since FirePro parts usually have twice the RAM of their Radeon counterparts (and there are even some oddball parts like a 32 GB Grenada, who knows what for...), I think the 4 GB HBM limit really hurts Fiji as a potential FirePro GPU.


I'm pretty sure they will do just fine after this tech makes its rounds at the various upcoming conferences and trade shows. People in this field will already know that SR-IOV will be faster and have lower overhead compared to an IOMMU pass-through implementation (Nvidia).
nVidia GRID's GPU sharing is not passthrough using the IOMMU. You can still use it that way since, for example, the GRID K1 has 4 GPUs, so you get 4 PCI devices and can assign them to up to 4 individual VMs using PCI passthrough. Used that way, however, the GRID K2 with its 2 GPUs should behave much like any consumer dual-GPU video card.
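Just to illustrate the plain PCI passthrough route on a Linux/KVM host: a sketch of dedicating one GPU of a multi-GPU board to a single VM, assuming the vfio-pci driver. The PCI address and vendor:device IDs below are examples, not something you should copy literally.

```shell
# List NVIDIA PCI devices; a GRID K1 shows up as four separate GPUs.
lspci -nn | grep -i nvidia

# Bind one of them (example address 0000:03:00.0) to the vfio-pci stub
# driver so the host driver releases it and a guest can claim it.
modprobe vfio-pci
echo 0000:03:00.0 > /sys/bus/pci/devices/0000:03:00.0/driver/unbind
echo 10de 0ff2 > /sys/bus/pci/drivers/vfio-pci/new_id  # example vendor:device IDs

# Hand the whole device to a QEMU guest.
qemu-system-x86_64 -enable-kvm -m 4096 \
  -device vfio-pci,host=03:00.0
```

This gives the VM exclusive use of that one GPU, which is exactly why it doesn't scale beyond the physical GPU count per board.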
I recall hearing that GRID's GPU sharing is sort of a XenGT-like solution, since I never read that that mode needs explicit IOMMU support (though any server that offers it should have that working anyway). XenGT is software-based mediated passthrough: you need a frontend (guest drivers that support it and are aware they're running in a VM), a backend (host drivers), and hypervisor support to provide a path between them. GRID's GPU-sharing feature was implemented along these lines, if I understood it properly.

SR-IOV moves all that software support for sharing the GPU into the hardware itself: the card exposes multiple virtual functions, and you do PCI passthrough of those. If you want to know the details of how it works, you may want to look around for people describing their experiences running the Intel NICs that support it with several VMs. That would give the best possible idea of what to expect from the FirePros.
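For reference, this is roughly what that Intel NIC workflow looks like on a Linux host, and the FirePro VFs should follow the same pattern (interface name and VF count here are just examples):

```shell
# Ask the physical function (PF) to spawn 4 virtual functions (VFs)
# via the standard sysfs SR-IOV interface:
echo 4 > /sys/class/net/eth0/device/sriov_numvfs

# Each VF now enumerates as its own PCI device...
lspci | grep "Virtual Function"

# ...and can be passed through to a VM like any other PCI device
# (e.g. via libvirt/QEMU), while the PF stays with the host.
```

The hardware multiplexes the VFs onto the physical device, so no mediation software in the hypervisor sits on the data path.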
 

Despoiler

Golden Member
Nov 10, 2007

You are right. I was reading the install docs for VMware and saw that you have to load the VIB for the vGPU manager into the hypervisor, which then allows you to assign memory pools/profiles for various users to a GPU. I assumed that memory management unit was an IOMMU setup, but "MMU" gets thrown around in different places depending on what you are setting up.
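The VIB install step mentioned above looks roughly like this on an ESXi host (the VIB filename is a placeholder, and exact paths vary by release):

```shell
# Put the host into maintenance mode before installing the vGPU manager VIB.
esxcli system maintenanceMode set --enable true

# Install the vGPU manager package (filename is an example placeholder).
esxcli software vib install -v /vmfs/volumes/datastore1/NVIDIA-vGPU-manager.vib

# Reboot, then confirm the host driver sees the physical GPUs.
reboot
nvidia-smi
```

Once the host-side manager is loaded, the hypervisor carves the GPU into the per-VM memory profiles, which is why it's mediation in software rather than IOMMU passthrough.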

http://on-demand.gputechconf.com/gt...hi-perf-graphics-nvidia-grid-virtual-gpus.pdf