
AMD IOMMU

thecuttingedge

Junior Member
Hi, long-time lurker, first-time poster here.

Looking at Intel's VT-d and VT-x, they allow PCI passthrough to the guests and all that jazz, but before plunking down for a 2720QM, I'm curious what AMD has to offer. I know they have AMD-Vi, but I can't seem to find any documentation on what supports it: is it on-die, or does it need a certain chipset, etc.?

Long question short: does anyone know of any AMD non-server chips that support this AMD-Vi (or IOMMU), or something comparable that would be able to use DirectPath I/O?
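For what it's worth, the CPU side of this is easy to check on Linux: the `svm` flag in /proc/cpuinfo indicates AMD-V, and `npt` indicates AMD's SLAT (nested page tables). A minimal sketch, using an illustrative sample flags line rather than real hardware; note this only shows CPU support, since AMD-Vi availability also depends on the chipset and BIOS:

```shell
# Minimal sketch: check CPU virtualization flags on Linux.
# On real hardware you would read the actual flags line, e.g.:
#   flags=$(grep -m1 '^flags' /proc/cpuinfo)
# The sample below is illustrative only.
flags="fpu vme de pse msr pae svm npt"

for f in svm npt; do                    # svm = AMD-V, npt = AMD's SLAT
    case " $flags " in
        *" $f "*) echo "$f: present" ;;
        *)        echo "$f: missing" ;;
    esac
done
```

The Intel equivalents are `vmx` (VT-x) and `ept`; VT-d itself does not show up as a CPU flag, which is part of why the chipset/BIOS question matters.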
 
VT-d = AMD-Vi (formerly just called IOMMU).

And it's a relatively new feature for AMD to support on desktops. But, as asked above, the question is what actually supports it.

We run a lot of virtualized servers, and none of them uses VT-d/AMD-Vi, even the ones that run RemoteFX.
 
Yeah, I saw the wiki link, but with AMD moving to the FMx chipsets, there isn't much to find on whether the feature is still chipset-based or CPU-integrated. Another sticking point: I haven't seen any recent laptops with the full-blown 890FX mobile chipset that supports it, since they're all moving toward Trinity.
 
What are you trying to do that requires it?

I am with ShintaiDK on this. I am a virtualization engineer at work, have a fairly robust home lab, and teach a virtualization class at a local college, and I have yet to find a scenario where I've put it to use.

That sounds skeptical, but I am also honestly curious 🙂
 
Bitcoin miners are somewhat interested in this feature. Since mining work is done by the video cards rather than the CPU, stuffing more video cards into a single system saves money. AMD cards have a driver limitation of 8 GPUs. That might seem like a hard number to exceed, but with dual-GPU cards you only need 5 before you surpass the limit. Also, with some "exotic" hardware such as PCI-E extenders and backplanes, you can connect 15 or more cards to a single system. That 8-GPU limit is a real annoyance if you are trying to do this.

A potential workaround is to virtualize: create 2-4 VMs, use the IOMMU to give each VM direct access to 8 or fewer GPUs, and mine on the VMs. A few brave testers have gotten it to work, so it is possible, but from what I have heard it's not very stable, and either the mining software or the VMs crash often and need to be restarted. With such a convoluted system build it's hard to identify what is really at fault and causing the crashes, so it's tough to troubleshoot.

But anyway, that is a real-world usage that requires IOMMU.
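For anyone curious what that multi-VM setup looks like in practice, here is a hedged configuration sketch for Xen (one common choice for this); the PCI address 0000:01:00.0 is a placeholder, not a real device, and you'd find your cards' addresses with lspci. It assumes an IOMMU-capable board with the feature enabled in the BIOS:

```
# Illustrative xl guest config fragment (Xen) -- not runnable as-is.
# 0000:01:00.0 is a placeholder BDF address; find your card's with lspci.
# First make the device assignable on the host:
#   modprobe xen-pciback
#   xl pci-assignable-add 0000:01:00.0
# Then in the guest's config, hand the device to that VM:
pci = [ '0000:01:00.0' ]
```

Repeating this with a different subset of cards for each VM is how you'd keep every guest under the 8-GPU driver limit.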
 
GFX cards don't require AMD-Vi/VT-d. They are covered under VT-x and the equivalent AMD version. All it requires is a SLAT-capable CPU (Nehalem+).
 
GFX cards don't require AMD-Vi/VT-d. They are covered under VT-x and the equivalent AMD version. All it requires is a SLAT-capable CPU (Nehalem+).

I/O MMU virtualization (AMD-Vi and VT-d)
Main article: IOMMU
An input/output memory management unit (IOMMU) enables guest virtual machines to directly use peripheral devices, such as Ethernet, accelerated graphics cards, and hard-drive controllers, through DMA and interrupt remapping. This is sometimes called PCI passthrough.[31] Both AMD and Intel have released specifications:
AMD's I/O Virtualization Technology, "AMD-Vi", originally called "IOMMU".[32]
Intel's "Virtualization Technology for Directed I/O" (VT-d).[33] Included in most but not all Nehalem based processors. [34]

This is what I am talking about. It is required for PCI passthrough of an accelerated video card. If you have something that says otherwise, please share.
 
Virtualization and direct access to the GFX card(s) in Hyper-V, for example, doesn't require VT-d/AMD-Vi. As far as we can tell from our server farms, it's only needed for other devices.

Also, the documentation cited by your wiki quote doesn't contain information about display devices.

Lucid Virtu Graphics Virtualization also runs without VT-d.

I can only tell you what we do every day and it works 🙂
 
Well, the original use case is an edge-network laptop that will have several peripherals attached to it. Since I know ESX 5 won't emulate some of these devices, it seems better and more secure to make sure only the designated guest has access. Looking through Intel's ARK, we can clearly see which CPUs support what, since it's mostly CPU-based. But AMD doesn't make it so easy with their FMx series.
 
Yeah, and as much as I would like to support the green team, the lack of documentation does not help. Intel's CPUs may go underutilized with us disabling HT and such, and AMD's seemed better fitted for the task, since they would be cheaper for the feature set we need.
 
My understanding is that all modern AMD CPUs support all the features, but it depends on having a chipset and motherboard with the feature enabled.

That is, any FX CPU should work as long as the motherboard supports it.
 
Related question about AMD's PCI passthrough: is it implemented in their Trinity FM2 platform, specifically in their A85X chipset? All I can really find on it is references to IOMMU v2, specifically relating to discrete graphics cards, but I can't confirm if this also means general PCI passthrough or not. Specifically, I am wondering about it being implemented in a type 1 hypervisor, such as Xen or VMware ESX.
 
Chiropteran already said which AMD CPUs support it (all modern ones).
You need to research the motherboard to see if the BIOS fully supports it.
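If you already have the board in hand, one quick sanity check on Linux is whether the kernel reports the IOMMU initializing at boot. A minimal sketch, using an illustrative sample dmesg line (on real hardware you would grep actual `dmesg` output, and a match only appears if the BIOS actually enabled the feature):

```shell
# Minimal sketch: detect whether the kernel reports an active IOMMU.
# On real hardware: dmesg | grep -i -e 'AMD-Vi' -e 'DMAR'
# (AMD-Vi messages come from AMD's IOMMU driver, DMAR from Intel VT-d.)
# The sample line below is illustrative only.
dmesg_sample='AMD-Vi: Lazy IO/TLB flushing enabled'

if printf '%s\n' "$dmesg_sample" | grep -q -e 'AMD-Vi' -e 'DMAR'; then
    echo "IOMMU reported by kernel"
else
    echo "no IOMMU messages found"
fi
```

No output from the real grep usually means the BIOS option is missing or disabled, even if the CPU itself supports the feature.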
 
890FX or 990FX will do passthrough fine. I know in the past some didn't like the VMs' disk I/O; I can pass through a RAID controller and get native disk performance, and some have passed through video cards. I'm not sure, but I haven't heard of an NVIDIA card being passed through on a Windows VM, only Radeons so far.
 