It sounds as if AMD's drivers, even though they pass conformance (which is only a minimum bar), just aren't up to snuff. Considering how long it took them to get a Vulkan-compliant driver out, that would seem to be the most logical answer. AMD (ATi) has a history of poor OpenGL implementations spanning decades. I can say that from personal experience dating all the way back to a Rage card.
What do you mean by "how long it took them to get a Vulkan compliant driver out", exactly? They were ready by the time the first game that mattered, Doom, got its Vulkan renderer running. They were quicker off the mark than Nvidia was in that case. AMD's OpenGL support may be flawed, but their Vulkan support is a separate matter, and I don't really think you have a point there.
Anyways, I've been doing some more testing on the older PC I have access to (a Dell with a Core 2 Quad at a stock 2.4 GHz, an MSI Radeon 270X, and 8 GB of 800 MHz DDR2 RAM), this time using MSI Afterburner to track GPU and CPU usage. I'm on the 16.10.3 drivers and the latest patch for DXMD, running the High preset with triple buffering enabled.
One thing that was troublesome is that the DX12 renderer now seems more sensitive to overclocking than before. This 270X came factory overclocked to 1080 MHz with a supposed "boost clock" of 1120 MHz, and I usually just run it at 1100 MHz. At those speeds, though, when I was testing just now, the game showed massive corruption and artifacting walking around the streets of Prague. DirectX 11 doesn't demonstrate the same artifacting, and turning off MSI Afterburner didn't change anything; the corruption in DX12 only went away when I lowered the clock speed to 1050-1065 MHz. (The normal clock speed for a 270X is 1000 MHz with a "boost clock" of 1050 MHz, though I've never figured out just how the boost clock kicks in.)
Anyways, according to Afterburner's monitoring, GPU usage with DX11 hovered around 40%, while DX12 GPU usage stayed around 80%. CPU usage in DX11 hovered in the 80s on each core, while in DX12 it stayed more in the 90s, sometimes hitting prolonged stretches of 100% across all cores. VRAM use was pretty close between the renderers, high 1800s MB for DX11 and low 1900s MB for DX12, but DX12 used a good thousand megabytes more system RAM, about 6300 MB as opposed to 5100 MB in DX11. Definitely CPU bottlenecked in either case, but supposedly getting more usage out of the GPU with DX12.
For all that difference in usage numbers, though, there wasn't a huge leap in frame rates. Both bounced around the 20s and could dip to the high teens or reach the low 30s at times, and as in my earlier testing, DX11 tended to report somewhat higher framerates when standing still. HOWEVER, subjectively I do think DX12 felt smoother, with less hitching, stuttering, and the like. It felt more playable.