As I said earlier, DX12 can increase RAM usage because it caches more data, presumably because more CPU threads are engaged.
Using DX12 doesn't automatically balloon RAM usage by a significant amount. Yes, some extra RAM is allocated for command lists and the like, but that is a drop in the bucket compared to where the majority of RAM actually goes.
The #1 culprit for RAM usage is textures. The higher the resolution, the more RAM is required, and the more textures active in a scene at once, the more of them you want resident in VRAM to avoid swapping from system RAM (which keeps the speed sane). Then you have 3D textures, which eat even more RAM.
Then you have high-res models, and the list goes on.
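To put rough numbers on that, here's a back-of-the-envelope sketch in Python. It assumes uncompressed RGBA8 textures (4 bytes per pixel) purely for illustration; real engines use block-compressed formats that shrink these figures considerably.

```python
def texture_bytes(width, height, bytes_per_pixel=4, mipmaps=True, depth=1):
    """Rough uncompressed size of a (possibly 3D) texture.

    Assumes a simple linear layout; a full mip chain adds roughly
    one third on top of the base level.
    """
    base = width * height * depth * bytes_per_pixel
    return base * 4 // 3 if mipmaps else base

mib = 1024 * 1024
# Doubling resolution quadruples the footprint:
print(texture_bytes(2048, 2048, mipmaps=False) / mib)   # 16.0 MiB
print(texture_bytes(4096, 4096, mipmaps=False) / mib)   # 64.0 MiB
# A modest 256^3 volume texture already matches a 4K 2D texture:
print(texture_bytes(256, 256, depth=256, mipmaps=False) / mib)  # 64.0 MiB
```

The arithmetic makes the point: resolution scales memory quadratically for 2D textures and cubically for 3D ones, which is why texture budgets dwarf whatever overhead the graphics API itself adds.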
Look at the XB1: it uses DX12, and it is *more* efficient than DX11. It doesn't balloon the RAM it requires; it's just the opposite. For the same RAM footprint, you can do more.
There's also the case where a single CPU instruction can generate multiple draw calls, and that can save loads of RAM as well.
It depends heavily on the engine's design and type.
Look at this demo that switches between DX11 and DX12 on the fly, and watch the memory allocation in Task Manager: it doesn't change by any significant amount.
The fault here lies squarely with the devs and the choices they made.