The software is not the problem; DLSS on CUDA could be ported to DirectML. What's important is the process. There are three basic components when training an AI neural network (NN):
1) The inputs. In DLSS 1.0, they were using the actual game. For obvious reasons, this was not ideal. With DLSS 2.0, NV is using generic sets of graphical data. Trade secret. Would they share? Probably not.
2) The Neural Network, the program/algorithm that will do the image reconstruction. With a given input image, this program will create a new image. This is what's being trained. Again, would they share this? Probably not.
3) The expected result. With a given input and the current NN, some result is generated. They compare this with the expected result and keep tweaking the NN until the output is what they want it to be. Again, would they share? Probably not.
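The three components above boil down to a standard training loop. Here's a toy sketch in plain Python (a one-weight "network" fit by gradient descent). Everything here is illustrative; the real DLSS inputs, network, and training targets are NVIDIA trade secrets, as noted above.

```python
# Toy version of the three training components:
# 1) inputs, 2) the NN (here just one weight w), 3) expected results.

def train(inputs, targets, steps=200, lr=0.01):
    """Fit a one-parameter 'network' y = w * x by gradient descent."""
    w = 0.0  # the 'neural network': a single trainable weight
    for _ in range(steps):
        for x, y_true in zip(inputs, targets):
            y_pred = w * x            # run the input through the NN
            error = y_pred - y_true   # compare with the expected result
            w -= lr * 2 * error * x   # tweak the NN to shrink the error
    return w

# 'Low-res' inputs and the results we want the NN to learn to produce.
inputs = [1.0, 2.0, 3.0]
targets = [2.0, 4.0, 6.0]  # ground truth here is exactly 2x each input
w = train(inputs, targets)
```

After training, `w` converges to about 2.0, i.e. the NN has learned the mapping hidden in the input/expected-result pairs. DLSS does the same thing, just with images instead of numbers and millions of parameters instead of one.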
So, once the NN is trained to the level they want, the model is shipped in the NV driver. With a supported game, the model is used in real time to generate the image. You need fast hardware to do this; that's what the tensor cores are for. You could run this on anything, even a CPU, but it would be slow and therefore defeat the purpose. Can you hack the NV driver and steal the model? Probably, but it's illegal.
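To make the training/runtime split concrete, here's a purely illustrative sketch (not NV's actual code): after training, the model is frozen and only *applied* per frame. That's why runtime speed, not training, is what your GPU's tensor hardware has to handle.

```python
# Hypothetical deployment step: a frozen model shipped with the driver
# is applied to each frame at runtime. No training happens here.

TRAINED_WEIGHT = 2.0  # the 'model' baked in once training finished

def upscale_frame(low_res_pixels):
    """Apply the frozen model to a new frame (inference only)."""
    return [TRAINED_WEIGHT * p for p in low_res_pixels]

frame = [0.1, 0.5, 0.9]
out = upscale_frame(frame)
```

This per-pixel work is exactly the kind of massively parallel arithmetic tensor cores accelerate; on a CPU the same loop runs, just far too slowly for real-time frames.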
We don't know what sort of tensor support RDNA 2 will have, since AMD hasn't leaked much info, so we don't know how well AMD hardware would perform with something like DLSS. Also, the training would be constant. So, will AMD put in the effort and cost of doing this? I think Microsoft would be more likely to do it than AMD, if the new consoles have sufficient hardware to make it viable.
All the info from NV is given on their DLSS 2.0 page:
https://www.nvidia.com/en-us/geforce/news/nvidia-dlss-2-0-a-big-leap-in-ai-rendering/
One more thing, and this is the awesome part: DLSS 2.0 uses 16K images to train the NN. That's why the result is so good: what they're training with is better than the actual game assets. So it's not your imagination, sometimes the DLSS 2.0 image can be clearer than the native output. The DLSS 2.0 NN is just able to construct an image with more detail than the assets included with the game, especially if the game assets themselves aren't that high-res. Yeah, it sounds crazy, but that's what's cool about AI and why it's blowing up in all sorts of industries. When it works, it's scary how good it is.
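The trick is in how the training pairs are built. Here's a hypothetical 1-D sketch (illustrative only; the factor and method are assumptions, not NV's pipeline): the ground truth is a higher-resolution image than the network will ever see at runtime, so the NN learns to reconstruct detail beyond the native render.

```python
# Hypothetical super-resolution training pair: downsample a high-res
# 'ground truth' to create the low-res input the NN trains on.

def make_training_pair(ground_truth, factor=4):
    """Keep every `factor`-th sample as the crude low-res input."""
    low_res = ground_truth[::factor]
    return low_res, ground_truth

hi = list(range(16))                # stand-in for a 16K ground-truth image
lo, target = make_training_pair(hi)
# The NN is trained to reconstruct `target` from `lo`, so at runtime it
# can hallucinate detail the low-res frame (or even the game assets)
# never actually contained.
```

Because the target always has more detail than the input, "better than native" output isn't magic, it's just what the loss function rewarded during training.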