From what I'm seeing around, it seems to be a lightly modified Polaris chip.
As for why AMD didn't do this: cost and demand. This is not going to be a cheap product to manufacture. For AMD, it would be a package close to Threadripper in size and complexity at the present moment. Given both time and a solid revenue stream, I don't see why AMD couldn't develop an interposer with an 8-12 core 7nm die (possibly with a small iGPU for low-power situations), a Vega die, and an HBM stack or three. By then, the market will have a better understanding of these sorts of products and more OEMs will be willing to take the risk. I just don't see anyone being willing to take on that risk with AMD at this moment. This Intel venture is just what the market needs: a proof of concept.
Why hasn't AMD put a RR die on an interposer with HBM? Cost again. The GPU on RR is sized for its expected memory bandwidth and use cases. If you suddenly threw eight times its current memory bandwidth at it in its current form, you wouldn't even get a 50% performance boost except in the most theoretical, memory-bound situations. To take advantage of a full stack, the iGPU section on RR would have to be four times its current size. That would be a MUCH larger die, which would tank yields, skyrocket per-die costs, and be wholly inappropriate for the volume market RR is currently aimed at. Remember, until six months ago AMD was suffering from revenue stagnation on the CPU side. It doesn't have the money to fund a giant die like that for such a low-volume use case.
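To make that bandwidth argument concrete, here's a toy roofline-style calculation. The peak-TFLOPS, bandwidth, and arithmetic-intensity numbers are invented for illustration, not real Raven Ridge figures:

```python
# Toy roofline model: attainable throughput is capped by the lesser of
# peak compute and (bandwidth x arithmetic intensity).
# All numbers below are illustrative assumptions, not real RR specs.

def attainable_tflops(peak_tflops, bandwidth_gbs, flops_per_byte):
    """Roofline cap: min(compute ceiling, bandwidth ceiling)."""
    return min(peak_tflops, bandwidth_gbs * flops_per_byte / 1000.0)

peak = 1.8        # assumed iGPU compute ceiling, TFLOPS
ddr4 = 40.0       # assumed dual-channel DDR4 bandwidth, GB/s
hbm = 8 * ddr4    # "eight times its current memory bandwidth"
ai = 35.0         # assumed workload arithmetic intensity, FLOPs/byte

base = attainable_tflops(peak, ddr4, ai)    # bandwidth-limited
boosted = attainable_tflops(peak, hbm, ai)  # now compute-limited

print(f"8x bandwidth alone: {boosted / base:.2f}x speedup")
```

With these assumptions the speedup comes out under 1.3x: once the bandwidth ceiling clears the fixed compute ceiling, extra bandwidth buys nothing, which is why the GPU itself would have to grow to exploit a full HBM stack.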
7nm can make a lot of difference for them. It roughly doubles the transistor budget at similar chip dimensions compared to Zeppelin. That could enable two CCXes, a double-size GPU block, and expanded caches in roughly the same space as the current RR die. Couple that with faster RAM and you've got something. Or you could shrink the die by about 33%, keep a single CCX, double the GPU block, mount it on an interposer with an HBM stack, and get near the performance of the Intel solution being discussed, for less money, in the same AM4 footprint.
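As a sanity check on those area claims, a quick sketch with made-up block sizes. The 2x density gain and the mm² split are assumptions, not actual die measurements, and real I/O scales worse than logic:

```python
# Rough 7nm area-budget arithmetic for the two options above.
# Block areas and the 2x density gain are illustrative assumptions;
# in practice I/O and analog blocks shrink much less than logic.

DENSITY_GAIN = 2.0  # assumed 14nm -> 7nm logic density improvement

# Hypothetical 14nm RR area split in mm^2 (~210 mm^2 total).
ccx, gpu, uncore = 44.0, 60.0, 106.0
total_14nm = ccx + gpu + uncore

# Option 1: double the CPU and GPU blocks, keep a similar footprint.
opt1 = (2 * ccx + 2 * gpu + uncore) / DENSITY_GAIN
# Option 2: single CCX, doubled GPU, smaller die for an interposer.
opt2 = (ccx + 2 * gpu + uncore) / DENSITY_GAIN

print(f"option 1: {opt1:.0f} mm^2 (vs {total_14nm:.0f} mm^2 at 14nm)")
print(f"option 2: {opt2:.0f} mm^2, {1 - opt2 / total_14nm:.0%} smaller")
```

With these made-up splits, option 2 lands in the same ballpark as the ~33% reduction mentioned above, while option 1 even leaves slack for the expanded caches.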
Something else to consider: an AMD interposer solution would likely use Infinity Fabric to link the CPU and GPU. That brings a lot more potential bandwidth between the two, which means flexibility in memory pooling. Drawing on AMD's HMA research, they could dynamically use all of the memory as system RAM or VRAM as needed, with lower latency and bandwidth penalties compared to PCIe solutions.