AMD has provided highly customised designs for its console partners - if you're Apple, why bother making such a massive investment to create a whole new high-end GPU division when you have a partner doing all that custom work for you?
Personally I would not be inclined to go off on a tangent with a new design, patents or not - AMD is just getting started with RDNA2 IMHO, given its design strategies are aligning with Zen and passing lessons from one project back to the other.
Which part of "Apple Silicon" did you not understand?
Apple isn't going to spend the entire WWDC constantly pushing the idea that this is APPLE SILICON (and in every talk, mentioning specific details like how to optimize for tile rendering, or the advantages of a single pool of RAM) only to then say "just kidding, what we meant was Apple+AMD silicon"...
Honestly, this is nothing but a replay of the CPU argument. Remember how that went?
"Apple is on track to exceed Intel performance".
"No they aren't"
"Why not? Give me technical reasons."
"Well Intel has experience. And fabs. And x86 magic pixie dust."
Now we are hearing the same arguments, based on anything but actual technology nous, regarding the GPU, and soon we'll hear yet a third version of it based on how Apple might have a good core but will not be able to scale it up to a Xeon number of cores, because reasons.
(a) Apple does things on a long timetable. Apple is planning for ten years from now. If your worldview extends only three days from now, then yes, you won't understand what they are doing.
(b) Apple operates on the assumption (largely validated...) that they can do many things better than anyone else.
Part of it is that they have total control of the hardware, the OS, the compilers, and the APIs, so they are willing to add functionality to their CPU/GPU/NPU/... that others wouldn't add because "no existing apps will use it, and no-one will pay for it". Look at them adding the U1 to iPhones. It's been sitting there mostly unused for a year, but it's part of that long-term plan.
Another part of this is that Apple is willing to drop any parts of the past that hold them back or cause problems. And, sure, that creates a little hassle every year. But the payoff is immense, for Apple, developers, and users. Other companies will not accept that short term pain for the long term gain.
So, by having their own GPU, Apple can drop various AMD functionality that's only there to support some aspect of Direct3D or OpenGL from ten years ago, or that's there so AMD can sell the GPU as an NPU, while replacing it with, e.g., functionality that accelerates AR (ray tracing? physics engines?).
This is not about "Apple can save money by not having to pay AMD (or Intel)". It's that Apple wants a rocket to the moon, while AMD and Intel think a daring step forward is designing a boat to cross the English Channel.