If they're releasing big NAVI in 2020, then the next midrange iteration is going to come in 2021. That's another two years, that's a long time to wait.
Navi 20 could be early 2020 (like Radeon VII was), leaving them plenty of time for a new mid-range GPU rollout in the 2nd half of the year. AMD has Arcturus slated for 2020 on their roadmaps (although I think they just call it Next Gen now; I believe it was referred to as Arcturus on their roadmaps at one point, but a lot of shifting around has probably happened in the last couple of years).
I personally think there's a decent chance it gets pushed to early 2021, but it could still easily beat the 2-year mark by announcing at CES 2021 and launching not long after (a Jan 2021 launch would be 1.5 years after Navi; a Spring launch in March or April would be about 1.75 years). And it could be part of a big change to AMD's GPU setup. By that I mean I think their pro/enterprise products go mGPU with chiplets and an I/O die, while their consumer stuff goes to a traditional 3-tiered setup (three main GPU dies in production, with 2-3 versions of each via binning and fusing).
I have very little idea what AMD is planning GPU-wise. In many ways it seems like they're still figuring it out, but they also seem to have some plan in place that, with the release of Navi, they're now implementing and will be iterating on. Dr Su said that "it must be so" with regards to splitting consumer/gaming GPU development from pro-enterprise/compute development, but what she meant is far from clear (very different base GPU designs, or just one side going chiplets?). She doesn't seem high on mGPU for gaming right now though, which makes sense, as pro/compute workloads scale a lot better with mGPU than gaming does. I thought Navi 20 would be the big compute-heavy version of Navi, but it kinda sounds like they're gonna roll with Vega for compute for some time. It seems like Navi cut back on compute performance quite a lot, although I'm not sure if that's at the chip level or more of a superficial lockout (much like how AMD and Nvidia have limited everything other than single-precision performance on their consumer GPUs), meaning a bigger Navi chip could be a compute monster as well. I think it's the former (although I think they kept a lot of the compute ability that games actually utilize).
I don't know if they could make up that compute by adding non-GPU chiplets (say, integrating a tensor processor; the way Lisa talked almost makes me think they'll let companies roll their own AI chips and then integrate them on package alongside AMD GPUs). Which makes me wonder about the implications for gaming as well (could they do a dedicated ray-tracing processor separate from the GPU?).
Something I'm interested in finding out: is AMD taking the input from all the different companies and trying to develop the fewest GPU designs that meet them as best they can, optimizing the base GPU design and then scaling it up and down as needed? Or are they specializing more than ever before (so Google slaps their TPU alongside an AMD GPU, Sony and Microsoft each get ray-tracing acceleration bits unique from each other, the Samsung deal gets a significantly different GPU design, etc.)? I can almost see it being a mix of both, where they focus on making an efficient base GPU that can scale, while pairing it with specialized bits for the bigger customers that can afford it, i.e. Google, Microsoft, Sony, etc.