4 NUMA nodes on a single socket (à la Threadripper 2) is DOA. Windows freaks out with just 2 nodes.
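For what it's worth, whether the OS actually exposes those nodes is easy to check. A minimal sketch for Linux, assuming sysfs is mounted at the usual path (on Windows you'd call `GetNumaHighestNodeNumber` instead):

```python
import glob
import os


def numa_node_count(sysfs_root="/sys/devices/system/node"):
    """Count NUMA nodes the Linux kernel exposes via sysfs.

    Returns at least 1: a machine without these sysfs entries is
    treated as a single (UMA) node.
    """
    nodes = glob.glob(os.path.join(sysfs_root, "node[0-9]*"))
    return max(len(nodes), 1)


print(numa_node_count())
```

A 4-node TR2 in NUMA mode would report 4 here; the same chip in the UMA-style memory mode would report 1, which is exactly the distinction the scheduler complaints above are about.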
TR2 cores will be bandwidth-limited, and with their narrower execution units AMD needs high core counts to compete with Intel.
The irony is that the frequency-perception game Intel played so masterfully against AMD in the P4 days will now be reciprocated by AMD by way of core wars. AMD may have more cores in a single package now, but how many AMD cores are equivalent to how many Intel cores these days, exactly?
Usually around 0.8-0.9x, more or less depending on what you're up to. They are about equal to Haswell, clock for clock, so not really all that big of a difference. With slower cores, AMD has to offer more of them for your money, but it's nothing like the Bulldozer-type cores at any stage, or the late Phenom IIs. In a lot of server cases, it can be more like 1.25-2x the other way, though. Even without Spectre and Meltdown, AMD would have sold some Epyc servers just by pricing them low enough. But those vulnerability families came at just the right time for TR/Epyc's availability, and those chips happened to be much less affected by Spectre, performance-wise.
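The core-count trade-off is back-of-the-envelope arithmetic. A quick sketch, where the 0.85 per-core ratio and the core counts are illustrative placeholders, not benchmark results:

```python
def equivalent_cores(cores, per_core_ratio):
    """Scale a core count by a rough per-core throughput ratio
    to get an 'equivalent cores' figure for the other vendor."""
    return cores * per_core_ratio


# 32 Zen cores at ~0.85x per-core throughput on a desktop-style workload:
print(equivalent_cores(32, 0.85))  # → 27.2 Intel-equivalent cores

# The same 32 cores on a server workload where Epyc does 1.5x per core:
print(equivalent_cores(32, 1.5))   # → 48.0 Intel-equivalent cores
```

Which is the whole point: at 0.85x you just need a modest price advantage per core, and in the server cases above the ratio flips in AMD's favor entirely.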
Most users of those chips will be doing things that scale out well, so it won't really matter. Very few cases really need a ton of bandwidth, for instance, though the latency could hurt (especially on Windows). For cases like render farms, virtualized hosting, scale-out databases, "big data" processing, "AI," and others, the CPU basically behaving like a 4-socket system in one package will either be a non-issue or a very minor one. And for many cases, the lack of a major performance hit from the Meltdown and Spectre patches gives AMD a big performance edge, and will for at least a few more years.
Not that the OP should be buying a TR CPU, but it's far from DOA. And it's not like Intel has no history with MCM server and workstation CPUs that had wonky performance as a result, either. AMD isn't taking the market by storm with a big MCM chip, but it's fine: they seem to be doing well and will likely continue to.