Of course they claim that. You always want to claim that a feature you have and your competition doesn't is super duper awesome. Doesn't mean it's reality. Out of curiosity, do you happen to know of any reviews that have tested server workloads on Epyc with SMT on/off? I only did a quick search, so it's plausible there are some I missed.

AMD claims ~5% extra die space for SMT and gets ~40% uplift in datacenter workloads from it. Intel's implementation appears to be both fatter and less capable.
All SMT does is let the core keep two threads active so that the core's execution resources can be better utilized. As such, you could intentionally code a program such that SMT yields 2x the performance. Outside of such contrived scenarios, though, how much benefit SMT provides depends on how well a given workload can utilize the core's execution resources. If a single thread can already fully load the core, then SMT reduces overall performance, since the two threads just split the same resources while adding partitioning overhead. In the far more common case where there are free resources, the other thread can make use of them and provide a 10-30% overall performance gain.
With the mess that Intel's P core currently is, I can easily see removing SMT being a net win for both client and server. SMT on the upcoming converged core, though, may offer more benefit for a smaller area penalty and hence make sense again for server. Realistically, Intel's converged core may end up having both ST and SMT variants, with client using ST only and server offering both ST and SMT products for their respective markets.