It's funny that you mention the "real world," because I look there and I see the market increasingly moving to cloud services, because the economies of scale from buying at the higher end and consolidating are quite compelling. The "average server" running an "average workload" these days is located somewhere in Amazon, occupying part of a 12-22-core chip (depending on age).
This also isn't true. Oppenheimer puts the 2016 estimate for cloud usage at a little over 17% of the overall server base. Even as the cloud providers approach the point, in the next couple of years, where they hold the majority of available compute capacity (Microsoft, Amazon, and Google each run over a million servers, for instance), that capacity is still a fraction of the actively used server base (where workloads are actually getting run), which is estimated at between 75 and 140 million servers.
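As a rough sanity check on those figures (a sketch using the estimates above; the ~3 million hyperscale total is my assumption from "over a million servers" each for the three providers, not a sourced number):

```python
# Back-of-envelope check: hyperscaler physical footprint vs. the
# estimated actively used server base cited above.
hyperscale_servers = 3e6    # assumed: ~1M+ each for Microsoft, Amazon, Google
installed_base = (75e6, 140e6)  # low and high estimates of the active base

for base in installed_base:
    share = hyperscale_servers / base * 100
    print(f"{hyperscale_servers / 1e6:.0f}M of {base / 1e6:.0f}M servers "
          f"= {share:.1f}% of the active base")
```

Even at the low end of the installed-base estimate, the big three's own machines come out to only a few percent of servers actually running workloads, which is consistent with the ~17% figure for cloud usage overall once smaller providers are included.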
Also, Amazon offers no 22-core chips. The largest they offer are the E5-2686 v4 and the E7-8880 v3, both with 18 cores. Ironically, the latter is used to break the memory bottleneck for VMs with very large memory footprints by leveraging additional sockets. In those situations, would the additional 512GB per socket that AMD is offering make a difference? Maybe. It would likely do so through additional memory slots, and that could be very compelling: Cisco notes that most of their virtualization platform shipments carry 256-512GB of RAM, populating all slots with much cheaper, smaller DIMMs. That also has the effect of minimizing faults. And while density is a major factor for virtualization, most virtualization deployments target no fewer than 5 hosts, and beyond that they move to even numbers when targeting 4-node blocks.
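To make the slot-population point concrete (a sketch with typical DDR4 DIMM sizes and common per-host slot counts; none of these figures are tied to a specific Cisco or AMD platform):

```python
# Illustrative DIMM math: more slots let you reach the same capacity
# with smaller (per-GB cheaper) DIMMs while keeping every slot filled.
target_gb = 512
slot_counts = (8, 16, 24)  # assumed: common per-host DIMM slot counts

for slots in slot_counts:
    dimm_gb = target_gb // slots  # DIMM size needed if all slots are filled
    if dimm_gb * slots == target_gb:
        print(f"{slots} slots x {dimm_gb}GB DIMMs = {target_gb}GB")
```

The same 512GB target that needs 64GB DIMMs on an 8-slot host can be hit with 32GB DIMMs on a 16-slot one, which is the "all slots, cheaper DIMMs" configuration described above.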