I'm writing this thread to invite objective discussion of the Zen 2xxx series. I want to spread a few facts around, give my theory on things, and then invite anyone else (especially those in the field) to share their thoughts. If you're an Intel fanboy coming here to bash Ryzen, take the bashing elsewhere; this is an objective discussion of what I believe the 2xxx series actually is (though, disclaimer: it sheds a bit of a negative light on AMD. More on that soon).
Last year, when AMD launched the Ryzen 1xxx series, it was an incredible leap forward. We got fast, affordable, powerful octa-core chips, and even a 16-core monster for under a grand. IPC was slightly weaker than Intel's best offerings, but in exchange we got more cores and better SMT performance. However, the chips had one major issue: core performance boost (Precision Boost) was based on a fixed function table. As the number of loaded cores went up, the frequency would step down in increments. This meant, for example, that if 5 cores were utilized at 100%, the entire CPU would throttle down toward the base clock. This was mentioned briefly by AnandTech, and you can verify the behavior yourself with various benchmarking software. It had nothing to do with thermals: your CPU could be at 40C and still see the same drop. This caused performance issues in games. In addition, for some as-yet-unknown reason, cache and memory latencies were often higher than on their Threadripper/EPYC counterparts.
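To make the fixed-table behavior concrete, here's a minimal sketch (the frequencies are illustrative, NOT AMD's real bins): boost depends only on how many cores are loaded, and temperature never enters the picture.

```python
# Hypothetical sketch of Zen 1's fixed boost table. The MHz values
# are invented for illustration; the point is the shape of the logic.
BOOST_TABLE_MHZ = {  # active cores -> allowed clock
    1: 4100, 2: 4100, 3: 3700, 4: 3700,
    5: 3500, 6: 3500, 7: 3500, 8: 3500,
}

def zen1_clock(active_cores: int, temp_c: float) -> int:
    """Fixed-table boost: the temperature argument is ignored entirely."""
    return BOOST_TABLE_MHZ[active_cores]

# Whether the chip is at 40C or 70C, loading 5 cores drops every
# core toward the lower bin:
print(zen1_clock(5, 40.0))  # -> 3500
print(zen1_clock(5, 70.0))  # -> 3500, thermal headroom wasted
```

That wasted headroom at 40C is exactly the drop you can observe with monitoring software.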
For Zen+, I have a general theory that's holding water thus far, so I figured I'd share it with others. Pinnacle Ridge is Summit Ridge with a software update. Whatever the differences between Threadripper chips and Ryzen chips were for Zen 1 (whether microcode or some other mechanism at play), the cache issues have been corrected. I expect it was a software/microcode issue. The fact that Threadripper runs the exact same die (just 'higher quality' bins) tells me it was something minor to begin with. If you look at the technical documentation, you'll note they even stated the latencies were supposed to be at around Threadripper levels (someone posted it somewhere, maybe The Stilt?).
Okay, so what about this 12nm nonsense? AMD opted not to actually shrink the die for 12nm. It's true that Zen+ is fabbed on 12nm, but the layout is unchanged; the space savings show up as empty gaps. Spreading things out reduces interference between transistors (source: AnandTech), and less interference means higher attainable clocks. So why is the 2700X so much faster than last gen? Why is it even competitive with Coffee Lake? We know this as well: they rewrote the boosting algorithm. The stepping table is gone, replaced with an algorithm that looks at thermal headroom and boosts as high as possible within thermal limits. The 2700X also gets a higher TDP.
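Here's my reading of the new approach as a sketch (all numbers invented, and the real algorithm surely also weighs current and voltage, not just temperature): instead of indexing a per-core-count table, pick the highest clock the remaining thermal headroom allows.

```python
# Hedged sketch of Zen+'s opportunistic boost. Constants are invented
# for illustration; the real limits and scaling are AMD's business.
MAX_BOOST_MHZ = 4300
BASE_MHZ = 3700
TEMP_LIMIT_C = 85.0
MHZ_PER_DEGREE = 25  # invented scaling factor

def zen_plus_clock(active_cores: int, temp_c: float) -> int:
    """Opportunistic boost: clock scales with remaining thermal headroom,
    regardless of how many cores are loaded."""
    headroom_c = max(0.0, TEMP_LIMIT_C - temp_c)
    clock = BASE_MHZ + int(headroom_c * MHZ_PER_DEGREE)
    return min(clock, MAX_BOOST_MHZ)

# A cool chip holds max boost even with 5 cores loaded; only a hot
# chip steps down. Core count no longer forces a lower bin.
print(zen_plus_clock(5, 40.0))  # -> 4300
print(zen_plus_clock(5, 80.0))  # -> 3825
```

Note how this inverts the Zen 1 behavior: better cooling now directly buys you clocks, which is why the 2700X rewards big air and AIO coolers.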
I also expect the boosting algorithm is entirely microcode- and/or UEFI-based. AMD could bring this technology to Zen 1 if they wanted to; however, I suspect they won't. We know it's not a hardware function exclusive to the new die, because the boost technology is also present in Raven Ridge, which is NOT Pinnacle Ridge silicon.
So what about the improved memory compatibility? What about improved XFR? X470 brought better boards to market: more PCB layers, better VRMs (some with heatsinks!), etc. That allows for better memory compatibility. If you look back at Threadripper vs Ryzen 1xxx, you'll notice Threadripper had better memory compatibility on many boards. That wasn't a maturity issue; it was because those boards had more layers/traces to accommodate the memory routing, and because of the way quad channel is implemented on Threadripper, there wasn't as much strain on the IMC.
There is also the SMALL possibility that the IMC was tweaked, but I'm betting they didn't touch the chip at all. This was a minor port to 12nm plus a software upgrade, which buys them the time they need for a significantly faster next-gen chip.
So where does this leave Threadripper? I'm at a bit of a loss here. Threadripper would get the same upgrades, but it won't benefit nearly as much as the 2700X does, since each die has less TDP to work with. They may just leave it at that; the 2950X would still be faster. It's possible they could raise the TDP, or alternatively they could raise the core count for bragging rights at the top end. I guess we'll find out.
Note that most of the above is theory, derived from information in the media plus the fact that I can roughly match 2700X benchmarks by setting downcore control to 4+0, removing 2 DIMMs, and making sure I'm in Local memory mode.
One final note, and a bit of a rant: only one benchmark in AnandTech's review was off that I'm aware of: Rocket League. Nvidia has made some fundamental changes to their drivers (don't make me pull out my tinfoil hat and tell you my theory on their 'optimizations'... if only AMD could get back in the graphics game), and Rocket League's developer has released a bunch of patches. That's why I call for open-source benchmarking. Review sites should also keep all current chips and retest them with the latest firmware, drivers, etc. for every new CPU review. AT (to my knowledge) did not retest Ryzen 1xxx or Coffee Lake for this review. I suspect AT doesn't get to keep review samples and doesn't purchase their own retail chips, and therefore can fall into this trap from time to time. It took me maybe 2 hours to run nearly every benchmark in the suite they were transparent about. If they wrote a script to run the benchmarks in sequence, they could have retested as many chips as they wanted in those 2 hours and had the full review ready before launch. Instead we got placeholders, shills attacking AT's numbers, etc. However, every review site/channel out there has its faults.
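The scripted retesting I'm suggesting really is this simple; here's a minimal sketch (the benchmark names and commands are placeholders, not AT's actual suite): run each benchmark in sequence, time it, and log the results so retesting a chip is one command.

```python
# Sketch of an automated benchmark runner. The commands below are
# placeholder stand-ins; substitute the real benchmark invocations.
import csv
import subprocess
import time

BENCHMARKS = [
    ("cinebench", ["echo", "cinebench-run"]),  # placeholder command
    ("blender",   ["echo", "blender-run"]),    # placeholder command
]

def run_suite(out_path: str) -> None:
    """Run every benchmark in sequence and log name, wall time, and exit code."""
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["benchmark", "seconds", "returncode"])
        for name, cmd in BENCHMARKS:
            start = time.monotonic()
            result = subprocess.run(cmd, capture_output=True, text=True)
            writer.writerow([name, round(time.monotonic() - start, 2),
                             result.returncode])

run_suite("results.csv")
```

Swap chips, boot, run the script overnight, and you have fresh numbers on current firmware and drivers for every CPU on the bench.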