2990WX review thread - It's live!

Markfw

CPU Moderator, VC&G Moderator, Elite Member
Super Moderator
May 16, 2002
16,950
373
136
#1
So I am sure some of you will beat me getting up in the AM, so please post a link to any 2990WX review you see and I will add it to the OP. They should start flowing in on the morning of 8/13/2018, so this is a placeholder until then.

Edit: I was right on a couple of points in my 2990WX builder's thread. First, that MSI motherboard I ordered appears to be the one everyone is using, and second, an overclock to 4 GHz is common and draws 500-600 watts, so my custom water loop is required.

Posted by bouowmx: First extreme overclocking results on HWBOT: Cinebench R15 8391 cb

Posted by Dookey https://www.hardocp.com/article/2018/08/13/amd_ryzen_threadripper_2990wx_2950x_cpu_review

Posted by Hitman928 https://www.anandtech.com/show/13124/the-amd-threadripper-2990wx-and-2950x-review
https://www.techspot.com/review/1678-amd-ryzen-threadripper-2990wx-2950x/
https://www.computerbase.de/2018-08/amd-ryzen-threadripper-2990wx-2950x-test/

Posted by Gideon https://www.phoronix.com/scan.php?page=article&item=amd-linux-2990wx&num=1

Posted by Dlerious Hardware Unboxed review: https://www.youtube.com/watch?v=QI9sMfWmCsk

Posted by NTMBK https://techreport.com/review/33977/amd-ryzen-threadripper-2990wx-cpu-reviewed
 
Last edited:

Gideon

Senior member
Nov 27, 2007
417
16
136
#3
I would assume it's the usual 9:00 AM (EDT) or about 4h 45m to go.
 

Bouowmx

Senior member
Nov 13, 2016
819
8
116
#4
First extreme overclocking results on HWBOT: Cinebench R15 8391 cb

I previously estimated the score at ~8200 cb, about 2% under the mark.
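For anyone curious how that 2% figure falls out, here's the quick arithmetic (using the 8391 cb HWBOT result and the ~8200 cb estimate quoted above):

```python
actual = 8391    # Cinebench R15 score from the HWBOT submission
estimate = 8200  # prior estimate quoted above

# relative error of the estimate vs. the measured score
error_pct = (actual - estimate) / actual * 100
print(f"estimate was {error_pct:.1f}% under the measured score")
# prints: estimate was 2.3% under the measured score
```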
 

french toast

Senior member
Feb 22, 2017
916
0
91
#10
Crikey.. those interconnect power numbers!
Ryzen cores themselves are very efficient, but the fabric penalty is staggering.
 

EXCellR8

Platinum Member
Sep 1, 2010
2,705
29
126
#11
that is some truly sexual packaging.
 

dnavas

Senior member
Feb 25, 2017
231
12
61
#12

Hitman928

Golden Member
Apr 15, 2012
1,602
61
136
#13
Crikey.. those interconnect power numbers!
Ryzen cores themselves are very efficient, but the fabric penalty is staggering.
It's not much different from Intel's mesh solution. Look at the 7980XE vs the 2950X: 39 W vs 43 W of "uncore" power use. Uncore power will continue to scale up with core count for both Intel and AMD. When you start getting to these extreme core count CPUs, you need something more complex to connect them all and keep performance from dropping off a cliff.

Intel has two different solutions for desktop vs HEDT/enterprise for this reason. Lower core count CPUs get a ring bus, which provides lower latency and power; higher core count CPUs get a mesh, which takes a hit on power and latency for better throughput. AMD applies one solution across all product ranges to simplify things for them. It's worked out pretty well so far, I'd say. We'll see what solutions both companies come up with moving forward.

That's why Ian said this in the Anandtech review:

After core counts, the next battle will be on the interconnect. Low power, scalable, and high performance: process node scaling will mean nothing if the interconnect becomes 90% of the total chip power.
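To put the 39 W / 43 W uncore figures above in context, here's a rough share-of-budget calculation. The TDP values are the rated spec-sheet numbers, and treating TDP as package power is a simplification, so take the percentages as ballpark only:

```python
# Uncore power figures quoted above; TDPs are the rated spec values.
# Treating TDP as total package power is a rough simplification.
chips = {
    "Core i9-7980XE":     {"uncore_w": 39, "tdp_w": 165},
    "Threadripper 2950X": {"uncore_w": 43, "tdp_w": 180},
}

for name, c in chips.items():
    share = c["uncore_w"] / c["tdp_w"] * 100
    print(f"{name}: uncore is ~{share:.0f}% of TDP")
# both work out to roughly 24% of TDP
```

Which is the point: despite the very different topologies, both interconnects eat a similar slice of the power budget at these core counts.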
 

Edrick

Golden Member
Feb 18, 2010
1,879
0
106
#14
I think Anandtech's reviewer said it best. Once the "Core Wars" stop, the interconnect will be the next big thing for AMD and Intel to improve.
 

french toast

Senior member
Feb 22, 2017
916
0
91
#15
I was coming back here to say the same thing.
I knew it was going to be bad, but, holy smokes!
Direct link for those that miss it: https://www.anandtech.com/show/13124/the-amd-threadripper-2990wx-and-2950x-review/4
Yeah, I knew it was much less efficient than a ring bus, but I didn't expect it to be THAT bad. Intel's mesh is also worse, but better than IF.
It's not much different from Intel's mesh solution. Look at the 7980XE vs the 2950X: 39 W vs 43 W of "uncore" power use. Uncore power will continue to scale up with core count for both Intel and AMD. When you start getting to these extreme core count CPUs, you need something more complex to connect them all and keep performance from dropping off a cliff.

Intel has two different solutions for desktop vs HEDT/enterprise for this reason. Lower core count CPUs get a ring bus, which provides lower latency and power; higher core count CPUs get a mesh, which takes a hit on power and latency for better throughput. AMD applies one solution across all product ranges to simplify things for them. It's worked out pretty well so far, I'd say. We'll see what solutions both companies come up with moving forward.

That's why Ian said this in the Anandtech review:
I can see it makes some sense for high core counts, but it's still worse than Intel's mesh, and Intel has more cores in that matchup too.

All those IF links in the 4-die/quad-channel 2990WX make the problem much worse; there has to be a better solution here.
For desktop, IF certainly draws too much power compared to a ring bus topology. Could we see a CCX with higher core counts and a ring bus, with IF linking fewer CCXs? And in future, a butter-donut topology linking CCXs on an active interposer?

AMD have done really well so far, but they need to change things up; loads of small 4-core CCXs connected via Infinity Fabric is not going to cut it, imo.
A 6-8 core CCX with a ring bus topology is a must at the least, and they have to improve Infinity Fabric's efficiency connecting the CCXs. Double up the links and clock them lower? Use an active interposer?
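The ring-vs-mesh trade-off being debated here can be illustrated with a toy hop-count model. This compares idealized topologies only - real uncores add extra ring/mesh stops for cache slices, memory controllers, and so on - but it shows why rings stop scaling and meshes take over at higher node counts:

```python
from itertools import product

def ring_avg_hops(n):
    """Average shortest-path distance between distinct nodes on an n-node ring."""
    total = sum(min(abs(a - b), n - abs(a - b))
                for a in range(n) for b in range(n) if a != b)
    return total / (n * (n - 1))

def mesh_avg_hops(side):
    """Average Manhattan distance between distinct nodes on a side x side 2D mesh."""
    nodes = list(product(range(side), repeat=2))
    total = sum(abs(ax - bx) + abs(ay - by)
                for (ax, ay) in nodes for (bx, by) in nodes if (ax, ay) != (bx, by))
    n = len(nodes)
    return total / (n * (n - 1))

# Ring latency grows roughly as n/4; mesh grows roughly as sqrt(n).
for cores in (16, 36):
    side = int(cores ** 0.5)
    print(f"{cores:2d} nodes: ring {ring_avg_hops(cores):.2f} hops, "
          f"mesh {mesh_avg_hops(side):.2f} hops")
```

At 16 nodes the mesh already averages noticeably fewer hops than the ring, and the gap widens quickly, which is essentially why Intel splits its lineup between the two topologies.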
 

SPBHM

Diamond Member
Sep 12, 2012
4,730
17
106
#16
The most interesting thing would be a comparison with a 32-core Epyc, because the results are mostly bad outside of rendering. It's likely the memory configuration, but how much the Infinity Fabric itself is a problem in those tests is not clear without an Epyc to compare against... hopefully someone will do that.

In any case, it looks too heavily compromised to be a viable high-end desktop CPU.
 

BigDaveX

Senior member
Jun 12, 2014
314
27
101
#17
Some really impressive numbers in some benchmarks, but kinda underwhelming in others. It looks like AMD has done a great job of minimising NUMA overhead in two-die situations, but bump it up to four and performance plummets again.

EDIT: The 2990WX's gaming numbers on Tech Report make for ugly reading - performance is all the way down to Bulldozer levels in several cases. I was worried that the original Threadripper would be a repeat of Quad FX, which I'm happy to say turned out not to be the case, and it isn't really the case here either, since the 2990WX at least posts some very solid multi-thread numbers (Quad FX couldn't even manage that). Still, there are clearly some major pitfalls that need to be worked around in AMD's future designs.
 
Last edited:

tamz_msc

Platinum Member
Jan 5, 2017
2,148
86
106
#18
Given that the interconnect takes up a significant portion of the package power, I am interested to see a comparison of frequency vs. # of cores loaded between the 2-die and 4-die parts.
 
Jan 28, 2017
63
5
51
#19
Very inconsistent, even "disastrous", on Windows... no surprise.
Anyone receive a 24 core sample?
 
Jun 4, 2004
12,308
245
146
#20
Yeah, I knew it was much less efficient than a ring bus, but I didn't expect it to be THAT bad. Intel's mesh is also worse, but better than IF.
I can see it makes some sense for high core counts, but it's still worse than Intel's mesh, and Intel has more cores in that matchup too.

All those IF links in the 4-die/quad-channel 2990WX make the problem much worse; there has to be a better solution here.
For desktop, IF certainly draws too much power compared to a ring bus topology. Could we see a CCX with higher core counts and a ring bus, with IF linking fewer CCXs? And in future, a butter-donut topology linking CCXs on an active interposer?

AMD have done really well so far, but they need to change things up; loads of small 4-core CCXs connected via Infinity Fabric is not going to cut it, imo.
A 6-8 core CCX with a ring bus topology is a must at the least, and they have to improve Infinity Fabric's efficiency connecting the CCXs. Double up the links and clock them lower? Use an active interposer?

This would suggest that maybe an active interposer is the way forward? Mount the CCXs on the silicon interposer with the minimum routing capabilities required for inter-chip communication, use the butter-donut topology, and maybe both power and latency decrease.
 
Feb 23, 2017
421
268
96
#21
The cores are running at 2 GHz whilst idle?
Is there a reason for this?
If it is possible to manually override this idle speed, how much power could it save?
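A rough answer to "how much power could it save" comes from the first-order DVFS model, dynamic power ∝ f·V². The voltage/frequency pairs below are illustrative guesses, not measured Threadripper values, and the model ignores static leakage:

```python
# First-order dynamic-power model: P_dyn ~ C * f * V^2.
# The V/f pairs below are hypothetical, not measured 2990WX values.
def relative_dynamic_power(f_ghz, v_volts, f_ref_ghz, v_ref_volts):
    """Dynamic power of one operating point relative to a reference point."""
    return (f_ghz * v_volts**2) / (f_ref_ghz * v_ref_volts**2)

# e.g. a deeper idle state of 0.8 GHz @ 0.7 V vs. 2.0 GHz @ 0.9 V
ratio = relative_dynamic_power(0.8, 0.7, 2.0, 0.9)
print(f"deeper idle state uses ~{ratio:.0%} of the 2 GHz state's dynamic power")
# prints: deeper idle state uses ~24% of the 2 GHz state's dynamic power
```

So if those assumed operating points were anywhere near reality, a lower idle clock could cut idle dynamic power several-fold, which is why the 2 GHz floor raises eyebrows.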
 

beginner99

Diamond Member
Jun 2, 2009
3,925
70
126
#22
I think Anandtech's reviewer said it best. Once the "Core Wars" stop, the interconnect will be the next big thing for AMD and Intel to improve.
This has already started. There's no sense packing in even more cores after the bumps of the last 1.5 years. If the mesh/fabric takes half the power, you have a huge opportunity, especially for AMD on desktop. Imagine the power use of Ryzen with a ring bus and a better process (higher clocking); an 8-core at 5 GHz would be a non-issue.
 

moinmoin

Senior member
Jun 1, 2017
625
145
96
#23
What I personally find more interesting than the uncore power increase between the 2950X and 2990WX is the decrease between the 2990WX and Epyc 7601. Considering the price difference between the latter two is not that big, and with the Phoronix benchmark results in mind, this leaves the 2990WX with a very specific use case (such as a build server) where it's both more efficient and faster than the other options.

In general, due to the increased power usage at idle, none of the HEDT/server chips are recommendable for common consumer use cases where the PC is idle more often than not. Now that the core parts are optimized for power use, it will be interesting to see what can be done for the ever more demanding uncore (make no mistake, the increases in speed and bandwidth with DDR5 and PCIe 4/5 will need even more power).
 

french toast

Senior member
Feb 22, 2017
916
0
91
#24
This would suggest that maybe an active interposer is the way forward? Mount the CCXs on the silicon interposer with the minimum routing capabilities required for inter-chip communication, use the butter-donut topology, and maybe both power and latency decrease.
I think that is the optimal solution, though perhaps too expensive for mainstream?
What about a larger 6-8 core CCX with a ring bus, plus a more efficient way of connecting the CCXs on die? An active interposer sounds the best in terms of performance, for sure, but is there a cheaper/easier way to get the communication efficiency up? Double up the IF links between CCXs and clock them lower?
 

PeterScott

Platinum Member
Jul 7, 2017
2,517
93
96
#25
Some really impressive numbers in some benchmarks, but kinda underwhelming in others. It looks like AMD has done a great job of minimising NUMA overhead in two-die situations, but bump it up to four and performance plummets again.
I think AnandTech sums it up nicely - it's a niche of a niche:
However the 2950X already sits as a niche proposition for high performance – the 2990WX takes that ball and runs with it, making it a niche of a niche.
It appears more suitable as a render-farm chip for very specific workloads than as a general-purpose server/workstation chip. Which is probably as it should be, or it would eat into Epyc sales.
 

