2x 1700 overclocked to 3.9-4.0 GHz vs 1x TR 1950X

ericlp

Diamond Member
Dec 24, 2000
6,133
219
106
What would be faster?

A) Two systems with 1700s overclocked to 3.9 GHz on B350 motherboards, each with an Nvidia GTX 1080 Ti

OR

B) One system: a Threadripper 1950X with two GTX 1080s...

This will be doing mostly DC work. Both options would give me 16 cores.

I would rather go with A since I want a two-system setup. I used to worry about power being off the grid, but I'm in the process of purchasing a new office space, and apparently the HOA fees cover power! Use as much as I want. WhooHoo! So I'm kinda excited to build a few high-end PCs again.
 

Topweasel

Diamond Member
Oct 19, 2000
5,436
1,654
136
Honestly, the two 1700 machines, for several reasons. If you are running a single tool that can use all 16 cores, chances are it will not scale as well as running the two tasks independently on two systems. With DC you lose some performance to parallelism overhead anyway, but I think the dual-1700 setup would be smoother. It'll also be cheaper (though getting two 1700s to 3.9 might be difficult).

One 1950X would be $1,000 vs. $550 for two 1700s.
One TR4 board is ~$300 vs. $200 for a B350 board.
Memory cost is the same.
Two 650 W PSUs would generally be cheaper than one 1 kW to 1.2 kW unit (you'd need at least 800 W for the CPU and two cards, plus some cushion).
So really the difference is needing two sets of drives and two cases, and that's going to be cheaper than the $600 more you are spending on the CPU and motherboard for TR.
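A quick tally of those figures (a sketch using only the list prices quoted above; the ~$600 figure in the post presumably also folds in the PSU difference, which is left out here along with cases and drives):

```python
# CPU + motherboard cost delta, using the figures quoted above (USD).
tr_build  = 1000 + 300     # one 1950X plus one TR4 (X399) board
dual_1700 = 550 + 2 * 200  # two 1700s ($550 total) plus two B350 boards

delta = tr_build - dual_1700
print(delta)  # 350 -> TR route costs ~$350 more before PSUs, cases, and drives
```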

The one caveat is that if you can run two instances in a NUMA setup, it will be basically identical minus clock speed, assuming you aren't overclocking the 1950X (which personally I wouldn't).
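That two-instance NUMA split could be sketched on Linux roughly like this (my illustration, not from the thread; the CPU numbering and the BOINC launch command are assumptions, so check `lscpu` or `numactl --hardware` on the real box):

```python
import os
import subprocess

# Assumed 1950X layout: die 0 = CPUs 0-15, die 1 = CPUs 16-31 (SMT included).
# Verify the real numbering with `lscpu` before relying on this.
NODE_CPUS = {0: set(range(0, 16)), 1: set(range(16, 32))}

def launch_pinned(cmd, node):
    """Start cmd with its CPU affinity restricted to one NUMA node's CPUs."""
    def pin():
        # 0 means "this process", i.e. the child, just before it execs cmd
        os.sched_setaffinity(0, NODE_CPUS[node])
    return subprocess.Popen(cmd, preexec_fn=pin)

# Illustrative usage: one BOINC client instance per die, each with its own
# data directory (paths are examples, not a recommended layout):
# launch_pinned(["boinc", "--dir", "/var/lib/boinc-node0"], node=0)
# launch_pinned(["boinc", "--dir", "/var/lib/boinc-node1"], node=1)
```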
 

pjmssn

Member
Aug 17, 2017
89
11
71
That's very interesting to see that 2 machines are preferred. Will you be setting up a 2-node Beowulf cluster with the 2 computers so as to be able to distribute your code on 16 cores?
(Sorry if I missed it but I do not know what DC work stands for.)
 

ericlp

Diamond Member
Dec 24, 2000
6,133
219
106
DC = Distributed Computing, you know, for things like Folding@Home, SETI, Prime, and many others. Most DC projects piggyback on the open-source BOINC platform.

Berkeley Open Infrastructure for Network Computing

http://boinc.berkeley.edu/

 

DrMrLordX

Lifer
Apr 27, 2000
21,632
10,845
136
Yes, very little reason to localize computational resources to one node when doing something like a DC project.
 

WhoBeDaPlaya

Diamond Member
Sep 15, 2000
7,414
401
126
Yes, very little reason to localize computational resources to one node when doing something like a DC project.
Or anything MT for that matter.
While certainly not as power efficient, my 2x HP Z800 (dual X5690s) and 2x HP Z600 (dual X5670s) are dirt cheap for the amount of MT power that they bring.
 

Topweasel

Diamond Member
Oct 19, 2000
5,436
1,654
136
Or anything MT for that matter.
While certainly not as power efficient, my 2x HP Z800 (dual X5690s) and 2x HP Z600 (dual X5670s) are dirt cheap for the amount of MT power that they bring.
Not completely true. Just because it's MT work doesn't mean it would be just as quick. For encoding work, for example, you're much better off using one machine if you're working with a single video file, though if you're processing two files you'd be better off with two machines. Getting MT workloads to behave like DC comes with a lot of overhead for piecing the processed work back together. But if you are doing DC work already, there is little difference between doing the work on one system or two or thirty.

The advantages there would be lower latency on the actual payload and the higher clocks you get with smaller CPUs. In some cases, like Celeron vs. i5, or a 1700X vs. two 1300Xs, the $100 CPU difference is a pittance compared to the extra cost of all the other hardware for a second machine. But for a 1950X vs. two 1700Xs, the premium cost of all the support hardware for the 1950X negates any cost savings.
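The single-file vs. two-file encoding point can be put in rough numbers with a toy Amdahl's-law model (my illustration, not from the thread; the 10% serial "stitching" fraction is an assumed figure):

```python
# Toy Amdahl's-law comparison: one 16-core job on a single file vs.
# two independent 8-core jobs on two files. serial_frac models the
# non-parallel merge/stitch overhead of splitting one file's work.
def amdahl_speedup(cores, serial_frac):
    return 1 / (serial_frac + (1 - serial_frac) / cores)

serial = 0.10  # assumption: 10% of each encode is serial overhead

one_16core = amdahl_speedup(16, serial)       # one file across all 16 cores
two_8core  = 2 * amdahl_speedup(8, serial)    # two files, one per 8-core box

print(round(one_16core, 1), round(two_8core, 1))  # 6.4 9.4
```

With serial_frac at zero (pure DC-style work) both routes land at 16x aggregate, which matches the "one system or two or thirty" point above.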
 

WhoBeDaPlaya

Diamond Member
Sep 15, 2000
7,414
401
126
^ You're exactly right, the thought occurred to me after posting.
I typically have a whole mess of stuff to encode, and splitting that up over multiple systems is not a problem.