So.... What would be faster...

ericlp

Diamond Member
Dec 24, 2000
6,133
219
106
A) 1080 TI
B) Threadripper 1950X 16-Core

I guess what I'm trying to get at here is, I can buy two 1080 Ti's and one AMD 1700X for about the same price as a full-on system with a 1950X and one 1080 Ti. While it would be really nice to crunch with 16 cores.... The object here is how much work can be done... :)

I am getting the keys to my new office space tomorrow if everything closes like it should. After I paint and do some upgrades to the floor... I'm going to start building a new crunching rig. I'm thinking a 1080 Ti will run circles around a Threadripper... Correct?

Thanks for the help!
Threadripper
 

Ken g6

Programming Moderator, Elite Member
Moderator
Dec 11, 1999
16,244
3,833
75
For what purpose? It varies by project.

CPU-only projects: TR
Asteroids: TR
Most other projects using GPUs: 1080 Ti
 
  • Like
Reactions: ericlp

StefanR5R

Elite Member
Dec 10, 2016
5,498
7,786
136
(ninja'd by Ken g6)
I'm thinking a 1080 Ti will run circles around a Threadripper... ?
It's very different between projects. Many projects don't have a GPU application at all.

In projects which do have a GPU application, you are usually correct. For example, in Folding@Home* and in SETI@Home, a single GTX 1070 or 1080 is about as fast as a dual-processor Xeon system (with 2x low-clocked high-core-count CPUs or with 2x higher-clocked medium-core-count CPUs).

But in Asteroids@Home, for example, the GPU application is inefficient. A GTX 1080 Ti is only as fast in Asteroids as circa 10 threads of a low-clocked Xeon processor, i.e. perhaps as fast as 7 threads of the AMD 1950X. (IOW a 1950X fully used for Asteroids should be 4...5 times as fast as a 1080 Ti.) But most GPU-enabled projects are much more efficient on GPUs compared with CPUs; I am not aware of another project which is as bad as Asteroids, but I haven't tried many yet.
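If you want to sanity-check that ratio, the arithmetic is simple enough to sketch in a few lines of Python (the only inputs are the figures above plus the 1950X's 16C/32T thread count):

```python
# Back-of-the-envelope check of the Asteroids@Home comparison above.
# Assumption from the post: a GTX 1080 Ti keeps up with ~7 threads
# of a 1950X, which has 32 hardware threads (16C/32T).
threads_equal_to_one_1080ti = 7
total_1950x_threads = 32

speedup = total_1950x_threads / threads_equal_to_one_1080ti
print(f"Fully loaded 1950X vs one 1080 Ti: ~{speedup:.1f}x")
# ~4.6x, i.e. inside the "4...5 times" range stated above
```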

--------
edit, re F@H,
*) but only with certain work unit types, and only on Linux (I heard Windows cannot use more than 32 threads in a single CPU "slot" of the F@H client, which is not sufficient to match the F@H throughput of a GTX 1070)
 
Last edited:
  • Like
Reactions: ericlp

GLeeM

Elite Member
Apr 2, 2004
7,199
128
106
I think it would be most helpful if someone would create a list of projects indicating which type of system is best. I know I would like to see such a list. :)

I did find Current BOINC Whitelist somewhat helpful
We have a list that shows if a project is CPU only, which GPU can be used if any, and what Operating Systems.
Look at the end of each project description in Orange Kid's "Distributed Computing Project List" in the Sticky Threads section at the top of the DC forum.
https://forums.anandtech.com/threads/distributed-computing-project-list.2494315/

Because I can never remember which GPU does best in each project it would be cool if that were included, like in Enigma, "AMD is 2X faster than Nvidia".
 
  • Like
Reactions: ericlp

ao_ika_red

Golden Member
Aug 11, 2016
1,679
715
136
Because I can never remember which GPU does best in each project it would be cool if that were included, like in Enigma, "AMD is 2X faster than Nvidia".

As a novice, that's one of the reasons why I have to do some "digging" in several forums. In the future, I wish "The DC project list thread" would also explain how each project behaves on certain CPU/GPU brands, so it will be easier to lure new members into the TeAm.

AFAIK, Milkyway and Einstein do very well on AMD cards (I wish I could own a 7970/280X card because its FP64 performance is in a different ballpark).
 
  • Like
Reactions: ericlp

ericlp

Diamond Member
Dec 24, 2000
6,133
219
106
Most computers are equipped with a Graphics Processing Unit (GPU) that handles their graphical output, including the 3-D animated graphics used in computer games. The computing power of GPUs has increased rapidly, and they are now often much faster than the computer's main processor, or CPU.

NVIDIA (a leading GPU manufacturer) has developed a system called CUDA that uses GPUs for scientific computing. With NVIDIA's assistance, we've developed a version of SETI@home that runs on NVIDIA GPUs using CUDA. This version runs from 2X to 10X faster than the CPU-only version. We urge SETI@home participants to use it if possible. Just follow these instructions:

A 1080 Ti has over 12 billion transistors versus the TR 1950X's 9.6 billion, and usually the higher transistor count wins in speed tests.

https://setiathome.berkeley.edu/cuda.php

I was hoping someone here would know from experience running the latest CUDA drivers with a modern graphics card. Maybe.......... not too many people buying 7-800 dollar graphics cards to run SETI. ;) I'm still thinking a 1080 Ti will outperform a TR running SETI work units. Then again... likewise... probably not many here buying a 1000 dollar CPU to run SETI! :p
 

StefanR5R

Elite Member
Dec 10, 2016
5,498
7,786
136
SETI@Home is a special case. Efficiency of their current mainline GPU application is better than that of their mainline CPU application, but then again only at the same level as third party optimized CPU applications.

In the SETI@Home Wow! Event 2017, I ran among others a host which has 2 GTX 1080 Ti and 1 GTX 1080.* Most of the time during the event I ran 3 tasks in parallel on each card.** This host was measured at 300 GFLOPS during the event. As far as I understand, this number is per task, which would mean an average of 900 GFLOPS on each of my cards.

Since performance scales nearly linearly with shader count of a card when the card is fully utilized by the application, that would mean about 990 GFLOPS on 1080Ti and 710 GFLOPS on 1080 (again as the sum of GFLOPS of the three tasks per card).

I also ran dual Xeon E5-2690v4 which was measured at 23.6 GFLOPS using the optimized CPU application. I presume this number means GFLOPS per logical CPU, hence 660 GFLOPS for this 14C/28T processor (and double for the dual socket host).

And dual Xeon E5-2696v4: 20 GFLOPS per logical CPU presumably, or 880 GFLOPS for the 22C/44T processor.
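Here is the same estimate written out in Python, in case anyone wants to check or reuse it. The shader counts (3584 for a 1080 Ti, 2560 for a 1080) are the one thing not stated above; the "GFLOPS is per task / per logical CPU" readings are my assumptions as noted:

```python
# Rough reconstruction of the Wow! Event GFLOPS estimates above.
# Assumption: measured 300 GFLOPS is per task, 3 tasks per card,
# and throughput scales with the card's shader count.
tasks_per_card = 3
gflops_per_task = 300
shader_counts = {"1080Ti #1": 3584, "1080Ti #2": 3584, "1080": 2560}

total_gflops = gflops_per_task * tasks_per_card * len(shader_counts)  # 2700
total_shaders = sum(shader_counts.values())
per_card = {name: total_gflops * shaders / total_shaders
            for name, shaders in shader_counts.items()}
# -> roughly 990 GFLOPS per 1080 Ti and 710 GFLOPS for the 1080

# CPU side, presuming the measured GFLOPS is per logical CPU:
e5_2690v4 = 23.6 * 28  # 14C/28T -> ~660 GFLOPS per socket
e5_2696v4 = 20.0 * 44  # 22C/44T -> ~880 GFLOPS per socket
```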

The optimized SETI application makes use of AVX. Therefore it is difficult to tell how Broadwell-EP's performance translates to Threadripper's performance, since their AVX units are built differently.

@Markfw had his Threadripper running Asteroids@Home recently, and I ran it on the E5-2696v4. Judging from the average task durations on both processors, the Threadripper 1950X comes very close in performance to the E5-2696v4. Asteroids@Home uses AVX too (on processors which support it, like the ones discussed). Whether or not this also means similar SETI@Home performance is not clear to me.

Caveat:
There may be grave mistakes in my calculations.

not too many people buying 7-800 dollar graphics cards to run seti. ;)
Performance per Watt of 1080Ti and 1080 is identical (considering only power consumption of the card, not of the host). The choice between them is therefore decided by density, cooling, and other considerations.

------------
*) I need to clean this up; it's better to have only cards of the same model in a host.

**) One task per card would not saturate the cards; saturation with 2 tasks would have been OK, but 3 was slightly better still, and 3x3 tasks was a convenient way for me to overcome Maintenance Tuesday. The application has command line parameters which can be tweaked for particular GPU models, which I presume could be a way to saturate big cards with just one task at a time, but I don't know how to work with these parameters.
 
  • Like
Reactions: ericlp

Smoke

Distributed Computing Elite Member
Jan 3, 2001
12,649
198
106
We have a list that shows if a project is CPU only, which GPU can be used if any, and what Operating Systems.
Look at the end of each project description in Orange Kid's "Distributed Computing Project List" in the Sticky Threads section at the top of the DC forum.
https://forums.anandtech.com/threads/distributed-computing-project-list.2494315/

That is a very good thread. It ought to be pinned to the top of the forum ... oh, wait, I already did that. :p

And thank you StefanR5R for all your work and input.
 
  • Like
Reactions: ericlp

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
25,542
14,497
136
OK, The only input I can give.

FIRST, all my GPUs (video cards) do F@H, as one 1080 Ti does 1.1 million PPD, but 32 Threadripper threads only do 200k PPD.
So every box currently (about 200 threads/cores) has WCG running, for a grand total of 105-120k PPD across all 200 cores.

But remember, each project scores points very differently, so F@H does 200k PPD on 32 threads, but WCG on the same box only does 16k PPD (if the averages I see are correct; not sure on that).
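Those numbers make the two ratios easy to see side by side; this is just the arithmetic from the figures above, not a new benchmark:

```python
# PPD figures quoted above (rounded site averages, not precise measurements).
ppd_1080ti_fah = 1_100_000  # one 1080 Ti on Folding@Home
ppd_tr32_fah = 200_000      # 32 Threadripper threads on Folding@Home
ppd_tr32_wcg = 16_000       # same 32 threads on WCG

gpu_vs_cpu_on_fah = ppd_1080ti_fah / ppd_tr32_fah  # 5.5x: why GPUs get F@H
fah_vs_wcg_points = ppd_tr32_fah / ppd_tr32_wcg    # 12.5x: point scales differ,
                                                   # so PPD can't be compared
                                                   # across projects
```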
 
  • Like
Reactions: ericlp

ericlp

Diamond Member
Dec 24, 2000
6,133
219
106
Thanks for all the replies.... :) Makes it more clear what to invest in. Cheers guys. Painting out my office..... New floors and some other stuff are happening by the end of the month. Timeline is mid-November for the build, but I might wait to see what Black Friday has for video cards before buying. Maybe I'll get lucky!