Threadripper BUILDERS thread


urvile

Golden Member
Aug 3, 2017
1,575
474
96
Out of interest: I am a software architect, so it would be interesting if it were the latter...

The whole concept of things like Folding@home and SETI@home is that you lend your hardware to do calculations for others; the fact that he has full control doesn't make any difference. This was quite unique when it first arrived, because before that you were used to such things being done on big supercomputers built in large data centers... The fact that he is in full control doesn't change that, it just means he sets the terms for the loan...

The mere concept of building such software and infrastructure is super interesting...
It's not to belittle what he does; it's just not as interesting to me (professionally)...

The distributed systems I have seen in industry are quite different: you can have a cluster that works off shared state in a database, and then add and remove nodes (scale) depending on required throughput, availability, etc.
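That shared-state pattern can be sketched in a few lines, with sqlite3 standing in for the shared database and threads standing in for nodes. All names here are illustrative, not from any real system:

```python
# A pool of interchangeable workers that claim tasks from shared state
# (an in-memory sqlite3 table standing in for the shared database),
# so "nodes" can be added or removed freely.
import sqlite3
import threading

db = sqlite3.connect(":memory:", check_same_thread=False)
lock = threading.Lock()  # serialize access to the shared connection
db.execute("CREATE TABLE tasks (id INTEGER PRIMARY KEY, payload INT, "
           "status TEXT DEFAULT 'pending', result INT)")
db.executemany("INSERT INTO tasks (payload) VALUES (?)",
               [(n,) for n in range(10)])
db.commit()

def worker():
    """Claim one pending task at a time until none are left."""
    while True:
        with lock:
            row = db.execute("SELECT id, payload FROM tasks "
                             "WHERE status = 'pending' LIMIT 1").fetchone()
            if row is None:
                return  # nothing left; this node can be retired
            task_id, payload = row
            db.execute("UPDATE tasks SET status = 'claimed' WHERE id = ?",
                       (task_id,))
        result = payload * payload  # the actual computation
        with lock:
            db.execute("UPDATE tasks SET status = 'done', result = ? "
                       "WHERE id = ?", (result, task_id))
            db.commit()

# "Scaling" is just starting more (or fewer) workers.
threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads: t.start()
for t in threads: t.join()

done = db.execute("SELECT COUNT(*) FROM tasks WHERE status='done'").fetchone()[0]
print(done)  # 10
```

Adding throughput is just starting more worker threads; nothing else has to change, because all coordination lives in the shared table.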
 

jmelgaard

Member
May 23, 2011
27
9
76
The distributed systems I have seen in industry are quite different: you can have a cluster that works off shared state in a database, and then add and remove nodes (scale) depending on required throughput, availability, etc.

Except that the "database" is often "distributed" as well (under some consistency model, sharding level, etc.), that is true for me as well...

I have been writing bids for projects where we would be able to use the concept of sending instructions and data out to nodes and then receiving the results back, without needing the shared state (a rough simplification of the same general concept), and where the calculations were heavy enough to justify it. We would obviously control the nodes, which is a rather big contrast, and it would still be child's play in comparison.

That doesn't make such systems less interesting to me though.

Anyways, this is drifting way off the subject of building TR systems... >.<
 

urvile

Golden Member
Aug 3, 2017
1,575
474
96
Except that the "database" is often "distributed" as well (under some consistency model, sharding level, etc.), that is true for me as well...

I have been writing bids for projects where we would be able to use the concept of sending instructions and data out to nodes and then receiving the results back, without needing the shared state, and where the calculations were heavy enough to justify it. We would obviously control the nodes, which is a rather big contrast, and it would still be child's play in comparison.

That doesn't make such systems less interesting to me though.

Well, the system we developed was a piece of middleware that controlled a tolling system. It wasn't really a pure distributed system; it would be more accurate to say it had distributed-system aspects. We did it that way primarily for availability and throughput: each server could pick up a data set (for certain functionality) from where another server left off. The data sets contained the state, so a particular node would know how to process them.

It's distributed computing, just not in the sense of purely crunching numbers. We needed to be able to process data from a single (enterprise-grade) database instance in a distributed fashion if required.
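The "data set carries its own state" idea can be sketched like this; any node can pick up a batch where another left off because the record itself says how far processing got. Field names are made up for illustration:

```python
# Self-describing work unit: the batch embeds a cursor, so processing can
# resume on any node after another node stops partway through.
def process_batch(batch, fail_after=None):
    """Process a batch from wherever its embedded cursor points.

    `fail_after` simulates a node dying partway through; the batch can then
    be handed to another node, which resumes from batch["cursor"].
    """
    while batch["cursor"] < len(batch["records"]):
        if fail_after is not None and batch["cursor"] >= fail_after:
            return batch  # node "dies"; the state travels with the data set
        record = batch["records"][batch["cursor"]]
        batch["results"].append(record * 2)  # the per-record work
        batch["cursor"] += 1
    return batch

batch = {"records": [1, 2, 3, 4, 5], "cursor": 0, "results": []}
batch = process_batch(batch, fail_after=2)  # server A gets through 2 records
batch = process_batch(batch)                # server B finishes the rest
print(batch["results"])  # [2, 4, 6, 8, 10]
```

Because no record is processed twice and none is skipped, availability comes almost for free: a replacement server just reloads the data set and continues.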
 

IEC

Elite Member
Super Moderator
Jun 10, 2004
14,328
4,913
136
From csbin's link (http://www.tomshardware.com/news/amd-1900-threadripper-tr4-ryzen,35360.html):

[Image: http://media.bestofmicro.com/6/T/707861/original/399.JPG]
 

Fir

Senior member
Jan 15, 2010
484
194
116
Does anyone know if there is any performance difference using the DIMM nvme adapter on the Asus Zenith vs. the onboard nvme socket that's under the cooler?
I have a single 2TB 960 pro in the DIMM adapter (that takes two drives) now and performance is OK but not quite the level it was on my x99 board. It's probably about 3% slower so no dealbreaker.

However, running a pair of 960s striped and bootable would be nice indeed!
Another option would be to boot off one on the board and use two striped as a scratch disk.
 
  • Like
Reactions: lightmanek

jmelgaard

Member
May 23, 2011
27
9
76
And you don't have to buy a $100 key to use it. ;-)

I am guessing that is a comment about the Intel debacle? To be fair, it was only RAID 5 and up you had to pay for, wasn't it?... (Only to be fair.)
Wasn't there also something about it only working with Intel NVMe drives (on Intel)??...
I am guessing that since AMD doesn't produce NVMe drives, that is not a limitation here?... (Which would be 2x wins.)

(We can hope that this puts pressure on Intel so they come to their senses as well >.<, but I am not holding my breath.)...
 

hasta666

Junior Member
Aug 18, 2017
17
8
81
However, running a pair of 960s striped and bootable would be nice indeed!
Another option would be to boot off one on the board and use two striped as a scratch disk.

I'm planning exactly that with three 500GB 960 Pros. I like the OS having a dedicated drive, but I'm going to test a 3-drive RAID 0 array to see if it scales well. If it does, I'll use a SATA SSD for the OS.
 

StefanR5R

Elite Member
Dec 10, 2016
5,498
7,786
136
Does anyone know if there is any performance difference using the DIMM nvme adapter on the Asus Zenith vs. the onboard nvme socket that's under the cooler?
According to the manual, each of these slots is using PCIe 3.0 x4 lanes directly from the CPU. They should perform the same.
 

tamz_msc

Diamond Member
Jan 5, 2017
3,770
3,590
136
Got a 1950X, Zenith Extreme and Noctua NH-U14S on the way :) Also got a Vega 64 LC. Replacing my X99 system, first time I've gone full AMD since the Athlon 64 X2. Pretty excited!
Just look out for clearance on the top-most PCI-e slot.
 
  • Like
Reactions: Drazick

StefanR5R

Elite Member
Dec 10, 2016
5,498
7,786
136
tomshardware.de reports wrong temperature readings with Agesa 1003 patch 4:
http://www.tomshardware.de/threadripper-bios-agesa-fehler-temperaturwerte,news-258405.html

Also, EK-Supremacy TR edition quick hands-on:
http://www.tomshardware.de/wasserku...emperaturen-probleme,testberichte-242386.html
It's in German of course, but there are pictures. ;) Mr. Wallossek approves of the mounting procedure and of the cooling capacity, as far as it could be determined at this stage. Ignore the graphs; they are presumed wrong due to above mentioned Agesa version.
 
  • Like
Reactions: IEC

Topweasel

Diamond Member
Oct 19, 2000
5,436
1,654
136
I am guessing that is a comment about the Intel debacle? To be fair, it was only RAID 5 and up you had to pay for, wasn't it?... (Only to be fair.)
Wasn't there also something about it only working with Intel NVMe drives (on Intel)??...
I am guessing that since AMD doesn't produce NVMe drives, that is not a limitation here?... (Which would be 2x wins.)

(We can hope that this puts pressure on Intel so they come to their senses as well >.<, but I am not holding my breath.)...

No, it was both RAID 5 and RAID 1, possibly with two keys: one for RAID 1 and another for RAID 5 plus RAID 1 ($100 vs. ~$299). And it had to be Intel SSDs no matter what.

But yeah, the advantages go from single NVMe drives to being able to RAID for free, with RAID options of 0/1/10, all with whatever NVMe drives you want to use.
 
  • Like
Reactions: lightmanek

Chaotic42

Lifer
Jun 15, 2001
33,929
1,097
126
Well, it ran for like 10 minutes and then died. No idea what the problem is since the motherboard display is broken. I'm going to swap out the motherboard tomorrow. If that doesn't work, I'm returning the whole thing. We'll see.
 

Topweasel

Diamond Member
Oct 19, 2000
5,436
1,654
136
Through the chipset. How is AMD implementing theirs?
Through the CPU.

That's what the VROC thing is for: RAID through CPU-attached NVMe.

The way Intel chipsets work, and sadly how most X299 boards work, is that you get X amount of CPU PCIe lanes, and an additional 4 of those get sent to the chipset, like on AM4. At the chipset, that PCIe 3.0 x4 link is bridged out to 20 PCIe lanes. With X299, because a board could be dealing with a 16-, 28-, or 44-lane CPU, most motherboards are reluctant to rely on CPU lanes for most PCIe services outside of PCIe x16 slot #1. What I have seen is that the PCIe slots are more likely to get CPU lanes (except PCIe x16 #2), and only one or two of the NVMe slots get CPU lanes.

The positive side is that you get RAID for free with chipset-connected NVMe. The bad side is that RAID 0 performance is capped at that of a single NVMe drive, since everything funnels through the chipset's x4 uplink, and RAID 1 would have half the performance of a single drive.
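That uplink ceiling is easy to put numbers on. These are nominal PCIe 3.0 figures plus a ballpark sequential-read speed for a 960 Pro, not measurements:

```python
# Back-of-envelope for the chipset-uplink bottleneck: two fast NVMe drives
# striped behind a PCIe 3.0 x4 uplink cannot exceed the uplink itself.
GBPS_PER_PCIE3_LANE = 8e9 * (128 / 130) / 8 / 1e9  # 8 GT/s, 128b/130b encoding

uplink = 4 * GBPS_PER_PCIE3_LANE   # chipset uplink is PCIe 3.0 x4
drive = 3.5                        # ~960 Pro sequential read, GB/s (ballpark)
raid0_potential = 2 * drive        # two drives striped

print(round(uplink, 2))                      # uplink ceiling, ~3.94 GB/s
print(round(min(raid0_potential, uplink), 2))  # what you actually get
```

So a two-drive stripe that could in principle do ~7 GB/s comes out of the chipset at roughly single-drive speed, which is the "RAID 0 performance of a single drive" point above.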

I looked up your board and found that only one of the M.2 slots is a CPU-connected slot.

With AMD, not only are there 60 CPU lanes available, but since all Threadrippers have 60, there is no reason to use the chipset for NVMe. Also, the chipset doesn't have a lane multiplier, so you could only do so much with it anyway. That's why it was a bit of a question mark for AMD: Intel only recently launched CPU-attached NVMe RAID support, so could AMD offer it at all? The great news is that yes, they can, and for free at that.
 
  • Like
Reactions: lightmanek and IEC

LightningZ71

Golden Member
Mar 10, 2017
1,627
1,898
136
Hmm, the 1900X looks like it may be a useful CPU for benchmark testing against the R5-1500X. As I understand it, the 1500X is 2+2. In game mode, the 1900X should be 4+0+0+0. If you configured the 1900X to only have two channels populated with RAM, and managed to figure out which two would be attached to the active core in game mode, you could effectively test the effect of the cross CCX communications penalty for Ryzen. You could also test the effect of RAM speed on that delay in a controlled environment.