> Asking the Stealth cooler to cool what we're doing is a bit much. Honestly, I don't think even the popular Hyper 212 can cool these 7nm chips for crunching, and that's a four-heat-pipe cooler. The Hyper 212 can handle any V3/V4 Xeon, but won't survive with these new Ryzens, unless you undervolt the crap out of it, and that'll cause clock stretching. My 3600 non-X is running at 4 GHz with 1.32 V vcore all-core, cooled by a 280 mm AIO (Corsair), and still hits the low 70s with AVX WUs. The 70s are high for me (just personal preference), so I lowered my CPU to 3.9 GHz and kept the temps under control. I don't like my CPU or GPU going over 70°C.

Not really; the stock cooler is supposed to be able to keep the CPU (at stock settings) cool enough to run any app (except AVX, I'd imagine) without thermal throttling. Bear in mind I'm running with the side of the case off, & I'm using high-grade thermal paste too.
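Clock stretching is hard to spot from the reported multiplier alone, which is why tools like HWiNFO show an "effective clock" next to it. A rough illustration of the general idea, using the APERF/MPERF counter method (this is an assumption about the usual approach, not HWiNFO's exact implementation):

```python
def effective_clock_mhz(aperf_delta: int, mperf_delta: int, base_mhz: float) -> float:
    """Estimate the average ('effective') core clock over a sampling
    interval from APERF/MPERF counter deltas. MPERF ticks at the base
    (P0) frequency, APERF at the actually delivered frequency, so the
    ratio scales the base clock to what the core really ran at."""
    return base_mhz * aperf_delta / mperf_delta

# A stretched clock shows up as an effective clock well below the
# advertised boost, even though the multiplier looked fine:
reported = 4000.0                                   # what the multiplier claims (MHz)
effective = effective_clock_mhz(aperf_delta=9_300_000,
                                mperf_delta=10_000_000,
                                base_mhz=3600.0)    # 3348 MHz
print(f"reported {reported:.0f} MHz, effective {effective:.0f} MHz")
```

The counter values here are made up for illustration; on real hardware they come from per-core MSRs sampled over an interval.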
Look here: https://www.worldcommunitygrid.org/ms/viewBoincResults.do?filterDevice=6313713&filterStatus=4&projectId=-1&sortBy=sentTime&pageNum=1
> hrm, not showing up here (and I'm logged into WCG)

It seems WCG doesn't show host stats except to the host's owner. But third-party stats sites do (if the owner enabled the corresponding export option in the privacy settings). Here is Mark's device ID 6313713:
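For reference, the per-device results link above is just a handful of query parameters on one endpoint. A small sketch that builds it (parameter names copied verbatim from the URL above; what the status and project codes mean is an assumption, not documented here):

```python
from urllib.parse import urlencode

BASE = "https://www.worldcommunitygrid.org/ms/viewBoincResults.do"

def device_results_url(device_id: int, status: int = 4,
                       project_id: int = -1, page: int = 1) -> str:
    """Build the WCG result-browser URL for one host.

    The parameter names come straight from the link posted above;
    the default codes (status=4, projectId=-1) simply mirror it.
    """
    params = {
        "filterDevice": device_id,
        "filterStatus": status,
        "projectId": project_id,
        "sortBy": "sentTime",
        "pageNum": page,
    }
    return f"{BASE}?{urlencode(params)}"

print(device_results_url(6313713))
```

Note that the page behind this URL is only visible to the host's owner (or via third-party stats sites when export is enabled), as described above.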
> I should downsize my DC gear in terms of heat output, and for that I need to look into upgrade options with an efficiency level that currently only the 64-core SKUs of EPYC can provide, and maybe the 48-core SKUs come close to. Anything else would merely be a sidegrade with insubstantially improved efficiency, it seems.

To follow up on this: as reported elsewhere, last month I went with a 32-core SKU after all, mainly for cost reasons, secondarily for per-socket cooling requirements. Thirdly, I am still wondering about memory bandwidth per core (contrasting 2 CCDs per memory controller vs. 1 CCD per memory controller), but I guess the former is still fine for virtually all DC projects.
> I am looking forward to the Threadripper 3000 announcements, which should happen this month, but one thing is certain: at stock they won't be as energy-efficient as EPYC 7002. Whether or not BIOS settings will allow driving their energy use down to EPYC levels remains to be seen.

I lost interest in TR3K as soon as the specs of it and its platform came out. For the time being, I have no use for factory-overclocked processors, nor for all the power-hungry I/O and oversized mainboards.
> I lost interest in TR3K as soon as the specs of it and its platform came out. For the time being, I have no use for factory-overclocked processors, nor for all the power-hungry I/O and oversized mainboards.

From watching the video below, I immediately knew TR is operated beyond its efficiency curve. I mean, TRX40 boards have a huge VRM array compared to this EPYC board.
> From watching the video below, I immediately knew TR is operated beyond its efficiency curve. I mean, TRX40 boards have a huge VRM array compared to this EPYC board.

The draw of the Threadripper is for those who need cores and clock speed without ECC and other server features, and who may not be able to spread the load across multiple hosts.
> What did the 32-core Rome replace? Great efficiency gain anyway!

I still have all my other computers, and of course use them all in the Pentathlon. My most efficient ones were dual E5-2696 v4 (2× 22c/44t Broadwell-EP, Intel's first 14 nm server processor generation).
> Are the sTRX4 motherboards a lot more expensive than the SP3 ones, as well as more power-hungry? (I only looked so far.)

Idle power consumption of my PCIe v3 based dual-socket Rome computer (here) is the same as the idle consumption of sTRX40 computers (there). As a small difference, those sTRX40 test systems had a GTX 1080 in them, which idles at 8 W or less, whereas my computer is headless (but then again, it has 4 times as many DDR4 channels).
> I finally got the bolt-through adapter kit for my 12-year-old Thermalright Ultima 90 yesterday, swapped out the crappy Stealth cooler for it, & unsurprisingly got a 15°C drop in temps for LHC (and, interestingly, ~+50 MHz on the clock), same AVX WUs. In Rosetta the drop is even more dramatic at 20°C! Although strangely still a lower clock speed (about 3.725 GHz vs. 3.8 GHz for LHC); I don't get that, does anyone else?
>
> My next step is to set the PPT back to default & see if it'll hold reasonable temps.
>
> Then I need to check the 'Power reporting deviation' to see if that was also responsible for the high temps with the stock cooler & PPT setting.
>
> Has anyone else here read about that? "Ryzen Burnout? AMD Board Power Cheats May Shorten CPU Lifespan" - Tom's
>
> The CPU lifespan thing is apparently over-egged, but it could certainly (in part) explain the temperature problem I was having; we'll see.
>
> Btw, is it true that the recent batch of Ryzens overclocks much better?

Read AnandTech's Ian Cutress's response to that. Bottom line? Don't worry about your CPU dying. As for the lower clocks, no idea.
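For anyone unfamiliar with the 'Power reporting deviation' metric mentioned above: it is essentially the ratio of the power the CPU believes it draws (from board-supplied current telemetry) to a reference estimate. Around 100% under full load means honest telemetry; markedly lower means the board is understating current, so the CPU boosts past its real power limit and runs hotter. A toy sketch of that check (the 95% flag threshold is my assumption, not HWiNFO's; the metric is only meaningful under full load):

```python
def power_reporting_deviation(telemetry_watts: float, reference_watts: float) -> float:
    """Percentage of the reference power that the board-fed telemetry
    reports. ~100% = honest telemetry; well under 100% = the board is
    under-reporting current, so PPT limits bite later than they should."""
    return 100.0 * telemetry_watts / reference_watts

def looks_like_cheating(deviation_pct: float, threshold: float = 95.0) -> bool:
    # Threshold is an illustrative assumption; read it only at full load.
    return deviation_pct < threshold

# Hypothetical full-load numbers for illustration:
dev = power_reporting_deviation(telemetry_watts=60.0, reference_watts=88.0)
print(f"deviation {dev:.0f}% -> board under-reporting? {looks_like_cheating(dev)}")
```

In other words, a deviation well below 100% would fit the symptom described above: higher temperatures than the configured PPT should allow.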
> Yea, The Stilt said a similar thing over at the HWiNFO forum, and he was the source of the info in the 1st place.

This is the article by Ian.
I don't see Ian's response in there (other than a quote by someone).
Now if I could only find out who it was (and where) that told me my Ultima 90 wouldn't be up to the job; I thought it was in this thread...