Performance/watt for various CPUs

Discussion in 'Distributed Computing' started by VirtualLarry, Nov 29, 2012.

  1. VirtualLarry

    VirtualLarry Lifer

    Joined:
    Aug 25, 2001
    Messages:
    35,785
    Likes Received:
    608
    Curious how Thuban CPUs compare with 45nm C2Q, and Vishera.

    Yes, I realize that SB/IB are tops, but they are expensive, and would require a whole new mobo + CPU. I'm trying to re-use what I have.

    C2Q Q9300 @ 3.0, stock voltage.
    Thuban 1045T @ 3.51, stock voltage.
    Vishera 8320 @ 4.0? stock voltage.

    How would these stack up?

    You would be going from four threads to six threads to eight threads.
    If power consumption doesn't go up linearly with thread count, then perhaps it would be worthwhile investing in a more-core platform.
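The trade-off in that last line can be sketched in a few lines of Python. All of the per-thread rates and wall-power figures below are made-up placeholders for illustration, not measurements of these CPUs:

```python
# Hypothetical perf/watt comparison. The per-thread rates (points/hour)
# and system wattages are placeholder values, NOT measurements of the
# actual CPUs discussed above.
def points_per_watt(threads, per_thread_rate, system_watts):
    """Aggregate points/hour divided by wall power draw."""
    return threads * per_thread_rate / system_watts

rigs = {
    "Q9300 (4 threads)":   points_per_watt(4, 100, 150),
    "1045T (6 threads)":   points_per_watt(6, 90, 180),
    "FX-8320 (8 threads)": points_per_watt(8, 75, 220),
}
for name, ppw in rigs.items():
    print(f"{name}: {ppw:.2f} points/hour per watt")
```

With numbers like these, more threads only win on perf/watt if total draw grows slower than thread count.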
     

  3. sangyup81

    sangyup81 Golden Member

    Joined:
    Feb 22, 2005
    Messages:
    1,082
    Likes Received:
    0
    Is this for crunching? Here are some per core points/hour numbers I came up with:

    https://docs.google.com/spreadsheet/ccc?key=0AnMz44dTsXA6dHdmU19WbE1Jb1hORnJFSDFkN0NHMXc
     
  4. Sunny129

    Sunny129 Diamond Member

    Joined:
    Nov 14, 2000
    Messages:
    4,821
    Likes Received:
    0
    ^ thanks for the info. we have to do some additional math to get the big picture, as the numbers in that chart are only representative of "per core performance." that said, i kind of like the data displayed that way, as it gives us a more or less direct comparison of architecture efficiency between each CPU's instruction pipeline (after we take into account that all CPUs in the test are overclocked, and not in equal amounts or proportions with respect to their base clocks). for instance, while a 1045T core @ 3.8GHz is 26% more efficient than an FX-8320 core @ 4.0 GHz and 15% more efficient than a Q6600 core @ 3.0GHz in the Malaria application, when all cores are taken into account, the FX-8320 @ 4.0GHz edges out the 1045T @ 3.8GHz by 5.8%, and it beats the Q6600 @ 3.0GHz by a whopping (but expected) 82%.
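A quick sanity check of that arithmetic (only the 26% and 15% per-core figures come from the spreadsheet discussion; the core counts 8, 6, and 4 do the rest):

```python
# Verify the per-core vs. whole-CPU comparison above.
# Normalize per-core throughput so the FX-8320 core = 1.0.
fx8320_core = 1.0
thuban_core = 1.26 * fx8320_core   # 1045T core is 26% faster per core
q6600_core  = thuban_core / 1.15   # 1045T core is 15% faster than a Q6600 core

fx8320_total = 8 * fx8320_core
thuban_total = 6 * thuban_core
q6600_total  = 4 * q6600_core

print(f"FX-8320 vs 1045T: +{(fx8320_total / thuban_total - 1) * 100:.1f}%")
print(f"FX-8320 vs Q6600: +{(fx8320_total / q6600_total - 1) * 100:.1f}%")
```

This reproduces the roughly 5.8% and 82% whole-CPU margins quoted above.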
     
  5. sangyup81

    sangyup81 Golden Member

    Joined:
    Feb 22, 2005
    Messages:
    1,082
    Likes Received:
    0
I created that spreadsheet because I was aware of how Bulldozer and Piledriver had weaker floating-point but strong integer performance. With that data, I decided to set my FX-8320 to get mainly Clean Energy Project and Leishmaniasis, as well as the GFAM and Schistosoma that my Q6600 and 1045T get.

The only thing I can't really contribute, unfortunately, is the power numbers. It's the power numbers that make SB and IB the kings of crunching CPUs. I bet if we really looked at the numbers, it would be worthwhile even to have a cheaper non-OC setup using SB or IB compared to AMD or Core 2 systems.
     
  6. VirtualLarry

    VirtualLarry Lifer

    Joined:
    Aug 25, 2001
    Messages:
    35,785
    Likes Received:
    608
I guess my initial question is kind of moot now. I have a relative whose electricity is included in the rent. They also have electric heat. So I'm going to help them heat their apt this winter with a few crunchers. (Crunchers are more or less space heaters anyway, electric resistance heating being essentially 100% efficient, so it won't cost any more than just running the heat directly, except that some useful computation gets done.)

    I've got two Q9300 @ 3.0, and two 1045T, one in an Asrock 990FX Extreme4 board, and one in a K9A2 Platinum board. I'm going to need another Extreme4 board, and possibly a PSU.

    Then I will have Q9300 @ 3.0, 8GB DDR2, and an HD4850 512MB, times two. I will also have the Thuban 1045T times two, each with two 9600GSO video cards, with a third slot empty.

    I also have a GTX460 card to utilize too, possibly.

So what projects should I crunch? I should probably put the HD4850 cards on MW@Home; I'm undecided whether to run MW@Home on the Q9300 CPUs as well, or run WCG.

    On the Thubans, I will either run PrimeGrid, or WCG, or one of each on each machine.
     
  7. sangyup81

    sangyup81 Golden Member

    Joined:
    Feb 22, 2005
    Messages:
    1,082
    Likes Received:
    0
    I haven't done any Milkyway on CPUs ever since the N-Body days but I can say that WCG pushes my CPUs harder than Correlizer did.
     
  8. VirtualLarry

    VirtualLarry Lifer

    Joined:
    Aug 25, 2001
    Messages:
    35,785
    Likes Received:
    608
    http://www.newegg.com/Product/Produc...82E16822136292

I ordered five of these refurb 80GB IDE HDDs for $19.99 FS ea. They're shipping "egg saver"; hopefully they don't get kicked around too much.

Anyways, I have plenty of SATA-to-IDE/IDE-to-SATA converters, so I'll hook up each of my crunchers with an HDD and have a spare. I'm not storing any personal data on the crunchers, so I figure that should work out. If the HDD dies, it dies, and I just have to get another HDD and re-install.

    I do have some 1TB SATA HDs, but I was planning on using them for a storage server, and not "waste" them sitting spinning their life away, barely used, on a cruncher.

    Edit: If those drives and the SATA-to-IDE converters don't work out, I guess I'd have to splurge for these drives for $50:
    http://www.newegg.com/Product/Produc...82E16822136771

    But for $65-70, I could also pick up some Samsung 830 64GB SSDs.
    http://www.newegg.com/Product/Produc...82E16820147162

The problem is, BOINC checkpoints every so many seconds, which means that with six cores there will be six checkpoints every N seconds. That comes out to a lot of writes, which will wear out an SSD quickly.

I went from 100% health on a 30GB Agility drive with 1.7 firmware down to about 74% in a month or two of running BOINC, without editing the checkpoint time.
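A back-of-envelope sketch of why those checkpoint writes add up (the 1 MB-per-checkpoint figure is purely an assumption; real checkpoint sizes vary a lot by project):

```python
# Rough estimate of daily checkpoint write volume. The per-checkpoint
# size is an assumption, not measured BOINC behavior.
def daily_checkpoint_writes_mb(cores, interval_s, checkpoint_mb):
    """MB written per day if each core checkpoints every interval_s seconds."""
    checkpoints_per_day = 86400 / interval_s
    return cores * checkpoints_per_day * checkpoint_mb

# 6 cores, 60 s default interval, assumed 1 MB per checkpoint:
print(daily_checkpoint_writes_mb(6, 60, 1.0))    # 8640.0 MB/day
# Raising the interval to 600 s cuts it tenfold:
print(daily_checkpoint_writes_mb(6, 600, 1.0))   # 864.0 MB/day
```

Even at a modest assumed checkpoint size, the default interval produces gigabytes of writes per day, which is consistent with the wear numbers above.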

    Edit: It figures. After I ordered those HDDs from Newegg, I found out Geeks.com has them for $11.99 + ship.
    http://www.geeks.com/details.asp?invtid=WD800BB-NDW-R&cat=HDD
     
    #7 VirtualLarry, Nov 30, 2012
    Last edited: Nov 30, 2012
  9. Sunny129

    Sunny129 Diamond Member

    Joined:
    Nov 14, 2000
    Messages:
    4,821
    Likes Received:
    0
    holy crap! i've been running BOINC on an SSD in all of my hosts for well over a month now! granted they're Samsung 830 128GB drives, and won't wear out as fast as a small 30GB SSD, but i should check into the health of my SSDs. what software do you use to check this? and what do i have to do to still allow checkpointing in BOINC without wearing out my SSD? must i install both the BOINC program and data directories on a HDD? or do i just have to install the BOINC data directory on a HDD?

    TIA,
    Eric
     
  10. VirtualLarry

    VirtualLarry Lifer

    Joined:
    Aug 25, 2001
    Messages:
    35,785
    Likes Received:
    608
    Well, the Agility series uses a Barefoot controller, which was known for horrible write-amplification, although it was improved in the 1.7 firmware.

    I think BOINC defaults to checkpointing every 60 seconds. I changed it to 600, or 3600.
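For reference, the interval can also be set per machine by dropping a global_prefs_override.xml into the BOINC data directory; if I remember the preference name right, it looks like this:

```xml
<!-- global_prefs_override.xml in the BOINC data directory.
     disk_interval is the minimum seconds between checkpoints. -->
<global_preferences>
   <disk_interval>600</disk_interval>
</global_preferences>
```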

    I think it wrote 9TB to the drive in a few months.

    Edit: I was going by what SSDLife was telling me.
     
    #9 VirtualLarry, Nov 30, 2012
    Last edited: Nov 30, 2012
  11. blckgrffn

    blckgrffn Diamond Member

    Joined:
    May 1, 2003
    Messages:
    6,647
    Likes Received:
    2
Yeah, I upped my checkpoint interval a ways as well. No sense in overdoing it...

    Sounds like you'll be doing a lot of crunching this winter :)
     
  12. petrusbroder

    petrusbroder Elite Member

    Joined:
    Nov 28, 2004
    Messages:
    12,806
    Likes Received:
    201
    Why use a SSD for BOINC? It does very little from the speed point of view ...
    I have - when installing - moved the data and program directories to a mechanical HDD and it works very well indeed - I see no problem with performance.
A problem with check-pointing every 10 or 30 minutes is that you lose all work done in the past 10 or 30 minutes if the computer crashes or hangs. That means - at least for short WUs (quite a few GPU WUs) - that you may lose all the work you have done. Could become a problem - or not. YMMV.
     
  13. Sunny129

    Sunny129 Diamond Member

    Joined:
    Nov 14, 2000
    Messages:
    4,821
    Likes Received:
    0
i have BOINC on my SSD b/c my OS and all other programs are installed on the SSD. OS aside, my programs are on the SSD not so much for the slight speed advantage, but so my HDD doesn't have to spin up and down every time it has to read or write something. spin-ups consume a good deal of power and put a lot of wear and tear on HDDs. while my HDDs aren't particularly loud when spinning, i'm still a silence freak, which is why i let my hard disks power down when not being used. alternatively, i could let them run all the time to avoid the extra power consumption and wear & tear of regular spin-ups, but then i'd have to deal with the extra noise, and constantly spinning HDDs consume a good deal of power too.

i suppose for now i'll leave the BOINC program and data directories installed on the SSD and see what kind of performance degradation has happened since i started using SSDs several months ago. what software do you guys use to test the remaining life and performance of an SSD?
     
  14. petrusbroder

    petrusbroder Elite Member

    Joined:
    Nov 28, 2004
    Messages:
    12,806
    Likes Received:
    201
Yeah, Sunny129, yours are very valid concerns.
On the other hand: the following is for a 256GB drive ... and may not represent an average SSD. But the review is still worth a read ...
    from: tomshardware.com
     
    #13 petrusbroder, Dec 1, 2012
    Last edited: Dec 1, 2012
  15. petrusbroder

    petrusbroder Elite Member

    Joined:
    Nov 28, 2004
    Messages:
    12,806
    Likes Received:
    201
How did you find out that BOINC wrote 9TB? Or does the number represent all of the writes to the SSD?
     
  16. Kiska

    Kiska Senior member

    Joined:
    Apr 4, 2012
    Messages:
    344
    Likes Received:
    43
But BOINC projects need to write and/or read from the drive so they can compute. HDDs die when the motor or head gives out; at my workload, 24/7 on a laptop, that would take just under 200 years from now, and it has been running for 7 years and counting. The HDD is still healthy: performance is at 97%, with health at 95%.
     
  17. VirtualLarry

    VirtualLarry Lifer

    Joined:
    Aug 25, 2001
    Messages:
    35,785
    Likes Received:
    608
    That was total SSD writes. But I was barely using the SSD otherwise. So it had to have been BOINC using all those writes, more or less.

    Edit: So what is the best way to allocate these machines, for crunching:
    2x rigs - Q9300 @ 3.0, HD4850, 8GB DDR2 - MW@Home on GPU, ? on CPU
    1045T @ 3.51, 2x 9600GSO, 16GB DDR3 - PrimeGrid on GPU, WCG/PrimeGrid on CPU?
    1045T @ 2.7 (stock), 2x 9600GSO, 8GB DDR2 - PrimeGrid on GPU, ? on CPU
    X4 630 @ 2.8 (stock), GTX460, 4GB DDR2 - ? on GPU, ? on CPU
     
    #16 VirtualLarry, Dec 1, 2012
    Last edited: Dec 2, 2012
  18. blckgrffn

    blckgrffn Diamond Member

    Joined:
    May 1, 2003
    Messages:
    6,647
    Likes Received:
    2
    Are you looking for max PPD or pushing on pet projects? :)

    I vote for WCG where possible :)
     
  19. petrusbroder

    petrusbroder Elite Member

    Joined:
    Nov 28, 2004
    Messages:
    12,806
    Likes Received:
    201
If max ppd is the goal, the combo of MilkyWay and PrimeGrid should do very well (using the CPU clients for PrimeGrid), and they have very few outages.
WCG is very good from the science point of view, but not so efficient from the ppd point of view; it has extremely few outages, though.
Similarly, MalariaControl and Fightmalaria@home are very good from the science point of view, but have outages.

PrimeGrid runs hotter than most other projects.
MilkyWay has more faulty WUs (just now) than many other projects; this may be temporary, and they are working on the problem.

My pet project, Seti@home, is quite good at the ppd level and very exciting at the science level, but they have server problems, outages, etc. at least once a week; sometimes SETI can stay off the net for 4 - 5 days ... SETI has grown very, very large and has these problems because of its size (I think). It is also the great-great-grandfather of all DC projects on the net.
     
  20. Kiska

    Kiska Senior member

    Joined:
    Apr 4, 2012
    Messages:
    344
    Likes Received:
    43
Yes, SETI does have server problems, and bandwidth problems too: they only have a 100-megabit connection, and it's saturated all the time.
     
  21. VirtualLarry

    VirtualLarry Lifer

    Joined:
    Aug 25, 2001
    Messages:
    35,785
    Likes Received:
    608
    Can they set up shop in Kansas City, with a Google Fiber 1Gbit connection?
    :)
     
  22. ZipSpeed

    ZipSpeed Golden Member

    Joined:
    Aug 13, 2007
    Messages:
    1,193
    Likes Received:
    25
    I have a 100 mbps connection to myself at home and I can't imagine having to share that with thousands of other people. Crazy.
     
  23. Kiska

    Kiska Senior member

    Joined:
    Apr 4, 2012
    Messages:
    344
    Likes Received:
    43
They share it with millions of computers, not just thousands or tens of thousands - literally millions. :) And I have a 10 mbps connection, plus a 100 mbps one that is connected to a server, so I have to share that connection with other people. :( Oh well.
     
  24. somethingsketchy

    somethingsketchy Golden Member

    Joined:
    Nov 25, 2008
    Messages:
    1,019
    Likes Received:
    0

    If they were able to use EC2, then they could create some redundancy for the millions of connections that are expected. Certainly would be handy for competitions.
     
  25. Kiska

    Kiska Senior member

    Joined:
    Apr 4, 2012
    Messages:
    344
    Likes Received:
    43
But they don't have the funds to do so. They are running the servers at the university, but the uni is not paying the maintenance fees, so the people maintaining the servers are volunteer professors keeping them running in their spare time. That is why the servers sometimes aren't up, or are quite slow.
     
  26. PCTC2

    PCTC2 Diamond Member

    Joined:
    Feb 18, 2007
    Messages:
    3,877
    Likes Received:
    7
On a tangent, UCSD is lucky in regards to its network connection compared to other universities. UCSD hosts the San Diego Supercomputer Center, which houses some of the larger research clusters, in addition to two colos available to UCSD faculty and staff for departmental use. UCSD as a whole piggy-backs onto SDSC's connection. SDSC has a fat pipe to UCI and to 4 other major research universities, in addition to a fat pipe to the world. On a campus with 28,000 students, thousands of staff and faculty, and thousands of workstations, laptops, and servers, real-world bandwidth for servers can hit 80/50 Mbps, and WiFi can still hit 20/20 Mbps to off-site locations. Not bad.

Now, I went from that connection to 2 bonded T1's, getting 3/3... it's very sad. But I also went from a small 8-node cluster and a few standalone servers to working with heavy-hitting hardware, so I think the trade-off was alright.
     
    #25 PCTC2, Dec 5, 2012
    Last edited: Dec 5, 2012