
Future-of-Free-DC-stats?

Rudy Toody

Diamond Member
The Free-DC stats site is down due to SSD failure. Go here for discussion:
Future-of-Free-DC-stats

Post your ideas for solutions here first so we don't bury their forum. When we have a few concrete plans, we can post those on Free-DC.

I don't have any high-end gear, so I will contribute money. Also, we could move our pledge drive from September to June/July.
 
They need to buy large MLC drives and then under-provision them like crazy. Like some of the 960GB Crucial drives, then use only ~200GB of each.

This will keep write amplification at a minimum and give the controller a huge amount of spare area to spread the write load across.
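The spare-area arithmetic behind this suggestion is simple; a quick sketch (the ~960GB capacity and ~200GB usage figures are just the examples from this post):

```python
# Rough arithmetic for the under-provisioning scheme above (illustrative only).
def op_percent(raw_gb: float, used_gb: float) -> float:
    """Spare area expressed as a percentage of the capacity actually used."""
    return (raw_gb - used_gb) / used_gb * 100.0

# Using only ~200GB of a ~960GB drive leaves the controller a huge spare pool:
print(f"effective over-provisioning: {op_percent(960, 200):.0f}%")  # 380%
```

Compare that to the ~7% of spare area a stock consumer drive typically reserves.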

If he was using an Intel SandForce drive, they are not known for any kind of reliability when they are being used at their maximum capacity.

My $.02.

I see a reboot on the drive brought it back. Par for the course on SF-based drives that are flaky, in my experience.
 
The stats don't need to be updated as often as they are now.

And a big thanks for the stats, they are appreciated 🙂
 
I'm not really sure which would be the less expensive solution: a mobo upgrade plus lots more memory, or a pair of SSDs in the 1TB capacity range. But as fast as even the slowest SSD may be compared to a traditional spindle HDD, there is substantially less latency in reading/writing directly from/to main memory (RAMdisk) than from/to an SSD over the SATA bus. Then again, perhaps this DB server doesn't need to be that "fast." If that is indeed the case, and the SSD solution is in fact the less expensive of the two, then it's a no-brainer.

Someone mentioned in a subsequent post, though, that he has a Supermicro MBD-H8SGL and an AMD Opteron 6172 CPU that he could donate once the mobo comes back from RMA, leaving the memory as the only remaining investment. A quick search on the egg shows that 128GB of registered ECC DDR3 memory will run about $1000 (less any compatible memory he might already have in use in the existing server). Already it's starting to look like the RAMdisk is the better solution, so long as the donor gets his mobo back from RMA sooner rather than later...
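A back-of-envelope comparison of the two options. Prices are the rough figures from this thread; the latency numbers are order-of-magnitude assumptions (DRAM ~0.1 us, SATA SSD ~100 us per access), not measurements:

```python
# Rough cost/latency comparison of the two options discussed above.
# Latencies are order-of-magnitude assumptions, not benchmarks.
options = {
    "RAMdisk (128GB registered ECC DDR3)": {"cost_usd": 1000, "latency_us": 0.1},
    "pair of ~1TB SATA SSDs":              {"cost_usd": 1000, "latency_us": 100.0},
}
for name, o in options.items():
    print(f"{name}: ~${o['cost_usd']}, ~{o['latency_us']} us per access")
# At similar cost, the RAMdisk is roughly 1000x lower latency per access.
```

Which only matters, of course, if the workload is actually latency-bound.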
 
Could the solution be as easy as no stats from X to X+1 hrs GMT, just to let everything catch up & TRIM & whatever?
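Something like this could be scripted with cron. A hypothetical sketch only: the flag file, paths, and times below are placeholders, not Free-DC's actual setup; `fstrim` is the standard util-linux tool for batched TRIM:

```shell
# Hypothetical crontab fragment for a nightly quiet window (all placeholders):
# 0 3 * * *  touch /var/run/stats-paused      # update scripts skip work if this exists
# 5 3 * * *  /sbin/fstrim -v /var/lib/mysql   # trim the SSD during the idle hour
# 0 4 * * *  rm -f /var/run/stats-paused      # resume normal stats updates
```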

If it isn't obvious already, this is way over my paygrade.

I'm not averse to sponsoring another month, if needed. I like (& miss) his stats.
 
Velociraptors might be a better choice than an SSD, unless he absolutely needs the IOPS for some reason. (Possibly VR in a RAID-10 config?)
 
Agreed, a RAID-10 would be very stable for these particular database transactions. Granted, I'd go with 15K drives if Bok needs the speed, but the reliability would be better in the long run, at least until SSDs mature for database use.

Of course another solution would be a SAN, but we may need to start with fundraising to get that kind of hardware.
 
SSDs can make use of native TRIM support if the OS supports it. SandForce has claimed a typical write amplification of 0.5, with best-case values as low as 0.14, for the SF-2281 controller. Over-provisioning (OP) is part of every flash-based SSD, and users can easily increase it if desired. Generally, higher OP gives higher write performance, lower write amplification, and longer NAND life/endurance.
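For anyone unfamiliar with the term: write amplification is just NAND bytes written divided by host bytes written. A small illustrative helper (the 0.5 and 0.14 figures are SandForce's claims quoted above, not measurements):

```python
# Write amplification = bytes the controller writes to NAND per byte the
# host asked to write. Below 1.0 means the controller (via compression)
# wrote *less* to flash than the host sent it.
def write_amplification(nand_gb_written: float, host_gb_written: float) -> float:
    return nand_gb_written / host_gb_written

print(write_amplification(50, 100))   # 0.5  -- claimed typical with compression
print(write_amplification(14, 100))   # 0.14 -- claimed best case (SF-2281)
```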
 
My experience (and Anandtech articles) says that when SF drives fill up, they fall over, even when they have OP space allocated (the stock spare area is too small to really handle this).

Two 960GB drives should cost ~$1k. Two 480s at 30% over-provisioning would be a great fit and should be less than $800 for a lot of performance.

http://www.amazon.com/Crucial-2-5-In...rds=crucial+m5

If ultimate reliability is the name of the game, we should probably be hitting a read only DB on a different host that only gets updates once a day or something similar.
 
Using something like memcache, we shouldn't have to load the DB into memory. Cache the most-hit pages and use some big buffers for the DB to make sure the indexes fit into memory. Again, my $.02.

This is for the stats which doesn't have a lot of end user pushed content. The forums are likely a different beast that wouldn't respond as well to that approach.
 
Are the forums on the same physical server?
 
He's been quite busy! Sounds like things will be up sooner rather than later.


*UPDATE* 5:00 AM EST 06/07 Back-end changes all done and scripts are running and accumulating data. Overnight rollover worked out great. Just making some tweaks to the PHP scripts to accommodate everything. Should be up and running, at least for BOINC projects, later today.

*UPDATE* 12:00 PM EST 06/07 Only a few more changes needed in the BOINC parts, but it all seems to be running well. Some of the custom scripts I created for various teams out there are still to fix up. Just finished all the scripts for badges (and that includes the new one for WCG badges, which is on a per-user basis only, given they don't export all the data). I still need to figure out how I'm going to incorporate the non-BOINC updates, as I'll need to synchronize them to stay in the same database. If it weren't for the F@H stats it wouldn't be an issue, but they take some time to run. May just update them once per day. For the people asking to donate, it's always appreciated. I'd love to build up a hot-spare DB server; I already have a hot-spare webserver, but I could only use it as a DB server if absolutely needed, as it's just not as powerful.
 
For those who want to send a donation, there is a PayPal link here.

I think we should have our donation drive at its regular time---September.
 