
Anyone know of spin-up time benchmarks?

Concillian

Diamond Member
Looking at 2-3 new drives for file storage. I'm considering spinning the drives down when idle, and I was interested in finding some info on how fast various drives go from idle to usable. I couldn't find much; does anyone know of a place that tracks that kind of thing in drive reviews?

Alternatively, spin-up time seems to be something tracked in S.M.A.R.T., so could there be a database of user-submitted values for this somewhere?
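If no review site tracks it, one DIY route is SMART attribute 3 (Spin_Up_Time), which `smartctl -A` prints for most spindle drives. Here is a minimal parsing sketch; the sample output below is made up for illustration, not from a real drive, and the raw value's encoding is vendor-specific (often, but not always, milliseconds):

```python
# Sketch: pull SMART attribute 3 (Spin_Up_Time) out of `smartctl -A` output.
# On a live system you'd capture the text with e.g. `smartctl -A /dev/sda`.
# The SAMPLE text below is illustrative, not real drive output.

SAMPLE = """\
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      RAW_VALUE
  3 Spin_Up_Time            0x0027   171   170   021    Pre-fail  4433
  4 Start_Stop_Count        0x0032   099   099   000    Old_age   1042
"""

def spin_up_raw(smartctl_text: str):
    """Return the raw Spin_Up_Time value (vendor-specific units, often ms)."""
    for line in smartctl_text.splitlines():
        fields = line.split()
        if len(fields) >= 2 and fields[0] == "3" and fields[1] == "Spin_Up_Time":
            return int(fields[-1])  # RAW_VALUE is the last column
    return None

print(spin_up_raw(SAMPLE))  # 4433
```

Collecting these raw values from many users would give roughly the database asked about, with the caveat that vendors encode the raw field differently, so cross-brand comparisons need care.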
 
SSDs are near zero.


Okay, so I apparently just got a volunteer to pay for my file storage server to have 2TB of SSDs.

Thanks Emulex!! Should only be around $4k, but I don't need them immediately. If we wait for the 25nm transition to complete, it may be as low as $2500-3000.

Send me a PM and I'll let you know where you can send the paypal for these.
 
Well, you certainly need to stage spin-up on multiple drives to prevent too much current draw, and enterprise drives (RE4) have very, very low rated spin cycles (??) compared to consumer drives.

I wonder if the spin-up wear is worth the idle power you'd save?
 
Well, you certainly need to stage spin-up on multiple drives to prevent too much current draw, and enterprise drives (RE4) have very, very low rated spin cycles (??) compared to consumer drives.

I wonder if the spin-up wear is worth the idle power you'd save?

It's for the home; I will be using consumer drives.

2 1TB drives won't need staged spin-up. I have plenty of power on tap to spin up 2 drives simultaneously... lol. It'll be an Atom or E350 CPU with a normal ATX power supply. I'll have at least 200W of headroom on the PSU, and drives generally take 25W or so at spin-up. The second one will only spin up at 3am for its nightly incremental backup anyway; that's the only time they'll ever spin up at the same time.

A 3rd drive would be for external backup to bring to work in case my house burns down. That's why I don't use RAID: it makes for easy backup and easy recovery, with no special requirement to read the data, and all three drives are bootable. If one dies, I can replace it right away. If my house burns down, I plug the external one into any other mobo and I lose 3-4 weeks of data tops, since I bring that one home about once a month.

I used RAID in the past, but never checked it. When I upgraded the drives last time (a few years ago) I found one of them had failed, and I never even noticed because I barely touch that machine. I'd actually prefer to know a drive has failed so I can swap it and buy another before I lose a second one. That's preferable to one dying, the RAID continuing to work, and me having to go look at something to see if it's all working right. Who knows how long I was one failure away from losing all my data while that RAID ran degraded off the parity drive.


Aside from nightly backups, these drives are hit less than once a day on average. I don't think the wear and tear would be any more than I put on my desktop system, which hibernates after an hour of idle or so. Coupled with having 2 backups of everything, I think I'm okay with any additional wear from spinups.

Given my low usage requirement, I was thinking about living with spin-up time, but I wanted to get drives that spin up relatively fast. Storagereview.com reviews show spin-up power draw, so I assumed different power draws translate to different spin-up times. I'd get drives that spin up "quickly" for this usage profile. They may consume more power at spin-up, but that would make them more acceptable to use in a manner where they spin down.

However, there doesn't appear to be any info comparing spin-up times of various hard drives. So I guess I need to guess.
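On the wear question above, a back-of-the-envelope sketch; the rated start/stop cycle counts below are assumed figures for illustration, not datasheet values:

```python
# Rough spin-up wear estimate. The cycle ratings used here are assumptions;
# check the actual drive datasheet before relying on them.

def years_to_exhaust(rated_cycles: int, spinups_per_day: float) -> float:
    """Years until the rated start/stop cycle count is used up."""
    return rated_cycles / (spinups_per_day * 365)

# Assumed usage: one daytime access plus one nightly backup = 2 spin-ups/day.
print(round(years_to_exhaust(50_000, 2), 1))  # assumed consumer rating: 68.5
print(round(years_to_exhaust(10_000, 2), 1))  # assumed low enterprise rating: 13.7
```

At under a couple of spin-ups a day, even a conservative cycle rating outlasts the drive's useful life, which is consistent with accepting the extra wear for this usage profile.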
 
You need to compute the wear too; most drives have a fixed spin-up cycle rating. Believe me, in a non-RAID setup I'd do it, but only the newest RAID systems rock drive spin-down. Everything else does scrubbing when the drives are inactive: bits standing up in parallel tend to fall over if they are not tended to, which leads to RAID failure. Scrubbing tends to them and takes a proactive strategy.

One external drive is not enough; if your files are mid-copy during the house fire, theft, or tornado, you would be sad.

Rotate several copies, so that if you're infected and all files are damaged over the weekend you have enough copies to go back to. Been there, done that. Except it was a bad sector, which BESR skipped (thank god); most backup software doesn't skip and continue past bad sectors on backups. The bad sectors ruined the RAR files on 3 backups before I could take action.

What I'm saying is, honestly, I'd stick to no RAID or no spin-down. Soft RAID is pretty time-sensitive.
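For anyone who does need staged spin-up, the scheduling logic is simple to sketch: cap how many drives spin up at once based on the power budget. The wattage and timing figures below are illustrative assumptions; real RAID controllers do this in firmware.

```python
# Sketch: staggered spin-up schedule sized to a power budget.
# All numbers are illustrative assumptions, not controller specs.

def spinup_schedule(n_drives: int, watts_per_spinup: float,
                    budget_watts: float, spinup_seconds: float):
    """Start offset (seconds) for each drive so concurrent draw fits the budget."""
    max_concurrent = max(1, int(budget_watts // watts_per_spinup))
    # Release drives in batches of max_concurrent, one spin-up period apart.
    return [(i // max_concurrent) * spinup_seconds for i in range(n_drives)]

# 8 drives at ~25 W each during spin-up, 100 W of headroom, ~7 s to spin up:
print(spinup_schedule(8, 25, 100, 7))  # [0, 0, 0, 0, 7, 7, 7, 7]
```

The same math shows why two consumer drives on a 200W-headroom ATX supply need no staging at all: both fit in one batch.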
 
Perpendicular recording. You leave a drive off long enough, I guarantee some bits will fall and the ECC will do some fixing up. That's why hard drives are useless for extended storage: they either wear out or fail (even powered off). Hell, any drive has this problem; it's natural for magnetic media.

Tapes (modern LTO/metal) are just a more stable platform for long-term archival. Gold media (CD/DVD/etc.) also tends to be more stable than a drive.

Go bust out that 4GB drive you have stashed, run SpinRite on it, and collect the raw error rate on a scrub. You'll probably have most of the data if it spins up, but I bet it will have to work very hard.
 
I know what perpendicular recording is. The bits might FAIL but where exactly are they going to FALL OFF to or from? And I have never read about bit failure from being off too long on a spindle drive.
Bit rot / bit flipping happens over time, regardless of operational status. That old 4GB drive is going to have issues not because it was OFF for a long period of time, but because it's really, really old and entropy has had its way with it. Cosmic rays and chemical degradation occur. As far as tapes go, my educated guess is that their reliability is lower. And gold media is only used for CD mastering; DVDs use glass masters, and neither is available to the home user. Instead the home user gets discs made out of lacquer and/or cheap metals.
 
Gold media is Taiyo Yuden (sp?) for archival purposes. My bad, JVC owns them: JVC Archival Grade DVD+R, 16X, Gold Lacquer, Branded, QTY 300

Tape has far higher reliability and more ECC, since its sole purpose in life is to go the distance when all heck has happened.

Disk with today's areal density (4GB wasn't a good example, let's stick to 2TB) is always unstable at the current bit error rate. So what kills the 4GB drive will kill the 2TB drive much faster.

I have found that refreshing the media about once a month has been good so far with the Seagate 2TB external product for D2D2D. But we all know external drives are the bottom-of-the-barrel dregs in the picking order (enterprise, OEM, consumer, external), so that view may be slightly jaded.
 
Gold media is Taiyo Yuden (sp?) for archival purposes.

No, gold media is made out of GOLD, the metal. It is made by all manufacturers. Taiyo Yuden is a company that makes very high-quality discs, and is in fact my disc manufacturer of choice. But not all their media is gold media.
http://en.wikipedia.org/wiki/Gold_CD
here are some gold CD-Rs made by Kodak http://www.newegg.com/Product/Produc...-009-_-Product
notice the price.
read about http://en.wikipedia.org/wiki/Taiyo_Yuden
buy non gold disks from them http://www.amazon.com/s/ref=nb_sb_no...%3ATAIYO+YUDEN

Tape has far higher reliability and more ECC, since its sole purpose in life is to go the distance when all heck has happened.

1. You can choose to put on as much ECC as you want; the question here was bit failure rate, not how much ECC they have to compensate for it. With this tool you can put insane amounts of ECC on your optical discs: http://dvdisaster.net/en/ . And with ZFS I can choose to put insane amounts of recovery data on a spindle disk.

2. Tape's purpose in life is to be really, really cheap backup for massive amounts of data, and easily movable from one location to another. It isn't superior, it's just cheaper. And it's only cheaper once you get into significant numbers of TB to back up.

Disk with today's areal density (4GB wasn't a good example, let's stick to 2TB) is always unstable at the current bit error rate. So what kills the 4GB drive will kill the 2TB drive much faster.
I am not sure what the hell you mean by "unstable". Disks have a bit error rate, and indeed it has grown with every miniaturization, to the point that reading a full 2TB disk has a real chance of hitting an unrecoverable error. A huge mitigating factor is the transition to 4KB sectors, which allowed for significantly more ECC.
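To put a number on that, here is the standard calculation, under the simplifying assumptions that bit errors are independent and the drive hits exactly its spec'd rate (1 error per 10^14 bits read is the commonly quoted consumer-drive figure):

```python
import math

# Chance of at least one unrecoverable read error (URE) when reading a
# full 2 TB drive, given the commonly quoted consumer spec of 1 error
# per 1e14 bits read. Independence of errors is a simplification.

bits_read = 2e12 * 8    # 2 TB expressed in bits
ure_rate = 1e-14        # errors per bit read (typical consumer spec)

expected_errors = bits_read * ure_rate            # 0.16
p_at_least_one = 1 - math.exp(-expected_errors)   # Poisson approximation

print(round(expected_errors, 2), round(p_at_least_one, 3))  # 0.16 0.148
```

Under this model a single full read has roughly a 1-in-7 chance of hitting an error; it only approaches certainty when many drives' worth of bits get read, as in large RAID rebuilds.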

I have found that refreshing the media about once a month has been good so far with the Seagate 2TB external product for D2D2D. But we all know external drives are the bottom-of-the-barrel dregs in the picking order (enterprise, OEM, consumer, external), so that view may be slightly jaded.
Refreshing media? You mean like a refreshing glass of water? What in the world are you trying to say?

And I would still like to see you explain how bits FALL DOWN... as if they are dominoes or something.
 
My bad, my point is getting lost, and I'm not trying to argue. I strongly suggest you have enough external off-site backups (aka external drives) to sustain failure in process, failure in transport, and natural failure. Keep 'em fresh.

I find it hard to believe (but it is true) that spin-down is on the feature list for enterprise SANs from HP, since the drives are rated at such low spin cycles. Maybe there is a change coming in the drives, or SSDs, which would have no spin cycle?

Let's continue that arm of the discussion. I wasn't trying to argue.
 
Arguing is a good thing; we both learn. I could be wrong about things, and I will only ever learn by arguing, as long as we are civil about it and don't resort to name-calling, hostility, flaming, trolling, etc.

I agree that offsite backups are a good idea.
 
I wish I could answer your question, but my RAID controllers all stage and cage.

Cage 0 stages its drives one at a time (firmware, probably), and cage 1 stages its drives staggered so all 8 (or 16) drives don't try to come online at once. Then it seems to fart around for a few minutes doing BIOS startup, which drives me crazy. This is probably highly dependent on the controller, and I'm sure some engineers with mad skills coded the design for a good reason (I rock dual 750W redundant power with 94% on the left and backup on the right instead of split 50/50). I wish I could explain the logic, but it seems to change a little with every server revision, and I'm sure SSDs change the game completely. 800GB drives in a month or two 😉 w00t.
 
I actually work on perpendicular magnetic recording layer development, thanks for the lols... which is about all this thread is good for. I should have known better than to even post it.

I asked for one, very simple thing and it has devolved into utter garbage.
 