SSD Best option for this?

Anubis

No Lifer
So we have a piece of equipment here at work that captures still frames at a rate of 160 per second and writes them out to disk. Each frame is roughly 2 megapixels, and we grab 10-30 seconds at a time. Currently we write out to a normal 7200 RPM HDD, which is slow as crap; it takes 10+ minutes to write out all the data for a 30 second capture. Is going to an SSD the best option, or is there something else I should tell people to look into for this?
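A quick back-of-the-envelope sketch of the data rates involved (using the ~2.3MB/frame figure given further down the thread; the current-HDD write speed is just derived from the "10+ minutes" observation):

```python
# Back-of-the-envelope data rates for the capture rig described above.
# Assumptions: ~2.3 MB per frame (figure given later in the thread),
# 160 frames/s, 30 s capture, ~10 min (600 s) to flush to the HDD.

FPS = 160
FRAME_MB = 2.3
CAPTURE_SECONDS = 30
OBSERVED_WRITE_SECONDS = 600  # "10+ minutes"

required_rate = FPS * FRAME_MB                    # MB/s needed to keep up
capture_size = required_rate * CAPTURE_SECONDS    # MB per 30 s capture
effective_hdd_rate = capture_size / OBSERVED_WRITE_SECONDS

print(f"Sustained rate needed: {required_rate:.0f} MB/s")
print(f"30 s capture size:     {capture_size / 1000:.1f} GB")
print(f"Effective HDD rate:    {effective_hdd_rate:.0f} MB/s")
```

So the rig needs to absorb roughly 368MB/s, while the HDD is effectively delivering under 20MB/s on this workload.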
 
An SSD will definitely speed things up because right now you're limited by the hard drive's throughput. However, is the capture also stored on that drive, or is it just analyzed and then deleted? Or is it archived elsewhere (if so, where)? That matters because you may still hit a bottleneck if the capture is moved to a hard drive afterwards.

What is the average size (in MB) of each frame/picture? Just thinking about what kind of setup would work the best in this situation.
 
Each frame is ~2.3MB.

And yes, at the end of the day after testing, the data is moved to an external HDD. That runs over eSATA and isn't too bad when transferring.

I know that's not ideal, but there isn't a fast enough connection to network it from where it is to a backup array; the test unit is not onsite.
 
Probably an SSD is the way to go, or HDDs in RAID.

What is the file size of each frame?

Could be wrong on this part:

160fps * 2MP frames = a much higher MB/s than a HDD can handle without falling behind.

Edit...

Whoops, my reply was too slow.

Yeah. 2.3MB * 160fps = 368MB/s. A HDD is way slower than that.
 
A single SSD should work then, as long as you get at least a 240/256GB one. HDDs (probably 3) in RAID 0 would work too, but the chance of failure is much higher, and an SSD doesn't end up being much more expensive (if at all).
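The higher failure chance of RAID 0 is easy to make concrete: the array is lost if any member drive fails, so the per-drive probabilities compound. A minimal sketch (the 5% annual failure rate is purely an illustrative assumption, not a measured figure for any drive):

```python
def raid0_failure_prob(p_drive: float, n_drives: int) -> float:
    """Probability a RAID 0 array is lost over some period, given each
    drive independently fails with probability p_drive in that period.
    The whole array dies if *any* single member dies."""
    return 1 - (1 - p_drive) ** n_drives

# Illustrative only: assume a 5% annual failure rate per drive.
single = 0.05
print(f"1 drive:        {single:.1%}")
print(f"3-drive RAID 0: {raid0_failure_prob(single, 3):.1%}")
```

With three striped drives the loss probability roughly triples, which is the trade-off against the extra sequential throughput.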
 
Hellhammer, would something like this count as random writes as far as speeds go? It wouldn't be sequential unless going disk to disk, huh? That's something I've always had trouble wrapping my head around.
 
A single SSD should work then, as long as you get at least a 240/256GB one. HDDs (probably 3) in RAID 0 would work too, but the chance of failure is much higher, and an SSD doesn't end up being much more expensive (if at all).

Thanks, we would probably go with a ~500GB one just to give more working room.
 
Since it sounds like you'll be doing lots of writes per day, you might also install something to occasionally monitor the NAND endurance. It should still be good for several years anyway.
 
The unit only sees 20 or so tests per day, so wear endurance should not be a huge issue. RAID is an option, but if simply using an SSD gets save times down to a few minutes, it should be fine, since setting up for another test takes 5 minutes.

The 20+ minute saves on 30 second captures are what was killing us.
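For a rough sense of endurance at that workload: 20 tests/day at the worst case of ~11GB per 30 second capture. The 300TBW rating below is a hypothetical ballpark figure for a ~500GB consumer SSD, not the spec of any particular drive:

```python
# Rough NAND-endurance estimate for the described workload.
# Assumption: ~300 TBW rating (hypothetical figure for a ~500GB SSD).

TESTS_PER_DAY = 20
CAPTURE_GB = 30 * 160 * 2.3 / 1000   # 30 s at 160 fps, ~2.3 MB/frame
RATED_TBW_GB = 300_000               # 300 TBW, assumed

daily_writes_gb = TESTS_PER_DAY * CAPTURE_GB
years = RATED_TBW_GB / daily_writes_gb / 365

print(f"Worst-case daily writes: {daily_writes_gb:.0f} GB")
print(f"Rated endurance lasts:   ~{years:.1f} years")
```

Even at worst-case ~220GB/day, a rating in that range works out to several years, which matches the "should not be a huge issue" call above.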
 
Hellhammer, would something like this count as random writes as far as speeds go? It wouldn't be sequential unless going disk to disk, huh? That's something I've always had trouble wrapping my head around.

It should be sequential because it's writing new data, not modifying existing data. Random writes usually occur when you're modifying existing data, such as log files. To save RAM, only the part of the data you need is loaded into RAM, and the modified data is then rewritten. The randomness comes from the fact that there may be multiple parts loaded that are not in sequential order, or you're accessing multiple files simultaneously, so the controller needs to read/write from non-sequential (i.e. random) LBAs.

A good real-world example is an address book. If you want to access it sequentially, you have to load the whole address book into RAM just to access one contact. For example, if you want to add a name that starts with Y, you first go through all the names from A onwards until you reach the Y section (assuming the address book is sorted alphabetically) and then add the new name there. The whole address book is then rewritten. Reading/writing the data is fast since it's sequential, but it takes more RAM and CPU.

Random access in this case would mean you only modify and rewrite the Y contacts. It's random because if you're e.g. viewing a contact that starts with an E and another one that starts with a Y, the IOs are not in sequential (i.e. alphabetical, in this example) order, so one contact is read from LBA 1 and the other from LBA 17.

I hope this sheds some light. It's definitely quite hard to understand, and what makes it even harder is that every app behaves differently. One might load the whole address book into RAM and be a RAM hog, while another may access each contact separately (i.e. randomly).

Furthermore, an SSD doesn't really care whether the data is sequential or random because seek times are essentially non-existent; what matters is the IO size. A big IO can be broken into smaller chunks and written to multiple NAND dies simultaneously, which provides great performance. In the OP's case each frame is 2.3MB, so that's a fairly large transfer.
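To make the distinction concrete, here's a tiny sketch (toy 1KiB "frames", nothing from the actual capture software) showing the two access patterns: appending frames one after another is the sequential case, while seeking back to overwrite one frame in place is the "modify existing data" random pattern described above.

```python
import os
import tempfile

FRAME_SIZE = 1024   # toy stand-in for the real ~2.3MB frames
NUM_FRAMES = 5

def write_frames_sequentially(path):
    """Append each frame right after the previous one: the drive sees
    one steadily growing range of LBAs, i.e. a sequential write."""
    offsets = []
    with open(path, "wb") as f:
        for i in range(NUM_FRAMES):
            offsets.append(f.tell())          # where this frame lands
            f.write(bytes([i]) * FRAME_SIZE)
    return offsets

def rewrite_frame_in_place(path, index, fill):
    """Seek back to a non-adjacent offset and overwrite one frame:
    the 'modify existing data' pattern that shows up as random IO."""
    with open(path, "r+b") as f:
        f.seek(index * FRAME_SIZE)
        f.write(bytes([fill]) * FRAME_SIZE)

tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.close()
offsets = write_frames_sequentially(tmp.name)   # 0, 1024, 2048, ...
rewrite_frame_in_place(tmp.name, 3, 0xFF)
with open(tmp.name, "rb") as f:
    f.seek(3 * FRAME_SIZE)
    modified = f.read(FRAME_SIZE)
os.unlink(tmp.name)
```

The capture rig only ever does the first pattern (new frames appended), which is why the workload is sequential despite being made of many small files.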
 