
Why does flash have poor random writes?

Fox5

Diamond Member
Hey, I was wondering why flash RAM has poor random writes. At first I thought it might be the way the data is structured, as a sequence of NAND (or perhaps NOR) gates...except flash RAM has constant-time random access for reads. To me this implies that the value of any bit is not dependent on the others, so there shouldn't be a waterfall effect for random writes (having to rewrite the circuit to maintain consistency).

So I'm just curious, why does flash performance die during random writes, yet stays good for random reads and sequential writes?
 
Could it be the wear leveling algorithm? It probably has to figure out where a write would cause the least amount of wear.

 
Originally posted by: PottedMeat
Could it be the wear leveling algorithm? It probably has to figure out where a write would cause the least amount of wear.

In that case, wouldn't it destroy sequential performance even more so? Or maybe the wear leveling algorithm disables after the starting bit?
 
I believe it's because flash has to be erased in blocks. IIRC the block size for flash memory is 512 bytes, so writing even a few bytes of data involves erasing the entire block and rewriting it with the new and old data combined. Obviously if you're doing a lot of these operations all over the disk, things can slow down pretty quickly.

For example, if you're writing 64 bytes of data to random blocks that means that each write operation actually has to write eight times that amount of data, 512 bytes. So assuming the blocks can be written to at 80MB/s, this means the effective write speed of this particular operation is eight times slower, or 10MB/s.
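That arithmetic can be sketched in a few lines; a minimal example, using the numbers from the post above (the 512-byte block size and 80MB/s raw speed are just those assumed figures, not real drive specs):

```python
def effective_write_speed(payload_bytes, block_bytes, raw_speed_mb_s):
    """Effective speed when every small write forces a full-block rewrite."""
    amplification = block_bytes / payload_bytes  # extra data written per payload byte
    return raw_speed_mb_s / amplification

# 64-byte random writes into 512-byte blocks at a raw 80 MB/s:
print(effective_write_speed(64, 512, 80))  # 8x amplification -> 10.0 MB/s
```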

With sequential writes this isn't a problem, since lots of unchanged data isn't being rewritten (i.e. you erase the block and replace all of it with new data). And flash can be read bit-by-bit, which is why random read is basically the same as sequential read.

At least this is how I understand it, would appreciate others' input if this isn't accurate.
 
frostedflakes has it correct. You cannot write to a single byte in a FlashROM chip. You have to read a 'sector', replace the data to be written, recalculate the ECC, erase the Flash sector, then write the data back.

As soon as the write chunk size gets larger than the Flash sector size, the read cycle is no longer needed - hence that's faster.

So it's not actually about random writes, but about small writes.
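The read-erase-rewrite cycle described above can be sketched like this; the sector size and the `flash_*` helpers are hypothetical stand-ins for a controller's internals, not a real flash API:

```python
SECTOR_SIZE = 512  # assumed sector size, as in the posts above

# Fake flash: sector number -> sector contents (erased flash reads as all 1s)
flash = {0: bytearray(b"\xff" * SECTOR_SIZE)}

def flash_read_sector(n):
    return bytearray(flash[n])

def flash_erase_sector(n):
    flash[n] = bytearray(b"\xff" * SECTOR_SIZE)  # erase sets every bit to 1

def flash_program_sector(n, data):
    flash[n] = bytearray(data)

def write_bytes(offset, payload):
    """Write a few bytes: read the whole sector, patch it, erase, rewrite."""
    sector, start = divmod(offset, SECTOR_SIZE)
    buf = flash_read_sector(sector)             # 1. read the full sector
    buf[start:start + len(payload)] = payload   # 2. merge in the new data
    flash_erase_sector(sector)                  # 3. erase before programming
    flash_program_sector(sector, buf)           # 4. write old + new data back

write_bytes(10, b"hello")
print(bytes(flash[0][10:15]))  # b'hello'
```

Once the write chunk covers a whole sector, step 1 (and the merge in step 2) can be skipped entirely, which is exactly why large/sequential writes come out faster.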
 
Hmm, wait. When using popular file systems like FAT32/NTFS, where the block size is, say, 4KB, won't all writes be in blocks larger than the Flash blocks? There shouldn't be such a thing as small writes during normal usage.
 
Depending on the type of flash chip in use, block sizes are surprisingly large. 64KB blocks aren't too uncommon - and if you use several chips in parallel for speed, the effective block size doubles or quadruples again.
 