Why does flash have poor random writes?

Fox5

Diamond Member
Jan 31, 2005
Hey, I was wondering why flash RAM has poor random write performance. At first I thought it might be the way the data is structured, as a sequence of NAND (or perhaps NOR) gates... except flash has constant-time random access for reads. To me this implies that the value of any bit is not dependent on the others, so there shouldn't be a waterfall effect for random writes (having to rewrite other parts of the chip to maintain consistency).

So I'm just curious: why does flash performance die during random writes, yet stay good for random reads and sequential writes?
 

PottedMeat

Lifer
Apr 17, 2002
Could it be the wear-leveling algorithm? It probably has to figure out where a write would cause the least amount of wear.

 

Fox5

Diamond Member
Jan 31, 2005
Originally posted by: PottedMeat
Could it be the wear-leveling algorithm? It probably has to figure out where a write would cause the least amount of wear.

In that case, wouldn't it hurt sequential performance even more? Or does the wear-leveling algorithm only run for the starting location and then get bypassed for the rest of a sequential write?
 

frostedflakes

Diamond Member
Mar 1, 2005
I believe it's because flash has to be erased in blocks. IIRC the block size for flash memory is 512 bytes, so writing even a few bytes of data involves erasing the entire block and rewriting it with the new data merged into the old. Obviously, if you're doing a lot of these operations across the entire disk, things can slow down pretty quickly.

For example, if you're writing 64 bytes of data to random blocks, each write operation actually has to write eight times that much data, 512 bytes. So assuming the blocks themselves can be written at 80MB/s, the effective write speed for this particular workload is eight times slower, or 10MB/s.
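
To put rough numbers on this, here is a back-of-the-envelope sketch in C using the hypothetical 512-byte block and 80MB/s figures above (illustration only, not measurements from a real drive):

#include <stdio.h>

/* Rough write-amplification estimate: every small random write forces the
 * whole erase block to be rewritten, so effective throughput scales down
 * by block_size / write_size. All numbers are the hypothetical ones from
 * the post above. */
int main(void)
{
    const double block_size   = 512.0;  /* bytes per erase block (assumed) */
    const double write_size   = 64.0;   /* bytes actually changed per write */
    const double raw_speed_mb = 80.0;   /* raw block write speed in MB/s (assumed) */

    double amplification = block_size / write_size;       /* 8x */
    double effective_mb  = raw_speed_mb / amplification;  /* 10 MB/s */

    printf("write amplification: %.0fx, effective speed: %.1f MB/s\n",
           amplification, effective_mb);
    return 0;
}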

With sequential writes this isn't a problem, because you aren't rewriting lots of unchanged data (i.e., you erase the block and replace all of it with new data). And flash can read bit by bit, which is why random read speed is basically the same as sequential read speed.

At least this is how I understand it, would appreciate others' input if this isn't accurate.
 

Peter

Elite Member
Oct 15, 1999
frostedflakes has it correct. You cannot write to a single byte in a FlashROM chip. You have to read a 'sector', replace the data to be written, recalculate the ECC, erase the Flash sector, then write the data back.

As soon as the write chunk size gets larger than the Flash sector size, the read cycle is no longer needed - hence that's faster.

So it's not actually about random writes, but about small writes.
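
A rough sketch in C of that read / modify / erase / write cycle (the simulated in-memory flash array, the 512-byte sector size, and the function itself are illustrative assumptions, not any real controller's firmware):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define SECTOR_SIZE 512
#define NUM_SECTORS 4

static uint8_t flash[NUM_SECTORS][SECTOR_SIZE];   /* simulated flash array */

/* Write 'len' bytes at 'offset' within a sector. For a sub-sector write the
 * controller must read the old sector contents first; for a full-sector
 * write that read can be skipped, which is why large sequential writes are
 * faster than small random ones. */
static void flash_write(int sector, size_t offset, const uint8_t *data, size_t len)
{
    uint8_t buf[SECTOR_SIZE];

    if (len < SECTOR_SIZE)
        memcpy(buf, flash[sector], SECTOR_SIZE);  /* read-modify-write: fetch old data */

    memcpy(buf + offset, data, len);              /* merge in the new bytes */
    /* (a real controller would recalculate the ECC here) */

    memset(flash[sector], 0xFF, SECTOR_SIZE);     /* erase: whole sector, never one byte */
    memcpy(flash[sector], buf, SECTOR_SIZE);      /* program the whole sector back */
}

int main(void)
{
    const uint8_t payload[] = "hello";
    flash_write(0, 100, payload, sizeof payload); /* small write -> full read/erase/program */
    printf("%s\n", (const char *)&flash[0][100]);
    return 0;
}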
 

Foxery

Golden Member
Jan 24, 2008
Hmm, wait. When using popular file systems like FAT32/NTFS, where the block size is, say, 4KB, won't all writes be in blocks larger than the Flash blocks? There shouldn't be such a thing as small writes during normal usage.
 

Peter

Elite Member
Oct 15, 1999
Depending on the type of flash chip in use, block sizes are surprisingly large. 64KB blocks aren't too uncommon - and if you use several chips in parallel for speed, the effective block size doubles or quadruples again.
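
To connect this back to the 4KB filesystem write question, a quick back-of-the-envelope in C (the 64KB block and four-chips-in-parallel figures are just the examples mentioned above):

#include <stdio.h>

int main(void)
{
    const int fs_write_kb    = 4;    /* typical FAT32/NTFS cluster size */
    const int chip_block_kb  = 64;   /* erase block per chip (example above) */
    const int chips_parallel = 4;    /* chips ganged together for speed (assumed) */

    int effective_block_kb = chip_block_kb * chips_parallel;    /* 256 KB */
    int amplification      = effective_block_kb / fs_write_kb;  /* 64x */

    printf("a %d KB write can force a %d KB block rewrite: %dx amplification\n",
           fs_write_kb, effective_block_kb, amplification);
    return 0;
}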
 

Casawi

Platinum Member
Oct 31, 2004
That is because flash writes to a whole block of memory at a time. That is also why it consumes more current.