Do SSD Sequential Reads Even Exist?

scaryfast

Member
Jul 3, 2008
97
0
0
I was told that an SSD scatters the data on purpose for wear leveling.

Also, you are told to never defragment the drive.

If the SSD is always fragmented, how can there ever be a sequential read?

And if it's true, why are sequential reads benchmarked and given in specs?
 

Zap

Elite Member
Oct 13, 1999
22,377
7
81
They are given in the specs because it is a better number, and better numbers sell.

Who would buy a 1.36TB drive when you can buy a 1.5TB drive for the same price?

Decimal terabytes =/= binary terabytes.
 

Rubycon

Madame President
Aug 10, 2005
17,768
485
126
Fragmentation matters little on an SSD because the latency is so much lower than on a spinning disk. Even with data scattered everywhere, it can still produce a steady stream of data across the entire space.
 
Feb 21, 2010
72
0
0
If that is how wear leveling works, then truly sequential reads would hardly exist on an SSD.

However, I doubt that Intel's controller scatters every single bit that gets written. Data probably gets scattered in blocks, so you still get a sequential read within each block that was written at once. A big file written in one go will probably be split across fewer of those blocks, leaving more of it readable sequentially, while small writes here and there probably end up scattered around far more. Intel themselves say they have to find the sweet spot between performance and wear leveling.

Manufacturers will still advertise their sequential reads first because it's the fastest score any benchmarking software will produce, and that's a good marketing strategy regardless of whether it really represents the performance a user can feel.

If you dig deeper, HDD benchmarks mean much less for SSDs because SSDs don't read data linearly the way HDDs do. Internally, an SSD has multiple flash memory chips that are used in such a way that they act like a RAID 0 array; that is why the controller matters so much in SSDs. If data really were stored and read sequentially, i.e. linearly, it would probably be much slower than the current approach of distributing the data over all the available flash chips. This is also why 160GB SSDs get higher scores than 80GB SSDs: they have more flash chips, so more data can be read and written at once. There is plenty of bottlenecking here and there, so the increase isn't huge. This internal parallelism is a major reason SSDs are so much faster than any memory card or USB drive even though the underlying technology is the same, and it is also how an SSD achieves such high 'sequential' reads, which are actually multiple random reads from its multiple flash chips.

To really test the performance of the flash chips and the controllers, software developers will need a much deeper understanding of SSDs. There has to be a reason why Intel SSDs show much better OS and program boot times than the best OCZ SSDs but get almost exactly the same scores in benchmarks.
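A rough way to picture that block-level scattering and the RAID-0-like striping is a toy sketch like the one below. This is not Intel's actual firmware logic; the chip count and page size are just assumed numbers for illustration.

Code:
# Toy sketch only -- not any vendor's real mapping logic.
# A logically sequential file gets striped page by page across
# several flash chips, so a "sequential" host read can hit
# all of the chips in parallel.

NUM_CHIPS = 10        # assumed number of flash chips/channels
PAGE_SIZE = 4096      # assumed flash page size in bytes

def stripe(logical_pages, num_chips=NUM_CHIPS):
    """Map each logical page number to (chip, page-within-chip), round-robin."""
    return {lpn: (lpn % num_chips, lpn // num_chips) for lpn in logical_pages}

# A 40KB file written in one go covers 10 logical pages...
file_pages = list(range(10))
mapping = stripe(file_pages)

# ...but physically it lands on all 10 chips, one page each.
chips_touched = {chip for chip, _ in mapping.values()}
print(f"{len(file_pages)} pages spread over {len(chips_touched)} chips")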
 
Last edited:

Swivelguy2

Member
Sep 9, 2009
116
0
0
Well, obviously the sequential read times are faster than the random read times on SSDs, so they must be fundamentally different. In a sequential read, the SSD always knows where the next chunk of data to be read is located. In a random read situation, it doesn't.
 

skid00skid00

Member
Oct 12, 2009
66
0
0
How big is a 'block' on any specific SSD? It's bigger than 4k, which is why random 4k block access is slower.
 
Feb 21, 2010
72
0
0
Well, obviously the sequential read times are faster than the random read times on SSDs, so they must be fundamentally different. In a sequential read, the SSD always knows where the next chunk of data to be read is located. In a random read situation, it doesn't.

I think that is a wrong concept.

Sequential access refers to reading a whole chain of data irrespective of which parts of the chain you actually need. If the program needs just one block of data out of a hundred, it's going to have to read all hundred blocks into memory, use the one, and discard the other 99.

Random access, on the other hand, means it specifically seeks out that one block of data.

On an HDD, where it would waste a lot of time spinning around multiple times to read several specific blocks of data, it can instead read a whole stretch of 100 blocks into RAM, pick out, say, block 79, and throw away the rest.

However, that may not always be possible depending on how scattered the data is, and there are multiple requests coming in from different programs and so on, so the smart OS, controller and whatnot will work out the fastest way to access all the data.
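In toy form, the trade-off described above looks something like this (all numbers are made up, purely to illustrate why one long pass can beat several separate seeks on a spinning disk):

Code:
# Made-up numbers, purely to illustrate the trade-off described above.

SEEK_MS = 9.0              # one mechanical seek (assumed)
READ_MS_PER_BLOCK = 0.02   # streaming read time per block once the head is there (assumed)

def fetch_each_separately(wanted=5):
    # seek to each of the 5 wanted blocks individually
    return wanted * (SEEK_MS + READ_MS_PER_BLOCK)

def fetch_whole_stretch(stretch=100):
    # one seek, stream the whole 100-block stretch into RAM,
    # keep the 5 wanted blocks and discard the other 95
    return SEEK_MS + stretch * READ_MS_PER_BLOCK

print(f"5 separate seeks:   {fetch_each_separately():.2f} ms")
print(f"one 100-block pass: {fetch_whole_stretch():.2f} ms")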
 

HendrixFan

Diamond Member
Oct 18, 2001
4,646
0
71
I was told that an SSD scatters the data on purpose for wear leveling.

Also, you are told to never defragment the drive.

If the SSD is always fragmented, how can there ever be a sequential read?

And if it's true, why are sequential reads benchmarked and given in specs?

I can tell you that when I move large files I get close to maxing out the sequential speeds, far beyond the random speeds.
 

pjkenned

Senior member
Jan 14, 2008
630
0
71
www.servethehome.com
To really test the performance of the flash chips and the controllers, software developers will need a much deeper understanding of SSDs. There has to be a reason why Intel SSDs show much better OS and program boot times than the best OCZ SSDs but get almost exactly the same scores in benchmarks.

I have an 80GB and a 160GB G2, a 120GB Vertex, a 60GB Agility, a 64GB Samsung (second gen w/ GC), some Phison-based drives, etc. I bought a Vertex LE last week, returned it after 3 days and bought another 120GB Vertex. In real-world usage, maybe I'm not sophisticated enough, but I can't tell the difference between a Vertex (on 1.5 firmware) and a G2. In the pre-TRIM era I thought my G1 was actually faster than the Vertex; nowadays not so. The LE's performance was GREAT in benchmarks, but in the real world I didn't see as much benefit as I was hoping. Maybe it's something wrong with my head, but if programs launch in 4.5 seconds vs. 4 seconds, it isn't as noticeable to me as 4.5s vs. 25s.

Just a thought.
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
Well, obviously the sequential read times are faster than the random read times on SSDs, so they must be fundamentally different. In a sequential read, the SSD always knows where the next chunk of data to be read is located. In a random read situation, it doesn't.

The SSD has to figure out where all of the blocks are regardless of location. If the OS submits a 256K read that's logically sequential, the SSD still has to convert all of those logical block numbers into physical cell locations, since they could be anywhere due to wear leveling. If there is any difference, it's probably at the µs level.

How big is a 'block' on any specific SSD? It's bigger than 4k, which is why random 4k block access is slower.

No matter what the block size is, the device still has to accept reads and writes down to the 512-byte level in order to support legacy OSes like XP.
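As a toy illustration of that logical-to-physical translation (nothing like a real flash translation layer; the names and sizes here are made up):

Code:
# Toy logical-to-physical lookup; real flash translation layers are far
# more involved. This only shows that a logically sequential request
# still gets translated page by page.

import random

mapping = {}                          # logical page number -> physical page number
free_physical = list(range(100_000))
random.shuffle(free_physical)         # stand-in for wear-leveling placement

def write_page(lpn):
    mapping[lpn] = free_physical.pop()     # lands wherever wear leveling decides

def read_pages(lpns):
    return [mapping[lpn] for lpn in lpns]  # translation happens even for "sequential" reads

# Host writes a logically contiguous 256K extent (64 x 4K pages)...
for lpn in range(64):
    write_page(lpn)

# ...and reads it back sequentially; the physical locations are scattered.
print(read_pages(range(64))[:8])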
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
Haha time sure does fly doesn't it?:rolleyes:

The older I get, the faster it flies. But XP was released 8 years ago and its last service pack was 2 years ago. It's time for people to move on. I have Win7 on my work notebook and I cringe every time I have to look at a Win2K3 or XP machine. Win7, Vista (despite its problems) and even Linux are so much better than XP that it's not even funny. People are already running into the 2TB BIOS partition table limits in XP32, and it'll be entertaining to see the flurry of posts about various problems as people start trying to get XP to work well with disks that have >512-byte sectors.
 

skid00skid00

Member
Oct 12, 2009
66
0
0
No matter what the block size is the device still has to accept read/writes down to the 512b level in order to support legacy OSes like XP.

You completely missed my point. The OP wanted to know why sequential was faster than 4k. My point was that in order for an SSD to give the OS 4k, it has to read a much larger chunk (since that's all it can do -as a minimum-), and then pick the 4k out of that larger chunk (which takes -more- time to do).
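A back-of-the-envelope version of that point (the chunk size here is just an assumed example, not any particular drive's real figure):

Code:
# Assumed example figures, not a specific drive.

HOST_READ = 4 * 1024      # bytes the OS actually asked for
CHUNK = 16 * 1024         # smallest unit the flash hands back (assumed)

amplification = CHUNK / HOST_READ
print(f"{CHUNK} bytes read to deliver {HOST_READ} bytes -> {amplification:.0f}x the work")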
 

jimhsu

Senior member
Mar 22, 2009
705
0
76
http://www.usenix.org/event/usenix08/tech/full_papers/agrawal/agrawal_html/index.html

If someone wants to help decipher the USENIX paper, be my guest:

It is unlikely that any one choice for exploiting parallelism can be optimal for all workloads, and certainly as SSD capacity scales up, it will be difficult to ensure full connectivity between controllers and flash packages. The best choices will undoubtedly be dictated by workload properties. For example, a highly sequential workload will benefit from ganging, a workload with inherent parallelism will take advantage of a deeply parallel request queuing structure, and a workload with poor cleaning efficiency (e.g. no locality) will rely on a cleaning strategy that is compatible with the foreground load.

Basically I think it says that sequential operations benefit more from the inherent parallelization of flash than random operations. An analogy is how RAID 0 massively boosts sequential speeds (almost linearly), but has less of an effect on random speeds.

Note: The stuff in there is quite technical.
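Rough numbers for that RAID 0 analogy are sketched below. All figures are assumed and only show the shape of the scaling: sequential throughput grows with the number of chips, while a single outstanding 4K random read is stuck at one chip's latency.

Code:
# All figures are assumed, purely to show the shape of the scaling.

PER_CHIP_SEQ_MBS = 25.0      # MB/s one flash chip can stream (assumed)
RANDOM_LATENCY_MS = 0.1      # latency of one small random read (assumed)
REQ_SIZE_MB = 4 / 1024       # a 4K request expressed in MB

for chips in (1, 5, 10):
    seq = chips * PER_CHIP_SEQ_MBS                    # near-linear with chip count
    rnd = REQ_SIZE_MB / (RANDOM_LATENCY_MS / 1000)    # queue depth 1: only one chip busy at a time
    print(f"{chips:2d} chips: ~{seq:5.0f} MB/s sequential, ~{rnd:4.0f} MB/s random 4K at QD1")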
 
Last edited:

Modelworks

Lifer
Feb 22, 2007
16,240
7
76
NAND flash is divided into pages. The size of a page depends on the bus width of the chip. In x16 devices the pages are split into a 1,024-word main area and a 32-word spare area. The spare area is used for flags such as bad-page markers and error correction data.

The memory is organized in blocks, where each block contains 64 pages.
Total block size of 8Kb / number of pages 64 = 128 bits as the smallest directly accessible unit of memory, or roughly 16 bytes. That would be the smallest unit of memory you could read or write at any time, sequential or random. Accessing single bits is not possible. If a page becomes bad you don't lose the entire block, you lose the 16 bytes of that page. Each time a cell goes bad you lose 16 bytes.

Every time there is a request for data it always starts as a random read: the entire block is transferred to a copy buffer, and from there it can be read sequentially. The reason sequential read numbers are higher is not that the data is in order when read, but that the data is already in the copy buffer. The chip knows it is a sequential read, so it automatically loads the next block into the buffer before it is needed; this takes no memory or performance hit. If you issue a random read, the block has to be copied into the copy buffer before the individual pages can be read. If the data you need is in another block, then another block has to be swapped in before the read can happen. Random reads can be just as fast as sequential reads if all the pages you request are in the same block, but that would mean the total size of the data requested is smaller than 8KB. In a way it is very similar to how a hard drive has to move the head to a new track to get the next bit of random data.

Think of it like Block ---> Pages ----> Bytes ----> Bits

This is per chip. SSDs use multiple chips in parallel, which changes how much data can be read or written per cycle: 32 chips = 256KB per read/write, etc.
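To put that copy-buffer behaviour into a toy model (the timings below are invented; the only point is that sequential reads pay the block-load cost once, while worst-case random reads pay it every time):

Code:
# Invented timings, per chip; only the relative behaviour matters here.

BLOCK_LOAD_US = 25.0    # move a block into the copy buffer (assumed)
PAGE_READ_US = 1.0      # read one page out of the buffer (assumed)

def sequential_read_us(num_pages):
    # the next block is prefetched while the current one is read out,
    # so only the first block load is actually waited on
    return BLOCK_LOAD_US + num_pages * PAGE_READ_US

def worst_case_random_read_us(num_pages):
    # every page lives in a different block, so each read
    # pays for a fresh block load
    return num_pages * (BLOCK_LOAD_US + PAGE_READ_US)

print(f"sequential, 256 pages: {sequential_read_us(256):7.1f} us")
print(f"random,     256 pages: {worst_case_random_read_us(256):7.1f} us")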
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
You completely missed my point. The OP wanted to know why sequential was faster than 4k. My point was that in order for an SSD to give the OS 4k, it has to read a much larger chunk (since that's all it can do -as a minimum-), and then pick the 4k out of that larger chunk (which takes -more- time to do).

All of that is true for mechanical disks as well, except that most have a 512-byte sector size so the "excess" data is generally smaller. But mechanical disks with 4K sectors will become more and more common soon.
 

HeXen

Diamond Member
Dec 13, 2009
7,837
38
91
The older I get, the faster it flies. But XP was released 8 years ago and its last service pack was 2 years ago. It's time for people to move on. I have Win7 on my work notebook and I cringe every time I have to look at a Win2K3 or XP machine. Win7, Vista (despite its problems) and even Linux are so much better than XP that it's not even funny. People are already running into the 2TB BIOS partition table limits in XP32, and it'll be entertaining to see the flurry of posts about various problems as people start trying to get XP to work well with disks that have >512-byte sectors.

But if an 8-year-old OS is doing what the person needs from the computer, why move on to another OS with a footprint three times larger that still does the exact same thing?
With Vista/Win7 you still need antivirus apps, it's a similar NTFS that still fragments, it can still BSOD, hang, etc., and it still can't burn proper ISO/data CDs, so it still requires pretty much as many third-party applications. So if all one does is browse the web, Photoshop, MP3 work, gaming, etc., what's the point? Why upgrade just to do the exact same things?
Tell me what I will suddenly do differently in Windows 7 that I would not otherwise do in XP. Third-party apps to replace WMC, WMM, etc. are cheaper than Win7, or use freeware. And don't say DX11 gaming, lol; that's pointless as well, for the moment at least, considering how few use it and the comparison shots are negligible so far.

Now Linux, that's a different beast; to say it's "better" is apples to oranges. It functions far differently and takes XP users a very long time to master well enough to do everything they did in XP with as much ease. Launching the included apps is intuitive enough, but installing, networking, compatibility, etc. can be frustrating for new users.
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
But if an 8-year-old OS is doing what the person needs from the computer, why move on to another OS with a footprint three times larger that still does the exact same thing?

Because hardware and software support moves on; if you're willing to stick with all of the software versions you're using now, then be my guest. But unless it's a single-purpose machine like a set-top box, you're going to run into something that doesn't work simply because it's old. Would you really recommend that someone running Win98 keep running it because it's "only" 12 years old?

What do you plan on doing with that XP machine when you plug in a drive and it either doesn't work or gives you around 10% of the performance it should, because it has 4K sectors and every I/O makes the drive's firmware do 4x the work it would under a modern OS?

So if all one does is browse the web, Photoshop, MP3 work, gaming, etc., what's the point? Why upgrade just to do the exact same things?

If you remove Photoshop and games I might buy it. But unless you want to stick with the exact same games for the next decade, you're going to have to upgrade eventually as DirectX and OpenGL progress and newer games require them.

But if you're going to remove those two tasks then you might as well go to Linux anyway and save yourself some money and headaches.

Tell me what I will suddenly do differently in Windows 7 that I would not otherwise do in XP.

The new start menu, task bar and UAC are worth it alone for me, but I got my license through work so I didn't waste any money on it.

And don't say DX11 gaming, lol; that's pointless as well, for the moment at least, considering how few use it and the comparison shots are negligible so far.

For the moment, but chances are it won't be that way forever.

Now Linux, that's a different beast; to say it's "better" is apples to oranges. It functions far differently and takes XP users a very long time to master well enough to do everything they did in XP with as much ease. Launching the included apps is intuitive enough, but installing, networking, compatibility, etc. can be frustrating for new users.

No, "better" may be subjective and I'm obviously biased, but there's a lot about Linux that is undeniably better. In fact, the only thing I can think of that's better about Windows is commercial application support, and thankfully OSS and web-based software are becoming more popular, so hopefully the underlying OS will keep becoming less and less relevant.

Switching to any new OS requires learning and can be frustrating for certain users. There's no way around that. Hell I find OS X the most frustrating of the 3 to use because I don't know it. I can find my way around Linux and Windows with my eyes closed, and I've actually done it when walking people through things over the phone, but OS X just pisses me off because of how opaque they made it. If the option isn't right in front of your face it's virtually impossible to find. But I digress...
 

jimhsu

Senior member
Mar 22, 2009
705
0
76
You switch to a new OS because ... it offers an advantage over your old OS?

Heck, why didn't we just stay with Windows 3.1? It's fast, can browse the web (barely), plays back music (MP3s might be a challenge), and even runs games (assuming 2D side-scrollers strike your fancy).

OSes are designed to work with the hardware and software that existed when the OS was designed. A corollary: new hardware and software might not work on an old OS that worked fine with old hardware and software. Gradually there comes a point where maintaining an old operating system, releasing security updates for it, and making new hardware and software work with it simply isn't worth the effort. If you're satisfied with XP, by all means stay with it. Just don't expect manufacturers to cater to your needs forever when regression-testing new patches easily costs upwards of thousands of dollars. (There's a reason Nvidia no longer releases Windows 95/98 drivers.)

That is why no operating system, much less XP, can last forever. Would you still be using Windows 7 in 2020?
 
Last edited: