SSD scaling above 2 drives?

kalniel

Member
Aug 16, 2010
52
0
0
I've seen the articles looking at RAID 0 for two drives, but are there any that investigate scaling for three or four? I'm thinking the dropping price of some out-of-favour technologies might make this a viable route - for example, three 40GB Intel X25-Vs are roughly equivalent in price to some of the 120GB or 2x64GB drives out there.
 

HendrixFan

Diamond Member
Oct 18, 2001
4,646
0
71
To scale well (especially with four drives) you will need a dedicated RAID controller, onboard RAID likely won't have the horsepower you want with that many drives. What kind of usage are you looking for?
 

kalniel

Member
Aug 16, 2010
52
0
0
Purely hypothetical at the moment. If it became feasible then most likely usage would be game toolsets and image editing, both of which have similar, mainly sequential read/write requirements.

So you don't think even the mighty ICH10R would cope with three drives?
 

sub.mesa

Senior member
Feb 16, 2010
611
0
0
Yeah, I benched 5x Intel X25-V on FreeBSD with ZFS. Great fun, high random IOps too.

I would advise software RAID over hardware RAID here, as hardware RAID easily gets capped with SSDs. SSDs are fast to the point where they can bottleneck a CPU, for example with high-queue random reads. With a single Intel drive doing up to 35k IOps under artificial 512-byte workloads, you would already bottleneck an Areca hardware RAID controller (~70k IOps) starting at 2 SSDs in RAID; only in extreme cases, though. But as you add more SSDs, software RAID scales better than hardware RAID for the simple reason that your host system has faster hardware (CPU+memory subsystem) and thus lower latency.
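A rough back-of-the-envelope sketch of that argument in Python - the 35k IOps per drive and the ~70k IOps controller ceiling are just the illustrative figures from this post, not measurements:

# Toy model: aggregate random-read IOps of N SSDs versus a fixed
# hardware RAID controller ceiling. The per-drive and controller
# numbers are the illustrative figures from the post, not benchmarks.
PER_SSD_IOPS = 35_000            # one X25-V under artificial 512-byte reads
HW_CONTROLLER_CEILING = 70_000   # assumed processing limit of the RAID card

for n_ssds in range(1, 7):
    raw = n_ssds * PER_SSD_IOPS              # what the SSDs could deliver
    hw = min(raw, HW_CONTROLLER_CEILING)     # capped by the card's own CPU
    sw = raw                                 # software RAID: the (much faster) host CPU is the limit
    print(f"{n_ssds} SSDs: raw {raw:>7,} IOps | hardware RAID {hw:>7,} | software RAID up to {sw:>7,}")

From the third drive on, the card itself is the cap in this toy model, which is the scaling argument in a nutshell.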

Just to be clear: onboard RAID is software RAID too; it's all done by drivers you install or that come pre-installed in Windows, such as with Intel RAID arrays on Windows 7 (and perhaps Vista?).
 

kalniel

Member
Aug 16, 2010
52
0
0
Yeah, I benched 5x Intel X25-V on FreeBSD with ZFS. Great fun, high random IOps too.
Have you got the scaling results?

I would advise software RAID over hardware RAID here, as hardware RAID easily gets capped with SSDs. SSDs are fast to the point where they can bottleneck a CPU, for example with high-queue random reads. With a single Intel drive doing up to 35k IOps under artificial 512-byte workloads, you would already bottleneck an Areca hardware RAID controller (~70k IOps) starting at 2 SSDs in RAID; only in extreme cases, though. But as you add more SSDs, software RAID scales better than hardware RAID for the simple reason that your host system has faster hardware (CPU+memory subsystem) and thus lower latency.
That's an interesting point. It'd be interesting to see how latency ends up lower when everything has to route all the way through the computer, but if there's a massive processing load, the CPU might well be better suited to it.

Just to be clear: onboard RAID is software RAID too; it's all done by drivers you install or that come pre-installed in Windows, such as with Intel RAID arrays on Windows 7 (and perhaps Vista?).
Yup. But the ICH10R has pretty good drivers and, I'm guessing, connectivity.
 

Rifter

Lifer
Oct 9, 1999
11,522
751
126
I would also go software RAID for this, or a really, really good hardware RAID card.
 

LokutusofBorg

Golden Member
Mar 20, 2001
1,065
0
76
Purely hypothetical at the moment. If it became feasible then most likely usage would be game toolsets and image editing, both of which have similar, mainly sequential read/write requirements.

So you don't think even the mighty ICH10R would cope with three drives?
You're better off going with one of the PCIe SSD drives. They aren't limited by the SATA bus. If you want full enterprise class, there's FusionIO. If you want affordable enterprise, there's OCZ's Z-Drives. If you want consumer level, there's OCZ Revo-Drives.

Good article to get you jazzed about PCIe SSD - http://www.brentozar.com/archive/2010/03/fusion-io-iodrive-review-fusionio/
 

kalniel

Member
Aug 16, 2010
52
0
0
Thanks LokutusofBorg, I hadn't appreciated the possibility of running several PCIe cards in RAID. Is that possible even when the card is running internal RAID, like the Revo-drive?
 

Cable God

Diamond Member
Jun 25, 2000
3,251
0
71
Thanks LokutusofBorg, I hadn't appreciated the possibility of running several PCIe cards in RAID. Is that possible even when the card is running internal RAID, like the Revo-drive?

Keep in mind, the OCZ product is internal RAID0. You can't change it. I had one die (R2 1TB model) and take a database with it. Luckily my backup/restore to alternative storage worked. The performance is "good", but it doesn't scale with multi-threaded apps as much as I thought it would. Small I/Os and a low number of threads are where it shines best. If you want it to scale, FusionIO is the best bet, and has far better support.
 

sub.mesa

Senior member
Feb 16, 2010
611
0
0
I'm not that familiar with these PCIe products, but I would watch their performance numbers carefully. At least one of those OCZ Z-Drives has lower random IOps than a single Intel X25-V 40GB (for reads). And the loss of TRIM isn't that sexy either.

RAID0 is fun, but only if you apply it intelligently, like SSDs internally already do. The X25-V could be considered a 5-disk RAID0 and the X25-M a 10-disk RAID0. Indilinx uses 4 channels and SandForce 8, if I remember correctly.
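To picture the 'internal RAID0' analogy, here's a minimal Python sketch of round-robin striping of logical pages across NAND channels; the channel counts are the ones mentioned above, and the page granularity is just a placeholder:

# Minimal sketch: a controller striping logical pages across NAND
# channels round-robin, exactly like RAID0 stripes across disks.
# Channel counts per the post (X25-V: 5, X25-M: 10).
def channel_for_page(logical_page: int, channels: int) -> int:
    """Round-robin: consecutive pages land on consecutive channels."""
    return logical_page % channels

for name, channels in [("X25-V (5ch)", 5), ("X25-M (10ch)", 10)]:
    layout = [channel_for_page(p, channels) for p in range(12)]
    print(f"{name}: first 12 logical pages -> channels {layout}")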

If you built a native PCIe -> NAND controller then you could use RAID0 like SSDs do with multiple channels; SSDs essentially are NAND flash memory with a SATA -> NAND controller. A native PCIe product does not exist yet, to my knowledge. Would surely be interesting!

PCIe SSDs could be made incredibly fast. There is no real 'cap' on the technology's potential like there is with mechanical/magnetic storage. HDDs rely on data density and rpm to increase performance, and that improves very slowly. But if some Japanese company comes out with a good PCIe NAND chip tomorrow, we could be seeing things like 200-500% performance increases, which would be astonishing and unseen in the IT world. I think the biggest reason for that is that the most money can be made by slowing innovation and progress. If they build 1GB/s+ and 100k IOps+ SSDs now, then there's no (real) reason for you to want more. They could achieve that performance with the same NAND chips as in the original Intel X25-M G1 (50nm) - just with another controller chip.
 

LokutusofBorg

Golden Member
Mar 20, 2001
1,065
0
76
Thanks LokutusofBorg, I hadn't appreciated the possibility of running several PCIe cards in RAID. Is that possible even when the card is running internal RAID, like the Revo-drive?
Yes - if you read that article, Brent Ozar talks about running the cards in RAID 1 using the onboard RAID in case one dies (as Cable God experienced). There are some case studies on the FusionIO site that detail using 4 cards in a RAID 10-like setup, also using onboard RAID.

When used in a server environment, you don't get hot swap since they're plugged right into the PCIe slots, so you have to go with full machine redundancy. This has pros and cons.

Since your application seems to be workstation rather than server, you could probably get away with one card and no RAID, or running two cards in RAID 0 for some blazing performance. The OCZ cards aren't as spectacular as the FusionIO stuff, but they're a damn sight cheaper, and still outperform SATA drives except in some particular usage scenarios. I believe the Z-Drives are v2 Indilinx controllers (someone correct me if I'm wrong there) and the RevoDrives are SF1200 based.
 

Sahakiel

Golden Member
Oct 19, 2001
1,746
0
86
PCIe SSDs could be made incredibly fast. There is no real 'cap' on the technology's potential like there is with mechanical/magnetic storage. HDDs rely on data density and rpm to increase performance, and that improves very slowly. But if some Japanese company comes out with a good PCIe NAND chip tomorrow, we could be seeing things like 200-500% performance increases, which would be astonishing and unseen in the IT world.
Actually, yes, there is a cap or two. Bear in mind that everything runs on something physical. For memory chips, the cap is usually bundled into feature size. Essentially, the smaller it is, the faster it runs, but also the more likely it is to have problems or not work at all. Feature size and materials determine voltage as well. All memory chips basically rely on density and voltage to improve performance.

I think the biggest reason for that is that the most money can be made by slowing innovation and progress. If they build 1GB/s+ and 100k IOps+ SSDs now, then there's no (real) reason for you to want more. They could achieve that performance with the same NAND chips as in the original Intel X25-M G1 (50nm) - just with another controller chip.

Doubtful. SSD's are not at the point of diminishing returns, yet. If a company could produce chips + controller that double what's on the market now at the same price point, they would have done it in a heartbeat. It would essentially wipe out all other competition.
Besides, think of it this way: if the memory chip itself has that much performance, then why do controller chips use multiple data channels?


When used in a server environment, you don't get hot swap since they're plugged right into the PCIe slots, so you have to go with full machine redundancy. This has pros and cons.

Just pointing out that in a server, you can hot swap PCI-E cards. It's built into the specification, though it requires proper implementation. For the right price, every single part in a server can be hot swapped.
 

sub.mesa

Senior member
Feb 16, 2010
611
0
0
Actually, yes, there is a cap or two. Bear in mind that everything runs on something physical.
Yes, and you say SSDs scale because the NAND gets smaller and faster, while I say that SSDs scale mostly because of controller innovations, not because of faster NAND chips.

Hence my argument that with a new controller chip but the same 50nm NAND on Intel G1 SSDs, you could build an incredibly fast SSD. Thus, the key factor in SSD performance is the controller, not the physical NAND.

That's all because of fundamental differences: HDDs are serial devices; they can only really do one thing at a time, and thus the controller isn't terribly important. You won't find huge performance differences between a Samsung or Seagate drive regarding firmware, simply because there is not much theoretical headroom to allow for such a performance increase.

However, SSDs don't have this problem and can let multiple flash memory chips work in parallel. They already do this; the Intel has 10 channels. This works pretty much like RAID0 does, and it's why the controller is so much more important than the actual speed of the NAND.

Now, the controller likely is a bottleneck too; it may have limited processing power and was designed to suit the bandwidth limitations of its interface. If we assume a 'perfect' controller with the same NAND as Intel SSDs, it's not hard to imagine using more channels, so we get something like:

5 channels: ~200MB/s (X25-V 40GB)
10 channels: 400MB/s (X25-M in reality ~260MB/s)
20 channels: 800MB/s
40 channels: 1.6GB/s
80 channels: 3.2GB/s

That's all with the same speeds and same technology Intel already uses, just more channels. It would need firmware tweaks as well and would likely be a massive controller. But I just wanted to make a point here: it's the controller that is key to performance, less so the physical NAND.
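The list above is just per-channel bandwidth times channel count. A quick sketch of that arithmetic - the ~40MB/s per channel is implied by the X25-V figure above, and the ~270MB/s usable ceiling for SATA 3Gbps is my assumption (after encoding/protocol overhead), which is roughly why the real X25-M stops near 260MB/s:

# Ideal read throughput if it scaled purely with channel count,
# versus an interface ceiling. 40MB/s per channel is implied by the
# ~200MB/s X25-V figure above; ~270MB/s usable for SATA 3Gbps is an
# assumption (8b/10b plus protocol overhead), not a spec quote.
PER_CHANNEL_MBPS = 40
SATA_3G_USABLE_MBPS = 270

for channels in (5, 10, 20, 40, 80):
    ideal = channels * PER_CHANNEL_MBPS
    behind_sata = min(ideal, SATA_3G_USABLE_MBPS)
    print(f"{channels:>2} channels: ideal ~{ideal:>4} MB/s, behind SATA 3Gbps ~{behind_sata} MB/s")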

Doubtful. SSD's are not at the point of diminishing returns, yet. If a company could produce chips + controller that double what's on the market now at the same price point, they would have done it in a heartbeat. It would essentially wipe out all other competition.
Not everybody can do this, of course. But Intel can/could have. Just like they can wipe out AMD, but strangely enough that would not be in their interests.
 

pjkenned

Senior member
Jan 14, 2008
630
0
71
www.servethehome.com
Not everybody can do this, of course. But Intel can/could have. Just like they can wipe out AMD, but strangely enough that would not be in their interests.

Having a 15 percent or more market share competitor around + some upstart competition helps keep the anti-trust folks at bay.

Also, FWIW, people can get sequential read bottlenecks with 3-4x 40GB X25-Vs in RAID 0 on an ICH10R. Although the ICH10R is great, you aren't going to stick a 6x SSD RAID 0 on it and (with SandForce drives) see 285 x 6 = 1.7GB/s transfers, even in the best case.
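Rough Python sketch of that arithmetic; the ~700MB/s usable figure for the ICH10R's DMI uplink is an assumption for illustration, not a measurement:

# Quick arithmetic behind the "285 x 6" point: per-drive sequential
# reads add up past what the southbridge uplink can move. The DMI
# ceiling used here is a rough assumption for illustration only.
PER_DRIVE_MBPS = 285     # SandForce-class sequential read, per the post
DMI_USABLE_MBPS = 700    # assumed practical ICH10R/DMI ceiling

for drives in range(1, 7):
    demand = drives * PER_DRIVE_MBPS
    delivered = min(demand, DMI_USABLE_MBPS)
    note = "  <- uplink-bound" if demand > DMI_USABLE_MBPS else ""
    print(f"{drives} drives: {demand:>4} MB/s demanded, ~{delivered} MB/s deliverable{note}")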
 

sub.mesa

Senior member
Feb 16, 2010
611
0
0
Having a 15 percent or more market share competitor around + some upstart competition helps keep the anti-trust folks at bay.
Precisely. :)

Also, FWIW, people can get sequential read bottlenecks with 3-4x 40GB X25-Vs in RAID 0 on an ICH10R. Although the ICH10R is great, you aren't going to stick a 6x SSD RAID 0 on it and (with SandForce drives) see 285 x 6 = 1.7GB/s transfers, even in the best case.
Yes, RAID on Windows appears not to scale that well, but I'm not sure why. They don't provide any acceleration for RAID1, but RAID0 should scale easily. One explanation is that at least some of Windows' onboard RAID engines always read the full stripe block even if only a part was requested, to provide some low-level read-ahead for synthetic benchmarks like HDTune. Also, the NTFS filesystem may not apply more than 128KiB of read-ahead, which is also the default on FreeBSD, but there it's tweakable.

As I use FreeBSD, I can test with some good RAID engines like geom_stripe. There I got 1234MB/s sustained random read (high queue depth) with just 5 SATA ports connected to X25-Vs. So the hardware is capable of these speeds, and the RAID0 spec is as well, so it must be software issues in Windows.

A properly set up RAID0 would show only single-disk performance in single-queue raw-access benchmarks like HDTune, due to the limited queue depth, but would show near-linear scaling at higher queue depths in benchmarks like CrystalDiskMark/AS SSD on actual filesystems. I guess avoiding complaints about low synthetic scores is more important to the manufacturers implementing RAID than actually performing well. This may be one reason for the low-level optimizations that tend to work really badly for IOps in RAID0.
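As a toy illustration of the queue-depth point, in Python - the per-drive IOps figure is just the illustrative X25-V-class number from earlier in the thread, and the model deliberately ignores requests colliding on the same member:

# Idealized model of why a RAID0 of SSDs shows single-disk numbers at
# queue depth 1 but scales at higher queue depths: QD outstanding small
# random reads can keep at most min(QD, drives) members busy at once.
PER_DRIVE_RANDOM_IOPS = 35_000   # illustrative X25-V-class figure

def array_iops(drives: int, queue_depth: int) -> int:
    busy = min(queue_depth, drives)
    return busy * PER_DRIVE_RANDOM_IOPS

for qd in (1, 2, 4, 8, 32):
    row = ", ".join(f"{d} drives -> {array_iops(d, qd):>7,} IOps" for d in (1, 2, 4))
    print(f"QD{qd:<2}: {row}")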
 

Brento73

Junior Member
Dec 24, 2003
5
0
0
www.brentozar.com
I'm Brent Ozar, the guy with the benchmarks you linked to earlier. I've got a couple of FusionIO Duos in a dev server at the moment - if there's any particular questions you've got about them, let me know and I'd be glad to help.
 

sub.mesa

Senior member
Feb 16, 2010
611
0
0
I'm Brent Ozar, the guy with the benchmarks you linked to earlier. I've got a couple of FusionIO Duos in a dev server at the moment - if there's any particular questions you've got about them, let me know and I'd be glad to help.
Can you benchmark it with CrystalDiskMark and AS SSD perhaps? Please set the test size to at least 1GB - preferably 8GB.
 

Sahakiel

Golden Member
Oct 19, 2001
1,746
0
86
Yes, and you say SSDs scale because the NAND gets smaller and faster, while I say that SSDs scale mostly because of controller innovations, not because of faster NAND chips.

Hence my argument that with a new controller chip but the same 50nm NAND on Intel G1 SSDs, you could build an incredibly fast SSD. Thus, the key factor in SSD performance is the controller, not the physical NAND.

That's all because of fundamental differences: HDDs are serial devices; they can only really do one thing at a time, and thus the controller isn't terribly important. You won't find huge performance differences between a Samsung or Seagate drive regarding firmware, simply because there is not much theoretical headroom to allow for such a performance increase.

However, SSDs don't have this problem and can let multiple flash memory chips work in parallel. They already do this; the Intel has 10 channels. This works pretty much like RAID0 does, and it's why the controller is so much more important than the actual speed of the NAND.

Now, the controller likely is a bottleneck too; it may have limited processing power and was designed to suit the bandwidth limitations of its interface. If we assume a 'perfect' controller with the same NAND as Intel SSDs, it's not hard to imagine using more channels, so we get something like:

5 channels: ~200MB/s (X25-V 40GB)
10 channels: 400MB/s (X25-M in reality ~260MB/s)
20 channels: 800MB/s
40 channels: 1.6GB/s
80 channels: 3.2GB/s

That's all with the same speeds and same technology Intel already uses, just more channels. It would need firmware tweaks as well and would likely be a massive controller. But I just wanted to make a point here: it's the controller that is key to performance, less so the physical NAND.

Not everybody can do this, of course. But Intel can/could have. Just like they can wipe out AMD, but strangely enough that would not be in their interests.

One big problem with your argument: That requires you to break down memory chips to feed each controller. There's really no point in scaling a controller past 10 channels when you're only adding 10 memory chips at most. If you're special-ordering memory chips at smaller sizes just to scale with more channels... well, let's just say economy of scale (from manufacturing smaller dies for memory, larger dies for controllers, and assembling a low-volume end product) puts those SSDs squarely into economically infeasible, if not physically impossible, territory; even controllers don't scale infinitely.
Let's not forget that after a certain size, your controller will actually start slowing down in performance because it simply can't handle that many data channels at the same speed per channel. After all, the controller itself is also an IC. Data channels are physical layers that take up space, not to mention the additional logic necessary to negotiate between them. Long story short, logic required, and by extension physical space required, scales closer to exponentially than linearly in relation to data width. 4X the number of channels results in logic (die space) greater than 4x the logic (and die space) of a single channel.
Last I checked, the 2.5" form factor is not infinite volume. There are limits to the size of your logic chips, the size of your memory chips, the size of your other miscellanea, and the space between them. Not to mention certain parts (like mount holes and sata interface) are fixed and immutable.
Now, if you throw out the form factor, that's a different situation. However, you are still limited by the size of the chips themselves. You may be able to add 100 channel solutions but I bet not only would the result be about the size of several hard drives put together, the performance won't be anywhere near 10x better than the 10 channel Intel controller due to limitations from the sheer size of the controller die.
Unless you want to RAID the controllers themselves, in which case, you run into physical limitations with how fast the controller itself can run. Just because you can RAID 100 15k SAS drives on a single controller doesn't mean the controller can handle a 100% transfer load from every drive.

The point is SSD's do share some similarities with RAID 0 arrays, only with memory chips instead of entire hard drives. If you're familiar with RAID, you know the limitations and, unsurprisingly, SSD's don't seem to be breaking them, yet. As lithography improves and we get to shove more transistors onto the same physical area, you'll see your 20-40 channel controllers, but not before it's both physically possible and economically feasible.

We've already hit the limit for controller size. Any larger and we're looking at a combination of slower performance and higher price up until the controller starts melting itself and can't run at the same clock. Before we run into a physical die size that breaks the 2.5" form factor, we run out of room to physically add memory chips. Therefore, there's really no reason to add more data channels at this point and a lot of incentive not to until memory chips get smaller. Hence, the artificial controller limitation you cite is actually a physical memory limitation.
Aside from more memory channels, the other speed up comes from increasing the memory itself. That is limited essentially by lithography, both due to decreasing feature size and more space for logic. It's usually just measured by density. Once you've hit the limit on data channels for your drive controller (and we've already hit it for this generation) your only recourse is faster memory (if the controller can run faster to keep up).
On that note, we already hit the wall a long time ago for RAM and figured out how to get around it. DDR was the solution, essentially increasing data channels in the memory itself. DDR2 doubled it, DDR3 doubled that, and DDR4 will double DDR3. If you can't get faster memory through increasing density, your only recourse is to make the output logic for the memory itself faster. Your access latencies take a hit (extra logic) but your sustained transfer (down to a certain chunk size depending on output logic) will increase. Since the last 30 years of software development has been aimed at minimizing latency, it's usually a cost-worthy trade-off.
 

Brento73

Junior Member
Dec 24, 2003
5
0
0
www.brentozar.com
Can you benchmark it with CrystalDiskMark and AS SSD perhaps? Please set the test size to at least 1GB - preferably 8GB.

Sure, you bet. Here are the CrystalDiskMark results with just one drive, using a 64KB allocation unit size on NTFS (that may not be the best size for CrystalDiskMark - I don't use that tool for benchmarking):

[Image: FusionIO-1drive-1.png]


Here are the results of a software RAID 0:

[Image: FusionIO-2drives-1-1.png]


Normally I wouldn't quote RAID 0 results, but it makes for an interesting use case with Microsoft SQL Server's TempDB. We're investigating using these for a quick performance gain on that, and it solves a lot of problems for database administrators.

Update - I should add that this isn't a server with a Nehalem chipset, and I've seen faster throughput rates with those. This is a Dell PowerEdge 1950 with a pair of E5405's.
 

sub.mesa

Senior member
Feb 16, 2010
611
0
0
Interesting post, Sahakiel! Let me share my thoughts..

One big problem with your argument: That requires you to break down memory chips to feed each controller. There's really no point in scaling a controller past 10 channels when you're only adding 10 memory chips at most.
Indeed! So that is why SSD performance should scale with its size. Thus a future 1TB SSD should be many times faster than a smaller 160GB SSD of the same brand family.

Currently, we only see this with Intel's X25-V, which is essentially half the channels of the original X25-M. But I could imagine that 'scale performance with size' would be a logical thing for SSDs, especially once the controller market matures a bit.

Let me portray my vision of future SSDs:

Stacked flash chips with controller underneath:
http://www.dailytech.com/article.aspx?newsid=1761

Imagine a controller sitting at the bottom with 8 NAND chips on top. That makes an 8-channel controller and a nice size like 8*64Gb = 64GiB. Very good use of board real estate - the controller requires no extra space - and high theoretical speeds due to the NAND and controller being so closely integrated.

Simple SSDs would have only one NAND stack and could be plugged directly onto a SATA port on the motherboard. They would still have a decent size (64GB/128GB) and a decent controller (unlike current versions that plug directly into a PATA/SATA port).

More expensive / higher-capacity SSDs would come in either 1.8" or 2.5" form factors, with some exceptional beasts at 3.5" - thus a variety of form factors for different uses. A 2.5" could have 2 to 8 NAND stacks, while a 3.5" could have something like 16 to 32 NAND stacks, all connected by a 'master controller', though the individual NAND controllers would do most of the dirty work. The master controller multiplexes all controllers into a single SAS or SATA/1200 port, and might share architecture with high-end PCI-express 3.0 SSDs.

This multiple-controller configuration might also relieve the processing load on the individual controllers, as it moves some of the logic off the controllers and integrates it into a separate chip that can be passively cooled by a small heatsink. The separate master chip would handle write-back and the mapping table, and would use only SRAM buffers for low-latency I/O and for storing its mapping table. Not sure if that would be feasible, as Intel opted to use a DRAM chip for the mapping table instead of more SRAM, which is understandable.

Power could also be conserved by switching individual chips off. This would work especially well to lower power consumption on the 3.5" 'monsters' with a lot of NAND chips.

Concluding my future fantasy, I think there's a LOT of performance headroom that current generations are not utilizing. With the technology available today, we could build much better SSDs rated at much higher speeds.

An alternative would be software NAND controllers. Then you buy 'dumb' NAND memory that is essentially controller-less but allows a software driver to implement the NAND controller's functions. This would add some latency, but it would also allow a lot of CPU power to be put to use. I've read that such a driver is in progress, but it will likely be a difficult task with many uncertainties and potential hardware issues.

The point is SSD's do share some similarities with RAID 0 arrays, only with memory chips instead of entire hard drives. If you're familiar with RAID, you know the limitations and, unsurprisingly, SSD's don't seem to be breaking them, yet.
I do think I have a decent understanding of RAID0 performance characteristics, but you weren't very specific; could you elaborate on this further?

Hence, the artificial controller limitation you cite is actually a physical memory limitation.
Can you explain that? By memory limitation, do you mean the NAND or the DRAM chip? In this context, it is interesting to note two things:

1) Intel doesn't use the DRAM for write-back; it has an internal 256K SRAM buffer cache to do that job. The DRAM is only used for HPA mapping tables.
2) Intel's X25-M G1 has a 133MHz SDRAM chip; the X25-M G2 has a 100MHz SDRAM chip, but its performance does not appear to suffer as a result of the slower DRAM speed.

Internal SRAM buffers would allow for the extremely low latencies, for both reads and writes, that Intel achieves. If all the data had to run via one SDRAM chip it would indeed be slow, as that memory is also used for the controller's own functioning - though, as stated, in Intel's case only for doing LBA lookups, essentially looking up where a data block was actually stored.
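For what it's worth, a toy illustration of that mapping-table role in Python - the controller conceptually keeps a lookup table from logical block addresses to wherever it last wrote the data. This is entirely schematic, not Intel's actual data structure:

# Toy flash translation layer: the table the DRAM/SRAM holds is
# conceptually just LBA -> physical location. Writes go to a fresh
# spot and update the table; reads consult it. Purely schematic.
from itertools import count

class ToyFTL:
    def __init__(self, channels: int = 10):
        self.channels = channels
        self.mapping = {}          # LBA -> ((channel, page), data)
        self._next = count()       # naive "next free page" allocator

    def write(self, lba: int, data: bytes) -> None:
        seq = next(self._next)
        location = (seq % self.channels, seq // self.channels)
        self.mapping[lba] = (location, data)    # remapped on every write

    def read(self, lba: int) -> bytes:
        location, data = self.mapping[lba]      # the "LBA lookup" step
        return data

ftl = ToyFTL()
ftl.write(42, b"hello")
ftl.write(42, b"hello again")                   # rewrite lands somewhere new
print(ftl.read(42), ftl.mapping[42][0])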

If you can't get faster memory through increasing density, your only recourse is to make the output logic for the memory itself faster. Your access latencies take a hit (extra logic) but your sustained transfer (down to a certain chunk size depending on output logic) will increase.
If this is meant as an analogy for RAID0, then may I add that parallel I/O (RAID0/interleaving/striping) not only accelerates sequential workloads but also random I/O workloads, i.e. those with a low number of contiguous I/O requests. This applies to both SSDs and HDDs, though HDDs have latency handicaps that RAID0 cannot circumvent, so they revert to single-disk performance or gain only limited benefit from the striping.

If RAID0 is properly implemented, those limitations are very minor and RAID0 can substantially increase your I/O performance for both sequential and random workloads. Given enough queued I/Os, scaling should be nearly 100 percent, i.e. linear, until it hits a bottleneck of interface latency/bandwidth or CPU/RAM. Most intelligent RAID5 drivers are memory bottlenecked, such as geom_raid5 in FreeNAS: the parity calculations are only a fraction of the work next to all the memory copies from splitting/combining I/O requests, which is needed for RAID5 to write fast.
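To make the splitting/combining concrete, here's a small Python sketch of how a RAID0 driver might carve one logical request into per-member chunks; the 128KiB stripe size and the disk count are arbitrary placeholders, not any particular driver's defaults:

# Splitting one logical request into per-member chunks for a RAID0
# layout: stripe units rotate across disks, so a large request touches
# every member while a small one touches only one. Values are placeholders.
STRIPE_SIZE = 128 * 1024   # 128KiB stripe unit

def split_request(offset: int, length: int, disks: int):
    """Yield (disk, disk_offset, chunk_length) pieces of a logical request."""
    end = offset + length
    while offset < end:
        stripe = offset // STRIPE_SIZE
        within = offset % STRIPE_SIZE
        chunk = min(STRIPE_SIZE - within, end - offset)
        disk = stripe % disks
        disk_offset = (stripe // disks) * STRIPE_SIZE + within
        yield disk, disk_offset, chunk
        offset += chunk

# A 1MiB sequential read across a 4-disk stripe hits all members...
print(list(split_request(0, 1024 * 1024, disks=4)))
# ...while a 4KiB random read stays on a single member.
print(list(split_request(300 * 1024, 4096, disks=4)))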

Unfortunately, I haven't seen a perfect RAID driver yet that exploits all or most of the theoretical potential. RAID0 drivers do best in this regard. RAID1 is terrible, since most implementations do not benefit from an additional disk capable of reading different data than the primary disk. Windows onboard RAID drivers aren't much better, though Intel's is the only Windows software RAID5 implementation that uses RAM write-back, and is thus capable of better-than-single-disk write speeds.
 

sub.mesa

Senior member
Feb 16, 2010
611
0
0
Sure, you bet. Here are the CrystalDiskMark results with just one drive, using a 64KB allocation unit size on NTFS (that may not be the best size for CrystalDiskMark - I don't use that tool for benchmarking):
Nice scores! Though the random I/O doesn't scale well at all.

Update - I should add that this isn't a server with a Nehalem chipset, and I've seen faster throughput rates with those. This is a Dell PowerEdge 1950 with a pair of E5405's.
I don't know that CPU - but multiqueue I/O does tend to bottleneck your CPU. On Windows it will only use one core, so at 25% cpu utilization on a quadcore you could be CPU-bottlenecked on the QD32/64 benchmarks; the sequential benchmarks should not be CPU bottlenecked.

Intel/AMD with 'turbo' would be beneficial here, as it allows a single thread to run at a higher frequency while the other cores remain unused. On Linux/FreeBSD this should not be a problem, as the kernel stuff and FreeBSD's geom stack are highly threaded. You might want to do some benchmarks on those platforms, if you feel up to it. If you'd rather just put this into real use, then I understand. :)
 

Brento73

Junior Member
Dec 24, 2003
5
0
0
www.brentozar.com
I don't know that CPU - but multiqueue I/O does tend to bottleneck your CPU. On Windows it will only use one core, so at 25% cpu utilization on a quadcore you could be CPU-bottlenecked on the QD32/64 benchmarks; the sequential benchmarks should not be CPU bottlenecked.

It's not CPU per se, but the new front side bus that helps with more throughput, as I understand it. CPU never hits 100% on these tests, even when looking at each core.

On Linux/FreeBSD this should not be a problem, as the kernel stuff and FreeBSD's geom stack are highly threaded. You might want to do some benchmarks on those platforms, if you feel up to it. If you'd rather just put this into real use, then I understand. :)

Heh, yeah, I don't have any use for Linux/FreeBSD since it doesn't run Microsoft SQL Server, and that's the database I specialize in. These IO numbers are more than fast enough for me for now, thank goodness.
 

Mark R

Diamond Member
Oct 9, 1999
8,513
16
81
RAID0 is fun, but only if you apply it intelligently, like SSDs internally already do. The X25-V could be considered a 5-disk RAID0 and the X25-M a 10-disk RAID0. Indilinx uses 4 channels and SandForce 8, if I remember correctly.

That's pretty much right.

Although SandForce is a bit flexible. Some of the earlier SandForce SSDs were based on the enterprise reference design - and probably used SandForce's redundant data striping algorithm - effectively making an 8-channel RAID-5 (protecting your data if a flash chip failed or the data got corrupted for some reason). This is probably part of the reason for the high apparent over-provisioning of SandForce-based drives.

It's also a feature that is easily disabled for an immediate 12.5% boost in drive capacity. So I suspect it's disappeared in the later consumer models (which have a much greater available data capacity for the same number of flash chips).
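The 12.5% follows directly from the channel count - one channel's worth of every stripe holding redundancy out of eight. A sketch, assuming the 8-channel RAID-5-style layout described above and an arbitrary raw capacity:

# Parity overhead of an n-channel RAID-5-style layout: one channel's
# share of every stripe holds redundancy, i.e. 1/n of raw capacity.
# The 8 channels and the raw capacity are assumptions for illustration.
def parity_overhead(channels: int) -> float:
    return 1.0 / channels

raw_gb = 128                               # arbitrary example raw flash
overhead = parity_overhead(8)
print(f"overhead: {overhead:.1%} of raw flash")                 # 12.5%
print(f"usable with parity striping: {raw_gb * (1 - overhead):.0f} GB")
print(f"usable with it disabled:     {raw_gb} GB")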
 

LokutusofBorg

Golden Member
Mar 20, 2001
1,065
0
76
Just pointing out that in a server, you can hot swap PCI-E cards. It's built into the specification, though it requires proper implementation. For the right price, every single part in a server can be hot swapped.
Nice, I wasn't aware of this. I'll look into this to help bolster my push for PCIe SSDs at work.

Do you work for IMFT by chance? I see SLC on your profile (I live in Sandy) and your replies sound like you're in the industry.