Samsung 840 dead.. What now?

CoronaX

Junior Member
Mar 12, 2013
3
0
0
Hi,

A little over a month ago, I took my first step into the SSD world and ordered myself 2x Samsung 840 128GB (not the PRO). Despite being a rookie with SSDs, I've got plenty of experience with HDDs, from age-old 50MB PATA drives to 10K Raptors running in RAID0 arrays.

Originally, I wanted something tried and tested, and had my sights on the 830 thanks to its great reviews and feedback on reliability - however, by the time I decided to buy, they had all been pulled from the market. I could still get one through other sources, but the price was now 30-40% higher than the 840!

The first thing I did when I received them was to upgrade their firmware via Magician and set them up in RAID0.

Now, one month later, one of them has died, and so my RAID0 array is gone (I did take daily backups - so no important data lost).

What happens is that when I turn on my computer, it registers one of the drives as 0000000000SAMS (instead of its long, cryptic serial number), with only 1GB capacity. I've tried different cables and a different controller, all to no avail - the disk is dead. Samsung Magician doesn't even see it.

My question is: what now? On the one hand, I can just RMA the faulty drive; however, since they are both (most likely) from the same batch, I fear this might happen to the other drive too.
Having Googled around a bit, it seems this problem has happened elsewhere as well - AnandTech had several of them die while copying large amounts of small-ish files (which is exactly what I was doing when mine died).

Therefore, I think it might be better to just cancel the purchase, return both disks, and go for something that's known to have decent reliability (I've read enough horror stories about the new, cheaper memory type in the 840s to steer away - at least in my opinion!).

My webshop here in Spain offers the following other options for a similar price:
Crucial M4 128GB
Crucial V4 128GB
Kingston V300 120GB
SanDisk Extreme 120GB
Corsair Neutron 120GB
OCZ Vertex4 128GB
Corsair Force GX 128GB

Now, my biggest problem is that whatever drive or brand I look up, I see horror stories everywhere.

I was therefore hoping that someone on this site could give me some advice.
Keep in mind that reliability is much more important to me than performance/speed (I'll run them in RAID0, so they'll be fast enough either way).

Sorry for the lengthy post, but I wanted to get all the details down on paper.

Am hoping for some advice!! :)
 

blastingcap

Diamond Member
Sep 16, 2010
6,654
5
76
Sorry to hear you lost a drive, but at least you backed up!

Avoid the Crucial V4, as it's barely any cheaper but a lot slower. Also avoid SanDisk, which has struggled with its firmware - e.g., they still didn't have TRIM working last I heard.

My recommendation from that list would be the Crucial M4. It's not the fastest but is still plenty fast enough that you won't notice the difference in 99% of use cases, and more importantly, it's very reliable.
 

Hellhammer

AnandTech Emeritus
Apr 25, 2011
701
4
81
Have you tried secure erasing the broken drive? I've seen some issues with the 840/Pro when running them in RAID 0 mode; other than that I have yet to see a single failure.
 

CoronaX

Junior Member
Mar 12, 2013
3
0
0
Have you tried secure erasing the broken drive? I've seen some issues with the 840/Pro when running them in RAID 0 mode; other than that I have yet to see a single failure.

The problem is that Samsung Magician doesn't even see the drive - nor does it show up in Windows. In the RAID-tool (Intel controller), it shows up as 0,9GB, and in BIOS (under Storage management), it shows up as 1GB - but Windows still doesn't see it.

Is there any other way of secure erasing?
Also, is it generally a good idea to secure erase an SSD before you use it for the first time?
 

Dari

Lifer
Oct 25, 2002
17,134
38
91
Have you tried secure erasing the broken drive? I've seen some issues with the 840/Pro when running them in RAID 0 mode; other than that I have yet to see a single failure.

So what SSDs are good for RAID 0?
 

Hellhammer

AnandTech Emeritus
Apr 25, 2011
701
4
81
The problem is that Samsung Magician doesn't even see the drive - nor does it show up in Windows. In the RAID-tool (Intel controller), it shows up as 0,9GB, and in BIOS (under Storage management), it shows up as 1GB - but Windows still doesn't see it.

Is there any other way of secure erasing?
Also, is it generally a good idea to secure erase an SSD before you use it for the first time?

You can try e.g. Parted Magic.
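If you end up doing it by hand from a Linux live environment instead, the whole thing boils down to a couple of hdparm commands. Here's a minimal sketch in Python (the device node and password are placeholders, and it assumes hdparm is installed and the drive isn't in a "frozen" security state - and obviously it wipes the whole drive):

    # Minimal sketch, assuming a Linux live environment (e.g. the one Parted Magic
    # ships) with hdparm installed and the SSD visible as /dev/sdX (placeholder!).
    # WARNING: ATA Secure Erase wipes the entire drive.
    import subprocess

    dev = "/dev/sdX"   # placeholder device node -- double-check the target
    pw = "erase"       # temporary ATA password; the erase clears it again

    # The drive must not be "frozen"; check the Security section of the identify data.
    print(subprocess.run(["hdparm", "-I", dev], capture_output=True, text=True).stdout)

    # Set a temporary user password, then issue the SECURITY ERASE UNIT command.
    subprocess.run(["hdparm", "--user-master", "u", "--security-set-pass", pw, dev], check=True)
    subprocess.run(["hdparm", "--user-master", "u", "--security-erase", pw, dev], check=True)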

So what SSDs are good for RAID 0?

Generally I don't recommend RAID 0 unless you need more than 512GB of space, but all SSDs should be fine for RAID. It's a rarer configuration, so there isn't as much reliability data, which makes it hard to recommend any specific drives.
 

sub.mesa

Senior member
Feb 16, 2010
611
0
0
There is enough data to recommend the Intel 320 if reliability is of true concern. But this SSD is rather old and expensive. The Crucial M500 will be the next SSD of choice because it sports the same protections as the Intel 320. That makes these two the only reliable consumer-grade SSDs. The M500 should appear on the market in the next couple of months.

I can think of no reason why RAID0 would be problematic. The only nasty thing is that RAID is implemented in an odd way on the Windows platform, making things like TRIM, SMART and secure erase counter-intuitive.

The advice to secure erase the Samsung SSD will probably work; it is not a failed SSD, just a corrupted one. If you look at the SMART log you can see 'Unexpected Power-Loss' is not zero. Every time this occurs, an unprotected SSD can die suddenly or become corrupt. Only the Intel 320 and Crucial M500 are fundamentally protected against this risk. All other SSDs are basically designed to fail and are thus inherently unreliable.
 

Hellhammer

AnandTech Emeritus
Apr 25, 2011
701
4
81
I can think of no reason why RAID0 would be problematic. The only nasty thing is that RAID is implemented in an odd way on the Windows platform, making things like TRIM, SMART and secure erase counter-intuitive.

It's not tested as much as non-RAID configurations, that's why. Only a fraction of consumers RAID their drives, so RAID testing is more limited (R&D funds/time are always limited, you can't test them forever).

Only the Intel 320 and Crucial M500 are fundamentally protected against this risk. All other SSDs are basically designed to fail and are thus inherently unreliable.

Do you have any actual proof that the M500 is more secure than other consumer-grade SSDs? You keep talking about it as if it's above everything else, even though there isn't a single review yet and all we have is a plain and boring press release (none of the CES articles I've seen have revealed anything more than the PR).

Yes, Micron has a technology called RAIN but fundamentally it's no different from SandForce's RAISE, which has been available in consumer-grade SSDs for years. Besides, it hasn't even been confirmed that the M500 will feature RAIN.
 

CoronaX

Junior Member
Mar 12, 2013
3
0
0
If you look at the SMART log you can see 'Unexpected Power-Loss' is not zero. Every time this occurs, an unprotected SSD can die suddenly or become corrupt.

Can you possibly expand on this?
To me, this sounds really frightening; can a simple power loss completely corrupt an SSD drive? I did experience two power losses on the same day; however, that was about a week before the drive failed (and the drives were used heavily during that week)...
 

Goros

Member
Dec 16, 2008
107
0
0
There are issues passing TRIM (and therefore triggering garbage collection) through RAID with certain combinations of chipsets and drivers. This drastically shortens the life of drives on those ports.

Try the secure erase first; you might just breathe life back into that drive. Then do some research and see which driver you need to pass TRIM to your drives in RAID (if you can).
 

sub.mesa

Senior member
Feb 16, 2010
611
0
0
It's not tested as much as non-RAID configurations
Well, I cannot see how an SSD would react differently if the host uses the drive in RAID. This should be something the SSD is totally unaware of. The only real compatibility issue I can think of would be hardware RAID, which may have different behavior regarding channel resets. But generally, ATA devices should work because both sides adhere to the ATA standard. Only if either the controller or the drive violates this standard would there be a fundamental problem.

Do you have any actual proof that the M500 is more secure than other consumer-grade SSDs?
The SSD is not even out. But the proof is in the fact that only the Intel 320 and Crucial M500 feature the power-safe capacitor protection that SSDs need so badly.

You are right that more SSDs utilise RAID4 or RAID5 bit correction instead of using the NAND chips in RAID0 mode without any redundancy. But I was talking about power-safe capacitors, sometimes referred to as 'supercapacitors'. These protect the power path so that the SSD no longer suffers unsafe shutdowns, which indeed can have dramatic results.

Can you possibly expand on this?
To me, this sounds really frightening; can a simple power loss completely corrupt an SSD drive?
Yes, all SSDs are inherently unsafe and can die or become corrupt every time they lose power without first receiving a STANDBY IMMEDIATE ATA command. The occurrences are counted in the SMART log under 'Unexpected Power-Loss'. You can have over a hundred unclean shutdowns without corruption, or the very first unclean shutdown may ruin your drive.
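If you want to check that counter yourself, smartmontools will show it. A rough sketch (assumes smartctl is installed; the attribute name and ID vary per vendor, so the substring matching is only a heuristic):

    # Rough sketch: print the unexpected-power-loss related SMART attributes via
    # smartctl. Assumes smartmontools is installed; attribute names and IDs vary
    # per vendor, so the substring matching below is only a heuristic.
    import subprocess

    dev = "/dev/sda"   # placeholder device node
    out = subprocess.run(["smartctl", "-A", dev], capture_output=True, text=True).stdout

    for line in out.splitlines():
        if any(k in line for k in ("Power_Loss", "Unexpect", "Power-Off_Retract")):
            print(line.strip())   # the RAW_VALUE column holds the count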

It may be hard to accept that these products are fundamentally designed to fail, but generally that is the truth. Why else would SSDs have such high failure rates, even exceeding those of mechanical hard drives? SSDs have all the potential to be as reliable as your CPU or other electronics. Yet somehow 90% of returned drives show corruption as a result of this inherently unsafe design.

Of course, everyone wants proof when I make statements like this. Well I would recommend you look at the following documents:

http://cseweb.ucsd.edu/users/swanson/papers/DAC2011PowerCut.pdf
https://www.usenix.org/conference/fast13/understanding-robustness-ssds-under-power-fault (contains video!)

Short story: all SSDs are inherently unsafe unless properly protected by an array of capacitors, which provides effective protection against a wide range of corruption. Due to the very vulnerable design of modern SSDs utilising write-remapping, it is very easy for them to become corrupt. The FTL or mapping tables in particular - which store the translation between logical LBAs and physical NAND addresses - are extremely fragile and susceptible to corruption from numerous sources.
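To make that last point concrete, here is a toy model of such a mapping table (illustration only, nothing like real firmware): after a simulated "power cut" the data is still sitting in the NAND, but once the table is gone the drive no longer knows where any logical block lives.

    # Toy model of an FTL mapping table -- illustration only, not real firmware.
    # Writes are remapped to fresh NAND pages; the table is the only record of
    # which physical page currently holds which logical block.
    nand = {}            # physical page number -> data
    ftl = {}             # logical LBA -> physical page number
    next_free_page = 0

    def write(lba, data):
        global next_free_page
        nand[next_free_page] = data    # data always lands on a fresh page
        ftl[lba] = next_free_page      # remember where it went
        next_free_page += 1

    def read(lba):
        return nand[ftl[lba]]          # impossible without the mapping table

    write(100, b"important data")
    print(read(100))                   # works while the table is intact

    ftl.clear()                        # simulate the table being lost on power-cut
    # The bytes are still sitting in 'nand', but read(100) now raises KeyError:
    # the drive has "forgotten" where everything is.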
 

piasabird

Lifer
Feb 6, 2002
17,168
60
91
http://www.anandtech.com/show/6824/inside-anandtech-2013-allssd-architecture

There was just an article on the front page of this site written by Anand. It is an interesting article. Sometimes I see articles like this and my mind extrapolates new ideas for hardware. So this idea popped into my head for a RAID card with mini-PCIe or mSATA slots, where you put the actual mini SSD cards into slots on the RAID card itself, so you don't need all the wires. I have got to quit coming up with these ideas. It bugs me that I have no way to implement them.

Just imagine a Mini-ITX motherboard with an x16 slot holding such a RAID card with a row of mSATA slots.
 
Last edited:

Dari

Lifer
Oct 25, 2002
17,134
38
91
So does anyone here have a recommendation for a large capacity SSD (512GB) that is good for RAID 0 or 10?
 

Ao1

Member
Apr 15, 2012
122
0
0
Short story: all SSDs are inherently unsafe unless properly protected by an array of capacitors, which provides effective protection against a wide range of corruption. Due to the very vulnerable design of modern SSDs utilising write-remapping, it is very easy for them to become corrupt. The FTL or mapping tables in particular - which store the translation between logical LBAs and physical NAND addresses - are extremely fragile and susceptible to corruption from numerous sources.

It's hard to understand why a premium-priced technology like an SSD does not use supercaps by default, especially as DRAM caches are now quite large (1 GB). The cost of the caps in relation to the cost of the SSD is quite small. Puzzling.
 

sub.mesa

Senior member
Feb 16, 2010
611
0
0
It is to prevent enterprise customers from utilising cheap consumer-grade SSDs. At least, that is how it usually works. The consumer products are fast but not too reliable; just reliable enough that they 'mostly' work. To enterprise users this is unacceptable, so they are forced to spend big money on enterprise storage.

Why else do you think we consumers are not allowed to use ECC memory? It only costs about 4 dollars extra for an extremely important protection. But they don't WANT to give it to you, because that would hurt sales of the more expensive products that do provide reliability. Who would buy the expensive parts if the cheap ones were just fine?

@Dari: the upcoming Crucial M500 is likely going to be a killer SSD product for 2013. The same controller has already been sold by OCZ for more than half a year (Vertex 4), but of course OCZ never waits for a product to mature. Rather, those users act as beta-testers for the final product: the Crucial M500. The same thing happened with its predecessor. Crucial is taking its time to perfect the firmware. Assuming they don't slip up, the M500 could be the hottest SSD of this year and maybe the next, depending on its price. Since it is the successor to the popular Crucial M4, I have high hopes for the M500. It has all the potential to be a fast, reliable, yet affordable SSD. Many people will want to buy a 1TB SSD for a reasonable price, and this is going to be the first 'real' product to offer that.
 

Hellhammer

AnandTech Emeritus
Apr 25, 2011
701
4
81
The same controller has already been sold by OCZ for more than half a year (Vertex 4), but of course OCZ never waits for a product to mature. Rather, those users act as beta-testers for the final product: the Crucial M500. The same thing happened with its predecessor. Crucial is taking its time to perfect the firmware.

The 88SS9187 has always been mature. Plextor's M5 Pro has used the same controller for over 6 months and there haven't been any issues. The Vertex 4 has also been more reliable than some of the other OCZ SSDs, so Micron/Crucial has simply spent more time on R&D and validation.

Well, I cannot see how an SSD would react differently if the host uses the drive in RAID. This should be something the SSD is totally unaware of. The only real compatibility issue I can think of would be hardware RAID, which may have different behavior regarding channel resets. But generally, ATA devices should work because both sides adhere to the ATA standard. Only if either the controller or the drive violates this standard would there be a fundamental problem.

In the case of pure software RAID, I agree. If you're using "firmware" RAID, however, it's more complicated. I have to admit I don't know too much about how firmware RAID really works, but there's definitely some difference from normal AHCI mode, as TRIM is not passed, which means LBA mapping is done somewhat differently.

In this case, it's unlikely that the hardware has failed; it sounds like the mapping table (or some other part of the firmware) has simply become corrupted.

The SSD is not even out. But the proof is in the fact that only the Intel 320 and Crucial M500 feature the power-safe capacitor protection that SSDs need so badly.

I take that back - I didn't see that the M500 uses capacitors (though we still have to hope they'll be present in the actual retail units). Sorry.

http://www.anandtech.com/show/6824/inside-anandtech-2013-allssd-architecture

There was just an article on the front page of this site written by Anand. It is an interesting article. Sometimes I see articles like this and my mind extrapolates new ideas for hardware. So this idea popped into my head for a RAID card with mini-PCIe or mSATA slots, where you put the actual mini SSD cards into slots on the RAID card itself, so you don't need all the wires. I have got to quit coming up with these ideas. It bugs me that I have no way to implement them.

Just imagine a Mini-ITX motherboard with an x16 slot holding such a RAID card with a row of mSATA slots.

E.g. Marvell DragonFly

http://www.anandtech.com/show/6534/...fly-family-of-enterprise-storage-accelerators
 

Ao1

Member
Apr 15, 2012
122
0
0
@ sub.mesa Good point, especially with the IOPS capability of client SSDs in an enterprise environment, although I do wonder whether RMA costs and reputation should factor into it. I did some power testing myself with a number of SSDs a while back and found that the file system cache was more of a problem in terms of data loss during a write operation. That of course only affects whatever was being written at the time, whereas power loss on the SSD can corrupt all data on the drive.
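For what it's worth, the host-side part of that is easy to demonstrate: a plain write normally lands in the OS page cache first, so anything not explicitly flushed can be lost on power failure even if the SSD behaves perfectly. A minimal sketch (standard library only, nothing drive-specific):

    # Minimal sketch: data has to be flushed and fsync'd before it is really
    # on the drive; until then it only lives in application/OS buffers.
    import os

    with open("important.txt", "w") as f:
        f.write("data I cannot afford to lose\n")
        f.flush()               # push Python's own buffer down to the OS
        os.fsync(f.fileno())    # ask the OS to push it out to the device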

Edit: I'm also looking forward to the Crucial M500 :)
 
Last edited:

Dari

Lifer
Oct 25, 2002
17,134
38
91
Thanks guys. I was going to use Samsung's 840s but definitely not after reading this thread since my machine will utilize RAID 10 and I have low tolerance for hardware dying prematurely.
 

sub.mesa

Senior member
Feb 16, 2010
611
0
0
Hellhammer: I can't comment on the Plextor products, but you may be right in this case. I believe Plextor has always had custom firmware enhancements? Either way, the Vertex 4 is an unfinished product with throttling firmware. The M500 will run on the same controller chip but with different firmware, which will make it a better product. The quality of the firmware is extremely important. OCZ itself commented on the fact that 9 out of 10 returned SSDs failed due to software issues, while only 10% failed due to hardware issues like defective NAND.

Warning: rant about OCZ ahead!
OCZ improving on its reliability is not a big challenge - some of their products have had up to 40% failure rates according to some sources. Generally, I dislike OCZ a lot. They have done their customers such a disservice. They operate in the enthusiast market. Enthusiasts are customers who have an admiration for technological advances but lack deep technical know-how. These are the kinds of people who overclock and are generally the first to try out new stuff, the kind of users who put lights in their cases and spend a lot of money on things the ordinary computer user never would. These enthusiast customers wanted a REALLY COOL SSD, but what they got from OCZ instead was nothing but lies and disappointment. Lies, because virtually all the specifications are misleading and meant to misinform their customers. OCZ has sold their 75MB/s SSDs as 500MB/s for years, ever since the Vertex 2 with SandForce. Even now that they have migrated to a Marvell controller, they use write throttling, which lets them quote higher specifications than customers will encounter if they use more than half of their SSD. Virtually all users want to use more than half the capacity of their SSD, so what OCZ puts on the box doesn't represent actual usage.

Even worse, OCZ has a history of almost criminal behaviour. They sold memory to enthusiasts who wanted to overclock and generally wanted quality memory. But what those buyers got was plain OEM memory that OCZ had bought and overclocked by reprogramming the SPD, then sold off as more expensive memory running at a higher frequency. This also came with higher voltages (e.g. from 1.5V to 1.65V). Customers could just as easily have bought the cheaper OEM memory and overclocked it themselves. A nice analogy is what has been happening in Europe lately: cheaper horsemeat being sold as more expensive pork and beef. I consider this mildly criminal behaviour.

Then there is the issue of the 25nm SSDs. OCZ was one of the first to profit from the migration from 34nm NAND to 25nm. Basically, Vertex 2 users with 25nm NAND were screwed. They bought a 60GB drive but got 55GB, and they got only 35MB/s of incompressible write performance instead of the normal 70MB/s, because only half the channels were utilised thanks to the larger, cheaper 64Gbit NAND dies. In other words: OCZ sold you cheaper stuff and pocketed the difference. The customer is always the loser when it comes to OCZ.

OCZ themselves know that they can hide behind the obscurity of technical nerd-talk which none of their customers understand. That is why they said their SSDs had "an incorrect IDEMA capacity" when referring to their 60GB SSDs having only 55GB of usable storage space. They simply use obscure technical language to hide the fact that they tried to rip off their very own customers.

Now I have done enough OCZ bashing for today. I must say it certainly feels good. I have done little else than explain misleading SSD specifications to people. In general, it simply pays off to lie to your customers. It is so easy to sell a lie, and it doesn't matter that people like me know better, because I can only reach a small percentage of potential customers.

That said, even with high RMA rates of 5-40%, the vast majority of OCZ users have no problems with their product. But you should expect nothing less from such an expensive product! The very least it can do is actually DO SOMETHING. Why else would you pay big bucks for a small drive?

Of course, I respect everyone's opinion. I have seen enough, however, and will never recommend OCZ to anyone. They are the hedge funds of the IT industry, and it would be a service to us customers if the company simply disappeared. Perhaps my dream will become reality: http://www.streetauthority.com/inve...tech-stock-cant-afford-any-more-losses-459898. But somehow, the cockroaches always find a way to survive, hiding under the next rock that is still wet underneath.

In the case of pure software RAID, I agree. If you're using "firmware" RAID, however, it's more complicated. I have to admit I don't know too much about how firmware RAID really works, but there's definitely some difference from normal AHCI mode, as TRIM is not passed, which means LBA mapping is done somewhat differently.
Well, I can tell you a lot about how it works; not much accurate information about 'onboard RAID' is publicly available.

Onboard RAID, fake RAID, driver RAID, firmware RAID, hybrid RAID... they all mean the same thing. It is NOT the same as software RAID, but it is extremely close.

Despite there being a lot of misinformation about it, it is actually quite simple: onboard RAID is simply a normal SATA controller paired with Windows-only drivers that implement RAID. All the work is done by the Windows drivers; the hardware is nothing more than a regular SATA controller that is unaware of any RAID functionality.

Difference between software RAID and onboard RAID
Well actually, there is one difference: the controller has an option ROM with firmware, just like hardware RAID has. This allows bootstrapping, which is otherwise impossible for software RAID - Windows cannot boot from a software RAID5, because the boot code is too simple. To overcome this limitation, the option ROM contains firmware that lets users create and delete RAID arrays in its own 'RAID BIOS'. This RAID BIOS writes 512 bytes of metadata to the last sector of each drive used in a RAID array. During the boot stage, the option ROM firmware reads this information and knows that the user created, for example, a 2-disk RAID0. It then registers this virtual RAID device with the system BIOS using interrupt 19 capture. The BIOS can do basic reads from this virtual RAID array, which allows Windows to boot.

During the boot phase of Windows, once the RAID driver is loaded, it becomes active and takes over the I/O path from the BIOS. From this point forward, all I/O goes through the RAID driver and not the BIOS. In other words: once this part of the boot phase is complete, onboard RAID becomes 100% software RAID provided by drivers alone. If the handover fails during this phase, you get the extremely common STOP 0x7B blue screen, meaning the RAID driver failed to attach and Windows cannot boot.

The important clue here is that onboard RAID does not gain any hardware acceleration from the controller, as many people believe. In fact, you could use Intel's RAID drivers on an AMD controller or vice versa. The driver of course prohibits this, but that is an artificial limitation; there were even bugs in Silicon Image FakeRAID drivers causing them to attach to other generic AHCI controllers like Intel or AMD. The driver also hides the physical disks; otherwise you would see both the RAID array and the individual disks, which could be problematic.

Now, if you boot into Linux or BSD, you will see the TRUE hardware: multiple disks instead of one RAID array. The Windows-only drivers only work on the Windows platform, so onboard RAID on non-Windows is nothing more than a normal SATA controller. On Linux/BSD you see the REAL hardware.

However... to make things more interesting, Linux and BSD have their own software RAID, and it has been written so that it can read the last sector of each disk containing the RAID configuration for virtually all kinds of RAID. Software/onboard/hardware RAID always uses the last sector on each disk to store RAID metadata like stripe size, disk order, RAID level, offset, etc. So what they do is read this information and apply their own software RAID engine to mimic the way the onboard RAID would be handled under Windows.
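If you are curious, you can peek at that metadata sector yourself from a Linux live system. A rough, read-only sketch (the device node is a placeholder; the on-disk layout differs per vendor, so this only hex-dumps the raw bytes - mdadm --examine is the proper tool for actually interpreting Intel/AMD metadata):

    # Rough sketch: dump the last 512-byte sector of a disk, where the post above
    # says onboard-RAID metadata is kept. Read-only, but needs root. The layout
    # differs per vendor, so this only shows raw bytes; use mdadm --examine to
    # interpret the actual RAID configuration.
    import os

    dev = "/dev/sdX"  # placeholder device node
    fd = os.open(dev, os.O_RDONLY)
    try:
        size = os.lseek(fd, 0, os.SEEK_END)     # device size in bytes
        os.lseek(fd, size - 512, os.SEEK_SET)   # seek to the final sector
        sector = os.read(fd, 512)
    finally:
        os.close(fd)

    print(sector.hex(" "))                      # plain hex dump of the 512 bytes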

As a result, virtually all onboard RAID is detected just fine in Linux/BSD. You can boot Ubuntu and it recognises your Intel or AMD RAID array without issue - not only the virtual RAID array, but also the physical disks, which are intentionally hidden on the Windows platform. Many people think this works because of acceleration from their 'RAID controller' on the motherboard. Haha! :D

Even funnier: if you buy an OCZ RevoDrive 1 or 2, you are buying a FakeRAID controller. It uses Silicon Image FakeRAID drivers under Windows, but under Linux and BSD this controller is nothing more than a SATA controller with 2 or 4 SSDs attached to it. Yes, you can actually see four 60GB SSDs on a 240GB RevoDrive! Version 3 of the RevoDrive is different because it uses a proper LSI SAS controller.

TRIM and FakeRAID
There is no reason TRIM would not work. If TRIM does not work, it is always the fault of software design, not of hardware limitations. For example, owners of the OCZ RevoDrive 1, 2 and 3 will NOT have TRIM support in Windows 7. However, if you boot into Linux or BSD you WILL have TRIM support, because the actual problem lies in the Windows-only drivers that prevent the use of TRIM.

Today, both AMD and Intel have updated their RAID drivers so that TRIM in simple RAID0 arrays should work. But there is no fundamental limitation here; RAID5 + TRIM is also possible and works just fine on non-Windows OSes. FreeBSD allows RAID-Z2 + TRIM, for example, which is comparable to RAID6.

The limitation in Windows lies in the fact that Windows implements TRIM only on ATA controllers, while the RAID drivers hook in as a SCSI device. This is an old-fashioned way of doing things: there is no real RAID support; the RAID drivers simply emulate a SCSI hard drive. This always appeared to work just fine, but now we can see the disadvantages of this design in the form of limited TRIM, SMART and APM support. I do not know about Windows 8, but it is possible they implemented UNMAP for SCSI, which is the equivalent of TRIM on ATA.
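As a quick sanity check on Linux, you can ask the block layer whether it will pass discards (TRIM/UNMAP) down to a given device at all; a small sketch, with the device name as a placeholder:

    # Small sketch: check whether the Linux block layer reports discard (TRIM/UNMAP)
    # support for a device; 0 means discard requests will not be passed down.
    from pathlib import Path

    dev = "sda"  # placeholder: device name as it appears under /sys/block
    max_discard = int((Path("/sys/block") / dev / "queue" / "discard_max_bytes").read_text())

    status = "discard supported" if max_discard > 0 else "no discard support"
    print(f"{dev}: discard_max_bytes={max_discard} ({status})")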
 

nOOky

Platinum Member
Aug 17, 2004
2,838
1,857
136
Just wondering what the OP needed the two drives in RAID 0 for? It's the configuration most prone to failure, and not the most reliable to begin with - not that two new drives should turn out 50% faulty almost right away. Lose one and you lose both, which is a pain.
If you want reliability over speed like you stated, you'd actually want to run the two drives in RAID 1, or buy another storage drive.
I'd say the Samsung 840 series has been good for as long as it has been out, at least compared to other drives. I'd either RMA it and use the second drive as another application drive, or get your money back.
 

Hellhammer

AnandTech Emeritus
Apr 25, 2011
701
4
81
Hellhammer: I can't comment on the Plextor products, but you may be right in this case. I believe Plextor has always had custom firmware enhancements?

Yeah, Plextor has custom firmware (but so does OCZ). Plextor is really on par with Intel/Micron/Samsung in terms of reliability, and this isn't just based on my own experience but also on what I've seen in forums and at e-tailers. They aren't that well known (yet), but they definitely have the weapons to battle the big guys.

Even now that they have migrated to a Marvell controller, they use write throttling, which lets them quote higher specifications than customers will encounter if they use more than half of their SSD. Virtually all users want to use more than half the capacity of their SSD, so what OCZ puts on the box doesn't represent actual usage.

The throttling is essentially unnoticeable. In some tests it looks like performance is halved, but that's only if you fill all the LBAs in one go (the drive needs a few minutes for reorganization). Here's some testing with the Vector:

Empty drive: [chart: Vector%20empty.png]

45% filled with sequential data, followed by 4KB random writes until the reorganization started; the drive was then given 10 minutes of idle time: [chart: Vector2%20log.png]


There's of course some difference, but that's already due to the fact that 50% of the LBAs have been filled. Definitely not as big a difference as some people think.

And thanks for the explanation of "fakeRAID" - learning new things is always fun :)
 
Last edited: