Single SSD Samsung 840 EVO performing like RAID (Are these numbers right?)

Locobox

Junior Member
Apr 17, 2014
7
0
0
Hi everyone,
I own a Samsung SSD 840 EVO 500GB model.

Coming from an HDD, as you might expect, I'm very happy with this SSD.

I recently ran a speed/performance test on the SSD; these are the results:

CrystalDiskMark
READ 689.3MB/s
WRITE 1140 MB/s

(CrystalDiskMark screenshot)


I know this is high, because when I first got the SSD I tested it with the same program and the results were good, but not that high. If I remember correctly it was something like:

CrystalDiskMark
READ 500 MB/s
WRITE 450 MB/s

I'm not running any kind of RAID here, just a 4GB RAM disk.

A test with another program showed similar results:

AS SSD Benchmark
READ 964 MB/s
WRITE 1117 MB/s

(AS SSD Benchmark screenshot)

Are these results the product of a bug? Don't get me wrong, I'm not complaining; the EVO is really fast, but I don't think those numbers are correct.

What confuses me most is how dramatically "improved" the write speeds are!



Any help on this matter would be highly appreciated.




These are my computer specs:

MOBO: Asus P8Z77-M
CPU: Intel i7 3770K
RAM: Corsair Vengeance 8GB @ 1866MHz x4 (32GB total installed)
SSD: Samsung SSD 840 EVO 500GB (primary boot disk)
RDD: 4GB RAM disk, 1866MHz DDR3 (for swap file, Windows temp, browser cache, etc.)
HDD: Maxtor 250GB
VGA: MSI GTX780 TF2

Windows 7 Ultimate 64-bit + SP1
System is not overclocked in any way.
 

Locobox

Junior Member
Apr 17, 2014
7
0
0
I thought the SSD bench tool performed the test on the selected drive only, but given the results I'm starting to think that may not be the case.

Thanks both of you for the suggestions.

I'm going to test the SSD alone, without the RDD, to see if that changes the results.
 

mikeymikec

Lifer
May 19, 2011
19,324
12,754
136
Considering that SATA 6Gbps can theoretically do an absolute maximum of 750MB/sec (and in reality you won't get anywhere near that figure because much more data about the data you're transferring also needs to be sent), I'd say there's something wrong with those figures :)

I usually use ATTO to benchmark disks (my 256GB Samsung 840 PRO peaks at just over 500-560MB/sec). However, on a RAID1 system I once noticed an odd performance spike that made no sense: on a SATA 3Gbps system with two Samsung 840 PRO SSDs in RAID1, a single spike reckoned they were doing something crazy fast, easily in the SATA 6Gbps bracket, despite the rest of the readings being in the expected 3Gbps range.

The EVO has the RAPID feature, which needs enabling I think? Do you have it enabled? From a review I read, I understand that causes some odd performance spikes, but even those were within SATA 6Gbps IIRC (it has a bit of SLC in it for caching IIRC, to make up for the slower TLC).

You could do a test yourself - set up a large file to copy, use robocopy to do the transfer, and the output from the command will give you a summary at the end which gives you the average transfer speed.
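To put a number on that, here's a rough sketch of the same idea in Python (my own illustration, not robocopy itself): time a large sequential write and report the average throughput, similar to the summary robocopy prints at the end. The 64MB size and 4MB chunk size are arbitrary choices.

```python
import os
import tempfile
import time

def measure_write_speed(path, size_mb=64, chunk_mb=4):
    """Write size_mb of zeros to path and return the average MB/s."""
    chunk = b"\0" * (chunk_mb * 1024 * 1024)
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(size_mb // chunk_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())  # force the data out of the OS page cache
    elapsed = time.perf_counter() - start
    return size_mb / elapsed

with tempfile.NamedTemporaryFile(delete=False) as tmp:
    target = tmp.name
try:
    print(f"{measure_write_speed(target):.1f} MB/s")
finally:
    os.remove(target)
```

Note that a RAM-caching layer like RAPID can still intercept this, so for an honest device number the test file should be much larger than the cache.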
 
Last edited:

Locobox

Junior Member
Apr 17, 2014
7
0
0
@mikeymikec
I really don't know if the numbers are wrong or what. I'll look for ATTO to check the results, but I just opened Samsung Magician and its performance test also shows figures like those from CrystalDiskMark and AS SSD Benchmark.

I also tested the SSD on a laptop with no RDD, but realized that the laptop does not have SATA 3; on that machine the Samsung SSD saturated the SATA 2 port, peaking at about 270MB/s read / 250MB/s write.

RAPID is enabled on my PC.
 

Coup27

Platinum Member
Jul 17, 2010
2,140
3
81
RAPID is enabled on my PC.
There is your answer. RAPID is a feature of Samsung's SSD Magician and is a caching process which sits in the background. If you open Task Manager you will see "SamsungRapidApp" and "SamsungRapidSvc". The purpose of RAPID is to cache writes to RAM and write them to the SSD later when the system is idle, and to pre-cache the programs you use most frequently into RAM so they load quicker.
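As a rough mental model (my own toy sketch, not Samsung's actual implementation, which is proprietary), a write-back cache accepts writes into RAM immediately and flushes them to the slower device later:

```python
# Toy write-back cache: writes land in RAM instantly and are flushed
# to the (slow) backing store later. This only sketches the general
# idea; RAPID's real implementation is far more involved.
class WriteBackCache:
    def __init__(self, backing_store):
        self.backing = backing_store   # a dict standing in for the SSD
        self.dirty = {}                # pending writes held in RAM

    def write(self, key, data):
        self.dirty[key] = data         # returns immediately: "RAM speed"

    def read(self, key):
        # serve from RAM if the block is cached, else from the device
        return self.dirty.get(key, self.backing.get(key))

    def flush(self):
        # in a real cache this runs in the background when idle
        self.backing.update(self.dirty)
        self.dirty.clear()

ssd = {}
cache = WriteBackCache(ssd)
cache.write("block0", b"hello")
assert ssd == {}                   # nothing has hit the "SSD" yet
cache.flush()
assert ssd["block0"] == b"hello"   # flushed in the background
```

This is why a benchmark sees "impossible" write speeds: it measures the time for `write()` to return, not the time for the data to reach the drive.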

I have a 120GB Evo and my figures are as follows:

RAPID off: (CrystalDiskMark screenshot)


RAPID on: (CrystalDiskMark screenshot)
 
Last edited:

Fernando 1

Senior member
Jul 29, 2012
351
9
81
Yes, enabling the Magician "RAPID mode" pushes the synthetic benchmark results into nearly unbelievable territory.

Here are the recent results with my 512 GB Samsung 840 PRO SSD running in "RAPID mode":
(benchmark screenshot)


And this is what I got after having disabled the "RAPID" feature:
(benchmark screenshot)


The question is whether the user will actually notice this performance boost while working with the computer...
 

Locobox

Junior Member
Apr 17, 2014
7
0
0
I just tested the SSD with RAPID disabled and the speeds are the expected ones: 510MB/s read / 490MB/s write.

It looks like RAPID does increase the SSD's speed, at least in benchmarks.

Many thanks Coup27, Fernando 1 and everyone else who helped me with this.
 

Coup27

Platinum Member
Jul 17, 2010
2,140
3
81
No problem. However, understand that RAPID does not increase the speed of the actual SSD, but uses RAM to boost the speed of reads and writes.

Instead of writing to the SSD, it writes to RAM, and then the RAM is written to the SSD in the background, meaning the write completes quicker than it would if it were written directly to the SSD (with RAPID disabled).

This is all theory, however. Although RAPID makes the benchmarks look great, how much benefit there is in the real world is much harder to quantify. I personally cannot notice any significant difference with RAPID on or off; then again, at home I am not much of a power user these days.
 
Feb 25, 2011
16,964
1,597
126
It maxes out at 600MBps, not 750MBps.

This.

Mikey - 2 bits of every 10 in SATA3 are parity information. They're counted as data because they're being transferred, so the controller is, technically, transferring 6Gbps. (And Marketing likes big, round numbers.)

But only 4.8Gbps of what's being transferred is actually your data. The rest is overhead innate to the protocol. So 600MB/sec.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
It's not parity information. It's added bits to help keep the DC bias at approximately 0V. Patterns in the bits sent can otherwise allow DC to build up to significant amounts. For each set of bits sent, a longer set of encoded bits is used, to provide a long-term 0V bias. They are then decoded back from 10b to 8b at the other end. Those extra bits are purely for signaling and have nothing to do with correctness-checking. They primarily allow for the very fast serial transmission speeds all the cool new interfaces want to use.
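A quick illustration of the DC-bias idea (my own sketch, not a real 8b/10b codec): the running disparity of a bitstream is the count of 1s minus 0s, and an unencoded pattern can let it drift far from zero, while a balanced code word (five 1s and five 0s per 10 bits) keeps it near zero.

```python
def running_disparity(bits):
    """Ones minus zeros, accumulated over a bitstring like '1010'."""
    return sum(1 if b == "1" else -1 for b in bits)

# An unencoded run of 0xFF bytes drifts steadily away from 0V bias...
raw = "11111111" * 10
print(running_disparity(raw))       # 80: heavy positive drift

# ...while a stream of DC-balanced 10-bit words (hypothetical example
# word; real 8b/10b code words are chosen from lookup tables) stays flat.
balanced = "1010101010" * 8
print(running_disparity(balanced))  # 0: no net bias
```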
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
I agree. The most impressive number is always used, regardless of what it tells, like MT/s.
 

Locobox

Junior Member
Apr 17, 2014
7
0
0
One can still see marketing numbers in screen sizes, where instead of width and height you get a diagonal measurement; HDD capacity is another example that comes to mind.
 

h9826790

Member
Apr 19, 2014
139
0
41
Just wondering how much RAM the RAPID mode can use to boost write performance.

Let's say I want to copy a 20GB file to the SSD; I think only the first few seconds can benefit from RAPID mode.
 
Last edited:

Coup27

Platinum Member
Jul 17, 2010
2,140
3
81
Just wondering how much RAM the RAPID mode can use to boost write performance.

Let's say I want to copy a 20GB file to the SSD; I think only the first few seconds can benefit from RAPID mode.
Samsung's RAPID requirements ask for 2GB, so that gives an idea of how much it can use. But regardless of whose implementation it is, RAM caching is not designed for 20GB files. That would be entirely impractical and pointless. RAM caching is designed for small files. If you look at my CDM figures above, the caching system is much more effective on 512K, 4K and 4K QD32 than it is on sequential writes.
 

coercitiv

Diamond Member
Jan 24, 2014
6,898
15,310
136
No problem. However, understand that RAPID does not increase the speed of the actual SSD, but uses RAM to boost the speed of reads and writes.
Caching is only part of the story (the bigger part); it wouldn't explain the relatively high CPU utilization on its own.

Taken from an Anandtech article.
RAPID tries to instead focus on combining low queue depth writes into much larger bundles of data that can be written more like large transfers across many NAND die. To test this theory I ran our 4KB random write IOmeter test at a queue depth of 1 with RAPID enabled and disabled:

Write coalescing seems to work extremely well here. With RAPID enabled the system sees even better random write performance than it would at a queue depth of 32. Average latency drops although the max observed latency was definitely higher.
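The coalescing idea described in the article can be sketched like this (my own illustration, not Samsung's code): small queue-depth-1 writes accumulate in RAM and are submitted to the drive as one large transfer, which NAND handles much more efficiently.

```python
class WriteCoalescer:
    """Toy write coalescer: batch small writes into one large transfer."""

    def __init__(self, device_write, batch_bytes=1024 * 1024):
        self.device_write = device_write  # callback standing in for the SSD
        self.batch_bytes = batch_bytes    # flush threshold (arbitrary choice)
        self.pending = []
        self.pending_size = 0

    def write(self, buf):
        self.pending.append(buf)
        self.pending_size += len(buf)
        if self.pending_size >= self.batch_bytes:
            self.flush()

    def flush(self):
        if self.pending:
            # one big sequential-style write instead of many tiny ones
            self.device_write(b"".join(self.pending))
            self.pending.clear()
            self.pending_size = 0

transfers = []
co = WriteCoalescer(transfers.append, batch_bytes=16 * 4096)
for _ in range(64):            # sixty-four 4KB QD1 writes from the app...
    co.write(b"\0" * 4096)
co.flush()
print(len(transfers))          # ...reach the "device" as four 64KB writes
```

The trade-off the article notes follows directly from this shape: average latency drops because most writes return as soon as they land in the buffer, while the occasional flush produces a higher maximum observed latency.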
 

h9826790

Member
Apr 19, 2014
139
0
41
Samsung's RAPID requirements ask for 2GB so that's an idea of how much it can use but regardless whose implementation, RAM caching is not designed for 20GB files. It would be entirely impractical and pointless. RAM caching is designed for small files. If you look at my CDM figures above, the caching system is much more effective on 512k, 4k and 4k QD32 than it is on sequential writes.

Thanks for the info. So it seems this function is purely playing around with numbers.

For small files, who cares about that speed increment? The SSD is super fast for small files anyway.

For large files, this function is completely useless o_O
 

coercitiv

Diamond Member
Jan 24, 2014
6,898
15,310
136
For small files, who care that speed increment? The SSD is super fast for small file anyway.
On the contrary, writing and reading small files is the biggest problem modern storage needs to improve on.
 
Feb 25, 2011
16,964
1,597
126
On the contrary, writing and reading small files is the biggest problem modern storage needs to improve on.
Seconded. There are 107,245 files in my C:\Windows directory, averaging around 300kB each. (And most of them are probably a lot smaller than that.)

Small file access is how stuff gets done.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
On the contrary, writing and reading small files is the biggest problem modern storage needs to improve on.
True. But Windows will likely be stuck. Linux has options for removing its reliance on a dedicated journal (BTRFS and F2FS, off the top of my head), which is quite the bottleneck for small data writing, but I don't see much going on in MS-land.

While F2FS is not production-ready (the fsck needs a lot of work), for instance, it's already very fast for random writing to flash, much faster than other filesystems available for flash devices, and only a little slower on HDDs. Hopefully, within a year or two, it will be suitable for client use, with an fsck able to correct errors. Likewise, BTRFS is already competitive with ZFS for writing, and should get a bit faster over time as more SSD-specific work goes into it. As BTRFS is optimized, it should have a bit of an advantage for general use: its versioning B-trees, while creating more write overhead than a log, shouldn't create any inherent corner cases (i.e., very low performance should be correctable without any on-disk spec changes, or with only very minor backwards-compatible, and maybe forward-compatible, ones).

NVMe will help, but only by so much, and ReFS is too much designed for servers to fully replace NTFS.
 

coercitiv

Diamond Member
Jan 24, 2014
6,898
15,310
136
True. But, Windows will likely be stuck. Linux has options for removing its reliance on a dedicated journal (BTRFS and F2FS, off the top of my head), which is quite the bottleneck for small data writing, but I don't see much going on in MS-land.
This issue is platform-independent: it has less to do with the OS or file system and more to do with the disk handling random reads and writes ten times slower than sequential operations. There's still major room for improvement at the hardware level, even if software solutions can help alleviate the issue.