Question Why has my Samsung 970 Pro NVMe SSD Become Very Slow?

jagdpanther

Junior Member
Apr 24, 2007
7
0
66
My 1TB NVMe M.2 SSD has become VERY slow over the last month or two and I would really like to restore the original performance. Below I have listed some system information and benchmarks (sysbench random read/write tests followed by sysbench random read tests).

This Gentoo Linux system was built last fall and is based on a SuperMicro C9X299-PG300 motherboard (with the latest firmware). It has three drives:
System drive: Samsung 970 PRO NVMe M.2
Home and Data: Crucial MX500 SATA SSD
Backup: Seagate BarraCuda Pro SATA HDD

OS Kernel: 4.20.12-gentoo.

All partitions on both the NVMe M.2 SSD and the SATA SSD are formatted with the ext4 file system, and /sbin/fstrim is run on them daily. Each drive was originally partitioned with parted, and 10 GB on each SSD was left unformatted. Currently each partition on the NVMe is using less than 50% of its total space.

The daily run of fstrim gives the following type of output (/ and /data0 are on the 970 PRO NVMe; /data1 is on the Crucial SATA SSD):

2019-02-28_00:10
/: 13.5 GiB (14478188544 bytes) trimmed
/data0: 107.2 MiB (112410624 bytes) trimmed
/data1: 3.9 GiB (4141568000 bytes) trimmed
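The trim job itself is just a small daily cron script along these lines (a sketch; the log path here is illustrative):

#!/bin/sh
# Discard unused blocks on each mounted SSD filesystem and log the result.
{
    date +%Y-%m-%d_%H:%M
    /sbin/fstrim -v /
    /sbin/fstrim -v /data0
    /sbin/fstrim -v /data1
} >> /var/log/fstrim.log 2>&1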

There are no obvious errors that I can see in /var/log/messages, and smartctl (part of smartmontools-6.6) does not report any errors.
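For reference, the smartctl checks were along these lines (assuming the NVMe controller shows up as /dev/nvme0):

smartctl -H /dev/nvme0    # overall health self-assessment
smartctl -a /dev/nvme0    # full SMART attributes and device log data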

If I could boot Windows on this system, I would run Samsung's Magician software. (The command-line version of Magician that does work on Linux does not support the 970 PRO.)

Here are the results of running some benchmarks. First, random read/write:
sysbench fileio --file-total-size=128G prepare
sysbench fileio --file-total-size=128G --file-test-mode=rndrw --time=120 --max-requests=0 run

on the Samsung 970 PRO NVMe:
File operations:
reads/s: 93.14
writes/s: 62.09
fsyncs/s: 199.69

on the Crucial SATA SSD:
File operations:
reads/s: 171.96
writes/s: 114.64
fsyncs/s: 367.84

on the Seagate SATA HDD:
File operations:
reads/s: 81.04
writes/s: 54.02
fsyncs/s: 173.62

For comparison, on a second system (a Dell with Dell's NVMe drive, also running Gentoo with the same kernel):
File operations:
reads/s: 1684.65
writes/s: 1123.10
fsyncs/s: 3594.39

So for random read/write, the Crucial SATA SSD is faster than the 970 PRO NVMe.

Now the random read test:
sysbench fileio --file-total-size=128G --time=120 --max-requests=0 --file-test-mode=rndrd run

on the Samsung 970 PRO NVMe:
File operations:
reads/s: 644.97
writes/s: 0.00
fsyncs/s: 0.00

on the Crucial SATA SSD:
File operations:
reads/s: 478.29
writes/s: 0.00
fsyncs/s: 0.00

on the Seagate SATA HDD:
File operations:
reads/s: 93.43
writes/s: 0.00
fsyncs/s: 0.00

For read-only, the NVMe is a little faster, but it should be much faster.


Any suggestions on getting the Samsung 970 PRO NVMe working fast again?
 

Billy Tallis

Senior member
Aug 4, 2015
293
146
116
You need to figure out what it is that you're actually measuring. I'm not familiar with sysbench, so I don't know what its fileio test is actually doing under the hood.

Your random read test results are very telling:

970 PRO: 644.97
MX500: 478.29
HDD: 93.43

Ignore the NVMe drive results and just look at the other two numbers. Your SATA SSD is way more than 5x faster than a hard drive at performing random reads. The fact that you're only measuring a 5x difference means that most of what you're measuring here is not disk access time; there has to be a lot of caching in system RAM for the hard drive to be scoring only 5x slower rather than 50x to 150x slower. When the underlying disk's speed is not the largest factor contributing to the score on that test, it is no longer surprising to see the NVMe drive only 35% faster overall.
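If sysbench can do direct I/O (I believe recent versions have a --file-extra-flags option), rerunning the test with the page cache bypassed would show what the drives themselves are doing; a sketch, assuming sysbench 1.0:

sysbench fileio --file-total-size=128G --file-test-mode=rndrd \
    --file-extra-flags=direct --time=120 --max-requests=0 run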
 

jagdpanther

Junior Member
Apr 24, 2007
7
0
66

Thanks. But I don't think it is a heat issue. The motherboard has a heat-sink that fits over the top of the NVMe SSD, but just to make sure, I have had a tiny 40mm fan blowing directly on the heat-sink since I built the system, and for the first several months the NVMe was "fast".
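For what it's worth, the drive temperature can be checked directly from the SMART data (again assuming the controller is /dev/nvme0):

smartctl -a /dev/nvme0 | grep -i temperature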
 

jagdpanther

Junior Member
Apr 24, 2007
7
0
66
Billy Tallis: Thank you for the reply.

Sysbench info: Sysbench_at_Github

The reason I chose a 128 GB total test size (sysbench fileio --file-total-size=128G prepare) was that the many temporary files it creates add up to 128 GB, which is twice the size of my RAM. This should significantly reduce the issue of OS caching. (I ran that "prepare" command in a directory on each of the three disks, i.e. the HDD, the SATA SSD, and the NVMe SSD.)
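To rule out caching even further, the page cache can be dropped explicitly (as root) before each run:

sync
echo 3 > /proc/sys/vm/drop_caches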

The reason I noticed this issue was that large applications (e.g. Firefox, Thunderbird and LibreOffice) take noticeably longer to load after a system reboot (i.e. no caching) than they used to. Once a program has been opened and is in the cache, it opens almost instantly when re-launched.
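A quick sanity check that bypasses sysbench and the file cache entirely is a raw direct read from the device itself (assuming the 970 PRO is /dev/nvme0n1):

dd if=/dev/nvme0n1 of=/dev/null bs=1M count=4096 iflag=direct

A healthy 970 PRO should report read throughput in the GB/s range here; anything far below that points at the drive or the PCIe link rather than the filesystem.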