WD 8TB gets slower the more full it is

loope

Junior Member
Nov 7, 2009
19
10
81
Any idea what this is?


WD80EFZX-68UW8N0 83.H0A83
2TB of free space:
-----------------------------------------------------------------------
CrystalDiskMark 5.2.1 x64 (C) 2007-2017 hiyohiyo
-----------------------------------------------------------------------
* MB/s = 1,000,000 bytes/s [SATA/600 = 600,000,000 bytes/s]
* KB = 1000 bytes, KiB = 1024 bytes

Sequential Read (Q= 32,T= 1) : 131.964 MB/s
Sequential Write (Q= 32,T= 1) : 133.586 MB/s
Random Read 4KiB (Q= 32,T= 1) : 2.215 MB/s [ 540.8 IOPS]
Random Write 4KiB (Q= 32,T= 1) : 1.812 MB/s [ 442.4 IOPS]
Sequential Read (T= 1) : 131.491 MB/s
Sequential Write (T= 1) : 128.153 MB/s
Random Read 4KiB (Q= 1,T= 1) : 1.363 MB/s [ 332.8 IOPS]
Random Write 4KiB (Q= 1,T= 1) : 1.777 MB/s [ 433.8 IOPS]

Test : 1024 MiB [I: 72.7% (5415.1/7451.9 GiB)] (x5) [Interval=5 sec]
Date : 2017/06/03 19:05:07
OS : Windows 7 Ultimate SP1 [6.1 Build 7601] (x64)


210GB of free space:
-----------------------------------------------------------------------
CrystalDiskMark 5.2.1 x64 (C) 2007-2017 hiyohiyo
-----------------------------------------------------------------------
* MB/s = 1,000,000 bytes/s [SATA/600 = 600,000,000 bytes/s]
* KB = 1000 bytes, KiB = 1024 bytes

Sequential Read (Q= 32,T= 1) : 92.230 MB/s
Sequential Write (Q= 32,T= 1) : 94.299 MB/s
Random Read 4KiB (Q= 32,T= 1) : 2.132 MB/s [ 520.5 IOPS]
Random Write 4KiB (Q= 32,T= 1) : 1.784 MB/s [ 435.5 IOPS]
Sequential Read (T= 1) : 92.070 MB/s
Sequential Write (T= 1) : 93.529 MB/s
Random Read 4KiB (Q= 1,T= 1) : 1.319 MB/s [ 322.0 IOPS]
Random Write 4KiB (Q= 1,T= 1) : 1.731 MB/s [ 422.6 IOPS]

Test : 1024 MiB [I: 97.2% (7241.7/7451.9 GiB)] (x5) [Interval=5 sec]
Date : 2017/06/04 11:45:55
OS : Windows 7 Ultimate SP1 [6.1 Build 7601] (x64)


92MB/s? If I wiped it, I'm sure it would be 180MB/s.

HD tune, ATTO, some other disks

I'm also having some problems with the new 4TB Red; it seems it was at 140MB/s just a couple of days ago.
The SSD works fine at 500MB/s.

This is the same on the Marvell and Intel controllers of a GA-Z87X-UD5H board.

I only found some tests with filled-up drives on the UserBenchmark site that seem to show performance degradation correlated with the space left, but I didn't run it myself because you can't select only HDDs for testing.
 

VirtualLarry

No Lifer
Aug 25, 2001
56,571
10,206
126
Uhm, yeah? That's because CrystalDiskMark tests through the filesystem: it reads and writes files. It doesn't benchmark the physical device.

Use ATTO or HDTune. (I'm not responsible if you use a physical-device-mode write test on an HDD with pre-existing data on it, though.)
 

loope

Junior Member
Nov 7, 2009
19
10
81
HDTune and ATTO tests. Real-world copying also seems to reflect the CrystalDiskMark results: when I got the disk it was copying files at 160-170MB/s, then it dropped to 130MB/s, and now it's at 90MB/s.
 

Smoblikat

Diamond Member
Nov 19, 2011
5,184
107
106
Yes... that's how physical platters work. The more data written to the disk, the physically further away it is from the head's parked location, meaning it takes actual time for the head to move and read the data at the edge of the disk...
 

VirtualLarry

No Lifer
Aug 25, 2001
56,571
10,206
126
HDTune and ATTO tests. Real-world copying also seems to reflect the CrystalDiskMark results: when I got the disk it was copying files at 160-170MB/s, then it dropped to 130MB/s, and now it's at 90MB/s.

So, what's wrong? The HDTune graph in that picture looks fine. Min of 85MB/sec, max of 191MB/sec?

Edit: Seems like OP doesn't understand how hard drives work, and simple physics. Sorry, not going to bother to explain, other than to reiterate that HDDs fill from the outside of the platter inwards, and that they start out with fast transfer rates on the outer sectors and slow down towards the inside of the disk.
 
  • Like
Reactions: loope and Valantar

Valantar

Golden Member
Aug 26, 2014
1,792
508
136
OP: Look up how HDDs work, and how data positioning on the platter affects speeds. Long story short: the outer tracks of the platters move faster under the heads (a bigger radius at a fixed RPM will do that to you), so data can be written to and read from them faster. That's just how HDDs work. If you try to read any of the first files you wrote to the drive (i.e. the ones that are placed on the outer parts of the platters), you'll see the same speeds you started out with.
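The radius effect can be sketched with a quick back-of-the-envelope calculation. The radii below are illustrative guesses for a 3.5" platter, not WD's actual figures:

```python
import math

# At a fixed RPM, the linear velocity of the track under the head
# (and hence, to a first approximation with constant bit density along
# the track, the sustained transfer rate) scales with radius.
RPM = 5400                 # WD Red 8TB spindle speed
OUTER_RADIUS_MM = 46.0     # outermost data track (assumed)
INNER_RADIUS_MM = 22.0     # innermost data track (assumed)

def linear_speed_mm_s(radius_mm: float, rpm: float) -> float:
    """Linear velocity of the track under the head, in mm/s."""
    return 2 * math.pi * radius_mm * rpm / 60

outer = linear_speed_mm_s(OUTER_RADIUS_MM, RPM)
inner = linear_speed_mm_s(INNER_RADIUS_MM, RPM)

# The innermost tracks should run at roughly inner/outer of the
# outer-track throughput.
print(f"inner/outer speed ratio: {inner / outer:.2f}")  # ~0.48
```

A ratio of roughly one half under these assumed radii lines up with the 170MB/s to 90MB/s drop reported in the thread.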
 
  • Like
Reactions: loope

loope

Junior Member
Nov 7, 2009
19
10
81
It appears that is the cause, thanks guys.

So I tried to copy files that were copied to the disk first, and the read speed is 170+ MB/s, as it should be.

I had read before about how speed can differ depending on where data is located on the platters, but I didn't expect a 45-50% write/read decrease.

One would hope they would make performance a little more uniform, and reviewers could test the drop-off at certain amounts of data.
 

Smoblikat

Diamond Member
Nov 19, 2011
5,184
107
106
It appears that is the cause, thanks guys.

So I tried to copy files that were copied to the disk first, and the read speed is 170+ MB/s, as it should be.

I had read before about how speed can differ depending on where data is located on the platters, but I didn't expect a 45-50% write/read decrease.

One would hope they would make performance a little more uniform, and reviewers could test the drop-off at certain amounts of data.

They have, it's called an SSD...
 

Red Squirrel

No Lifer
May 24, 2003
70,148
13,565
126
www.anyf.ca
OP: Look up how HDDs work, and how data positioning on the platter affects speeds. Long story short: the outer tracks of the platters move faster under the heads (a bigger radius at a fixed RPM will do that to you), so data can be written to and read from them faster. That's just how HDDs work. If you try to read any of the first files you wrote to the drive (i.e. the ones that are placed on the outer parts of the platters), you'll see the same speeds you started out with.


I always figured it was the inner edge that was faster... but that actually makes more sense: the outer edge will have more data per track, as there is a larger circumference, so in a single revolution the head gets to process more data than it would on the inside. So do drives actually write from the outer edge to the inner edge? I always thought it was the other way around, like a CD.

As a side note, some drives are designed for performance while others are designed for power efficiency. If performance is what you want, make sure you get 7200 RPM drives and not ones that go to sleep. The ones that go to sleep are going to be slower and in some cases can even cause RAID drop-outs. With 8TB drives you pretty much want to be using RAID; that is a LOT of data to lose when one fails (even if you have backups, you still have to restore/reorganize it). With 8TB drives I would personally buy 4 and do RAID 10. Though if they use SMR, RAID is going to be pretty slow... SMR drives are only really good for backups or archiving, imo.
 

VirtualLarry

No Lifer
Aug 25, 2001
56,571
10,206
126
One would hope they would make performance a little more uniform
Uhh, NO, that's how HDDs work.

If you want consistent performance, either use a RAID array that far exceeds your necessary transfer rates, or use an SSD.

Edit: Sorry, didn't see the other responses, when I posted that.
 
  • Like
Reactions: Valantar

Smoblikat

Diamond Member
Nov 19, 2011
5,184
107
106
Not sure why this myth keeps getting perpetuated. SSDs slow down in some fashion as they fill up, too:

http://www.hardwarecanucks.com/foru...ws/74653-crucial-mx300-2tb-ssd-review-10.html

It's a fact, not a myth, that SSDs don't have mechanical spinning platters in them, which means performance is more consistent across the entire drive, which is exactly the point of the quote I was replying to.

EDIT - So I just did some testing, and apparently nothing makes sense.

525gb.jpg

512gb.jpg


There is 11GB of space left on the 525GB drive, and about 400GB left on the NVMe one, yet the 525GB one is as consistent as I'd expect an SSD to be. Would the OS being loaded on the NVMe one cause these weird spikes?
 

Red Squirrel

No Lifer
May 24, 2003
70,148
13,565
126
www.anyf.ca
Not sure why this myth keeps getting perpetuated. SSDs slow down in some fashion as they fill up, too:

http://www.hardwarecanucks.com/foru...ws/74653-crucial-mx300-2tb-ssd-review-10.html

They also degrade with write operations. SSDs are great for OS drives, where you don't expect all that many writes, but for mass data storage, spindle drives are still where it's at when you consider cost per TB and lifetime per I/O operation.

Some enterprise applications where money is no object will use SSDs, but they plan to switch out their entire infrastructure every couple of years anyway.
 

Valantar

Golden Member
Aug 26, 2014
1,792
508
136
Uhh, NO, that's how HDDs work.
Yep. It's basic physics. Unless they make drives that crank up the spindle speed when you get closer to the centre of the platters (which would be pretty silly - why not then just run the platters faster all the time?), this effect is unavoidable. Or, I suppose, they could have a smaller data cell size on the inner part of the platter. Which, again, makes no sense at all.
 

loope

Junior Member
Nov 7, 2009
19
10
81
They have, its called an SSD...

Believe me, if the money were there I would just run a server or NAS of 16TB SSDs.

I just can't remember any of my other drives going down to 50% in speed. Probably because not all of them have that drastic a drop. Maybe they could add a platter, or leave space on the platters and restrict some part of the inner tracks or something.

For the NVMe drive with the OS on it, you could check Task Manager -> Resource Monitor while you test to see if there's other activity on it. Firefox, for example, with a lot of tabs likes to write a bunch of data all the time until you configure it not to.


Not sure why this myth keeps getting perpetuated. SSDs slow down in some fashion as they fill up, too:

http://www.hardwarecanucks.com/foru...ws/74653-crucial-mx300-2tb-ssd-review-10.html

Nice, I think I was looking into something like that when purchasing the SanDisk SDSSDXP240G for the OS.
 

Smoblikat

Diamond Member
Nov 19, 2011
5,184
107
106
Believe me, if the money were there I would just run a server or NAS of 16TB SSDs.

I just can't remember any of my other drives going down to 50% in speed. Probably because not all of them have that drastic a drop. Maybe they could add a platter, or leave space on the platters and restrict some part of the inner tracks or something.

For the NVMe drive with the OS on it, you could check Task Manager -> Resource Monitor while you test to see if there's other activity on it. Firefox, for example, with a lot of tabs likes to write a bunch of data all the time until you configure it not to.

Nice, I think I was looking into something like that when purchasing the SanDisk SDSSDXP240G for the OS.

There is something that does exactly this, it's called short stroking. When you set up a RAID array (or initialize a disk), set the maximum size to something lower than its total capacity. I did this on a 4x VelociRaptor array I had years ago, and it actually had noticeable results.
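The trade-off behind short stroking can be sketched numerically. This assumes idealized zone-bit recording (capacity proportional to platter area, transfer rate proportional to radius) and uses illustrative 3.5" platter radii, not any manufacturer's figures:

```python
# Short-stroking sketch: only the outer tracks are used, so the
# worst-case transfer rate rises at the cost of usable capacity.
OUTER_MM = 46.0   # outermost data track radius (assumed)
INNER_MM = 22.0   # innermost data track radius (assumed)

def short_stroke(min_speed_fraction: float) -> float:
    """Fraction of total capacity kept if the slowest allowed track
    must still deliver min_speed_fraction of the outer-track rate."""
    cutoff = min_speed_fraction * OUTER_MM   # innermost radius we keep
    cutoff = max(cutoff, INNER_MM)           # can't cut below the hub
    total = OUTER_MM**2 - INNER_MM**2        # ~ total capacity (area)
    kept = OUTER_MM**2 - cutoff**2           # ~ capacity kept
    return kept / total

# Guaranteeing at least 75% of the outer-track speed:
print(f"capacity kept: {short_stroke(0.75):.0%}")
```

Under these assumed numbers, guaranteeing 75% of the peak speed keeps only a bit over half the capacity, which shows why the capacity cost of short stroking is substantial.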
 

Valantar

Golden Member
Aug 26, 2014
1,792
508
136
There is something that does exactly this, it's called short stroking. When you set up a RAID array (or initialize a disk), set the maximum size to something lower than its total capacity. I did this on a 4x VelociRaptor array I had years ago, and it actually had noticeable results.
Why not then simply partition the drive, one "high speed" partition on the first half or so of the drive, and one "low priority" partition on the second half? That way, at least you don't lose any capacity.

As for the idea itself, it's rather silly: why give up a significant amount of capacity for what amounts to a perceived (and only perceived, as in: not actually real) increase in performance? After all, the same parts of the drive will give you the same performance, the only difference being that you're voluntarily not using the slower parts of it. If a drive has a 2TB maximum capacity, but the manufacturer "seals off" the inner 400GB to increase performance, you still have a performance drop-off, albeit smaller, while being left with a 1.6TB drive for the same price. How is that a win, compared to buying a 2TB drive to begin with and simply splitting it into two partitions?
 

UsandThem

Elite Member
May 4, 2000
16,068
7,383
146
EDIT - So just did some testing, and apparently nothing makes sense.


512gb.jpg


There is 11GB of space left on the 525GB drive, and about 400GB left on the NVMe one, yet the 525GB one is as consistent as I'd expect an SSD to be. Would the OS being loaded on the NVMe one cause these weird spikes?

Having the OS on it will impact it some, and I could be wrong (won't be the first or last time), but that chart looks typical of a drive throttling due to heat.
 

jkauff

Senior member
Oct 4, 2012
583
13
81
I use a Seagate 8TB external drive to back up my Blu-ray discs. When free space fell under 0.5TB, write performance fell way off, to only 25% of normal speed.

Storage is so cheap now that I bought a Seagate 5TB for $89 on sale and off-loaded a couple of TB to it. Performance is back to normal.

BTW, I no longer buy WD external drives because of the hardware encryption issue. Also, by many reports, Seagate QC is much better these days. The only downside so far is that they're very noisy, but for storage drives that's a minor flaw.
 

Valantar

Golden Member
Aug 26, 2014
1,792
508
136
Having the OS on it will impact it some, and I could be wrong (won't be the first or last time), but that chart looks typical of a drive throttling due to heat.
Does it? Don't HDTune's graphs represent a constant scan from the "start" of the drive (left side) to the "end" of the drive on the right? As in: the left-hand edge represents the first operations, so if it's overheating it ought to get worse the further right on the graph you go. Or is HDTune's access pattern not as sequential as I thought?
 

UsandThem

Elite Member
May 4, 2000
16,068
7,383
146
Does it? Don't HDTune's graphs represent a constant scan from the "start" of the drive (left side) to the "end" of the drive on the right? As in: the left-hand edge represents the first operations, so if it's overheating it ought to get worse the further right on the graph you go. Or is HDTune's access pattern not as sequential as I thought?

Not too sure, honestly. I really don't use HD Tune (I use ATTO and CrystalDiskMark). Although, I don't think HD Tune would start at one side of the drive and go to the opposite side, since it is an NVMe drive (I think it would be more random with the NAND, and nonsequential).

I initially thought his Samsung PM951 was hitting its temp threshold and throttling to cool, but after reading up a bit more, there were others out there with similar readings on that drive. Also, it seems there are a few different versions of that drive, and the only way to tell them apart is by the full model number.
 

Valantar

Golden Member
Aug 26, 2014
1,792
508
136
Not too sure, honestly. I really don't use HD Tune (I use ATTO and CrystalDiskMark). Although, I don't think HD Tune would start at one side of the drive and go to the opposite side, since it is an NVMe drive (I think it would be more random with the NAND, and nonsequential).

I initially thought his Samsung PM951 was hitting its temp threshold and throttling to cool, but after reading up a bit more, there were others out there with similar readings on that drive. Also, it seems there are a few different versions of that drive, and the only way to tell them apart is by the full model number.
In principle I would agree, but if you look at how HDTune behaves while running (taking into consideration that it's a benchmark made for HDDs specifically, and not really meant for SSDs at all), I believe it does a "sequential" (as in: reads sectors in a linear, non-random pattern) scan from the outer to the inner part of the HDD platters. Of course an SSD will have mapped those sectors to spread-out physical areas for parallelism, but it's still a constant workload from left to right, and if temps played a part it should heat up as we move to the right on the graph.

I'm not used to seeing HDTune used on SSDs either, but I know it was used to diagnose 840 EVOs: since it scans the entire span of the drive, and the OS is typically on the "first" part of the SSD, it's useful for identifying read-speed degradation for data stored for long periods of time.
 

Elixer

Lifer
May 7, 2002
10,371
762
126
It's a fact, not a myth, that SSDs don't have mechanical spinning platters in them, which means performance is more consistent across the entire drive, which is exactly the point of the quote I was replying to.

EDIT - So I just did some testing, and apparently nothing makes sense.

525gb.jpg

512gb.jpg


There is 11GB of space left on the 525GB drive, and about 400GB left on the NVMe one, yet the 525GB one is as consistent as I'd expect an SSD to be. Would the OS being loaded on the NVMe one cause these weird spikes?
The problem here is that SSDs work differently than an HDD.
There is another translation layer involved that can't be seen by any utility and is internal to the SSD's logic. This is the same reason why defragmenting an SSD makes no sense at all.
Call it a glorified lookup table that maps the LBA block in question back for the OS.
It does this mainly for wear leveling.
Now, depending on the controller, it may wait a bit to pull down the correct location so it doesn't have to thrash the page boundaries, and it will try to cache it if it can; otherwise, the controller gets very busy and performance becomes erratic.
All in all, it is pretty complex.
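That lookup table can be sketched as a toy model. This is purely illustrative; a real flash translation layer manages pages, blocks, erase cycles, and garbage collection in firmware:

```python
# Toy flash-translation-layer: the OS sees stable logical block
# addresses (LBAs) while the controller remaps every write to a fresh
# physical page for wear leveling.  Purely illustrative.
class ToyFTL:
    def __init__(self, num_physical_pages: int):
        self.mapping = {}                      # LBA -> physical page
        self.free_pages = list(range(num_physical_pages))
        self.flash = {}                        # physical page -> data

    def write(self, lba: int, data: bytes) -> None:
        page = self.free_pages.pop(0)          # always use a fresh page
        old = self.mapping.get(lba)
        if old is not None:                    # stale page gets reclaimed
            del self.flash[old]
            self.free_pages.append(old)
        self.mapping[lba] = page
        self.flash[page] = data

    def read(self, lba: int) -> bytes:
        return self.flash[self.mapping[lba]]

ftl = ToyFTL(num_physical_pages=8)
ftl.write(0, b"hello")
ftl.write(0, b"world")   # same LBA, but stored in a new physical page
print(ftl.read(0), ftl.mapping[0])  # b'world' 1
```

Overwriting the same LBA lands on a different physical page each time, which is why what looks sequential (or fragmented) to the OS can be laid out completely differently inside the SSD.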
 
  • Like
Reactions: Valantar

Elixer

Lifer
May 7, 2002
10,371
762
126
Also not true, defragmenting SSDs can improve performance at the OS file system metadata level.
Hmm?
How do you figure that?
The OS (or any utility) doesn't know how the SSD actually stores the data.
What may seem to be random/sequential to the OS (or a defragmenter) can in fact be sequential/random to the SSD itself. There is no 1:1 mapping here.

NTFS metadata is cached in memory, and unless you are dealing with hundreds of TB of info, 99.9999% of people wouldn't notice any difference at all.
 
  • Like
Reactions: Valantar