
An unknown S.M.A.R.T. attribute of X25-M dropping

emoacht

Junior Member
Hi all,

I have been using an Intel X25-M 80GB for 6 months in a ThinkPad X61s and found an unknown S.M.A.R.T. attribute slowly dropping. So, I would like to hear of similar cases with other X25-Ms, if any, or any observations on this matter.

When I did the firmware update of my X25-M on April 19 (five months after I started using this drive), I checked its S.M.A.R.T. data with CrystalDiskInfo. The screenshot is posted below:
http://www.flickr.com/photos/35497088@N06/3559204207/

Then, when I checked S.M.A.R.T. again on May 19, just one month after the firmware update, I found the E9 (Vendor Specific) attribute had dropped from 97 to 96. The screenshot is below:
http://www.flickr.com/photos/35497088@N06/3559233623/

So, to see whether this unknown attribute would change further, I tried to intentionally wear out this drive by repeating a large number of write operations. Specifically, I repeatedly restored the whole drive with Acronis True Image, where the backup image contains around 160 thousand files that occupy around 48GB when restored.

After 30 restores, which account for around 1.4TB of writes, I found the E9 attribute had dropped from 96 to 95. In addition, the E1 attribute also dropped from 200 to 199. The screenshots from CrystalDiskInfo and HDD Health are below:
http://www.flickr.com/photos/35497088@N06/3559233699/
http://www.flickr.com/photos/35497088@N06/3560046010/

So, I wonder if this E9 (and E1) attribute has something to do with the two attributes that Anand referred to in his article (What Happens When Your SSD Fails?). If that is the case, the E9 attribute of other X25-Ms may have already dropped to the mid-90s or even the 80s (I think the default value is 99). What is the current value on your X25-M? Has anyone else witnessed such a drop?

Thanks.
 
Have you asked anyone at Intel through the regular support contact channels as to what specifically the SMART attributes mean?

I'd be very surprised if they refused to tell you. They don't usually make stuff a mystery, prolly just a matter of finding the correct, already public, technical specification document.

edit: after poking around the Intel site for a bit, the closest I could get to documentation on the SMART features for their SSDs is the following:

6.1.4 SMART Command Set
The Intel X18-M/X25-M SATA SSDs support the SMART command set, please refer to Intel® High Performance Solid State Drive S.M.A.R.T Features User Guide - Order Number 320520-002US.

Unfortunately I can't locate this document, or anything even closely resembling it by google search and searches of Intel's pages directly...so beats the heck out of me where we are supposed to find 320520-002US...
 
Idontcare, thank you for your quick response.

Actually, I have read the part you mentioned in the datasheet of the X25-M and have also tried, in vain, to find the document "Intel High Performance Solid State Drive S.M.A.R.T Features User Guide". That document may or may not be posted somewhere on Intel's FTP site; I have no clue about its location, though.

I had not asked Intel support about this matter because I was not sure whether they would answer such a technical question from an ordinary end user... Okay, as you suggested, I just made a request on the Intel web site for a way to obtain that document or any information on this matter. Well, since the name and document number are clearly mentioned in a publicly released document (the datasheet), I hope the document itself will be made public.

I will come back later when I receive any response from Intel. In the meantime, I welcome any information about an X25-M that shows an E9 value under 99.
 
mmm, well it gives a threshold of 10 and 0 for E8 and E9... so that would be the point at which the drive needs to be replaced if I am reading it correctly... must be some sort of lifespan measurement.
Don't TRY to wear out the cells... I calculated that with non-stop writes you can wear it out in a year, but with REALISTIC user writes it should last 50 or even 500 years depending on your load.

Yes, writing 1.4TB onto an 80GB MLC drive wears out 1% of its lifespan writes, meaning you can get about 140TB of total writes on it. That should last a long, long while.
 
Originally posted by: taltamir
mmm, well it gives a threshold of 10 and 0 for E8 and E9... so that would be the point at which the drive needs to be replaced if I am reading it correctly... must be some sort of lifespan measurement.
Don't TRY to wear out the cells... I calculated that with non-stop writes you can wear it out in a year, but with REALISTIC user writes it should last 50 or even 500 years depending on your load.

Yes, writing 1.4TB onto an 80GB MLC drive wears out 1% of its lifespan writes, meaning you can get about 140TB of total writes on it. That should last a long, long while.

Taltamir did you know that apparently you can't wear out an Intel drive, no matter how hard you try, in less than five years?

It has an internal algorithm that detects when excessive and sustained writes are ongoing and it intentionally decreases the write speed to forcibly decrease the total amount of writes that can occur per day.

Groovy huh?

3.5.4 Minimum Useful Life

A typical client usage of 20 GB writes per day is assumed.

Should the host system attempt to exceed 20 GB writes per day by a large margin for an extended period, the drive will enable the endurance management feature to adjust write performance.

By efficiently managing performance, this feature enables the device to have, at a minimum, a five year useful life.

Under normal operation conditions, the drive will not invoke this feature.

See page 11 of this Intel® X18-M/X25-M SATA Solid State Drive Product Manual
 
Taltamir did you know that apparently you can't wear out an Intel drive, no matter how hard you try, in less than five years?
Incorrect, intel aimed to make it last AT LEAST five years under NORMAL USAGE CONDITIONS and ended up making it last 50-500 years under NORMAL conditions... the 5 year estimate is based on a ludicrously high amount of writes per day.

Intel guarantees that it lasts 5 years... if you manage to somehow use it up before then, they will replace it under warranty... but it is totally unrealistic even under a heavy server load.

80GB x 100,000 lifetime write cycles x (1 / 1.1 write amplification) = 7.27 million GB of lifetime writes.
write speed 70 MB/s = 0.07 GB/s

7.27 million GB of lifetime writes / 0.07 GB/s = 103.8 million seconds of writing
103.8 million seconds x (1 min / 60 s) x (1 hour / 60 min) x (1 day / 24 hours) x (1 year / 365 days) = 3.29 years of lifetime writes. (Double it for the 160GB version.)

So if you max out the sequential writes at a nice flat 70MB/s which is the drive's max, and it has a write amplification of 1.1, and you do that 24/7/365 non stop, it will last 3.29 years.
of course, to put this in perspective:
0.07 GB/s x 60 seconds / minute x 60 minutes / hour x 24 hours / day = 6048 GB a day written to an 80GB drive. every day, 24/7/365 non stop writing.

mmm... how long did you say it took you to write that 1.4TB to the drive? could it be doing more than 70MB/s ?
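The back-of-the-envelope arithmetic above can be sketched as a small Python function; note the cycle count, the 70MB/s rate, and the 1.1 write-amplification factor are the post's own assumptions, not Intel-published figures (and the 10,000-cycle MLC figure in the second call is the correction that comes up later in the thread):

```python
def lifetime_years(capacity_gb, pe_cycles, write_amp=1.1, rate_gb_s=0.07):
    """Worst-case wear-out time: total host writes the NAND can absorb,
    divided by a sustained sequential write rate."""
    total_writes_gb = capacity_gb * pe_cycles / write_amp
    return total_writes_gb / rate_gb_s / (365 * 24 * 3600)

print(round(lifetime_years(80, 100_000), 2))  # 100k-cycle (SLC-like) assumption -> 3.29 years
print(round(lifetime_years(80, 10_000), 2))   # 10k cycles (MLC) -> 0.33 years
```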
 
PS, to the op... you say it took 6 months of normal usage to drop to 96? that figure starts at 100 and goes down to 0, which is the threshold; it is a simple percent scale...
at 6 months per 4%, it will take you 12.5 years to get to 0.
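The extrapolation is simple to verify, assuming (as the post does) that E9 starts at 100 and falls linearly:

```python
# E9 fell 4 points in the first 6 months (100 -> 96), assumed linear decay
points_per_year = 4 / 0.5
print(100 / points_per_year)  # 12.5 years from new down to the 0 threshold
```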
 
Originally posted by: taltamir
Taltamir did you know that apparently you can't wear out an Intel drive, no matter how hard you try, in less than five years?
Incorrect, intel aimed to make it last AT LEAST five years under NORMAL USAGE CONDITIONS and ended up making it last 50-500 years under NORMAL conditions... the 5 year estimate is based on a ludicrously high amount of writes per day.

Intel guarantees that it lasts 5 years... if you manage to somehow use it up before then, they will replace it under warranty... but it is totally unrealistic even under a heavy server load.

80GB x 100,000 lifetime write cycles x (1 / 1.1 write amplification) = 7.27 million GB of lifetime writes.
write speed 70 MB/s = 0.07 GB/s

7.27 million GB of lifetime writes / 0.07 GB/s = 103.8 million seconds of writing
103.8 million seconds x (1 min / 60 s) x (1 hour / 60 min) x (1 day / 24 hours) x (1 year / 365 days) = 3.29 years of lifetime writes. (Double it for the 160GB version.)

So if you max out the sequential writes at a nice flat 70MB/s which is the drive's max, and it has a write amplification of 1.1, and you do that 24/7/365 non stop, it will last 3.29 years.
of course, to put this in perspective:
0.07 GB/s x 60 seconds / minute x 60 minutes / hour x 24 hours / day = 6048 GB a day written to an 80GB drive. every day, 24/7/365 non stop writing.

mmm... how long did you say it took you to write that 1.4TB to the drive? could it be doing more than 70MB/s ?

What is incorrect about me quoting Intel's own specifications?

It's pretty black and white: try to write more than 20GB per day and the drive's algorithms kick in to govern down the write speeds so that a minimum 5-year lifespan is ensured.

I think that is what Intel says, and it is what I said. So why does the first word in your post say "incorrect"?
 
okay, I missed that bit... I was saying this is incorrect
Taltamir did you know that apparently you can't wear out an Intel drive, no matter how hard you try, in less than five years?

Without noticing the 20GB-a-day limitation you quoted, because I was skimming your post. sorry.

anyways, 20GB a day is a ridiculously low amount! an 80GB should last almost 4 years writing at max speed, aka 6TB a day. twice that for a 160GB...

I think their 1.1 write amplification factor is... overestimated though, which might require such measures.
 
Old Hippie:
Thank you for your screenshot. It made clear that this dropping happens in other X25-M.

taltamir:
Regarding your calculation on life expectancy,

80GB x 100,000 lifetime write cycles x (1 / 1.1 write amplification) = 7.27 million GB of lifetime writes.
write speed 70 MB/s = 0.07 GB/s

7.27 million GB of lifetime writes / 0.07 GB/s = 103.8 million seconds of writing
103.8 million seconds x (1 min / 60 s) x (1 hour / 60 min) x (1 day / 24 hours) x (1 year / 365 days) = 3.29 years of lifetime writes. (Double it for the 160GB version.)
I think that since the X25-M is equipped with MLC flash, its lifetime writes should be 80GB x 10,000 (write cycles) / 1.1 (write amplification) = 0.727 million GB. So, according to your formula, it will last 0.329 years, about 4 months. Am I wrong?

Regarding the time I spent on this test, one restore took around half an hour, so 30 restores would take 15 hours if done continuously. Actually, it took more because I had to go out and sleep during the test, but still less than one full day. The following is the exact processing time of each restore:

Restore 1: 26(min)55(sec)
Restore 2: n/a
Restore 3: n/a
Restore 4: 26(min)10(sec)
Restore 5: 26(min)15(sec)
Restore 6: 26(min)09(sec)
Restore 7: 25(min)55(sec)
Restore 8: 25(min)50(sec)
Restore 9: 25(min)50(sec)
Restore 10: 26(min)10(sec)
Restore 11: 26(min) 8(sec)
Restore 12: 26(min)12(sec)
Restore 13: 26(min)13(sec)
Restore 14: 26(min)12(sec)
Restore 15: 26(min)10(sec)
Restore 16: 26(min) 7(sec)
Restore 17: 26(min) 8(sec)
Restore 18: n/a
Restore 19: 26(min)14(sec)
Restore 20: 26(min)13(sec)
Restore 21: 26(min)19(sec)
Restore 22: 26(min)12(sec)
Restore 23: 26(min)13(sec)
Restore 24: 26(min)12(sec)
Restore 25: 26(min)16(sec)
Restore 26: n/a
Restore 27: 26(min)12(sec)
Restore 28: n/a
Restore 29: n/a
Restore 30: 26(min)13(sec)
("n/a" means I was away or missed the moment when the restore was completed.)

As you see, the processing time is very stable. I calculated that the processing speed is around 31MB/s, though the restore process includes not only writes but also reads for verification.

So, assuming the rate of E9 drops does not change, someone wishing to drop the E9 value from 95 to 0 would need 15 x 95 = 1425 hours = 59 days and 9 hours. With a more powerful machine and a more sophisticated method, I think this could be shortened considerably.
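A quick Python check of the throughput and extrapolation arithmetic, using one representative restore time from the table above:

```python
# One True Image restore: ~48 GiB (including the verify pass) in ~26 min 12 s
pass_gib, pass_seconds = 48, 26 * 60 + 12
mib_per_s = pass_gib * 1024 / pass_seconds
print(round(mib_per_s, 1))  # ~31.3 MiB/s, matching the ~31MB/s estimate

# 30 restores (~15 h) dropped E9 by one point; extrapolate 95 points down to 0
print(15 * 95 / 24)         # 59.375 days of continuous restoring
```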

Idontcare:
Regarding "the endurance management feature", I could not see any such performance decrease during this test. Maybe this test is not suitable for observing that feature, or 1.4TB of writes in one day may not be enough to trigger it.
 
Originally posted by: emoacht
taltamir:
Regarding your calculation on life expectancy,

80GB x 100,000 lifetime write cycles x (1 / 1.1 write amplification) = 7.27 million GB of lifetime writes.
write speed 70 MB/s = 0.07 GB/s

7.27 million GB of lifetime writes / 0.07 GB/s = 103.8 million seconds of writing
103.8 million seconds x (1 min / 60 s) x (1 hour / 60 min) x (1 day / 24 hours) x (1 year / 365 days) = 3.29 years of lifetime writes. (Double it for the 160GB version.)
I think that since the X25-M is equipped with MLC flash, its lifetime writes should be 80GB x 10,000 (write cycles) / 1.1 (write amplification) = 0.727 million GB. So, according to your formula, it will last 0.329 years, about 4 months. Am I wrong?

Yep, talta's MLC lifetime calc needs to be one order of magnitude lower.

Also, consider that this manner of a lifetime calculation determines how many TB's (and time) are needed for every single cell on the drive to die.

Which is not likely to be the end-user's perception (nor Intel's) of what a "usable lifetime" would mean.

I imagine that Intel, and an end-user, would consider their SSD to be significantly impacted if they lost 20% of the cells... that means your storage capacity for new files would be only 80% of the SSD's rated capacity when first purchased. That might actually be a tad generous; personally I'd be rather irritated if I lost 10% of my SSD's capacity.

So when doing lifetime calcs we really need to account for the fact that Intel is aiming for a "Minimum Useful Life", per their wording, and the "useful" part likely means some high percentage of the drive is still capable of storing new data at the user's request.

So if by "minimum useful life" Intel was targeting the drive still having 90% of its writeable storage after five years then Talta's lifetime calcs need to be dialed down by a factor of 100x...10x for the SLC->MLC error, and another 10x to account for only allowing 10% of the cells to die in five years.

Originally posted by: taltamir
anyways 20GB a day is ridiculously low amount! an 80GB should last almost 4 years writing at max speed, aka, 6TB a day. twice that for a 160GB...

It's cool. So yeah, my point was more that 20GB/day of writes is actually a ballpark-feasible amount day in and day out for heavier users who do a lot of encoding and the like. So understanding this "feature" of Intel's SSDs is relevant IMO.

If you think about it, there has always been a low level of rumor and speculation that these MLC Intel drives artificially cap the write speeds relative to what the chips and controller could do; these arguments largely rely on the relative speed of the SLC versions of these drives.

But this would go some distance towards explaining why the write-speed is capped at 70MB/s for a fresh/new drive when the other guys can get it up over 120MB/s.

It would be interesting to see just how reactive this "efficiently managing performance" algorithm is when it comes to dialing down the write-speeds if someone were to write 500GB per day for 30 days or so.
 
Originally posted by: emoacht
Then, when I checked S.M.A.R.T. again on May 19, just one month after the firmware update, I found E9 (Vendor Specific) attribute dropped from 97 to 96.
Wild guess: it's a percentage. This sounds like Intel's estimate of remaining drive life?

 
I imagine that Intel, and an end-user, would consider their SSD to be significantly impacted if they lost 20% of the cells... that means your storage capacity for new files would be only 80% of the SSD's rated capacity when first purchased. That might actually be a tad generous; personally I'd be rather irritated if I lost 10% of my SSD's capacity.

I imagine that just like mechanical hard drives these have some capacity put aside as spare for when cells begin to die. This would mean the user wouldn't see the size of their disk decreasing until it was quite far along into its EOL.
 
Also, consider that this manner of a lifetime calculation determines how many TB's (and time) are needed for every single cell on the drive to die.
well, above that they start being at risk, so instead of dying they become read-only (AFAIK this is a controller decision, not a physical property of the cell). And that is what wear leveling is for: to make sure it happens to all the cells at once.

but yea, I messed up; for some reason I remembered it as being 100k for MLC and 1 million for SLC... but no, it's 10k and 100k respectively, oops... so yea, just divide it by ten...

that being said... at a max of 20GB a day it should last MANY MANY years. it takes 6TB a day to use it up in 0.3 years. so 400GB a day to last 5 years.

although, maybe that endurance mode it engages at 20GB is just limiting the write speed so that it cannot do more than 400GB a day instead of the theoretical 6TB max?
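A quick check of the 400GB-a-day figure, reusing the thread's assumptions of 10,000 MLC write cycles and 1.1 write amplification:

```python
# Total host writes under the thread's MLC assumptions
total_gb = 80 * 10_000 / 1.1

print(round(total_gb / (5 * 365)))      # GB/day budget for a 5-year life -> ~399
print(round(total_gb / 6048 / 365, 2))  # years at a sustained 6TB (6048 GB) per day -> 0.33
```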
 
Originally posted by: Idontcare

6.1.4 SMART Command Set
The Intel X18-M/X25-M SATA SSDs support the SMART command set, please refer to Intel® High Performance Solid State Drive S.M.A.R.T Features User Guide - Order Number 320520-002US.

Unfortunately I can't locate this document, or anything even closely resembling it by google search and searches of Intel's pages directly...so beats the heck out of me where we are supposed to find 320520-002US...

I couldn't tell you the drive isn't slowly dying, because it looks like it is.

Anyway, here's how to figure out the SMART attributes. I've looked it up: notice the 1-255 IDs Intel shows in their PDF along with descriptions? Well, software that reads out SMART data shows the ID as a hexadecimal value, e.g. E9. In decimal, E9 from their PDF is 233. It says 233 is a vendor-specific value though, so if anyone can find more, that would be great.
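The hex-to-decimal mapping of the attribute IDs is easy to verify:

```python
# SMART utilities display attribute IDs in hex; Intel's PDF lists them in decimal
for hex_id in ("E1", "E8", "E9"):
    print(hex_id, "->", int(hex_id, 16))
# E1 -> 225, E8 -> 232, E9 -> 233
```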
 
Originally posted by: IntelUser2000
Originally posted by: Idontcare

6.1.4 SMART Command Set
The Intel X18-M/X25-M SATA SSDs support the SMART command set, please refer to Intel® High Performance Solid State Drive S.M.A.R.T Features User Guide - Order Number 320520-002US.

Unfortunately I can't locate this document, or anything even closely resembling it by google search and searches of Intel's pages directly...so beats the heck out of me where we are supposed to find 320520-002US...

I couldn't tell you the drive isn't slowly dying, because it looks like it is.

Anyway, here's how to figure out the SMART attributes. I've looked it up: notice the 1-255 IDs Intel shows in their PDF along with descriptions? Well, software that reads out SMART data shows the ID as a hexadecimal value, e.g. E9. In decimal, E9 from their PDF is 233. It says 233 is a vendor-specific value though, so if anyone can find more, that would be great.

it doesn't. losing 4% of expected life in 6 months means it will take ANOTHER 12 YEARS to kill it... how is this "slowly dying"?
 
Hi all,

I've been waiting for Intel's reply for a month but have not received it yet. It may be something they don't want to make public...

Anyway, I'd like to update what I found with my X25-M. Finally, E9 reached 50. I did some write-workload tests hoping they might confirm the hypothesis that E9 indicates something like the remaining percentage of life expectancy, but I found that is UNLIKELY. The following is a summary of the changes in E9, the write workload that caused each drop, the average write speed, the elapsed time, and how I did it.

98-96: 6 months, by daily use and some restores using True Image.
95: 1.39TiB, 28.2MiB/s, 15 hours, by restores using True Image (as described in my original post)
94: >1.60TiB, 65.0MiB/s, by 22 times full erase (0fill, etc.) using HD Tune Pro and other
93: 3.64TiB, 66.3MiB/s, 18 hours, by 50 times full erase using HD Tune Pro
92: 3.71TiB, 65.5MiB/s, 18 hours, by 51 times full erase using HD Tune Pro
91: by some writes using Iometer where there was no partition
90: 1.10TiB, 16.8MiB/s, 19 hours, by sequential and random writes using Iometer
89: 4.61TiB, 44.7MiB/s, 28 hours, by sequential and random writes using Iometer
88: 4.10TiB, 47.7MiB/s, 25 hours, by random writes using Iometer
87: 5.40TiB, 63.0MiB/s, 20 hours, by sequential and random writes using Iometer
86: 5.15TiB, 55.6MiB/s, 27 hours, by sequential and random writes using Iometer
85-84: 0.54TiB, by sequential and random writes using Iometer and other
83-73: by some writes using Iometer and other, not so heavy workloads though
72: 0.05TiB, 89.2MiB/s, 10 min, by sequential writes using Iometer to a partition
71: 0.10TiB, 89.0MiB/s, 20 min, by sequential and random writes using Iometer
70: 0.05TiB, 90.6MiB/s, 10 min, by sequential writes using Iometer
69: 0.06TiB, 98.4MiB/s, 10 min, by sequential writes using Iometer
68: 0.03TiB, 98.6MiB/s, 6 min, by sequential writes using Iometer
67: 0.09TiB, 98.0MiB/s, 16 min, by sequential writes using Iometer
66: 0.04TiB, 97.8MiB/s, 8 min, by sequential writes using Iometer
65-64: 0.09TiB, 97.9MiB/s, 16 min, by sequential writes using Iometer
63-62: 0.13TiB, 97.7MiB/s, 24 min, by sequential writes using Iometer
61: 0.02TiB, 97.7MiB/s, 4 min, by sequential writes using Iometer
60: 0.07TiB, 97.8MiB/s, 12 min, by sequential writes using Iometer
59: 0.05TiB, 96.0MiB/s, 10 min, by sequential writes using Iometer
58: 0.05TiB, 94.2MiB/s, 10 min, by sequential writes using Iometer
57: 0.07TiB, 119.8MiB/s, 10 min, by sequential writes using Iometer
56: 0.11TiB, 119.0MiB/s, 16 min, by sequential writes using Iometer
55: 0.05TiB, 74.3MiB/s, 12 min, by sequential writes using Iometer
54: 0.04TiB, 74.4MiB/s, 10 min, by sequential writes using Iometer
53: 2.19TiB, 84.7MiB/s, 7 hours, by sequential and random writes using Iometer.
52: 0.05TiB, 84.8MiB/s, 10 min, by sequential writes using Iometer
51: 0.10TiB, 97.7MiB/s, 18 min, by sequential writes using Iometer
50: 0.12TiB, 98.2MiB/s, 22 min, by sequential writes using Iometer
TOTAL: >34.83TiB

Note: The details of access patterns, etc. can be seen on my flickr page. Write workloads are, when using Iometer, calculated from average speed and elapsed time. Regarding the average speeds that are far higher than the X25-M's official spec: 1. I calculated them from exactly what Iometer showed me; 2. The queue depth is different; 3. Without knowing the access pattern Intel used, they cannot be directly compared.
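As a sanity check on the method described in the note, a row's workload can be recomputed from its average rate and elapsed time; the 89.2MiB/s, 10-minute row is used as the example here:

```python
def tib_written(mib_per_s, minutes):
    """Workload reconstructed from Iometer's average rate and elapsed time."""
    return mib_per_s * minutes * 60 / 1024**2

# e.g. the "72" row: 89.2 MiB/s for 10 min
print(round(tib_written(89.2, 10), 2))  # 0.05 TiB, matching the table
```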

As you see, the write workloads that caused each drop for 72-50 are far smaller than those for 94-86. The big differences are: 1. The writes were mostly sequential for 72-50, while a mix of sequential and random for 94-86; 2. The writes were made onto a partition (1GB, NTFS, 4KB cluster size) for 72-50, while there was no partition for 94-86. However, I am not certain that the existence of a partition affects how benchmark programs use the drive because, AFAIK, they access the drive directly even when they use a partition, creating a test file just to reserve test space.

Well, at least I can say that the write workloads for 72-50 are too small, and their elapsed times too short, for each drop of E9 to represent one percent of the lifetime writes of the X25-M, although I believe E9 has some relationship with the drive's usage history. One more thing: E8 dropped from 100 to 99 and E1 from 200 to 193 over the course of the E9 drops.

I don't think this matter can be explored further without information from Intel, if any ever comes, but I hope this post sheds a little light on the SMART data of the X25-M.
 