Intel X-25M erratic performance

Mako88

Member
Jan 4, 2009
129
0
0
Just installed an X-25m in my Core i7 desktop (Vista 64) and the benchmarks aren't where they should be.

Disabled prefetch/superfetch, disabled indexing/defragging, enabled write-caching and advanced performance (all the usual tweaks), but the drive's write performance is about 50% below where it should be, and the read performance is erratic.

CrystalDiskMark 2.2 Results (MB/s):

Seq read: 233.2
Seq write: 37.8 (should be in low 70s)
512K read: 159.3
512K write: 35.88 (should be in low 70s)
4k read: 19.43
4k write: 29.51 (a little slow, should be in high 30s)
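If you want a second opinion besides CrystalDiskMark, here's a rough sequential-write check in Python (illustrative only: it writes through the filesystem, so caching and file placement will skew the number; treat it as a sanity check, not a benchmark):

```python
# Rough sequential-write sanity check. Writes a 64 MiB temp file in
# 1 MiB chunks and times it, fsync'ing at the end so we measure the
# disk rather than just RAM. Not a substitute for CrystalDiskMark.
import os
import tempfile
import time

CHUNK = 1024 * 1024            # 1 MiB per write
TOTAL = 64 * CHUNK             # 64 MiB test file
buf = os.urandom(CHUNK)        # incompressible data

fd, path = tempfile.mkstemp()
try:
    start = time.perf_counter()
    for _ in range(TOTAL // CHUNK):
        os.write(fd, buf)
    os.fsync(fd)               # flush to disk before stopping the clock
    elapsed = time.perf_counter() - start
    print(f"sequential write: {TOTAL / elapsed / 1e6:.1f} MB/s")
finally:
    os.close(fd)
    os.remove(path)
```

Run it a few times and take the best result; a single run on an OS drive will be noisy for exactly the reasons discussed below.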

Any thoughts?
 

Mako88

Member
Jan 4, 2009
129
0
0
Originally posted by: Denithor
And make sure it's in SATA 300 mode not SATA 150.

I'm using an EVGA X58; how can I check the SATA speed? Likely not the culprit, however, as the writes aren't taxing even a quarter of SATA-150's bandwidth atm.

Just did a fresh install of Vista 64, left everything default with the exception of turning off indexing and the page file. Benchmarks are the same, reads are ok, writes are badly off.
 
Nov 26, 2005
15,194
403
126
Originally posted by: Quiksilver
Originally posted by: BTRY B 529th FA BN
Give it about a week

???

What exactly is this going to do?

I read that once these things get settled in they pick up speed. Mind you, I read that somewhere and I don't have proof. It makes no sense to me, but it seems the OP's problem is not an isolated one.
 

Denithor

Diamond Member
Apr 11, 2004
6,298
23
81
Yeah, I've heard the same thing: SSDs do some kind of wear leveling, and until they've written over the entire disk once they have kind of choppy performance. Either give it a week, or rip several DVDs onto the drive to fill it up quickly and see if it levels out then.
 

n7

Elite Member
Jan 4, 2004
21,281
4
81
Welcome to my world.
I've been informed these numbers improve over time.

But mine have not, & mine has had the same stuff running (OS drive) for over two weeks now with no improvements.

Seems these do fabulously as empty benchmark drives, but slow down considerably when actually in use.

Not that they are slow; they still feel extremely fast.
But at least in CrystalDiskMark, write speeds go to crap when the drive is half or more full as an OS drive vs. an empty non-OS drive, at least for me.
 

Denithor

Diamond Member
Apr 11, 2004
6,298
23
81
Sounds like you guys should post in this thread as there are several happy owners in there. Maybe they've found some settings or something that yields better performance?
 

frostedflakes

Diamond Member
Mar 1, 2005
7,925
1
81
Originally posted by: n7
Welcome to my world.
I've been informed these numbers improve over time.

But mine have not, & mine has had the same stuff running (OS drive) for over two weeks now with no improvements.

Seems these do fabulously as empty benchmark drives, but slow down considerably when actually in use.

Not that they are slow; they still feel extremely fast.
But at least in CrystalDiskMark, write speeds go to crap when the drive is half or more full as an OS drive vs. an empty non-OS drive, at least for me.
Well, isn't that to be expected? Once you load an OS on it, the OS is doing writes while you're benching, so performance will go down. It's the same reason processor benchmarks are lower when you have a load on your CPU vs. no load, where the benchmark can use all the available resources.

I mean, is this not common sense, or am I missing something? :confused:
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
Seems these do fabulously as empty benchmark drives, but slow down considerably when actually in use.

Do you feel the slowdown on it??

This is what's happening.

At the chip level, writes are an order of magnitude slower on an SSD than reads. Why? Because:

1. The flash has to be erased before it can be written again.
2. It erases in whole blocks, even when far less data was written.

On an older SSD with a simple controller the slowdown isn't as drastic, but it's still there.

So when does the slowdown happen? As soon as you finish writing to the entire drive and it needs to erase before it can write again.

Performance loss happens because erasing is slow!
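A toy model of that erase penalty (the timings and block geometry below are made-up round numbers for illustration, not actual X25-M specs):

```python
# Toy flash model: pages can be written once; rewriting a page forces a
# block erase first. A naive controller erases the whole block in place
# and reprograms every page in it. All numbers are hypothetical.
PAGE_WRITE_US = 200      # microseconds to program one page
BLOCK_ERASE_US = 2000    # microseconds to erase one block
PAGES_PER_BLOCK = 64

def write_cost_us(page_is_dirty):
    """Cost of writing one page on the naive controller."""
    if page_is_dirty:
        # erase the block, then reprogram all of its pages
        return BLOCK_ERASE_US + PAGES_PER_BLOCK * PAGE_WRITE_US
    return PAGE_WRITE_US

fresh = write_cost_us(False)    # 200 us: drive still has clean pages
reused = write_cost_us(True)    # 14800 us: drive has been filled once
print(reused / fresh)           # 74.0: why writes fall off a cliff
```

Real controllers amortize this with spare area and background cleanup, which is exactly why the recovery behavior described below matters.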

Why is it more pronounced on Intel SSDs? Because their speeds are already much faster than older SSDs, so the drop is more noticeable.

Say you have an SSD with a crappy controller and a DRAM write cache (which is also used for wear levelling). As soon as the cache is full, it'll perform at the rate of the chips, which is painstakingly slow!

If you have a more advanced controller with excellent wear-levelling algorithms, like the Intel drive, then when the erase needs to occur it slows down because it's making sure the wear levelling is working well and the drive lasts longer.

The drive slows down to a level where it can keep up without overloading the DRAM write buffer. Because DRAM is so much faster than the flash, and the controller feeds it even faster with things like 10 channels and NCQ, the buffer gets overloaded quickly.

If you give it enough time, the performance should stabilize; according to Intel, the controller is advanced enough to adapt to usage patterns (so long as they don't change drastically day to day).

Out of the new controller-based SSDs (FusionIO, X25-M, and X25-E), the X25-E recovers the fastest, in mere minutes, at least in synthetics.

In reality I think the recovery time equals the time it takes to write the entire drive at least once, which is 80GB of writes.

Make no mistake, your numbers even after the slowdowns are several times better than other drives.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Originally posted by: IntelUser2000
Seems these do fabulously as empty benchmark drives, but slow down considerably when actually in use.

Do you feel the slowdown on it??

This is what's happening.

At the chip level, writes are an order of magnitude slower on an SSD than reads. Why? Because:

1. The flash has to be erased before it can be written again.
2. It erases in whole blocks, even when far less data was written.

On an older SSD with a simple controller the slowdown isn't as drastic, but it's still there.

So when does the slowdown happen? As soon as you finish writing to the entire drive and it needs to erase before it can write again.

Performance loss happens because erasing is slow!

Why is it more pronounced on Intel SSDs? Because their speeds are already much faster than older SSDs, so the drop is more noticeable.

Say you have an SSD with a crappy controller and a DRAM write cache (which is also used for wear levelling). As soon as the cache is full, it'll perform at the rate of the chips, which is painstakingly slow!

If you have a more advanced controller with excellent wear-levelling algorithms, like the Intel drive, then when the erase needs to occur it slows down because it's making sure the wear levelling is working well and the drive lasts longer.

The drive slows down to a level where it can keep up without overloading the DRAM write buffer. Because DRAM is so much faster than the flash, and the controller feeds it even faster with things like 10 channels and NCQ, the buffer gets overloaded quickly.

If you give it enough time, the performance should stabilize; according to Intel, the controller is advanced enough to adapt to usage patterns (so long as they don't change drastically day to day).

Out of the new controller-based SSDs (FusionIO, X25-M, and X25-E), the X25-E recovers the fastest, in mere minutes, at least in synthetics.

In reality I think the recovery time equals the time it takes to write the entire drive at least once, which is 80GB of writes.

Make no mistake, your numbers even after the slowdowns are several times better than other drives.

IntelUser2000, this is semi-related to the OP but admittedly off-topic... is there some reason SSDs don't take advantage of idle periods to pre-erase blocks of known-unused (OS-deleted) data? Why do these drives wait for a write request to commit an erase command on a block? (Or, alternatively, prepare empty erased blocks in advance by shuffling scattered data around and compiling full blocks of data; time-intensive, but only done when the drive has been idle for a minimum period of time.)

I am in no way assuming Intel and others did not already think of this; I am merely wondering why such a pre-erase approach turned out not to be viable with current SSDs.
 

WaitingForNehalem

Platinum Member
Aug 24, 2008
2,497
0
71
It's a flaw of SLC's.

Unlike with mechanical or MLC drives where data can be stored in multiple states, the SLC memory only modulates between written and unwritten. This means that once the drive has written to each cell, rewriting to them means the drive must first set the cell from written, to unwritten and then back to written in accordance with the new data, doubling the write times the second time the drive needs to write to that particular cell.

While fully formatting the drive resets all the cells to unwritten and brings back that awesome write performance evident in our graphs, it's hardly reasonable for end users to have to format their drive every time it fills up and although we realise this is a very specialised enterprise level drive, it's still a disappointing flaw that's unfortunately part of using SLC memory in the first place.

http://www.bit-tech.net/hardwa...25-e-32gb-ssd-review/7
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
Originally posted by: Idontcare
IntelUser2000, this is semi-related to the OP but admittedly off-topic... is there some reason SSDs don't take advantage of idle periods to pre-erase blocks of known-unused (OS-deleted) data? Why do these drives wait for a write request to commit an erase command on a block? (Or, alternatively, prepare empty erased blocks in advance by shuffling scattered data around and compiling full blocks of data; time-intensive, but only done when the drive has been idle for a minimum period of time.)

I am in no way assuming Intel and others did not already think of this; I am merely wondering why such a pre-erase approach turned out not to be viable with current SSDs.

They actually do that in idle periods, but hard enough usage can overwhelm it. And you need some "free space" to do it.

That method is apparently called "garbage collection". On the X25-M, the difference between the raw NAND capacity (binary bytes) and the advertised capacity (decimal bytes) is the space used for garbage collection.

The X25-M uses 20 32Gbit (4GB) NAND chips, which is 80GB of raw flash. But the usable capacity is really only 74.5GB; the 5.5GB difference is used for garbage collection.

The X25-E has much more space for that. The 32GB X25-E uses 20 16Gbit (2GB) NAND chips, which makes 40GB of raw flash, but the advertised capacity is 32x1000x1000x1000 bytes, in reality only 29.8GB; the rest is free space used for garbage collection. Plus, it's SLC, which is fundamentally faster than MLC.
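For what it's worth, that spare-area arithmetic checks out; a quick back-of-envelope script (using the chip counts and advertised sizes quoted in this thread, not official Intel figures):

```python
# Spare area = raw NAND capacity (chips are binary-sized) minus the
# advertised capacity (which is in decimal bytes).
GiB = 2**30

def spare_gib(n_chips, chip_gib, advertised_bytes):
    raw = n_chips * chip_gib              # raw flash on board, in GiB
    usable = advertised_bytes / GiB       # what the OS actually sees
    return raw - usable

x25m = spare_gib(20, 4, 80e9)   # X25-M: 20 x 32Gbit chips, sold as 80GB
x25e = spare_gib(20, 2, 32e9)   # X25-E: 20 x 16Gbit chips, sold as 32GB
print(round(x25m, 1))           # 5.5 GiB of spare area
print(round(x25e, 1))           # 10.2 GiB, so proportionally far more
```

So the X25-E reserves roughly a quarter of its flash as spare area, vs. under 7% on the X25-M, which fits with it recovering faster.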

There is of course a way to format the X25-M to have more free space, at the cost of storage capacity. Anandtech described it in their X25-M review: http://www.anandtech.com/cpuch...howdoc.aspx?i=3403&p=4

This "free space" has nothing to do with the user capacity (the amount the OS sees). It's the space the controller uses for garbage collection.


"Intel actually includes additional space on the drive, on the order of 7.5 - 8% more (6 - 6.4GB on an 80GB drive) specifically for reliability purposes. If you start running out of good blocks to write to (nearing the end of your drive's lifespan), the SSD will write to this additional space on the drive. One interesting sidenote, you can actually increase the amount of reserved space on your drive to increase its lifespan. First secure erase the drive and using the ATA SetMaxAddress command just shrink the user capacity, giving you more spare area."

Originally posted by: WaitingForNehalem
It's a flaw of SLC's.

That's BS; the X25-M behaves similarly. The X25-E will respond better and faster: SLC has faster write performance, so it behaves better in "garbage collection mode", and it also has a bigger spare area.
 

WaitingForNehalem

Platinum Member
Aug 24, 2008
2,497
0
71
Originally posted by: IntelUser2000
Originally posted by: Idontcare
IntelUser2000, this is semi-related to the OP but admittedly off-topic... is there some reason SSDs don't take advantage of idle periods to pre-erase blocks of known-unused (OS-deleted) data? Why do these drives wait for a write request to commit an erase command on a block? (Or, alternatively, prepare empty erased blocks in advance by shuffling scattered data around and compiling full blocks of data; time-intensive, but only done when the drive has been idle for a minimum period of time.)

I am in no way assuming Intel and others did not already think of this; I am merely wondering why such a pre-erase approach turned out not to be viable with current SSDs.

They actually do that in idle periods, but hard enough usage can overwhelm it. And you need some "free space" to do it.

That method is apparently called "garbage collection". On the X25-M, the difference between the raw NAND capacity (binary bytes) and the advertised capacity (decimal bytes) is the space used for garbage collection.

The X25-M uses 20 32Gbit (4GB) NAND chips, which is 80GB of raw flash. But the usable capacity is really only 74.5GB; the 5.5GB difference is used for garbage collection.

The X25-E has much more space for that. The 32GB X25-E uses 20 16Gbit (2GB) NAND chips, which makes 40GB of raw flash, but the advertised capacity is 32x1000x1000x1000 bytes, in reality only 29.8GB; the rest is free space used for garbage collection. Plus, it's SLC, which is fundamentally faster than MLC.

There is of course a way to format the X25-M to have more free space, at the cost of storage capacity. Anandtech described it in their X25-M review: http://www.anandtech.com/cpuch...howdoc.aspx?i=3403&p=4

This "free space" has nothing to do with the user capacity (the amount the OS sees). It's the space the controller uses for garbage collection.


"Intel actually includes additional space on the drive, on the order of 7.5 - 8% more (6 - 6.4GB on an 80GB drive) specifically for reliability purposes. If you start running out of good blocks to write to (nearing the end of your drive's lifespan), the SSD will write to this additional space on the drive. One interesting sidenote, you can actually increase the amount of reserved space on your drive to increase its lifespan. First secure erase the drive and using the ATA SetMaxAddress command just shrink the user capacity, giving you more spare area."

Originally posted by: WaitingForNehalem
It's a flaw of SLC's.

That's BS; the X25-M behaves similarly. The X25-E will respond better and faster: SLC has faster write performance, so it behaves better in "garbage collection mode", and it also has a bigger spare area.

it's still a disappointing flaw that's unfortunately part of using SLC memory in the first place.

That's what bit-tech says.

 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Originally posted by: IntelUser2000
They actually do that in idle periods, but hard enough usage can overwhelm it. And you need some "free space" to do it.

Thanks for the thorough and well-stated explanation! I understand now. Somehow I managed to read the AT review and never absorbed the information about garbage collection. Makes me wonder what else I'm failing to register these days... Alas, many thanks!
 

yh125d

Diamond Member
Dec 23, 2006
6,886
0
76
Originally posted by: Mako88
Originally posted by: Denithor
And make sure it's in SATA 300 mode not SATA 150.

I'm using an EVGA X58; how can I check the SATA speed? Likely not the culprit, however, as the writes aren't taxing even a quarter of SATA-150's bandwidth atm.

Just did a fresh install of Vista 64, left everything default with the exception of turning off indexing and the page file. Benchmarks are the same, reads are ok, writes are badly off.


You realize SATA-150 is 150 megabytes per second max, right?