
Vertex 3 Degradation on Marvell 9128 Controller

jadawgis732

Junior Member
Hi guys,
I have a Vertex 3 which I run off an X58 board. The SATA 3 (6Gb/s) ports are provided by a Marvell 9128 controller, which I understand does not support TRIM. I took AS SSD benchmarks when I first got the drive, on both controllers.

x58 SATA II 6 Months Ago
x58before.jpg


Marvell 9128 SATA III 6 Months Ago
marvell9128before.jpg


I've been running it off the Marvell 9128 controller for the past six months, which, again, I am told does not support TRIM. Now I am seeing these numbers.

x58 SATA II Today
x58after.jpg


Marvell 9128 SATA III Today
marvell9128after.jpg


My question is: will this drive return to its former greatness after a period of time on the X58 controller, or is TRIM not going to salvage anything at this point? Thanks
 
In my experience it does, though my situation was a bit different. I had several different Intel SSDs in a PS3. There's obviously no TRIM support there, but luckily Intel's garbage collection (GC) has been excellent. Anyhow, after removing a drive from the PS3 and benching it, I have to format it to use it in Windows, which issues the TRIM command and restores the drive.

I suggest doing the same. If you are on Win7 and use the image backup: during the restore, format the drive, which will issue the TRIM command, then restore. Otherwise you could just go ahead and use it with TRIM now being passed. It'll be interesting. Best of luck.
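For anyone following along: before counting on TRIM being passed, it's worth confirming Windows is actually issuing it. On Win7 this can be checked from an elevated command prompt (standard Windows commands, nothing drive-specific; note that even with this enabled, TRIM still won't reach the drive through the Marvell 9128's driver stack):

```
fsutil behavior query DisableDeleteNotify
rem "DisableDeleteNotify = 0" means Windows is issuing TRIM.
rem If it reports 1, enable TRIM with:
fsutil behavior set DisableDeleteNotify 0
```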
 
From what I have read around here you would be better off running the drive off your Intel ports even if they aren't SATA 3. Maybe someone can confirm this.

I had an AMD 790 board with a Marvell 9128 controller and I got better speeds off of the AMD SATA 2 ports - just using a SATA 2 SSD however (Intel X-25M).
 

Sounds like a pretty good idea. I have a WHS machine that takes nightly backups, so if it doesn't improve in the next week or so, I will try that.
 
A couple of SandForce facts for anyone interested in them.

TRIM will not save you much, or quickly, on any SandForce-based drive after it is already degraded due to dirty blocks.

Reformatting will also not immediately TRIM the drive and force recovery. SandForce drives do not work like that, and the mapping/fresh-block expenditure will only get taxed even more than it already is.

Secure Erase (SE) is the ONLY way to wipe the controller's maps and restore factory-fresh performance. TRIM and massive amounts of dedicated GC recovery time can help, but SE is best.
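Side note for anyone without an HDDErase boot disk: the same ATA Secure Erase can be issued from a Linux live CD with hdparm. This is a destructive sketch, `/dev/sdX` is only a placeholder, and the drive must not be security-frozen:

```
# Check the security state; the drive must report "not frozen"
# (a suspend/resume cycle usually unfreezes it).
hdparm -I /dev/sdX | grep -i frozen

# Set a temporary ATA password, then issue the erase (wipes ALL data):
hdparm --user-master u --security-set-pass p /dev/sdX
hdparm --user-master u --security-erase p /dev/sdX
```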

Many don't realize that SandForce can be backed into a corner by certain usage scenarios, such as leaving Windows power options at the default 20-minute sleep while the BIOS is configured for S3/S4 sleep/hibernate. Cut power to the drive in any way while you're trying to recover it with GC, and you shut the internal algorithms down altogether. S1 sleeps with logged-off idle time are best for allowing GC to keep the drive nice and consistent. And that rule still applies even if you have TRIM passing through to the drive.

Yes, Intel's SATA 2.0 ports are generally faster than the Marvell's SATA 3.0 ports when used for an OS volume. Faster writes, deeper queue-depth performance, better caching, and lower latency are all very worthwhile trades for the Marvell's sequential-read gains.
 
Jumped 41 points in two days. Plus, the process for running Secure Erase on a boot drive is complicated. I will just wait it out.
x58after2days.jpg
 
I was bored today so I ran the HDDErase program and restored from a WHS Backup. As promised, I am back to par.
x58secureerase.jpg

Thanks for the help.
 
One last piece of advice?

Set your sleep/power options to High Performance and, in the advanced options, never let the drive power down (which it will still do after it enters S3/S4 anyway).

Then be sure to change the default 20-minute sleep setting to something like 2-4 hours. This should allow the drive more than sufficient time per week (as long as you sleep the machine, which is not my actual preference with ANY SandForce-based drives due to increased power-transition variables) to remain consistent in all but the most atypical write-load environments.
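For reference, both of those knobs can also be set from an elevated Win7 command prompt with powercfg; the timeout values below are just this post's suggestions, not gospel:

```
rem Never power down disks while on AC power:
powercfg -change -disk-timeout-ac 0

rem Stretch the default 20-minute sleep out to about 3 hours:
powercfg -change -standby-timeout-ac 180
```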

The basic rule with any SandForce drive is to keep a mental tally of ballpark write loads in the back of your mind, so that when you do beat the drives up more than usual you can allow additional dedicated recovery time following that particular work session. Basically, just adjust your recovery time on a day-to-day basis to coincide with the write loads that the drives actually see. This keeps the drive's fresh-block pool that much larger and avoids most throttling/performance issues over the long run. :thumbsup:

Or you could just take the easy way out and set the BIOS to an S1 sleep state. Then the drive will never lose power regardless of what the rest of the power options are set to. That will easily enable the recommended 10-20 hours per week of logged-off idle time to keep you feeling daisy fresh. 🙂

RAID and heavy users who have too small a volume for the amount of statically stored data and/or the amount of data written per session (because total ignorance of the hardware's limitations deserves a double whammy, I guess.. lol) also make use of idle recovery, and it works very well indeed.

During recovery mode the drive also gives the bonus of static data rotation and partial block consolidation to promote physical file-structure efficiency and improve wear leveling. At least, that's what all the hot chicks I hang out with do with their SandForce-based hardware. Dirty pun intended, of course. 😛

Good luck with it from here on out.
 

You sound like you know what you're talking about, so I don't want to go against you, but is there any way to make sure the drive keeps good power while still allowing sleep after 20 minutes? My PC consumes about 160W at idle, so I'd like it to go into S3 when I'm away from my desk.

Here are the current options:
sleepoptions.jpg
 
Short answer: no.

This is because those power-management settings are system-wide. You would need to use S1 sleeps only, which keep power to the fans and most motherboard devices, to ensure constant power to the drive.

If you use S3/S4, the system will power down the drive at whatever time you set it to enter S3/S4.

It's an issue that isn't often talked about, and it's why so many users get into trouble when they just boot up or log in with little thought for the drive's recovery time. TRIM alone will not save you in all scenarios or on all SSD controllers.

GC does everything TRIM does, and more.
 
(Completely nefarious thread hijack forthcoming.) Hi groberts101, I have another question re: my m4 array: am I going to get adequate GC time after I install seti@home on the rig? That program is almost constantly writing small amounts of data to the drive, probably enough to end up at 2-3 TB/year. I know that can be problematic on a smaller drive, but with nearly 500GB of actual storage on my 2x256GB SSDs I wonder if that sort of write schedule isn't actually manageable. I have a 2TB WD20EARS and a 640GB WD6400AAKS lying around that I can use if necessary, but my strong preference these days is to be all-SSD for everything if possible.
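Out of curiosity I put that write estimate in perspective with some quick arithmetic (the 3 TB/year and ~500GB figures are from the paragraph above; the rest is just back-of-envelope):

```python
TB_PER_YEAR = 3      # upper end of the seti@home write estimate
CAPACITY_GB = 500    # roughly the usable space of the 2x256GB m4 array

gb_per_day = TB_PER_YEAR * 1024 / 365
pct_of_array_per_day = 100 * gb_per_day / CAPACITY_GB

print(f"{gb_per_day:.1f} GB/day, ~{pct_of_array_per_day:.2f}% of the array per day")
# At that rate the array's full capacity is rewritten only about once
# every two months, a light duty cycle for GC to keep up with.
```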
 