Question: USB flash media - performance and reliability


mikeymikec

Lifer
May 19, 2011
The only USB drives I've had go faulty on me were the physically longer ones, and because I typically put them in the money pocket of my jeans, they'd get bent a little when I sat down. Once I discovered that, I kept the rubber cap from one of the drives that died and bought this:


It's pretty sturdy and with the cap on there's no chance any crap can get in the plug end.

I agree with corkyg though: when you can buy a flash drive for a fiver, I wouldn't expect it to be the most durable bit of kit known to man. My flash drive's contents are mostly backed up regularly, and I doubt I'd lose much sleep over the rest.

In terms of durability, I have one other bit of advice: Avoid the ones with plastic USB plugs.

I still have my first work flash drive: 4GB, shaped like an elongated M&M (about an inch long).
 

mindless1

Diamond Member
Aug 11, 2001
Definitely avoid the ones with plastic USB plugs, or the cheap slider type like this Teamgroup C145 I got recently:

It is so flimsy, I thought for sure the slightest bump in a USB port would break it. However, I popped the clamshell open, put it in the extended (versus retracted) position, cut off the excess empty space at the back, and filled it with epoxy. It is now one of my most durable flash drives, and an unintended (though not surprising) side effect is that with the cut end filled with clear epoxy, the entire end glows from the access LED, which is a nice effect.

I would have just returned it, but it was 128GB for $10 at the time, and I knew I could epoxy-fill it since I had done this before with other drives whose clamshells cracked apart or wouldn't stay in the extended position. You just have to make sure that no epoxy gets into the USB plug; I usually put a sliver of transparent tape over the rear of the plug to form a dam that keeps epoxy from seeping in. It seems like a lot of work, but it really only takes a couple of minutes.
 
  • Like
Reactions: VirtualLarry

mindless1

Diamond Member
Aug 11, 2001
Reliability update: That cheap Teamgroup C145 I mentioned in the prior post, the one I epoxy-filled, entered service a little over 8 months ago, and I've been making weekly automated backups with it (a redundant backup; I didn't trust it with the primary copy, though maybe I could have).

So far it has survived 34 cycles of that. Aside from my initial full-capacity write to confirm it actually had the stated capacity, I'm only putting 34+GB on it, so all that extra free space is a big buffer for the number of write cycles it will endure. It's just a convenient, inexpensive way to keep a redundant backup. Although I had initial regrets about buying it, it is serving its purpose: a flash drive that doesn't need any particular performance level, it just has to keep working.
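Setting aside whether a cheap controller honors it, that "big buffer" intuition can be put into rough numbers. An illustrative back-of-envelope estimate, assuming ideal wear leveling and ignoring write amplification:

```python
# Rough endurance math for the backup workload described above.
# Assumes ideal wear leveling; real controllers add write amplification.
capacity_gb = 128        # Teamgroup C145 stated capacity
weekly_write_gb = 34     # approximate data written per backup cycle
cycles_so_far = 34       # weekly backups completed

total_written_gb = weekly_write_gb * cycles_so_far    # 1156 GB
avg_pe_cycles = total_written_gb / capacity_gb        # ~9 P/E cycles per cell

print(f"Total written so far: {total_written_gb} GB")
print(f"Average P/E cycles per cell: {avg_pe_cycles:.1f}")
```

Even cheap consumer TLC is commonly rated for at least several hundred program/erase cycles, so at this pace the rated write endurance is unlikely to be what fails first.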

As far as reliability goes, my SanDisk Extreme CZ80 32GB is the leader. It has been plugged into a 24/7-running system since 2014 (the system has changed since then, but it was always plugged into my most-used machine the whole time), accessed practically daily to run the Thunderbird portable email client among other backup activities, and it is still multiple times faster than the Teamgroup C145 and most other flash drives I have. It was $20 (on sale) back in 2014; money well spent.

That experience led me to buy newer SanDisk products like the Ultra Flair, and it's crap in comparison: it overheats, and writes slow down after roughly 1GB has been written. It hasn't failed yet, so that's something, but I'd never leave it plugged in 24/7 due to the temperature issue.
 

taisingera

Golden Member
Dec 27, 2005
If money is no object, go for the SanDisk Extreme Pro. I bought a couple of Mushkin Ventura Plus drives about 5 years ago that had great read and good sequential write speeds, but lately they have been acting flaky on my new motherboard (an Asus H470M) unless I plug them into the ports on the motherboard itself. My past couple of flash drives have been very disappointing: a SanDisk Ultra Flair 32GB that slows down after 1GB of writes, a couple of Samsung BAR drives (32 and 64GB) whose write speeds are about 25MB/s with micro-stutters while writing, and a Gorilla 64GB USB 3.0 that only writes at 19MB/s with even longer stutters. Luckily, I just picked up a Samsung FIT Plus 128GB that reads at the rated 400MB/s and writes at 64MB/s, which I consider good enough. Anything over 50MB/s is good enough for me on a flash drive.
 
  • Like
Reactions: mindless1

mindless1

Diamond Member
Aug 11, 2001
Reliability update: That cheap Teamgroup C145 I mentioned in the prior post, the one I epoxy-filled, entered service a little over 8 months ago, and I've been making weekly automated backups with it (a redundant backup; I didn't trust it with the primary copy, though maybe I could have).

Reliability update 2: That Teamgroup C145 is now almost exactly 2 years old. It has been filled a few times for temporarily moving files around, but otherwise has had about 0.5GB written weekly, roughly 104 times. I plugged it in today and about 3GB of files are missing or corrupt. There was no error message, but my backup program thinks they need syncing again, and they are not recently written files, so it isn't as though the areas those files occupied sat in free space that was being written and erased a lot.

No real data loss but no trust left. I'll keep using it in the same redundant backup role just to see what happens in the future, but only as an experiment.
 
Jul 27, 2020
Optane memory with M.2 to USB 3.0. :)
I have that, actually. Got a 64GB Optane used for around $50 with more than 95% life remaining. The thing I disliked very much is that Intel's retail price for these was very high, yet they give users less than 60GB of formatted capacity on a product advertised as 64GB. I mean, what the heck? If you are gonna charge this much, at least don't skimp on the bits.
 

nosirrahx

Senior member
Mar 24, 2018
I have that, actually. Got a 64GB Optane used for around $50 with more than 95% life remaining. The thing I disliked very much is that Intel's retail price for these was very high, yet they give users less than 60GB of formatted capacity on a product advertised as 64GB. I mean, what the heck? If you are gonna charge this much, at least don't skimp on the bits.
It really is too bad that Optane could not compete with NAND in the metrics that mattered commercially. Optane had untouchable endurance and latency compared to NAND, but it loses in key categories, namely price-to-capacity ratio and package-size-to-total-storage ratio.

The only way to get anything approaching reasonable capacity and fast sequential speed was to go with PCIe cards or U.2 drives. Looking at the 905P 22110 380GB drive, it looks like it would have been possible to build a 2280 256GB drive, but that would have been crazy expensive for the capacity and would have sold even worse than the 118GB SSDs.

I did a test to see what kind of potential there was in combining NAND and Optane in RAID, and while the results were pretty awesome, actually building a device like this would have been impossibly expensive. On a single PCIe card you would have:

NAND controller
NAND chips
Optane controller
Optane chips
RAID controller

The results were pretty great, though. I used a 980 Pro and a 905P 22110 SSD. In RAID 0, sequential read was a little better than the 980 Pro alone and 4KQ1T1 was almost as good as the 905P. All in all, this still only doubled the capacity of the 905P, so not even 1TB for the price of a 4TB SSD, and that doesn't even include the cost of a RAID controller.
 
  • Wow
Reactions: igor_kavinski
Jul 27, 2020
The results were pretty great, though. I used a 980 Pro and a 905P 22110 SSD. In RAID 0, sequential read was a little better than the 980 Pro alone and 4KQ1T1 was almost as good as the 905P.
Thanks for that bit of info. Really cool if you can expand capacity and get the most important benefit of Optane.
 

IntelUser2000

Elite Member
Oct 14, 2003
The thing I disliked very much is that Intel's retail price for these was very high, yet they give users less than 60GB of formatted capacity on a product advertised as 64GB. I mean, what the heck? If you are gonna charge this much, at least don't skimp on the bits.

You know all storage drives do that, right? My "128GB" is actually 118GB. In the storage world, 1GB = 1 billion bytes, while elsewhere 1GB = 1,073,741,824 bytes.
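The conversion behind that discrepancy is plain arithmetic (the function name here is just for illustration):

```python
GIB = 1024 ** 3  # 1,073,741,824 bytes - what most OSes report as a "GB"

def advertised_to_reported(label_gb: float) -> float:
    """Convert a drive label's decimal GB (10^9 bytes) to binary GiB."""
    return label_gb * 1_000_000_000 / GIB

print(f"{advertised_to_reported(64):.1f}")   # 59.6 - why a "64GB" drive shows under 60
print(f"{advertised_to_reported(128):.1f}")  # 119.2 - before filesystem overhead
```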

It really is too bad that Optane could not compete with NAND in any key metrics. Optane had untouchable endurance and latency compared to NAND, but loses in key categories namely price to capacity ratio and package size to total storage ratio.

Well, I sincerely believe that if they had stuck with it for a few years and opened it up so AMD and other companies could use it, it would have succeeded.

NAND took over TEN years to succeed, and 5 additional years to go mainstream. Optane has been on the market for just 4 years now.

It would never reach NAND on a price/GB basis, but the much enhanced performance would have justified it.

Actually the storage models never made sense. The real thing was the DIMMs.

But again, Pat Gelsinger was trained under Andy Grove, the same legendary CEO who got Intel off the memory train, and the company benefited enormously from it. Gelsinger said they're never going to have such memory-related side projects ever again. I'm actually sad about that, but if it allows the company to focus on its core strength and get back on track (and more), I'm more than OK with it.
 
Jul 27, 2020
You know all storage drives do that, right?
Which in my opinion is sleazy. Intel should have done better. There is no rule that there can't be non-standard sizes: Crucial came out with 750GB and 1050GB SSDs, and 512GB SSDs are also available that offer the user a bit more space than 500GB ones. My point was, Intel could have given the user some value in return for the high price.
 
  • Haha
Reactions: Pohemi
Jul 27, 2020
I'm actually sad about that, but if it allows the company to focus on its core strength and get back on track (and more), I'm more than OK with it.
Hopefully after they are back on track and have tons of cash to waste, maybe they will return to figuring out what to do with Optane. They have a huge opportunity to make deals with HDD makers to speed up their drives with high-endurance, enterprise-level Optane. With tens of millions of HDDs shipped every year, Intel has the volume right there to make large-scale Optane manufacturing feasible and profitable.
 
  • Like
Reactions: VirtualLarry

Insert_Nickname

Diamond Member
May 6, 2012
Which in my opinion is sleazy. Intel should have done better. There is no rule that there can't be non-standard sizes: Crucial came out with 750GB and 1050GB SSDs, and 512GB SSDs are also available that offer the user a bit more space than 500GB ones. My point was, Intel could have given the user some value in return for the high price.

That was just because Micron/Intel's 1st-gen 3D NAND happened to use a 48Gbit die size, hence the different-than-usual* drive sizes. There is nothing to stop anyone from using "odd" sizes; you could do a 96/144Gbit die if anyone cared to.

*I actually liked the 180GB capacity ones. Could fit the OS and still have 128GB+ available for the rest.
 
Jul 27, 2020
*I actually liked the 180GB capacity ones. Could fit the OS and still have 128GB+ available for the rest.
It's such an odd size that if you see a used older laptop advertising that amount of space, you can be 99% sure it is using an Intel SSD. I got a cheap Dell Latitude like that for a guy, and Intel's SSD Toolbox showed it had 90% of its life remaining. He used it for 3 or 4 years before it started feeling slow to him.
 

IntelUser2000

Elite Member
Oct 14, 2003
They were barely making any money on the thousand-dollar 905P drives, never mind the comparatively dirt-cheap Optane Memory-branded client parts.

With tens of millions of HDDs shipped every year, Intel has volume right there to make large scale Optane manufacturing feasible and profitable.

Did you read what I said? Gelsinger said he'd put an end to any memory-like ventures in the future, period.
 
  • Haha
Reactions: igor_kavinski
Jul 27, 2020
I did something either really stupid or pretty clever. Went to a shop that doesn't see much business. Overpaid slightly for a 64GB and a 32GB Sandisk UFD.

Can anyone guess the logic behind my decision? :p

You folks have 24 hours to mull this over :D

Winner gets my deepest respect for figuring out how my brain works :)
 

nosirrahx

Senior member
Mar 24, 2018
Well, I sincerely believe that if they had stuck with it for a few years and opened it up so AMD and other companies could use it, it would have succeeded.

NAND took over TEN years to succeed, and 5 additional years to go mainstream. Optane has been on the market for just 4 years now.

It would never reach NAND on a price/GB basis, but the much enhanced performance would have justified it.

Actually the storage models never made sense. The real thing was the DIMMs.

But again, Pat Gelsinger was trained under Andy Grove, the same legendary CEO who got Intel off the memory train, and the company benefited enormously from it. Gelsinger said they're never going to have such memory-related side projects ever again. I'm actually sad about that, but if it allows the company to focus on its core strength and get back on track (and more), I'm more than OK with it.

It's such a tough call, but sometimes you have to have faith in your technology and push through the growing pains. I was a big supporter of Optane all the way through, but I never got to the point where I felt comfortable recommending it to anyone due to the problems I mentioned above. Out of everything that killed the possibility of Optane SSDs being the storage choice for high-end systems, it was the capacity in the M.2 form factor. There are people who build with no real budget, and they would have bought Optane drives for OS/apps if those drives could compete with high-end M.2 drives, but the reality was never even close. Even if they could do it, imagine the cost of a 4TB M.2 Optane drive.
 
  • Like
Reactions: igor_kavinski

nosirrahx

Senior member
Mar 24, 2018
*I actually liked the 180GB capacity ones. Could fit the OS and still have 128GB+ available for the rest.

I think I might be the only person on earth to install 2 P1600X Optane drives in RAID 0 on a NUC8i7HVK. You get great sequential speed, most of the 4KQ1T1 performance that Optane provides, and a capacity you can actually use... sort of. That is the problem, though: it's insanely expensive to force Optane into a working configuration.
 

IntelUser2000

Elite Member
Oct 14, 2003
Out of everything that killed the possibility of Optane SSDs being the storage choice for high-end systems, it was the capacity in the M.2 form factor. There are people who build with no real budget, and they would have bought Optane drives for OS/apps if those drives could compete with high-end M.2 drives, but the reality was never even close. Even if they could do it, imagine the cost of a 4TB M.2 Optane drive.

The capacity issue is twofold, I assume.

One, the high-capacity ones are too expensive, and the volumes would be very low.

Two, at least for now, the only available die config is the 16GB size. That's why it had the large M.2 22110 form factor: four dies for 64GB (16GB dies in a 4-Hi config).

There seemed to be evidence of an 8-Hi version, but it wasn't common.

Time would have solved the capacity issue; Gen 2 Optane with 4 layers would have gotten us 32GB dies. Then again, storage capacity continually increases, so 1TB today is 2TB tomorrow, meaning they'd never be able to fit a big enough Optane drive into the M.2 2280 form factor.

But I NEVER believed the SSD market was the strength, because the PCI Express bottleneck is too great. A 50-100x latency difference compared to the DIMM version wastes the cost of Optane: you are paying for the tech but not getting it unleashed.

I think the alternate future would have been (or, if say I were a client manager for Optane, what I would have strongly pushed for) a DIMM version, working with Microsoft and Linux vendors to build a true instant-boot system. We'd start with something like an "Optane Sleep/Hybrid Sleep" option in the Start Menu, where the system would store its state in Optane DIMMs and load from them, completely eliminating boot time and sleep power use, which would have been great in laptops.

Then starting from there, we'd expand using application support with software partners. An optimized system would in theory completely eliminate loading time, because the Storage-->Memory transfer that's deeply embedded in operating systems would be rearchitected to skip that completely.

That would have been something they could have called "Optane Memory 2.0" or similar. I think they missed the opportunity with plain-vanilla Optane Memory: they could have used the Memory Drive software to let Optane Memory act as slow memory, with systems treating it as such, so it gets accessed before pagefiling to storage. Yes, you can do that manually, but doing it automatically would have gotten far more support. They could also have advertised it as "physical virtual memory" or something, so the system would prioritize it before going to the pagefile. Then Optane Memory 2.0 would have made it into real memory, with DIMMs on boards.

That's the grand vision I saw, and why I didn't like to acknowledge its demise.
 
  • Like
Reactions: VirtualLarry

thecoolnessrune

Diamond Member
Jun 8, 2005
If money is no object, go for the SanDisk Extreme Pro. I bought a couple of Mushkin Ventura Plus drives about 5 years ago that had great read and good sequential write speeds, but lately they have been acting flaky on my new motherboard (an Asus H470M) unless I plug them into the ports on the motherboard itself. My past couple of flash drives have been very disappointing: a SanDisk Ultra Flair 32GB that slows down after 1GB of writes, a couple of Samsung BAR drives (32 and 64GB) whose write speeds are about 25MB/s with micro-stutters while writing, and a Gorilla 64GB USB 3.0 that only writes at 19MB/s with even longer stutters. Luckily, I just picked up a Samsung FIT Plus 128GB that reads at the rated 400MB/s and writes at 64MB/s, which I consider good enough. Anything over 50MB/s is good enough for me on a flash drive.

This is an old post in the thread, but I figured I'd add my own report: I've had a 64GB SanDisk Extreme start giving me read errors. The drive is 9 years old and has served almost every work day of its life as a Portable Apps drive, running things like the browser I'm typing this post on, in addition to its myriad data-shuffling uses. I'm definitely pleased with the life I got out of it.

Fast forward all these years and I can get a 512GB SanDisk Extreme PRO for the price I paid for a 64GB SanDisk Extreme 9 years ago. Of course, it's all but certain that while the 64GB Extreme was MLC, the current Extreme PRO is likely TLC; I know nothing is gained for free. That said, my hope is that TLC is mature enough now that the combination of decently sourced NAND and the sheer amount of overprovisioning relative to my storage use will mean a replacement serves just as well for many years to come.
 
  • Like
Reactions: Insert_Nickname

nosirrahx

Senior member
Mar 24, 2018
I wonder if Optane had some potential as an intelligent cache solution. I envision the OS writing a file to disk and if that file is below a certain threshold in size, a 2nd copy is mirrored to an Optane drive. On disk read, the OS then reads from Optane first if the file is mirrored there.

This would solve all of the problems with Intel's Optane caching: it was platform-, OS-, and generation-specific; it accelerated SATA drives only; and only small (and kind of slow) Optane cache drives were supported.

Related to this, I tried to use a P1600X Optane drive as Optane cache but it seems to be the only M.2 form factor drive that is not Optane cache software compatible.

Instead, I am trialing relocating files and folders prone to small random reads/writes. The OS is on a SABRENT Rocket 4 Plus, and I moved, by official means or by junctioning, things like prefetch, pagefile, swapfile, all the temp folders, the Edge cache, and pretty much anything that rarely gets used and ends up being GBs in size. Booting up is a little faster, but other than that the only real benefit is getting 36GB of data off my main OS drive. One interesting note is the LCU folder: it is used when Windows updates are unpacked and gets filled with an absolutely insane number of small files. Enumerating it on the P1600X is easily 3 times faster, so I expect Windows updates to also install faster.

As with everything Optane though, this is totally not worth the cost.
 
  • Like
Reactions: igor_kavinski

thecoolnessrune

Diamond Member
Jun 8, 2005
I wonder if Optane had some potential as an intelligent cache solution. I envision the OS writing a file to disk and if that file is below a certain threshold in size, a 2nd copy is mirrored to an Optane drive. On disk read, the OS then reads from Optane first if the file is mirrored there.

This would solve all of the problems with Intel's Optane caching: it was platform-, OS-, and generation-specific; it accelerated SATA drives only; and only small (and kind of slow) Optane cache drives were supported.

Related to this, I tried to use a P1600X Optane drive as Optane cache but it seems to be the only M.2 form factor drive that is not Optane cache software compatible.

Instead, I am trialing relocating files and folders prone to small random reads/writes. The OS is on a SABRENT Rocket 4 Plus, and I moved, by official means or by junctioning, things like prefetch, pagefile, swapfile, all the temp folders, the Edge cache, and pretty much anything that rarely gets used and ends up being GBs in size. Booting up is a little faster, but other than that the only real benefit is getting 36GB of data off my main OS drive. One interesting note is the LCU folder: it is used when Windows updates are unpacked and gets filled with an absolutely insane number of small files. Enumerating it on the P1600X is easily 3 times faster, so I expect Windows updates to also install faster.

As with everything Optane though, this is totally not worth the cost.

Yeah, I would say Optane has been used quite a bit as part of caching solutions. PrimoCache is a Windows-focused product that shows a lot of success with it, but as a block-level cache it simply uses the most frequently requested data (or incoming writes, in deferred-write mode) to determine placement. As far as I'm aware the software does not use file size as a criterion at this time: https://www.romexsoftware.com/en-us...C-50-18-01-PrimoCache-and-Optane-SSD-v1.2.pdf

In ZFS land, Optane is nearly perfect for the ZIL on storage systems that need one. OpenZFS now also has the Special vdev class, which you can build out of devices like Optane to make metadata writes (and, optionally, files up to a size you specify) much faster, as they go to the Special vdev instead of your other, likely slower vdevs (made of HDDs, for instance). https://openzfs.github.io/openzfs-docs/man/7/zpoolconcepts.7.html#Special_Allocation_Class

VMware vSAN has also advertised using Optane as a Read / Write Cache device for their vSAN Device Groups when used in combination with an All-NVMe topology (Optane for Cache device, TLC NVMe for the capacity devices).

Dell EMC PowerMax (the EMC VMAX successor) uses both Optane DIMMs and Optane SSDs. NetApp uses Optane DIMMs. Pure uses Optane SSDs, but only as a read cache.

So the enterprise is definitely using Optane, though not indefinitely, given Intel's announcement. Most will migrate to alternatives based on HPE/Sandisk, Everspin, Micron, and Kioxia. Since Intel was using a lot of development funds to enable manufacturers to create Optane solutions, there were advertising slicks all over the place. I don't think we'll see that sort of fanfare with future SCM-based products.
 
  • Like
Reactions: nosirrahx

Insert_Nickname

Diamond Member
May 6, 2012
This is an old post in the thread, but I figured I'd add my own report: I've had a 64GB SanDisk Extreme start giving me read errors. The drive is 9 years old and has served almost every work day of its life as a Portable Apps drive, running things like the browser I'm typing this post on, in addition to its myriad data-shuffling uses. I'm definitely pleased with the life I got out of it.

As a portable apps drive it has worked harder than most. 9 years is well done in those conditions.

I've always had pretty good luck with Sandisk drives. Glad to see I'm not the only one.
 

mindless1

Diamond Member
Aug 11, 2001
Reliability update 2: That Teamgroup C145 is now almost exactly 2 years old. It has been filled a few times for temporarily moving files around, but otherwise has had about 0.5GB written weekly, roughly 104 times. I plugged it in today and about 3GB of files are missing or corrupt. There was no error message, but my backup program thinks they need syncing again, and they are not recently written files, so it isn't as though the areas those files occupied sat in free space that was being written and erased a lot.

No real data loss but no trust left. I'll keep using it in the same redundant backup role just to see what happens in the future, but only as an experiment.
This is just weird. One day short of a year since my last Teamgroup reliability update, and it had been working flawlessly until today.

I plugged it in to sync a backup, same as always, and it showed about 1.8GB worth of files as changed that hadn't been. I ran Check Disk and it found nothing wrong, so I decided to format it and start the backup over, then see if the same problem persists. It's still just an experiment, not a storage device I need to rely on.

It really shouldn't have exhausted its write-cycle limit yet, if it has any remotely modern level of wear leveling, as the typical backup data size was under 1GB and it still had around 40GB of free space. I had a 2nd flash drive holding a parallel copy of the same data, and it confirmed the problem is the Teamgroup flash drive, not the source files.

Anyone know of a more comprehensive test for flash drives, besides just copying some files and doing checksum comparisons? I'd have thought Check Disk would find problems, but it didn't.