Current tape systems like LTO store data linearly, not randomly, and are actually very fast, faster even than an HDD's maximum sequential rate: well over 100 MB/s uncompressed. Getting data off tape would be very fast... getting it onto the tape in the first place would still require reading off this HDD at 2 MB/s...
However, if the HDD stored files in a manner similar to how they are accessed, or had a set of rules for directories filled with small files, you could get similar speed from an HDD, and the HDD would still be able to do random seeks on top of that. In some ways, things were better back in the days of 'dumb' allocators, which put writes down practically in sequence, even at the expense of heavy fragmentation in large, often-edited files. It's a solvable problem; there's just not much interest, since it would add great complexity for the benefit of a small portion of users.
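To put rough numbers on why placement matters so much, here's a back-of-envelope sketch with assumed drive characteristics (~12 ms average seek plus rotational latency, ~120 MB/s sequential; not measurements from any particular drive):

```java
// Quick model (assumed drive numbers, not measurements) of why placement
// dominates: reading 10,000 x 16 KB files scattered across the platter
// versus the same files laid out in sequence.
class SeekMath {
    public static void main(String[] args) {
        int files = 10_000;
        double fileMB = 16.0 / 1024;  // 16 KB per file
        double seekS = 12.0 / 1000;   // avg seek + rotational latency, ~12 ms
        double seqMBs = 120;          // sustained sequential throughput

        double scatteredS = files * (seekS + fileMB / seqMBs);
        double sequentialS = files * fileMB / seqMBs;
        double totalMB = files * fileMB;
        System.out.printf("scattered:  %.1f s (~%.1f MB/s)%n",
                scatteredS, totalMB / scatteredS);
        System.out.printf("sequential: %.1f s%n", sequentialS);
        // ~121 s seek-bound (about 1.3 MB/s) vs ~1.3 s sequential
    }
}
```

Which is exactly the effective ~2 MB/s small-file behavior in question: the seek time swamps the transfer time until the files sit next to each other.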
If SSDs were not an option on the horizon, it likely would have been dealt with, and I'm about 99% sure that NTFS could handle it in a backwards-compatible way
(extents of 10 MB or thereabouts dedicated to small files, such that several levels of a directory tree could live in a single extent, and files would be optimally regrouped during defrag passes; the whole extent would need to be read to read one file, but with enough RAM, that would let all the other files be cached along with it, allowing faster editing and copying; if the disk were plugged into an older version of Windows, the metadata about the extent could be ignored or removed, and you just wouldn't get the benefits when editing/copying those files in that older OS version). The manhours required to develop, test, and maintain this sort of thing, however, would be enough that I could see people working on FSes considering it, and then deciding that it's too much work for too little gain.
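A toy sketch of the allocator side of that idea (hypothetical, nothing like real NTFS internals): greedily pack small files into ~10 MB extents, the kind of regrouping a defrag pass could do so that one sequential read warms the cache for a whole subtree. The size cutoff and first-fit strategy are assumptions for illustration.

```java
import java.util.*;

// Hypothetical sketch: pack a directory tree's small files into ~10 MB
// extents so a single sequential read pulls the whole group into cache.
class ExtentPacker {
    static final long EXTENT_BYTES = 10L * 1024 * 1024;  // assumed extent size
    static final long SMALL_FILE_LIMIT = 256L * 1024;    // assumed "small" cutoff

    // Greedy first-fit: returns extents as lists of file sizes.
    static List<List<Long>> pack(List<Long> fileSizes) {
        List<List<Long>> extents = new ArrayList<>();
        List<Long> current = null;
        long used = EXTENT_BYTES; // forces allocation of the first extent
        for (long size : fileSizes) {
            if (size > SMALL_FILE_LIMIT) continue; // big files use normal allocation
            if (used + size > EXTENT_BYTES) {
                current = new ArrayList<>();
                extents.add(current);
                used = 0;
            }
            current.add(size);
            used += size;
        }
        return extents;
    }

    public static void main(String[] args) {
        // 5,000 files of 16 KB each: ~80 MB of small files.
        List<Long> sizes = new ArrayList<>(Collections.nCopies(5000, 16L * 1024));
        // 10 MB / 16 KB = 640 files per extent -> 8 extents for 5,000 files
        System.out.println("extents needed: " + pack(sizes).size());
    }
}
```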
What really sucks on your end, though, is that the business users who would most benefit from high-performance SSDs
(i.e., those old Kingstons and the like, with high write amplification and HDD-like 4 KB random performance, would not qualify, but you don't need the latest and greatest) are all too often stuck with a vendor that either doesn't offer them at all, or only offers them in higher-end models than you want
(you just want an SSD for C:, not a Xeon, a Quadro, and four computers' worth of cooling),
and then they won't even tell you what you're getting. The best I've seen from the big vendors is that Lenovo tells you it uses MLC flash... well, that's more than Dell tells you, but for your needs, you want a real make and model[, dammit, ]and you want it in a lesser computer. So, if you finally convince the guys in charge of the wonders of good SSDs, what are the actual SSD options you'll have when they go to buy new computers? Even with an uphill battle, if you can specify one from Newegg, you'll be better off than many places.
Poor software engineering practices don't help, either. Developers have gotten careless because disk SPACE is free, but they don't think about the consequences for disk IO when you have to download, decompress, and install that 500 MB printer driver... space means nothing when you don't have the IOs to utilize it, IMO.
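Back-of-envelope with the numbers already in this thread (assumed, not benchmarks): the same 500 MB payload that streams in seconds takes minutes once it's thousands of scattered small files.

```java
// Rough arithmetic (assumed numbers): a 500 MB driver install on a drive
// doing healthy sequential IO versus seek-bound small-file IO.
class InstallMath {
    public static void main(String[] args) {
        double payloadMB = 500;
        double seqMBs = 100;   // sequential HDD throughput
        double smallMBs = 2;   // seek-bound small-file throughput
        System.out.printf("sequential: %.0f s%n", payloadMB / seqMBs);   // 5 s
        System.out.printf("seek-bound: %.0f s%n", payloadMB / smallMBs); // 250 s
    }
}
```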
I see you use HP printers 😛. I think HP still makes some great small workhorses, but for drivers and software, come on over to Samsung and Brother.
Management of software is also quite often to blame. When you have to work with what already exists, you don't have the option to rip the guts out and make it smaller and better. You might also have tight deadlines and not enough time to write all the code properly. You might have had poor communication about requirements, too, without time to make changes the right way. You may also be given coding and regulatory requirements which stupidly enforce
(or which your management believes enforce) technical constraints that serve no real purpose except imaginary CYA on their part. On top of that, it's so often easier to convince non-technical people that gradual modification is superior to updating the requirements and re-implementing. How much software can improve in a short timespan is quite often lost on those in charge, and projects past a certain size can't be stealthily rewritten through conspiracies between devs, admins, and users, hidden from management
(been there, done that 🙂). On top of all that, you could be dealing with framework lovers, or people who think in some other language and write the language you're using as if it were that one
(never worked with anyone like that, but have fixed horrible buggy bloated code made by such people).
For the pictured case: Java, like some other languages, forces many more files into existence than really should exist, with many of them containing only a handful of real lines of code, and half of those being just naming wrappers of various kinds. While Java isn't alone in this file=module thing, it's insane how many files you end up needing for what other languages let you do in a few dozen lines of code in one file.
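As a toy illustration (hypothetical names, not from the pictured project), here's the one-trivial-behavior-across-four-files layout Java convention pushes you into, collapsed into a single listing:

```java
// Toy illustration (hypothetical names): one trivial behavior spread across
// what Java convention turns into four separate .java files.

// Greeter.java -- the interface
interface Greeter { String greet(String name); }

// GreeterImpl.java -- the one-line implementation
class GreeterImpl implements Greeter {
    public String greet(String name) { return "Hello, " + name; }
}

// GreeterFactory.java -- the naming wrapper
class GreeterFactory {
    static Greeter create() { return new GreeterImpl(); }
}

// GreeterDemo.java -- the entry point
class GreeterDemo {
    public static void main(String[] args) {
        System.out.println(GreeterFactory.create().greet("world")); // Hello, world
    }
}
```

Four tiny files, most of them naming wrappers, and the filesystem ends up with thousands of them for a real project, which is exactly the small-file IO problem from earlier.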