Does anyone have any real data backing up the assertion that it would take too many resources to do lossless texture decompression? It doesn't seem like LZW/LZMA/DEFLATE would be used due to dictionary size, so it would have to be something quite different, which would make it hard to compare.
The problem is one of unknowns. With general-purpose compression, lossy or lossless, you can have an unknown amount of memory needed to decompress, an unknown amount of time to decompress, and an unknown amount of space that the compressed input data will take up (the other possibility is an unknown output size, but for textures and other imagery the output size is fixed, so the uncertainty lands on the input). It's less that such data couldn't exist than that gathering it isn't generally seen as a worthwhile effort, at least not for gaming hardware. If you can only save a little bit of space anyway, why even bother compressing it? It will just waste GPU resources.
On top of that, once you fix the decompression complexity, you guarantee that you will not get as much compression. So now, instead of saving 30% of your space, you save 15%. Wow. Meanwhile, lossy formats are giving you 2x, 4x, 8x, etc. the pixels to work with in any given amount of video memory. Even with 1-2GB being common, video memory is not free.
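As a rough illustration of the gap, here is a small back-of-the-envelope comparison for a single 2048x2048 mip level. The BC1/BC3 numbers are the standard fixed rates for those block formats (8 and 16 bytes per 4x4 block); the 30% lossless saving is just the assumed figure from above, not a measurement.

#include <stdio.h>

/* Rough texture-size comparison: raw RGBA8 vs. fixed-rate block formats.
 * BC1/DXT1: 8 bytes per 4x4 block  (0.5 bytes/texel, 8:1 vs. RGBA8)
 * BC3/DXT5: 16 bytes per 4x4 block (1 byte/texel, 4:1 vs. RGBA8)
 * A hypothetical lossless scheme saving ~30% is shown for contrast. */
int main(void)
{
    const unsigned w = 2048, h = 2048;          /* one mip level, no mip chain */
    const unsigned texels = w * h;

    unsigned raw = texels * 4;                  /* RGBA8: 4 bytes per texel    */
    unsigned bc1 = (w / 4) * (h / 4) * 8;       /* fixed 8 bytes per block     */
    unsigned bc3 = (w / 4) * (h / 4) * 16;      /* fixed 16 bytes per block    */
    unsigned lossless30 = raw - raw * 30 / 100; /* assumed 30% saving          */

    printf("raw RGBA8 : %u KB\n", raw / 1024);
    printf("BC1       : %u KB\n", bc1 / 1024);
    printf("BC3       : %u KB\n", bc3 / 1024);
    printf("lossless  : ~%u KB (assumed 30%% saving)\n", lossless30 / 1024);
    return 0;
}

So the same memory that holds one uncompressed texture holds several BC3 textures or eight BC1 textures, versus maybe one and a half with the assumed lossless scheme.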
It's much simpler to know how much space the input data will take, how much space will be needed to do the decompression, and to keep the decompression itself minimally complex (part of which is making that complexity a fixed value for any given chunk of compressed data). Doing so means that, sure, any single image looks worse than the lossless version, but you can have many times the pixels of that lossless image to work with, and do so with less time and power consumed (it's basically free on modern GPUs). The minimal complexity means essentially no impact on performance, and the larger data set allows more detailed textures, which in turn give more samples per output pixel for a given amount of memory space and bandwidth consumed, letting the game devs deliver better IQ than they could with no compression or with lossless compression.
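To make "fixed complexity per chunk" concrete, here is a minimal sketch of decoding one BC1/DXT1 block, the simplest of the S3TC family that GPUs actually use. Every block is exactly 8 bytes and decodes with the same handful of operations; this is a sketch only (it ignores the punch-through alpha mode, and real hardware does this in fixed-function logic), but the shape of the work is the point.

#include <stdint.h>

/* Decode one 8-byte BC1/DXT1 block into a 4x4 patch of RGBA8 texels.
 * Sketch of the standard format; the color0 <= color1 "punch-through"
 * alpha mode is omitted for brevity. */
static uint32_t rgb565_to_rgba8(uint16_t c)
{
    uint32_t r = (c >> 11) & 0x1F, g = (c >> 5) & 0x3F, b = c & 0x1F;
    r = (r << 3) | (r >> 2);          /* expand 5 -> 8 bits */
    g = (g << 2) | (g >> 4);          /* expand 6 -> 8 bits */
    b = (b << 3) | (b >> 2);
    return r | (g << 8) | (b << 16) | 0xFF000000u;
}

void decode_bc1_block(const uint8_t block[8], uint32_t out[16])
{
    uint16_t c0 = (uint16_t)(block[0] | (block[1] << 8));  /* endpoints, RGB565, LE */
    uint16_t c1 = (uint16_t)(block[2] | (block[3] << 8));

    uint32_t p[4];
    p[0] = rgb565_to_rgba8(c0);
    p[1] = rgb565_to_rgba8(c1);
    p[2] = p[3] = 0;
    /* Two interpolated colors at 1/3 and 2/3 between the endpoints. */
    for (int ch = 0; ch < 3; ch++) {
        uint32_t a = (p[0] >> (8 * ch)) & 0xFF;
        uint32_t b = (p[1] >> (8 * ch)) & 0xFF;
        p[2] |= (((2u * a + b) / 3u) & 0xFF) << (8 * ch);
        p[3] |= (((a + 2u * b) / 3u) & 0xFF) << (8 * ch);
    }
    p[2] |= 0xFF000000u;
    p[3] |= 0xFF000000u;

    /* 32 bits of selectors: 2 bits per texel, one byte per row of 4. */
    for (int y = 0; y < 4; y++)
        for (int x = 0; x < 4; x++)
            out[y * 4 + x] = p[(block[4 + y] >> (2 * x)) & 0x3];
}

The other half of the win is addressing: in a simple linear layout, the block covering texel (x, y) of a width-W texture sits at byte offset ((y/4)*(W/4) + x/4)*8, so the sampler fetches and decodes exactly the blocks it needs. There is no stream to unwind and no dictionary to keep around, which is exactly what you give up with LZ-style lossless schemes.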
If you are seeing artifacts from compression, the texture is too small and/or the asset went through lossy-to-lossy recompression (nothing like plain old JPEGs as source material... ugh!). If you are seeing blotchiness, it's probably undersampled by a great deal. If you are seeing aliasing with AA on, the shaders are probably written in a way that leaves them undersampled (not necessarily something texture compression can fix, and a common IQ problem seen in console ports). If you have many texels to sample for each pixel on your display, you are far better off than having one texel per display pixel, and even that is better than having a single texel get fuzzed out across tens of pixels of your display.
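That "texels per display pixel" figure is the same quantity the hardware estimates when it picks a mip level. A simplified sketch of that calculation, using screen-space UV derivatives (function and parameter names here are just for illustration, not any particular API):

#include <math.h>

/* Rough mip/LOD selection from per-pixel UV derivatives; the footprint
 * value is the texels-covered-per-display-pixel number discussed above. */
float texture_lod(float dudx, float dvdx, float dudy, float dvdy,
                  float tex_w, float tex_h)
{
    /* Texel-space footprint of one screen pixel along screen x and y. */
    float fx = sqrtf(dudx * tex_w * dudx * tex_w + dvdx * tex_h * dvdx * tex_h);
    float fy = sqrtf(dudy * tex_w * dudy * tex_w + dvdy * tex_h * dvdy * tex_h);
    float footprint = fmaxf(fx, fy);

    /* log2(footprint) > 0: many texels per pixel (plenty of detail to filter).
     * log2(footprint) < 0: one texel smeared across many pixels, the
     * "fuzzed out" case above. */
    return log2f(footprint);
}

Fixed-rate lossy compression is what lets you keep that footprint at or above one texel per pixel at much higher texture resolutions for the same memory budget, which is the whole point of the trade.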