I have the feeling that we are going to hit the "brick wall" Moore predicted sooner than everyone realizes when it comes to die shrinks. Not that it will be impossible to produce smaller transistors, but that it won't be cost effective compared to other ways of gaining performance.
If we're smart and want to continue the trend of roughly doubling processing power every 18 months, we need to start exploring more options than the essentially free gains we've been getting from die shrinks over the past decades. I know chip makers love die shrinks because they get more chips per wafer, but that only pays off if they can manufacture effectively at a given size. I'm sure there are lots of architecture and software changes that could be made to enhance performance. Another note: with the predicted trend towards general purpose programmable units, we may not need such a high proportion of CPU power anyway.
Even if processing power does hit a brick wall, it will just force software designers to get off their butts and write more efficient software. Has anyone noticed (at least on Windows) how bloated everything has become? I forget where, but I saw a test where someone ran a circa-1999 computer with Office against a circa-2008 computer with Office 2007, and the older computer got the tasks done in half the time. With all this abundance of processing power lately, it seems like some programmers have forgotten the word "optimization."
Anyways, that's my prediction... based on very little sound research, of course.
To the OP, one thing I found interesting is that (according to Wikipedia, the principal source of knowledge on the internet) the principal cause of errors in DRAM is cosmic radiation. Apparently DRAM is most affected by it because each bit is essentially a tiny capacitor, so intercepting some cosmic energy can cause it to flip state.
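Just for fun, here's a toy sketch of the idea (not how ECC hardware actually works, and the values are made up): a cosmic-ray strike is just one bit changing state, which you can model with an XOR, and a parity bit is about the simplest way to detect that it happened:

```python
# Toy model of a single-event upset (cosmic-ray bit flip) in a memory word,
# with a parity bit to detect it. Purely illustrative.

def parity(word: int) -> int:
    """Even-parity bit: 1 if the word has an odd number of set bits."""
    return bin(word).count("1") % 2

def flip_bit(word: int, position: int) -> int:
    """Model a cosmic-ray strike flipping one bit (capacitor changing state)."""
    return word ^ (1 << position)

stored = 0b10110010              # value written to memory (arbitrary example)
stored_parity = parity(stored)   # parity recorded alongside it

corrupted = flip_bit(stored, 5)  # a cosmic ray flips bit 5

# Any single-bit flip changes the parity, so the error is detectable --
# though parity alone can't tell you which bit flipped, let alone fix it.
print(parity(corrupted) != stored_parity)  # True
```

Real ECC memory goes further (e.g. Hamming-style codes that can correct single-bit errors), but the detection principle is the same.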