I don't see parallel programming becoming any more "mainstream" unless we come up with a new, easier model for it. Dealing directly with threads, synchronization primitives, etc. is incredibly difficult even for good, experienced programmers. It's very easy to introduce race conditions into your code, but the bugs they cause are typically a huge pain to debug. In many (most, really) cases parallel programming isn't even necessary, or gives little to no benefit.
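To make that concrete, here's a minimal sketch of the classic lost-update race in C# (the field name and loop counts are just for illustration):

```csharp
using System;
using System.Threading.Tasks;

class RaceConditionDemo
{
    static int _counter = 0;

    static void Main()
    {
        // Two tasks increment the same field with no synchronization.
        // _counter++ is a read-modify-write, not an atomic operation,
        // so increments from the two tasks can interleave and be lost.
        Task t1 = Task.Run(() => { for (int i = 0; i < 1_000_000; i++) _counter++; });
        Task t2 = Task.Run(() => { for (int i = 0; i < 1_000_000; i++) _counter++; });
        Task.WaitAll(t1, t2);

        // Expected 2,000,000; in practice this usually prints less,
        // and a different number on every run.
        Console.WriteLine(_counter);
    }
}
```

The nasty part is that nothing crashes and nothing warns you; the program just quietly produces a different wrong answer each time.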
Take the GUI example - GUIs are already multithreaded, and have been for a long time. At least, all of the decent ones that don't lock up every time you click a button have been. It's fairly rare for a GUI to need to do long-running, CPU-intensive work. Threading in this context exists simply so that the UI stays responsive even though your file load or network request may take a second or two. In those situations it's usually much easier to use something like C#'s async/await than "true" multithreading. Yet even async/await has pitfalls and edge cases that I'd say the vast majority of C# developers aren't aware of.
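A minimal WinForms sketch of what I mean (the URL and control layout are placeholders):

```csharp
using System;
using System.Net.Http;
using System.Windows.Forms;

class MainForm : Form
{
    static readonly HttpClient Client = new HttpClient();

    public MainForm()
    {
        var button = new Button { Text = "Load", Dock = DockStyle.Top };
        var label = new Label { Text = "Idle", Dock = DockStyle.Fill };
        Controls.Add(label);
        Controls.Add(button);

        // async void is acceptable for event handlers (and only there).
        button.Click += async (s, e) =>
        {
            label.Text = "Loading...";
            // The await frees the UI thread while the request is in
            // flight; the continuation is posted back to the UI thread
            // when it completes, so the window never freezes.
            string body = await Client.GetStringAsync("https://example.com");
            label.Text = $"Got {body.Length} chars";
        };
    }

    [STAThread]
    static void Main() => Application.Run(new MainForm());
}
```

One of the pitfalls I'm talking about: replace that `await` with `.Result` and this exact code deadlocks the UI thread, because the continuation is waiting for a thread that's blocked waiting for the continuation.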
Another example, since we're talking C#. Microsoft made a big deal a few years ago when they added async/await support for ASP.NET MVC/WebAPI applications. I guess it does work for making highly scalable applications (think Stack Overflow), but I've always read that if your web application is hitting a single database server (which covers the great majority of business apps, I'd wager), async/await gives basically no benefit, because the database will become a bottleneck before the web server's thread pool does.
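For context, the shape of an async action in classic MVC looks roughly like this (connection string, table, and controller are all made up):

```csharp
using System.Data.SqlClient;
using System.Threading.Tasks;
using System.Web.Mvc;

public class OrdersController : Controller
{
    // Hypothetical connection string, just to show the shape.
    private const string ConnString =
        "Server=.;Database=Shop;Integrated Security=true";

    public async Task<ActionResult> Count()
    {
        using (var conn = new SqlConnection(ConnString))
        {
            await conn.OpenAsync();
            using (var cmd = new SqlCommand("SELECT COUNT(*) FROM Orders", conn))
            {
                // The request thread goes back to the pool while the
                // database works. That only helps if thread-pool
                // exhaustion, not the database, is your bottleneck.
                int count = (int)await cmd.ExecuteScalarAsync();
                return Content(count.ToString());
            }
        }
    }
}
```

All those freed-up threads just let you pile more concurrent requests onto the same overloaded database.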
Anyway, I sort of wandered off topic there. I think the biggest shift that's underway is the integration of functional programming practices into mainstream programming. I really don't see a language like Haskell ever taking off for mainstream development, but a lot of its ideas (pure methods, avoiding mutable state, etc.) are already leaking over into languages like C++ and C#.
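In C# that leakage mostly looks like LINQ plus a habit of not mutating inputs. A tiny sketch (names made up):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class FunctionalStyle
{
    // A pure method: it reads and writes no shared state, and its
    // output depends only on its inputs, so it's trivially safe to
    // call from any thread.
    static List<decimal> ApplyDiscount(IEnumerable<decimal> prices, decimal rate) =>
        prices.Select(p => p * (1 - rate)).ToList();

    static void Main()
    {
        var prices = new List<decimal> { 10m, 20m, 30m };
        // The original list is untouched; we get a new one back.
        var discounted = ApplyDiscount(prices, 0.10m);
        Console.WriteLine(string.Join(", ", discounted)); // 9.00, 18.00, 27.00
    }
}
```

Nothing revolutionary, but once your code is mostly pure methods over immutable data, the parallelism problems from the start of this answer largely stop existing, which is exactly why these ideas are catching on.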