While it's true that memory latency and branching are two important factors holding back microarchitectural performance, they aren't issues in algorithm design per se. In asymptotic analysis (Big-O notation), only the growth of running time and memory usage with respect to input size matters; constant multiplicative factors are dropped, so order 2 * n becomes order n. After all, an algorithm with a running time of order 2 * 1.5^n will be faster than one of order 1.6^n for large n, despite the constant factor.
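To make that concrete, here's a quick sketch (the two lambdas stand in for the hypothetical running-time functions above) that finds where 1.6^n overtakes 2 * 1.5^n:

```python
def crossover(f, g, limit=1000):
    """Return the first n at which f(n) < g(n), or None if never within limit."""
    for n in range(limit):
        if f(n) < g(n):
            return n
    return None

# 2 * 1.5^n vs 1.6^n: the constant factor of 2 only delays the inevitable.
n = crossover(lambda n: 2 * 1.5 ** n, lambda n: 1.6 ** n)
print(n)  # prints 11: from n = 11 onward, 2 * 1.5^n is the cheaper of the two
```

So the "slower-looking" algorithm with the constant factor wins for every input of size 11 or more, which is exactly why Big-O drops the constant.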
While it's hard to say with any certainty whether there's a global trend toward time or space complexity, one area, artificial intelligence, often puts more emphasis on space complexity in my experience (though that's not to say time complexity isn't important; it certainly is). Using unintelligent searches (breadth-first and depth-first), you may have the patience to wait the few hours or days it would take to search a state space with a high branching factor down to a moderate depth, but you probably don't have the hundreds of gigabytes of memory it would require (in the case of breadth-first; depth-first uses less memory, but it isn't complete and isn't guaranteed to find an optimal solution). Some modifications of intelligent search agents and game-playing algorithms (iterative-deepening A*, alpha-beta minimax) are specifically designed to prune the search tree and conserve memory (though decreased time complexity is a bonus as well 🙂).
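The same iterative-deepening idea, in its plain uninformed form (IDDFS rather than IDA*), shows the memory trade-off nicely: you get the completeness of breadth-first search while only ever keeping the current path in memory. A minimal sketch, with a made-up toy graph:

```python
def depth_limited(graph, node, goal, limit):
    """DFS that gives up below a fixed depth; returns a path or None."""
    if node == goal:
        return [node]
    if limit == 0:
        return None
    for child in graph.get(node, []):
        path = depth_limited(graph, child, goal, limit - 1)
        if path is not None:
            return [node] + path
    return None

def iddfs(graph, start, goal, max_depth=50):
    # Re-search from scratch at each deeper limit: redundant work, but the
    # shallowest goal is found first and memory stays O(current depth),
    # not O(breadth of the frontier) as in breadth-first search.
    for limit in range(max_depth + 1):
        path = depth_limited(graph, start, goal, limit)
        if path is not None:
            return path
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D", "E"], "D": [], "E": []}
print(iddfs(graph, "A", "E"))  # ['A', 'C', 'E']
```

The repeated re-expansion of shallow nodes looks wasteful, but with a high branching factor the deepest level dominates the total work anyway, so the time overhead is a small constant factor.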
Then again, while good heuristic functions used by intelligent agents are designed to cut down the effective branching factor (related to space complexity), they must still be fast to compute at each state in the search space (related to time complexity).
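A classic example of that trade-off (my illustration, not tied to any specific agent above) is Manhattan distance on a 4-connected grid: it's admissible, it prunes the search dramatically, and it costs only a couple of arithmetic operations per state:

```python
def manhattan(state, goal):
    """Admissible heuristic for 4-connected grid movement: the sum of the
    horizontal and vertical offsets, which never overestimates the true cost."""
    (x1, y1), (x2, y2) = state, goal
    return abs(x1 - x2) + abs(y1 - y2)

print(manhattan((0, 0), (3, 4)))  # 7
```

A more informative heuristic might prune even harder, but if it takes longer to evaluate than the expansion it saves, you've traded your space win back for time.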