
Have algorithms become more CPU friendly or memory friendly over time?

Shalmanese

Platinum Member
We were just talking about algorithmic complexity and cost in CS today, and I was wondering: has the current trend been towards algorithms that are very CPU efficient but waste vast amounts of memory, or towards algorithms that are memory efficient but use more CPU cycles (depth-first searches, for example)?

I know it is kind of hard to say but just an overview would be fine.
 
No overview, just my ideas.

I think that, in practice, you are more likely to encounter a very computationally intensive problem than a problem that requires vast amounts of memory. Add to this the fact that memory capacities grow faster than CPU performance. For these reasons I would expect most research on algorithms to focus on time complexity rather than on space complexity. This imbalance in research will probably result in an imbalance in results too...

Personally, in all the algorithm-oriented applications I have done, I would have gladly doubled the memory requirement of the algorithm if that reduced its running time by 10%.
 
I think the trend is towards offloading as much as possible to the CPU's cache. With cache access so many times faster than a trip to main memory, and the gap growing daily, I think the focus has been to downsize as much of the program as possible until it fits into the cache. I've also heard that something like 10% of a program is used 90% of the time, so I would think the focus would be to get that 10% to fit into the cache and run as fast as possible, and to use the other 90% as little as possible. Anything more than that 10% or so of the program would be a bonus, but wouldn't affect performance as much.
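The idea above can be sketched with some back-of-the-envelope arithmetic. This is only an illustration: the cache size and element sizes here are assumptions, not figures from any particular CPU.

```python
# Rough sketch: estimate whether a hot loop's working set fits in an
# assumed 256 KiB cache. All sizes here are illustrative assumptions.
CACHE_BYTES = 256 * 1024  # hypothetical cache size

def fits_in_cache(n_elements, bytes_per_element, cache_bytes=CACHE_BYTES):
    """Return True if the working set fits entirely in the cache."""
    return n_elements * bytes_per_element <= cache_bytes

# A 4-byte-int array of 50,000 elements (~195 KiB) fits...
print(fits_in_cache(50_000, 4))   # True
# ...but 100,000 elements (~390 KiB) does not.
print(fits_in_cache(100_000, 4))  # False
```

If the "hot 10%" of the program and its data pass this kind of check, most iterations never touch main memory at all.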
That's just my two cents.
 
Probably just the opposite ... CPUs and architectures have adapted to perform better on common algorithms. For example, SIMD capabilities (MMX/SSE/SSE2/3DNow!), which were largely tailored for 3D graphics. You could probably find other examples in the CPU architecture ... I bet the branch predictor would be a good place to look.
 
While it is true that memory latency and branching are two important factors holding back microarchitectural performance, they are not issues in algorithm design per se. When dealing with the asymptotic behavior of algorithm analysis (Big-O notation), running time and memory usage with respect to input size are the only issues; constant multiplicative factors are dropped: order 2 * n becomes order n. After all, an algorithm with a running time of order 2 * 1.5^n is going to be faster than one of order 1.6^n for large n.
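The claim that constant factors stop mattering for large n can be checked directly. A minimal sketch, using the two example running times from the post above:

```python
# Constant factors drop out asymptotically: find the smallest n at which
# 2 * 1.5**n (bigger constant, smaller base) drops below 1.6**n.
def crossover():
    """Smallest n where 2 * 1.5**n < 1.6**n."""
    n = 1
    while 2 * 1.5**n >= 1.6**n:
        n += 1
    return n

print(crossover())  # 11
```

For n of 10 or less the factor of 2 dominates, but from n = 11 onward the algorithm with the smaller exponential base wins forever, which is exactly why Big-O analysis ignores the constant.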

While it's hard to say with any certainty if there's a global trend towards time or space complexity, one area, artificial intelligence, often puts more emphasis on space complexity in my experience (though that's not to say that time complexity isn't important; it certainly is). Using unintelligent searches (breadth-first and depth-first), you may have the patience to wait the few hours or days it would take to search a state space with a high branching factor down to a moderate depth, but you probably don't have the hundreds of gigabytes of memory it would require (in the case of breadth-first... depth-first uses less memory, but it's not a complete search and gives no guarantee of an optimal solution). Some modifications of intelligent searching agents and game-playing algorithms (iterative-deepening A*, alpha-beta minimax) are specifically designed to prune the search tree and conserve memory usage (though decreased time complexity is a bonus as well 🙂).

Then again, while good heuristic functions used for intelligent agents are designed to minimize the effective branching factor (related to space complexity), they must still be fast to compute at each state in the search space (related to time complexity).
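The memory gap between breadth-first search and an iterative-deepening approach can be made concrete with the standard textbook estimates for a uniform tree. These are formulas, not measurements, and the branching factor and depth below are hypothetical:

```python
# Peak number of nodes held in memory when searching a uniform tree
# with branching factor b to depth d -- textbook estimates.
def bfs_peak(b, d):
    """Breadth-first keeps the entire deepest frontier in memory."""
    return b ** d

def iddfs_peak(b, d):
    """Iterative-deepening DFS keeps one path plus the siblings
    generated at each level of that path."""
    return b * d

b, d = 10, 8  # hypothetical branching factor and search depth
print(bfs_peak(b, d))    # 100000000 nodes held at once
print(iddfs_peak(b, d))  # 80 nodes held at once
```

With ten successors per state and a depth of eight, breadth-first needs room for a hundred million frontier nodes while iterative deepening needs about eighty, which is why the memory-conserving variants mentioned above exist.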
 
Generally, algorithms have become less CPU and memory friendly because the available CPU cycles and memory have grown so much.

The only people who really tune their programs are game developers and companies like Oracle that need to outperform MS SQL (which isn't hard when you can install Oracle on a 32 CPU Alpha and MS SQL is stuck on Intel boxes =))
 
Nothinman is correct. It's unfortunate but a vast number of programmers these days take ever-increasing CPU power and memory for granted and don't bother to develop optimized code. I'm not talking about video driver or game designers where performance still sells the product. I'm talking more along the lines of contractor and general software development shops where the majority of the software used in the world is developed.

Who's to blame? Good question. I can offer a few suggestions:

1) Education. College courses tend not to emphasize optimization as much as I would like. Outside of an algorithms course where you may be required to do $TASK in O(log n) time, there are plenty of courses where all that matters is that your program deliver the expected results, regardless of how elegantly you did it. Are the professors at fault? Maybe. I saw with my own eyes a new employee fresh out of college implement a byte-reversal algorithm (to handle data travelling between different endian architectures) by treating the integer as a byte array, manually copying each byte to another array in reverse order, then casting the resulting array back to a pointer to an integer and finally dereferencing the pointer. I shudder to think how many machine instructions that code generated.
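For comparison, here is a sketch of both approaches to a 32-bit byte swap: the roundabout copy-to-an-array version from the anecdote, and the direct shift-and-mask version an experienced programmer would reach for. Both are illustrative reconstructions, not the actual code in question.

```python
def swap32_naive(x):
    """The roundabout version: treat the int as a byte array, copy the
    bytes into a second array in reverse order, then reassemble."""
    b = x.to_bytes(4, "big")
    reversed_bytes = bytes(b[3 - i] for i in range(4))
    return int.from_bytes(reversed_bytes, "big")

def swap32(x):
    """Direct shift-and-mask byte swap of a 32-bit value."""
    return (((x & 0x000000FF) << 24) |
            ((x & 0x0000FF00) << 8) |
            ((x & 0x00FF0000) >> 8) |
            ((x & 0xFF000000) >> 24))

print(hex(swap32(0x12345678)))  # 0x78563412
print(swap32_naive(0x12345678) == swap32(0x12345678))  # True
```

In C the shift-and-mask form compiles down to a handful of instructions (or a single `bswap` on x86), while the array round trip forces extra loads, stores, and casts.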

2) Ever-higher-level languages. Object-oriented languages like C++ are a double-edged sword. On the one hand, there exist class libraries filled with highly efficient algorithms that remove a lot of the burden of designing efficient search/hash/storage frameworks. On the other hand, these languages carry with them a certain amount of extra baggage that must be accounted for. Exception handling is certainly a nice feature, but the compiler has to generate additional underlying code to support it. Then there are things like implicit constructors, virtual function jump tables... It all adds up. It's a big trade-off.

3) Like I mentioned above, the ever-increasing power and storage of computers is a huge factor. While I was a student, I worked as a co-op at a software shop where the team decided there was a performance problem with the product in development. The fix would have been to redesign a large chunk of the core backend code. But the project leaders decided that since the product wasn't scheduled for release for another 12-15 months, the average customer's computer would be faster by then and the performance problem wouldn't be as noticeable.

With the computing power available in today's desktops, those of us arguing for efficient program design are fighting a losing battle. Personally, I keep a P133 machine in my office to test with occasionally, just to make sure things run smoothly. Embedded software developers, where the systems are much less powerful, still need to be concerned. But even then, it's not uncommon to see tiny embedded systems more powerful than my desktop workstations were 6-8 years ago.

Okay. I'll get off my soapbox now 😀

 
While this may be true for some areas (I was planning a thread on this later on), there are still many algorithms which take days and months to compute (SETI, genome folding, code cracking, etc.), and I was wondering if, in these areas, there has been a push for less time or space complexity?
 
These days, when programmers have to choose between tight, memory-efficient code or fast code, fast wins. The market has changed considerably since the days of 16-bit addressing; now that we can have gigabytes of memory in our desktops at (fairly) reasonable prices, who cares if you waste a little memory? Of course, at the same time, with more computing power available, programmers start having their code do more.....



<< We were just talking about algorithmic complexity and cost in CS today and I was just wondering has the current trend been towards algorithms that are very CPU efficient but waste vast amounts of memory or memory efficient but using more CPU cycles (depth-first searches for example).

I know it is kind of hard to say but just an overview would be fine.
>>

 
It depends on what you mean by trend...

Microsoft is screwy, but they have so much market exposure in terms of code.

Then there are Java applications.... those are just... UGLY.

Then there are applications people use for specific purposes..

I think there is no real trend. There are sooooooooo many different pieces of software that require completely different things. If you want speed, you optimize for the processor, keeping the code small enough to fit into the cache... but of course some processors have caches measured in MB. Then there are applications that handle large data sets, which use a buttload of memory.

Of course, now that processors are sooooooo fast, I must say that most are more optimized for memory. How many spare CPU cycles do YOU have?
 
The problem with many of the replies in this thread is that they've taken general examples of software suites, and then drawn a logical conclusion about algorithms.

I won't comment in-depth on algorithms because I'm not an expert. However, I will say that fundamental algorithms stay the same over time. New algorithms in new areas are developed (most notably MPEG encoding).

Software doesn't get bigger because algorithms are changing. Software bloat happens because companies and programmers make software bigger. It's the classic 80/20 (or even 90/10) rule in effect. 80% of users use 20% of the features in software. The rest is often bloat.

On the issue of tuning code, that doesn't really have to do with algorithms. That has to do with profiling code to find the bottleneck, and then modifying a local section of code to reduce or eliminate it. Sometimes that means swapping out a slow, naive but simple algorithm for a faster, larger, harder-to-understand one. That doesn't mean algorithms have gotten any more CPU or memory friendly. It just means a skilled programmer has optimized a critical section of code properly.
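A minimal sketch of the kind of swap described above: a naive pairwise duplicate check replaced by a hash-set version after profiling fingers it as the bottleneck. The example and its data are hypothetical; both functions compute the same answer, only the cost profile differs.

```python
def has_duplicates_naive(items):
    """Simple but O(n^2): compare every pair of elements."""
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_duplicates_fast(items):
    """O(n) on average using a hash set -- trades extra memory for speed."""
    seen = set()
    for item in items:
        if item in seen:
            return True
        seen.add(item)
    return False

data = list(range(10_000)) + [42]  # one duplicate buried at the end
print(has_duplicates_naive(data) == has_duplicates_fast(data))  # True
```

Note that the faster version also illustrates the thread's original question: it buys its speed by allocating a set alongside the input, a classic time-for-space trade.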

I do agree that performance tuning isn't well-taught these days because Moore's Law has spoiled most of us, myself definitely included.

I do agree with the point that higher-level languages can be a double-edged sword. While they clearly benefit productivity, it's often forgotten that a single line of high-level code can expand into thousands of machine instructions. In reality, though, most of the time (again the 80/20 rule) this is okay, as the increased functionality by far outweighs the cost. Programmers are usually best served using the language libraries rather than rolling their own implementations.
 