Does anyone care about code bloat anymore?

KevinMU1

Senior member
Sep 23, 2001
673
0
0
Is there anyone out there who still cares about making small, efficient code, or have we just moved to a model of assuming that everyone has near-infinite resources? This is a mostly theoretical question, because I understand there isn't much real-world difference anymore, but it's something I'm stuck on from the old days, and I think it still has some merit.

Or does it? Is there any merit in making tight code, or is that a thing of the past? What does anyone else think?
 

QwErTyBk

Member
Jun 20, 2001
192
0
0
It is still being taught in school to some degree. When we have a project, our grade often includes how efficient our algorithms are. A student who writes an O(n) algorithm when the rest of the class has O(n^2) or worse will almost always do better on the assignment. I think in the 'real world' of programming it isn't likely that people will spend time doing such proofs for every algorithm they write. There is a point where the improvement in efficiency isn't worth the time/resources it takes to achieve it. I suspect most companies are more than willing to sacrifice 'super efficiency' to save time and money. So although efficiency is stressed in academia, in the real world time and money are far more important. Efficiency is something I strive for at design time, but I don't brood over whether or not it is the 'best.' The tradeoff between how much time that takes and how much efficiency I gain is USUALLY not worth it.
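To make that concrete with a homework-style toy example (my own, not from any particular class): checking whether any two numbers in a list sum to a target can be written as a nested-loop O(n^2) scan, or as a one-pass O(n) version that remembers what it has seen.

```python
def has_pair_with_sum_quadratic(values, target):
    # O(n^2): compare every distinct pair.
    for i in range(len(values)):
        for j in range(i + 1, len(values)):
            if values[i] + values[j] == target:
                return True
    return False


def has_pair_with_sum_linear(values, target):
    # O(n): one pass, remembering previously seen values in a set.
    seen = set()
    for v in values:
        if target - v in seen:
            return True
        seen.add(v)
    return False
```

Both return the same answers; the linear version is the one that earns the better grade once the input gets large.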
 

BuckleDownBen

Banned
Jun 11, 2001
519
0
0
If code is straightforward to read, I don't care if it is more code or takes longer to execute. Like you said, with today's computers you can't really tell the difference 99% of the time. Most of the time, code that is meant to be fast or small will be "tricky" code, which can be a nightmare to maintain. If I had to choose between complicated source code and code bloat, I'd choose code bloat.
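As a toy illustration of that trade-off (a classic bit-twiddling trick, not anything specific to this thread): both functions below compute the floor of the average of two non-negative integers, but only one of them can be verified at a glance.

```python
def average_readable(a, b):
    # Obvious version: any maintainer can verify this instantly.
    return (a + b) // 2


def average_tricky(a, b):
    # Bit-twiddling version: shared bits plus half the differing bits.
    # In C this avoids overflow of the intermediate sum; in most code
    # it just sends the next maintainer to a reference manual.
    return (a & b) + ((a ^ b) >> 1)
```

Both give the same answer; the "clever" one only pays for itself in the rare spot where the straightforward one is demonstrably a problem.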

This is an interesting topic though, and I'd like to hear some other people's opinions.
 

LostHiWay

Golden Member
Apr 22, 2001
1,544
0
76
I need bloatware. I have to justify having a 1.6GHz machine with 1GB of RAM somehow.
 

manly

Lifer
Jan 25, 2000
13,330
4,100
136
If you write software for embedded systems, then it's extremely important.

Otherwise, blame Moore's Law. ;) I wouldn't dismiss the problem as theoretical or irrelevant, though. If you write general software that can't run well on a three-year-old computer, then you are obviously part of the problem.

It's okay for games to assume a high-performance system, but there's little reason for general applications to be abusive of CPU cycles.

Also, the way you framed your question is problematic. Every software developer should write clean code that's reasonably efficient. Where code is bloated, it's usually not because programmers suck, but because the overall design has too many features. For example, did anyone want an annoying dancing paperclip? Especially in business software?
 

GeSuN

Senior member
Feb 4, 2002
317
0
0
We're learning assembly at our university... If you have a good algorithm, I don't think there's a more powerful way to write it than in assembler... but that's just my opinion though....
 

Descartes

Lifer
Oct 10, 1999
13,968
2
0
Extending what manly said...

Primarily on Windows platforms, a developer is forced to pull in a slew of libraries to support every API their application may use. Often, much of each library is superfluous to the task at hand, and thus you have bloat. Indeed, you can't discern exactly how bloated a program is simply by looking at the size of the binary, as many other dynamic libraries may be imported.

There are many factors in determining whether or not a given application is indeed bloated. As I said, the binary size alone is not enough. As manly said, one has to consider the functionality present in the application. The old Unix philosophy was one of orthogonality: you had many discrete applications that each performed a task, and performed it well. Today, one often has many applications, most of which do roughly the same thing plus something else.

As far as "tight code" is concerned, many 4GL languages simply don't give you the ability to optimize (beyond avoiding the obvious snafu). Indeed, many software engineers will not attempt to "optimize," because the compiler is (in most cases) better at optimizing than you are. In a general application, it's simply not cost-effective to over-optimize and try to shave off a few CPU cycles. This isn't to say a programmer should wantonly go about his day writing code without regard for performance; I'm simply saying that there's a "sweet spot" where code is both readable and performant.

In enterprise architecture, there is a greater issue at hand: scalability. As one who designs and implements software architectures that employ many native and third-party libraries, I simply have to trust that said libraries are "good enough" [1]. I often have to accept the computationally unpalatable inclusion of libraries in order to remain extensible while still supporting existing clients.

[1] The idea of "good enough" software has been discussed many times, but The Pragmatic Programmer is my main point of reference.
 

KevinMU1

Senior member
Sep 23, 2001
673
0
0
Well, I would also make the argument that a non-bloated program will be easier to maintain and update because the code will be cleaner. That alone, to me, is a good argument for trying to keep the code tight. And that's long-term money, i.e., spending now so you don't have to spend later, which is something not many people seem willing to do, honestly. And I'm not sure that tight code necessarily has to be cryptic--remember that cryptic code probably gets broken down by the compiler anyway, so why not do the breakdown yourself and maintain the readability?

I think that any software should reasonably be able to run on a 486. Think about it: what's all our software doing that it NEEDS all that power? I mean, c'mon, people. Sometimes I think code bloat is just a big conspiracy, where the software people bloat software so we need to buy new hardware. ;)

Assembly is good for algorithms, but useless for "real" programming. I'd like to see someone write Diablo from scratch in assembly. It's just not going to happen.

Descartes-- you make some excellent arguments, I very much enjoyed reading your post. I myself thought of the API thing after my original post. It's like MFC for Windows, although that stuff can be dynamically loaded. I agree on the sweet spot too--I'm just thinking that many people write something and try to get it to work, regardless of whether or not the resulting code is easy to read, efficient, or extendable. I have made a note of that book and will check it out next time I'm at a bookstore.
 

manly

Lifer
Jan 25, 2000
13,330
4,100
136
I feel there are various issues that weren't really touched upon. First off, remember Donald Knuth's famous line: "Premature optimization is the root of all evil."

Usually, code bloat is pretty simple to work around.

0. Design just the features you need.
1. Write code in your favorite language (one that can support the solution).
2. Prefer standard libraries to home-made solutions.
3. Profile code to find performance-critical sections.
4. Optimize wherever it's necessary or particularly beneficial. This can mean replacing use of the standard library, or even rewriting in a higher-performance language (e.g., assembly where appropriate).
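Step 3 in practice might look something like this, using Python's standard-library profiler (a minimal sketch; the hotspot function here is contrived for illustration):

```python
import cProfile
import io
import pstats


def slow_lookup(needle, haystack):
    # Contrived hotspot: a linear scan on every call.
    return needle in haystack


def run(n):
    haystack = list(range(n))
    return sum(1 for i in range(n) if slow_lookup(i, haystack))


profiler = cProfile.Profile()
profiler.enable()
run(2000)
profiler.disable()

# Report the functions that consumed the most cumulative time;
# slow_lookup dominates, which tells us exactly where step 4 should go.
report = io.StringIO()
pstats.Stats(profiler, stream=report).sort_stats("cumulative").print_stats(10)
print(report.getvalue())
```

The point is that the profiler, not intuition, picks the target for optimization.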

Admittedly, #3 and #4 aren't really well taught in academia. They're usually acquired the "hard way," through hands-on experience. Also, the tools that support profiling tilt heavily toward the languages/platforms with broad commercial support. If you're not coding on such a platform, you (or the resident guru) may have to develop your own profiling mechanisms.

If you adhere to guidelines like these and the resulting application is still too slow, then either the language chosen was inappropriate, the design was too rich, the code is of poor quality, it wasn't well optimized where crucial, or the algorithms are simply too demanding. E.g., MPEG-4 encoding just isn't cheap, and you can't make it "fast" no matter how good you are.

On the topic of standard libraries and higher-level languages, it's a double-edged sword. You definitely should use higher-level languages with rich standard libraries; you're more productive, and standard libraries are well implemented (relative to what the average developer can do with limited time). However, standard libraries usually implement general algorithms that support a wide variety of usage patterns. There will be some cases where you'll need to hand-roll a more specific solution to wring out more performance.
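A concrete (made-up) instance of that: `sorted()` is a general comparison sort, but if you happen to know your data is small non-negative integers, a hand-rolled counting sort can exploit that assumption in a way the general library never could.

```python
def counting_sort(values, max_value):
    # Exploits what the general-purpose library can't assume:
    # every element is an int in the range [0, max_value].
    counts = [0] * (max_value + 1)
    for v in values:
        counts[v] += 1
    result = []
    for value, count in enumerate(counts):
        result.extend([value] * count)
    return result
```

For most code, `sorted()` is the right call; the specialized version only earns its keep where profiling shows the sort actually matters.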

The problem with 3GLs is that the developer doesn't conceptualize what one line of source code really expands to (potentially hundreds or thousands of lines of machine code). We trade off performance for increased features and convenience. At least 80% of the time, the trade-off is highly worthwhile.

On the topic of compilers and optimizations, there really are two different issues. Compilers (and JITs) optimize lower-level constructs better than most programmers could. It's silly for me to worry about hand-tuning a loop unless I have direct knowledge that the loop needs to run faster. However, where programmers should optimize is usually at a higher level than where compilers optimize. For example, the low-level design might be a little naive or too heavy, or the libraries used too generic. Those are the types of optimizations a skilled developer knows how to deal with, not micro-level instructions. And again, until you profile, you don't know where to optimize.
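A hypothetical example of that kind of higher-level optimization, typical of what profiling turns up: restructuring the data beats micro-tuning the loop, and no compiler will make this rewrite for you.

```python
import bisect


def count_hits_scan(queries, allowed):
    # Each `in` test scans the list: O(len(queries) * len(allowed)).
    # Hand-tuning this loop leaves the quadratic behavior intact.
    return sum(1 for q in queries if q in allowed)


def count_hits_bisect(queries, allowed):
    # Higher-level rewrite: sort once, then binary-search each query.
    # Roughly O(m log m + n log m) -- a change in algorithm, not in
    # instruction selection, which is why only the programmer can do it.
    ordered = sorted(allowed)
    hits = 0
    for q in queries:
        i = bisect.bisect_left(ordered, q)
        if i < len(ordered) and ordered[i] == q:
            hits += 1
    return hits
```

Both functions agree on every input; the difference only shows up at scale, which is exactly what a profiler would reveal.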

Again, I'd say that most trained developers write decent code, especially if they are using standard libraries. Performance problems arise because the design itself is flawed, or because developers haven't been trained to profile the code and find the performance hotspots. If I had to guess, I'd say that for the typical trained programmer, writing good comments and error handling are bigger problems than low-performing code. With higher-level languages and libraries, we're really just writing a lot of glue code and business logic.

If anyone has good book references on debugging and profiling, preferably for Java, then I'm all ears.
 

Pauli

Senior member
Oct 14, 1999
836
0
0
<< We're learning assembly at our university... If you have a good algorithm I don't think there's a more powerful way to write it else than assembler... but it's just in my opinion though.... >>

I'd love to see a COM subroutine call in Assembly...
 

KevinMU1

Senior member
Sep 23, 2001
673
0
0
Well, while we're playing that game, how about opening a socket in assembly? Aren't I glad I had a whole semester of assembly programming here in college? ;)