Extending what manly said...
On Windows platforms in particular, a developer is forced to link against a slew of libraries covering every API the application might use. Often, much of each library is superfluous to the task at hand, and thus you have bloat. Indeed, you can't discern exactly how bloated a program is simply by looking at the size of the binary, as many other dynamic libraries may be imported at load time.
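On Linux you can see that hidden footprint directly by listing a binary's shared-library dependencies with ldd (Dependency Walker plays a similar role on Windows). A minimal sketch; /bin/sh is just an arbitrary example of a dynamically linked binary:

```shell
# List the shared libraries the binary will pull in at run time --
# none of this shows up in the on-disk size of the binary itself.
ldd /bin/sh

# Count them, as a rough proxy for the imported footprint.
ldd /bin/sh | grep -c '\.so'
```

The exact list varies by system, which is rather the point: the same small binary can drag in very different amounts of code depending on what it links against.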
There are many factors in determining whether or not a given application is indeed bloated. As I said, binary size alone is not enough; as manly said, one also has to consider the functionality present in the application. The old unix philosophy was one of orthogonality: you had many discrete applications that each performed one task, and performed it well. These days, one often has many applications, many of which do roughly the same thing plus something else.
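That orthogonality is easy to demonstrate in any unix shell: each tool below does one small job, and the pipeline composes them into a word-frequency counter. The sample input is my own, purely for illustration:

```shell
# sort groups duplicates, uniq -c counts each group, sort -rn ranks
# the counts; no single tool knows about "word frequency" -- only
# the composition does.
printf 'apple\nbanana\napple\n' | sort | uniq -c | sort -rn
```

The most frequent word ("apple", with a count of 2) comes out on the first line. Swap any stage for another small tool and the rest of the pipeline is unaffected, which is exactly the property monolithic do-everything applications give up.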
As far as "tight code" is concerned, many 4GLs simply don't give you the ability to optimize (beyond fixing the obvious snafu). Indeed, many software engineers will not attempt to "optimize" by hand, for the compiler is (in most cases) better at optimizing than you are. In a general application, it's simply not cost-effective to over-optimize and try to shave off a few CPU cycles. This isn't to say a programmer should wantonly go about his day writing code without regard for performance; I'm simply saying that there's a "sweet spot" between code that is readable and code that is performant.
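The classic example of cycle-shaving the compiler already does for you is strength reduction: any optimizing compiler will turn a multiply by two into a shift on its own, so writing the shift by hand buys nothing and costs readability. Sketched with shell arithmetic purely for illustration (the equivalence, not the compiler, is what the snippet shows):

```shell
x=21
echo $(( x * 2 ))    # the readable form
echo $(( x << 1 ))   # the "clever" form; identical result, no real gain
```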
In enterprise architecture there is a greater issue at hand: scalability. As one who designs and implements software architectures that employ many native and third-party libraries, I simply have to trust that said libraries are "good enough" [1]. I often have to accept the computationally unpalatable inclusion of libraries in order to remain extensible while still supporting existing clients.
[1] The idea of "good enough" software has been discussed many times, but The Pragmatic Programmer is my main point of reference.