
Scaling of Programming

TuxDave

Lifer
We're always so interested in the scaling of processors and its corresponding performance. It's interesting to note that using only performance as a metric has become less interesting and the power/performance ratio more interesting. However, there is still a strong demand (and market appeal) for faster and faster processors for home computing....

So what are the programmers gonna do with all this extra performance and bonus cache? Do programs scale in the opposite direction, where programs no longer become tighter and more efficient, but instead become sloppier and slower?
 
I can't remember who said it first, but there's a quote in computer science that says "Data expands to fill the space available to it." Programs tend to be much the same way -- back when processors were a LOT slower, everything was hand-coded in assembly, and everything had to be tweaked for the maximum possible speed, because even the simplest programs took a LONG time to run, and shaving a few cycles here and there could cut the running time of a long job by hours. This gives the highest possible speed, but development takes FOREVER for any sort of sizable program, and good luck maintaining and debugging something like this.

As CPUs have gotten faster and systems have gotten more flexible (adding things like transparent virtual memory, multithreading, etc.), programming has moved almost entirely to high-level languages like C, and nowadays to object-oriented languages like C++, and even interpreted OO languages like Java -- with a heavy dependence on optimizing compilers to increase performance. While from a processing efficiency standpoint these languages are less efficient (I guess you could call them 'sloppier and slower'), it's proven to be more efficient from a programming and debugging perspective to write simpler and less optimized programs and to get a faster computer if performance is unacceptable (up to a certain point). This philosophy stems from the fact that computer time is cheap compared to a programmer's time, and gets cheaper by the day, while human labor costs tend to stay flat or even rise over time.

On the other hand, there are certain market segments (such as scientific computing, raytracing/rendering, and computer gaming) where every CPU (or GPU) cycle *does* count, and you still see heavily hand-tweaked and optimized programs. But I think such applications are going to become increasingly rare as computers become more and more powerful. I mean, even computer games have followed this trend to some extent -- Carmack wrote the graphics engine for Doom by hand, in assembler, but Doom3 uses OpenGL (a high-level graphics API) and most of the game engine is written in C++.
 
Compilers are usually very good at optimizing for speed. At best an assembly programmer might be able to squeeze a few clock cycles out of a loop, but in general it's very hard to beat a compiler at its own game. What compilers stink at is optimizing for size.

Think of it this way. The purpose of a game engine is generally to mimic physics. So once a game engine models everything down to the atom, or even further, programmers will start running out of things to do with the extra performance.
 
Originally posted by: AyashiKaibutsu
Think of it this way. The purpose of a game engine is generally to mimic physics. So once a game engine models everything down to the atom, or even further, programmers will start running out of things to do with the extra performance.

They're a long, long way from modelling any object visible with the naked eye down to atomic size with real physics, and there's no point in modelling objects so far below human vision for a game engine.

Lattice QCD models physics on the subatomic level, and a 16x16x16 lattice (where units are approximately the dimension of a proton) took days to process for a couple of dozen time units (measured in the amount of time it takes light to cross a proton) on a supercomputer with 256 150MHz Alpha processors in 1995. Modern supercomputers are better, but the smallest visible object would require a lattice that has tens of trillions of units per dimension. Computers have only increased in speed a few hundred times since then, but you need to scale up performance by a factor of 10^40 to simulate a visible object for a few trillionths of a nanosecond in only a few days.
 
Originally posted by: AyashiKaibutsu
It's not for visuals sake that you would want to model something that low. It's for having an accurate physics engine.

If we're talking about visually observable objects, there's no physics reason either even if we could do it. Scale is essential in physics, since objects behave differently at different length (or equivalently energy) scales. Objects you can see are modelled using classical physics, while atomic size objects are modelled using quantum physics. Subatomic objects are modelled according to various quantum field theories.
 
As a software engineer, I promise you, we will use the new cpu power somehow 🙂

Voice recognition, more interactive and intelligent displays, and maybe even a human-like AI to manage your life would all take a lot more cpu power than we have now. Computers are never fast enough, but we often get caught in the chicken and the egg arguments. Hardware gets faster, then we bog it down with more "features."
 
Originally posted by: AyashiKaibutsu
It's not for visuals sake that you would want to model something that low. It's for having an accurate physics engine.

That's one thing that I've been worried about. Some programmers may start going to arbitrary levels of detail just because they can even though the user may not be able to differentiate between a very accurate simulation and a semi-accurate model. It's kind of the brute force method of using up available CPU cycles.

Originally posted by: tkotitan2
As a software engineer, I promise you, we will use the new cpu power somehow 🙂

Voice recognition, more interactive and intelligent displays, and maybe even a human-like AI to manage your life would all take a lot more cpu power than we have now. Computers are never fast enough, but we often get caught in the chicken and the egg arguments. Hardware gets faster, then we bog it down with more "features."

Does that also mean that in the future, programmers won't have to think as much anymore? Given so many computer cycles, if they had to implement very accurate voice recognition, would they start leaning towards brute-force methods since those are becoming more feasible?
 
Does that also mean in the future, programmers won't have to think as much anymore?

No, we'll do what we always do: add a layer of indirection and slow things down that way. We moved from machine language entered via switches to assembly mnemonics to high level languages like FORTRAN and now to Virtual Machine based environments like Java and .NET.

Given so many computer cycles, if they had to implement very accurate voice recognition, would they start leaning towards brute-force methods since those are becoming more feasible?

There are plenty of tasks where we're dozens of orders of magnitude away from having the CPU power to do them using brute force. I'm not sure if voice recognition is one of them or not, but I do know that quantum physics is, which is a big issue since we're at the brink of needing to understand the QM of many-atom systems to build smaller transistors.
 
Development of software has indeed been moving in this direction for some time now. If you look at Agile Development methodologies in particular, you see a focus on readability of code rather than performance.

In practice, you make it work, make it "right", and then worry about performance only if it becomes an issue.
 
Originally posted by: AbsolutDealage
Development of software has indeed been moving in this direction for some time now. If you look at Agile Development methodologies in particular, you see a focus on readability of code rather than performance.

In practice, you make it work, make it "right", and then worry about performance only if it becomes an issue.

Yeah, I figured as much that turnaround time and programming ease is becoming much more valuable in this competitive market.
 
It is the patterns you use that matter. An example: rather than calling getElement() for each item, you get the pointer and read the elements directly in an efficient manner.

You could scale up or scale out. Scaling up requires hero programmers to do insane and heroic tasks. Often these heroes own the critical parts of the system. Unfortunately, if the hero dies in some freaky farm accident due to extreme fatigue, the team will be crippled. I would prefer to scale out the project and have each person package and document their components properly. Packaging is only required for core components.

My experience tells me that methodologies are not one size fits all. You need to adapt and adjust for different projects. Do only what is needed; a requirements document is no good if you don't refer to it and update it.

 
Originally posted by: AbsolutDealage
Development of software has indeed been moving in this direction for some time now. If you look at Agile Development methodologies in particular, you see a focus on readability of code rather than performance.

In practice, you make it work, make it "right", and then worry about performance only if it becomes an issue.

Unfortunately, performance is an issue for me. My clients' machines range from 733MHz to 3.2GHz. Some even have Windows 95. Just spending 30 minutes thinking about performance issues does some good.
 
I have so much to say about this, and about things said in this thread, that I don't even know where to begin. I also doubt that such a massive typing effort would even get read much...

So I'm going to be lazy and condense it all to simply state my central opinion:
We do want and we do need much more powerful CPUs, no matter what. I also think: Always!
I have no problems seeing old and new uses for more CPU power.
 
There is always a need for more processing power, especially for applications like SETI, etc.
However, the bulk of most software today is slowed more by I/O than by CPU power.
We need to erase the overall system bottlenecks.

Disk read/write, networking/modem, user input devices, and all system buses are still way behind the CPU in terms of throughput. For current & future software to become much faster, we need to improve these areas first.

 
Originally posted by: tinyabs
Originally posted by: AbsolutDealage
Development of software has indeed been moving in this direction for some time now. If you look at Agile Development methodologies in particular, you see a focus on readability of code rather than performance.

In practice, you make it work, make it "right", and then worry about performance only if it becomes an issue.

Unfortunately, performance is an issue for me. My clients' machines range from 733MHz to 3.2GHz. Some even have Windows 95. Just spending 30 minutes thinking about performance issues does some good.

Our target is Win2k/XP, so we have a little more to deal with. Even in this situation though, we could run into an old 700 MHz system on a factory floor somewhere.

Fortunately, our app doesn't do any serious crunching, but obviously your target user's platform and the nature of the application will create a need for performance considerations.

Still, the general rule in any Agile project is to do as little work as possible to get the application to behave the way you need it to. Code that performs poorly but is extremely easy to understand/debug/change is preferable to code that is obfuscated but performs better.

We write our code to be human readable - not fast. If your customer makes complaints about performance, then the code in question is refactored to squeeze some more performance out, but without sacrificing the readability of the code (as much as possible, anyways).
 
Originally posted by: AbsolutDealage

Our target is Win2k/XP, so we have a little more to deal with. Even in this situation though, we could run into an old 700 MHz system on a factory floor somewhere.

Fortunately, our app doesn't do any serious crunching, but obviously your target user's platform and the nature of the application will create a need for performance considerations.

Still, the general rule in any Agile project is to do as little work as possible to get the application to behave the way you need it to. Code that performs poorly but is extremely easy to understand/debug/change is preferable to code that is obfuscated but performs better.

We write our code to be human readable - not fast. If your customer makes complaints about performance, then the code in question is refactored to squeeze some more performance out, but without sacrificing the readability of the code (as much as possible, anyways).

I often don't have the time to refactor the code. Performance issues manifest only when the workload gets significantly large, like 10K items a month. Because of this, I've gotten better at writing good code the first time, since I know what works and what doesn't.
 