
The worst thing I've ever seen, possibly

I have to interface with Great Plains for a variety of projects. It's a major corporate accounting package, now owned by Microsoft, and the architecture is nutty. All the tables have random names like UPR00010, RM000101, AF10000. Luckily they do have non-varchar columns, like date columns, but a NULL date is actually stored in the database as 1/1/1900.

GP is notoriously awful to integrate with. EConnect has helped somewhat.
 
You aren't correct here.

In particular, floating-point division is (usually) faster than integer division. Integer division takes somewhere around 27–80 clock cycles, whereas floating-point division can be done in 15–45 clock cycles.

On top of that, floating-point multiplication can be done in 4–8 clock cycles. So if the value you are dividing by is constant, multiplying by the inverse can get you really fast results.

Of course, there are exceptions (powers of two division for example) but by and large floats have really good performance for most mathematical operations. The only place they struggle is addition and subtraction.
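A quick sketch of the multiply-by-the-inverse trick (names here are made up, and the cycle savings only materialize in compiled code, but the transformation is the same):

```python
def scale_div(values, divisor):
    # One division per element -- the slow form.
    return [v / divisor for v in values]

def scale_mul(values, divisor):
    # Division hoisted out: one divide up front, then cheap multiplies.
    inv = 1.0 / divisor
    return [v * inv for v in values]

data = [1.0, 2.5, 10.0, 42.0]
# With a power-of-two divisor the reciprocal is exact, so the two forms
# agree bit-for-bit; for other divisors they can differ in the last ulp.
assert scale_div(data, 8.0) == scale_mul(data, 8.0)
```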

See http://www.agner.org/optimize/optimizing_cpp.pdf for a reference on the performance characteristics of various x86 uarchs.

It depends heavily on the language, however: floating-point math is limited by how many significant figures the language handles for that variable type.

For most applications, using integers and then representing them as a float is more accurate.

This is a program I wrote for a class to do just that in Python:

http://www.codeskulptor.org/#user38_tWFMOjIU99Z5fVX.py

The goal of the game is to stop on a time that ends in .0
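The accuracy point above can be shown in a couple of lines — tenths summed as binary floats drift, while tenths summed as integers and converted once at the end stay exact:

```python
# 0.1 has no exact binary representation, so the error accumulates
# with every addition.
float_total = sum(0.1 for _ in range(10))

# Counting in integer tenths and dividing once keeps the arithmetic exact;
# the only rounding happens at the final conversion.
int_total = sum(1 for _ in range(10)) / 10

print(float_total == 1.0)  # False
print(int_total == 1.0)    # True
```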
 
Anyone seen these before? Pretty funny 🙂

[attached images: Z6XT0Cd.png, Po9iQZJ.png, ZPoFe2l.png]
 
It depends heavily on the language, however: floating-point math is limited by how many significant figures the language handles for that variable type.

For most applications, using integers and then representing them as a float is more accurate.

This is a program I wrote for a class to do just that in Python:

http://www.codeskulptor.org/#user38_tWFMOjIU99Z5fVX.py

The goal of the game is to stop on a time that ends in .0

Not as much as you would think. Language overhead is pretty constant at the end of the day. In fact, if anything, language overhead will make floating point overhead even less of an issue (because in the grand scheme of things, it isn't what is slow).

Where it might make a difference is in languages that have the notion of infinite decimal places. But there are very few that actually do that. Most languages are constrained to using either 32- or 64-bit floats (which have no performance difference between them).

In your game example, you are spending far more computation time doing the text rendering, the timer thread, and the user input. You could have used just about any numeric representation and you would have been just as fast.
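For what it's worth, Python does offer the "unbounded precision" flavor — not as its hardware `float`, but as (much slower) standard-library types:

```python
from fractions import Fraction
from decimal import Decimal

# Exact rational arithmetic -- no rounding at all:
assert Fraction(1, 10) * 10 == 1

# Decimal works in base 10, so 0.1 is represented exactly:
assert Decimal("0.1") * 10 == 1
```

The trade-off is exactly the one discussed above: these types trade speed for exactness, which only matters when the arithmetic itself is the bottleneck.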
 
Anyone seen these before? Pretty funny 🙂

[attached image: Z6XT0Cd.png]
I actually did something like this in production code recently. Except worse because I did something like 1+1+1 for 3, etc.

When you want to
stripe a table in CSS, and it's not marked with even/odd rows, and you need to support IE8,
you don't have a lot of options. 🙄
 
Programmers have gotten too far away from the hardware operations to understand efficiency/optimization. 😡

All that power has gone to their heads. 🙂:thumbsdown:

I was reading a thread in this forum where several people agreed that there's no reason a programmer should need to learn memory management. Smartphones with 8GB RAM are on the horizon.
 
I actually did something like this in production code recently. Except worse because I did something like 1+1+1 for 3, etc.

When you want to
stripe a table in CSS, and it's not marked with even/odd rows, and you need to support IE8,
you don't have a lot of options. 🙄

You do have an option. Programmatically generate the CSS... 😉
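A hypothetical sketch of that option: if the server tags each row with a class like `row1`, `row2`, … (the naming scheme here is made up), a short loop can emit one explicit background rule per row — stripes without `:nth-child`, so even IE8 is covered:

```python
def striped_css(max_rows, odd="#ffffff", even="#eeeeee"):
    # Emit an explicit rule per row instead of relying on :nth-child,
    # which IE8 does not support.
    rules = []
    for i in range(1, max_rows + 1):
        color = even if i % 2 == 0 else odd
        rules.append("tr.row%d { background: %s; }" % (i, color))
    return "\n".join(rules)

print(striped_css(4))
```

It's ugly compared to `1+1+1`-free CSS, but the ugliness lives in one generator instead of being scattered through the stylesheet.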
 
I was reading a thread in this forum where several people agreed that there's no reason a programmer should need to learn memory management. Smartphones with 8GB RAM are on the horizon.

This is an age old problem. As memory and processing capabilities increase, so do the data sets that we operate on. For example, the first camera phones took 640x480 pics. Now we have phones that hit 50 megapixels. It's a wash.
 
I was reading a thread in this forum where several people agreed that there's no reason a programmer should need to learn memory management. Smartphones with 8GB RAM are on the horizon.

I don't recall which thread that was, but in all likelihood the point was about garbage collection, not simply larger amounts of available memory.
 
I don't recall which thread that was, but in all likelihood the point was about garbage collection, not simply larger amounts of available memory.

The general argument was that "there's no reason to learn memory management when the operating system handles that for you."
 
The general argument was that "there's no reason to learn memory management when the operating system handles that for you."

Right, but memory management and memory efficiency aren't the same thing. Few programmers will ever be in a position of manually managing memory again. Probably also very few will ever be in a position of actually caring how much memory their application uses. There is always scarcity, of course, but today it is bandwidth and battery, and to some extent user attention.
 
Right, but memory management and memory efficiency aren't the same thing. Few programmers will ever be in a position of manually managing memory again. Probably also very few will ever be in a position of actually caring how much memory their application uses. There is always scarcity, of course, but today it is bandwidth and battery, and to some extent user attention.

You can't be efficient without managing your resources.

When programmers fail to learn basic programming practices, you end up with some of the examples in this thread.
 
And yes, "few programmers" will need to know a lot of things. 99% of the time, I don't care whether a multiply is faster than a divide, but when the time comes, it's good to know the difference.
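One concrete instance of that multiply-vs-divide knowledge is the power-of-two case: compilers turn division by 2^k into a right shift because shifts are cheaper than a general divide. The equivalence for non-negative integers is easy to check:

```python
# For non-negative integers, floor division by 2**k equals a right
# shift by k -- the strength reduction a compiler applies for you.
for n in range(256):
    assert n // 8 == n >> 3
    assert n // 2 == n >> 1
```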
 
And yes, "few programmers" will need to know a lot of things. 99% of the time, I don't care whether a multiply is faster than a divide, but when the time comes, it's good to know the difference.

Well since we're bringing up old threads, I think you're the fellow who posted that programming was "mindless busy work" in an OT thread. So, it hardly seems to be important to know much of anything to engage in it 🙂.
 
The general argument was that "there's no reason to learn memory management when the operating system handles that for you."

In general, it's a good argument when starting something. Everybody needs to start somewhere.
1 - learn to make code that works when used correctly
2 - learn to make code that works when used incorrectly (repeats the loop instead of crashing when the user tells it to divide by 0)
3 - learn to make code that doesn't have memory leaks
4 - learn to make code you can actually read 6 months from now
5 - learn to make code that is broken into functions or classes that can be used later in other programs
6 - learn to make code that is as restrictive as possible; narrow the scope of things, avoid global variables
7 - learn to make code that doesn't have gigantic security holes
8 - learn to make code that can effectively use 20 processor cores
...
...
398 - try to optimize memory management

Of course it depends on what you're doing. People probably were not stressing about getting notepad.exe optimized. The game Quake would be on the opposite end. The requirements for that game were so ridiculously high that some of the code was written in assembly.
I should hope people are putting the same effort into things like Folding@Home. Nobody cares if notepad.exe is 20% faster, but making Folding 20% faster makes the program 20% better.
 
I'm a big fan of making sure that code works first. That's my first priority. Then, when commenting it, I revise it and see if the code can be improved.

At this day and age, except maybe in Java, memory management is an afterthought.
 
In general, it's a good argument when starting something. Everybody needs to start somewhere.
1 - learn to make code that works when used correctly
2 - learn to make code that works when used incorrectly (repeats the loop instead of crashing when the user tells it to divide by 0)
3 - learn to make code that doesn't have memory leaks
4 - learn to make code you can actually read 6 months from now
5 - learn to make code that is broken into functions or classes that can be used later in other programs
6 - learn to make code that is as restrictive as possible; narrow the scope of things, avoid global variables
7 - learn to make code that doesn't have gigantic security holes
8 - learn to make code that can effectively use 20 processor cores
...
...
398 - try to optimize memory management

Security holes should be #1, and your list is pretty bad in general.
 