
Algorithm Complexity Analysis

I am wondering why we use the length of the binary representation of the input as the input size in algorithm analysis. I understand that everything is represented in binary, and that would be one reason for this practice, but it seems to complicate things and, in my mind, give less accurate results.

Take primality testing, for example. If the binary string is 8 digits long, it covers a huge range of numbers, but the complexity will be the same for all of them.

I guess I'm just a little confused. If someone can offer up an explanation that would be great.
 
Well, because a binary representation is the only way to encode an instance of a problem on a computer, the most accurate results come from considering the length of the binary input string. A computer works on the binary representation, and the longer it is, the more time it will take.
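To make the range point from the question concrete, here is a quick sketch (in Python, just for illustration) of how many different numbers share the same input size of 8 bits:

```python
# An 8-bit binary string with a leading 1 encodes any number from
# 2**7 = 128 up to 2**8 - 1 = 255, yet every one of them counts as
# "input size 8" in the analysis.
n_bits = 8
smallest = 2 ** (n_bits - 1)  # 0b10000000
largest = 2 ** n_bits - 1     # 0b11111111
print(smallest, largest)                # 128 255
print(largest - smallest + 1)           # 128 numbers, one input size
```

So 128 distinct inputs all have the same size, which is exactly why the analysis treats them alike.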

Consider the following two numbers:

10101
11101

As an algorithm, let's consider one that multiplies its input by 11001.

You might say that since the first number is smaller than the second, the multiplication should be faster? Well, that's not how a computer works. Both numbers have the same bit length, so they take the same amount of time to multiply by the constant.
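You can see this with a toy shift-and-add (schoolbook) multiplier; the Python sketch below is mine, not part of the original post, and it counts one loop step per input bit to show the cost depends on length, not value:

```python
def multiply_by_constant(x_bits: str, c: int) -> tuple[int, int]:
    """Schoolbook shift-and-add multiplication of a binary string by c.

    Returns (product, steps), where steps is the number of loop
    iterations -- one per bit of the input, whatever its value.
    """
    product, steps = 0, 0
    for i, bit in enumerate(reversed(x_bits)):
        if bit == "1":
            product += c << i  # add a shifted copy of the constant
        steps += 1
    return product, steps

p1, s1 = multiply_by_constant("10101", 0b11001)  # 21 * 25
p2, s2 = multiply_by_constant("11101", 0b11001)  # 29 * 25
print(s1, s2)  # 5 5 -- same bit length, same number of steps
```

Both 5-bit inputs go through exactly five iterations, even though one value is smaller than the other.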

On the other hand, if we had numbers like:

110
11101

Then the first one would of course be faster to multiply, but the binary representation of that number is also shorter than the representation of the second.
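And counting the same shift-and-add steps for these two inputs (again, a Python sketch of my own, not from the post) shows the cheaper one is cheaper precisely because its representation is shorter:

```python
def count_multiply_steps(x_bits: str, c: int) -> int:
    """Count shift-and-add steps when multiplying x_bits by c."""
    steps = 0
    for i, bit in enumerate(reversed(x_bits)):
        if bit == "1":
            _ = c << i  # the shifted add would happen here
        steps += 1
    return steps

short_steps = count_multiply_steps("110", 0b11001)    # 3-bit input
long_steps = count_multiply_steps("11101", 0b11001)   # 5-bit input
print(short_steps, long_steps)  # 3 5
```

So value and input length agree here, and measuring complexity in the length of the binary string captures exactly that.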
 