Big-O Algorithm & Time Complexities

Josh123

Diamond Member
Aug 4, 2002
3,030
2
76
Ok guys, so we have been going over this stuff in my Data Structures & Algorithms class and I'm completely lost.

I'm going to get into the tutoring lab as much as possible since we have an exam over it a week from Friday.

Here is the link to the .pdf worksheet. If anyone can explain how to determine the big-O measure for the functions as if I'm a 5-year-old, I'd really appreciate it.

https://drive.google.com/file/d/0BzD0FjUel2BqWGJjc3V5QU83OVE/view?usp=sharing
 

Josh123

Diamond Member
Aug 4, 2002
3,030
2
76
I just looked at the PDF. Looks foreign lol.

Lol ya, I think I understand most of it but I'm going to need to see if my wife can help translate it since she's a mathematician. She might have a better idea about what's going on.
 

WaitingForNehalem

Platinum Member
Aug 24, 2008
2,497
0
71
It is basically asking which function grows so fast that the other one becomes irrelevant.

Say f(N) = N^2 + N + 1.
In big O that would just be O(N^2), since N^2 grows much faster than N (which is just linear) or 1 (which is a constant).
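If you want to see that numerically, here's a quick Python sketch (my own toy example, not from the worksheet):

```python
# The lower-order terms stop mattering as N grows: f(N) / N^2 -> 1.
for N in [10, 100, 1_000, 10_000]:
    f = N**2 + N + 1
    print(f"N={N:>6}  f(N)/N^2 = {f / N**2:.6f}")
```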
 

Apathetic

Platinum Member
Dec 23, 2002
2,587
6
81
This.

Just look at the exponents. The largest one wins.

17x^5 + 4x + 1 <-- 17x^5 is the "dominant" term
99x^3 + 99x <-- 99x^3 is the "dominant" term
x^6 + 120x <-- x^6 is the "dominant" term

So when comparing two or more expressions, find the dominant term in each. The largest of those determines which expression dominates the others. In this case, it's x^6.
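If it helps, here's a quick numerical check (my own sketch, not from the worksheet):

```python
# x^6 eventually dwarfs 17x^5 despite the constant: they cross at x = 17
# (where x^6 = 17 * x^5), and past that the ratio grows without bound.
for x in [1, 10, 100, 1_000, 10_000]:
    a = 17 * x**5 + 4 * x + 1
    b = x**6 + 120 * x
    print(f"x={x:>6}  (x^6 + 120x) / (17x^5 + 4x + 1) = {b / a:.2f}")
```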

Dave

 

Gronnie

Member
Jan 21, 2013
91
0
16

Drop the constants too.
17x^5 is equivalent to x^5
99x^3 is equivalent to x^3
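A quick way to see why the constants get dropped (my own toy sketch):

```python
# A constant factor never changes the growth rate: 17x^5 / x^5 is always 17,
# while x^6 / x^5 = x keeps growing. Big-O only cares about the second kind.
for x in [10, 100, 1_000]:
    print(f"x={x:>5}  17x^5/x^5 = {17 * x**5 / x**5:.0f}   x^6/x^5 = {x**6 / x**5:.0f}")
```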

Wait until you take Theory of Algorithms if your school offers it, that is where this stuff gets really interesting.
 

Josh123

Diamond Member
Aug 4, 2002
3,030
2
76
Ok so we got our papers back and I got a 94 on it, but I'm not sure about some of the ones I got wrong.

How would you determine which of these dominates the other?
A. (X^4 + X^2 - 17) / X^3
B. X^2
 

Gronnie

Member
Jan 21, 2013
91
0
16
No problem. Study hard on this topic, it is really important. I am graduating this December and just accepted a position where in the interview they asked me tons of Big O stuff.
 

Crusty

Lifer
Sep 30, 2001
12,684
2
81
Sure, it's important to know this stuff, but to this day at any job I've had I've never sat down and had to work out Big-O problems.
 

Gronnie

Member
Jan 21, 2013
91
0
16
Right. I guess I should have been more clear. Being able to think through this type of thing in your head and make a decent algorithm that isn't O(n^x) or O(n!) if it doesn't absolutely need to be is really important. Especially if you will be writing code that needs to be fast.
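For a toy example of what I mean (my own sketch, nothing fancy), here's the same question answered two ways:

```python
# "Does the list contain a duplicate?" -- two ways.
def has_dup_quadratic(xs):
    # O(n^2): compares every pair; fine for tiny inputs, brutal for big ones
    return any(xs[i] == xs[j]
               for i in range(len(xs))
               for j in range(i + 1, len(xs)))

def has_dup_linear(xs):
    # O(n): one pass with a set
    seen = set()
    for x in xs:
        if x in seen:
            return True
        seen.add(x)
    return False
```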
 

Markbnj

Elite Member
Moderator Emeritus
Moderator
Sep 16, 2005
15,682
14
81
www.markbetz.net
Crusty said:
Sure, it's important to know this stuff, but to this day at any job I've had I've never sat down and had to work out Big-O problems.

Yeah agreed, but he did say he was asked in an interview. That's exactly the venue in which stuff like this comes up most often, so it pays for a new graduate to be up on it. Once you have a few years under your belt they are less likely to drag you through it.
 

SteveGrabowski

Diamond Member
Oct 20, 2014
8,957
7,667
136
I'm not a huge fan of Big O notation, as it suppresses so much important detail. I mean, you can make a complicated O(N) time median finder that completely sucks in real life compared to a really straightforward one with O(N log N) expected time, even though the O(N) algorithm is freaking beautiful. A quicksort with Dutch National Flag partitioning will usually kill a heapsort in practice on repeated keys, even though they're both O(N log N) with high probability (100% probability for heapsort).

If you ever have the time, Robert Sedgewick has a great algorithms and data structures class on Coursera where he goes into a lot of depth on getting good real-world performance. It does use a flat memory model though, as caches make things way more complicated. Really fun programs though; I loved doing the kd-trees, and the seam carving algorithm for resizing photos was incredible.
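If you want to play with the quicksort-vs-heapsort claim, here's a rough sketch of my own (both sorts in pure Python so the comparison is interpreter-to-interpreter; exact numbers will vary by machine, and the gap is starker in C or Java):

```python
import random
import time

def quicksort_3way(a, lo=0, hi=None):
    # In-place quicksort with Dutch National Flag (3-way) partitioning:
    # all keys equal to the pivot are grouped once and never touched again.
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return
    pivot = a[random.randint(lo, hi)]
    lt, i, gt = lo, lo, hi
    while i <= gt:
        if a[i] < pivot:
            a[lt], a[i] = a[i], a[lt]
            lt += 1
            i += 1
        elif a[i] > pivot:
            a[i], a[gt] = a[gt], a[i]
            gt -= 1
        else:
            i += 1
    quicksort_3way(a, lo, lt - 1)
    quicksort_3way(a, gt + 1, hi)

def heapsort(a):
    # Plain in-place max-heap heapsort.
    n = len(a)
    def sift_down(i, end):
        while True:
            child = 2 * i + 1
            if child >= end:
                return
            if child + 1 < end and a[child + 1] > a[child]:
                child += 1
            if a[i] >= a[child]:
                return
            a[i], a[child] = a[child], a[i]
            i = child
    for i in range(n // 2 - 1, -1, -1):   # build the heap
        sift_down(i, n)
    for end in range(n - 1, 0, -1):       # repeatedly move the max to the back
        a[0], a[end] = a[end], a[0]
        sift_down(0, end)

data = [random.randint(0, 9) for _ in range(200_000)]  # only 10 distinct keys

t0 = time.perf_counter(); quicksort_3way(data[:]); t1 = time.perf_counter()
t2 = time.perf_counter(); heapsort(data[:]);       t3 = time.perf_counter()
print(f"3-way quicksort: {t1 - t0:.3f}s   heapsort: {t3 - t2:.3f}s")
```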
 

SteveGrabowski

Diamond Member
Oct 20, 2014
8,957
7,667
136
Josh123 said:
Ok so we got our papers back and I got a 94 on it, but I'm not sure about some of the ones I got wrong.

How would you determine which of these dominates the other?
A. (X^4 + X^2 - 17) / X^3
B. X^2

It depends... is |X| large or is it small?

If |X| >> 1 is large we have

A. (X^4 + X^2 - 17)/X^3 = X + 1/X - 17/X^3 = O(X)
B. X^2 = O(X^2)

So if |X| is large then B is way larger

But if |X| << 1 is small then we have
A. (X^4 + X^2 - 17)/X^3 = X + 1/X - 17/X^3 = O(1/X^3)
B. X^2 = O(X^2)

So A is way bigger when |X| << 1 is very small.
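Quick numeric sanity check of the two regimes (my own sketch):

```python
# A = (X^4 + X^2 - 17)/X^3 and B = X^2 at small and large |X|.
# For small |X| the -17/X^3 term takes over (note A goes negative there,
# so compare magnitudes); for large |X|, A grows like X while B grows like X^2.
def A(x): return (x**4 + x**2 - 17) / x**3
def B(x): return x**2

for x in [0.01, 0.1, 100.0, 10_000.0]:
    print(f"X={x:<8g} A={A(x):<14.6g} B={B(x):.6g}")
```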
 

Cogman

Lifer
Sep 19, 2000
10,286
145
106
SteveGrabowski said:
I'm not a huge fan of Big O notation, as it suppresses so much important detail. [...]

This. Big O is pushed very hard in the academic world, probably mostly because it is provable. But it is almost worthless when it comes to measuring performance. People gloss over cache locality because it is complex, but the truth is it plays a huge factor in performance. Heck, Java is going to introduce value types specifically because the performance difference of cache-optimized data structures is so huge.
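To put a rough number on the cache locality point, here's a quick sketch of my own (interpreter overhead mutes the gap compared to what you'd see in C or Java, but the strided walk still loses despite doing the exact same O(n^2) work):

```python
from array import array
import time

n = 4_000
a = array("i", [1]) * (n * n)    # one flat, contiguous buffer of ints

s, t0 = 0, time.perf_counter()
for i in range(n):
    for j in range(n):
        s += a[i * n + j]        # sequential walk: cache-friendly
t1 = time.perf_counter()

s, t2 = 0, time.perf_counter()
for j in range(n):
    for i in range(n):
        s += a[i * n + j]        # stride-n walk: a cache miss per access
t3 = time.perf_counter()

print(f"sequential: {t1 - t0:.2f}s   strided: {t3 - t2:.2f}s")
```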

As you point out, there are so many O(n log n) sorting algorithms, yet the one that usually beats them all is a combination of two algorithms that are O(n^2) in the worst case (quicksort + insertion sort).
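Something like this, as a sketch (the cutoff of 16 is just a typical guess; real libraries tune it):

```python
CUTOFF = 16   # below this size, insertion sort wins on constant factors

def insertion_sort(a, lo, hi):
    # O(n^2) worst case, but nearly free on tiny, mostly-sorted runs
    for i in range(lo + 1, hi + 1):
        x, j = a[i], i - 1
        while j >= lo and a[j] > x:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = x

def hybrid_quicksort(a, lo=0, hi=None):
    if hi is None:
        hi = len(a) - 1
    if hi - lo < CUTOFF:
        insertion_sort(a, lo, hi)
        return
    pivot = a[(lo + hi) // 2]
    i, j = lo, hi
    while i <= j:                 # Hoare-style partition around the pivot value
        while a[i] < pivot: i += 1
        while a[j] > pivot: j -= 1
        if i <= j:
            a[i], a[j] = a[j], a[i]
            i, j = i + 1, j - 1
    hybrid_quicksort(a, lo, j)
    hybrid_quicksort(a, i, hi)
```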

Matrices are another place where you see this silliness. The asymptotically optimal algorithms for matrix multiplication only become quick when you are talking about something ridiculous like a 100 million x 100 million matrix. The naive multiplication algorithm often beats even the slightly more complex Strassen algorithm for matrices smaller than 100x100. Yet the abomination that is Coppersmith–Winograd exists not out of practicality, but because "hey, we can get a smaller big-O value!". I mean, the only use of this algorithm is to prove complexity bounds for algorithms that use matrix multiplication... How worthless is that? A worthless algorithm used to make other algorithms look better, even though they would never use this worthless algorithm in practice. But hey, we can get a smaller number and make this thing look faster than it really is!
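For reference, here's roughly what that tradeoff looks like in code: a bare-bones Strassen sketch of my own (power-of-two sizes only; the leaf cutoff of 64 is a guess). The seven recursive multiplications instead of eight are what save work asymptotically, and all the extra additions are the constant-factor overhead that makes it lose on small matrices.

```python
def mat_naive(A, B):
    # Textbook O(n^3) triple loop.
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def _add(A, B): return [[a + b for a, b in zip(r, s)] for r, s in zip(A, B)]
def _sub(A, B): return [[a - b for a, b in zip(r, s)] for r, s in zip(A, B)]

def strassen(A, B, leaf=64):
    # O(n^2.807): 7 recursive multiplications + 18 quadrant additions.
    n = len(A)
    if n <= leaf:                  # fall back to naive below the crossover
        return mat_naive(A, B)
    h = n // 2
    split = lambda M: ([r[:h] for r in M[:h]], [r[h:] for r in M[:h]],
                       [r[:h] for r in M[h:]], [r[h:] for r in M[h:]])
    A11, A12, A21, A22 = split(A)
    B11, B12, B21, B22 = split(B)
    M1 = strassen(_add(A11, A22), _add(B11, B22), leaf)
    M2 = strassen(_add(A21, A22), B11, leaf)
    M3 = strassen(A11, _sub(B12, B22), leaf)
    M4 = strassen(A22, _sub(B21, B11), leaf)
    M5 = strassen(_add(A11, A12), B22, leaf)
    M6 = strassen(_sub(A21, A11), _add(B11, B12), leaf)
    M7 = strassen(_sub(A12, A22), _add(B21, B22), leaf)
    C11 = _add(_sub(_add(M1, M4), M5), M7)
    C12 = _add(M3, M5)
    C21 = _add(M2, M4)
    C22 = _add(_sub(_add(M1, M3), M2), M6)
    top = [r1 + r2 for r1, r2 in zip(C11, C12)]
    bot = [r1 + r2 for r1, r2 in zip(C21, C22)]
    return top + bot
```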

It is semi-sort-of useful. But really, one of these days we need to come up with a better algorithm performance metric. Unfortunately, one that is actually useful would likely be more complex than most academics would like.
 

SteveGrabowski

Diamond Member
Oct 20, 2014
8,957
7,667
136
Do you know any good algorithms books that factor caches in well?